Distributed firewall

A distributed firewall is a security application that runs on the host machines of a network and protects the enterprise's servers and end-user machines against unwanted intrusion. A firewall is a system or group of systems (router, proxy, or gateway) that enforces a set of security rules to control access between two networks, protecting the "inside" network from the "outside" network. Distributed firewalls filter all traffic, whether it originates from the Internet or from the internal network. Usually deployed behind the traditional firewall, they provide a second layer of defense. Their main advantage is that security rules (policies) can be defined centrally and pushed out on an enterprise-wide basis, which is necessary for larger enterprises.

Basic Working

Distributed firewalls are often kernel-mode applications that sit at the bottom of the OSI stack in the operating system. They filter all traffic, whether it originates from the Internet or from the internal network, treating both as "unfriendly", and they guard the individual machine in the same way that the perimeter firewall guards the overall network.

The basic idea is simple and rests on three notions:

  1. A compiler translates the policy language into an internal format.
  2. System-management software distributes this policy file to all hosts that are protected by the firewall.
  3. Incoming packets are accepted or rejected by each "inside" host, according to both the policy and the cryptographically verified identity of each sender.
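
The following minimal Python sketch illustrates this compile / distribute / enforce cycle. The one-line rule syntax, the wildcard matching, and the host and certificate names are illustrative assumptions, not taken from any particular distributed-firewall product.

    # Minimal sketch of the compile / distribute / enforce cycle described above.
    # Rule syntax, host names, and service names are illustrative only.

    from fnmatch import fnmatch

    POLICY_SOURCE = """
    allow  cert=mail.example.com   service=smtp
    allow  cert=*.example.com      service=http
    deny   cert=*                  service=*
    """

    def compile_policy(text):
        """Translate the textual policy into an internal list of rules."""
        rules = []
        for line in text.strip().splitlines():
            action, cert, service = line.split()
            rules.append({
                "action": action,
                "cert": cert.split("=", 1)[1],
                "service": service.split("=", 1)[1],
            })
        return rules

    def distribute(rules, hosts):
        """Stand-in for the system-management software: give every host a copy."""
        return {host: list(rules) for host in hosts}

    def accept(rules, sender_cert, service):
        """Host-side decision: first matching rule wins, default deny."""
        for rule in rules:
            if fnmatch(sender_cert, rule["cert"]) and fnmatch(service, rule["service"]):
                return rule["action"] == "allow"
        return False

    compiled = compile_policy(POLICY_SOURCE)
    per_host = distribute(compiled, ["host-a", "host-b"])
    print(accept(per_host["host-a"], "mail.example.com", "smtp"))      # True
    print(accept(per_host["host-a"], "intruder.example.org", "smtp"))  # False

In a real deployment the distribution step would be handled by system-management software and the sender identity would come from cryptographic verification, as described in the following sections.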

Features

Central Management System

The security policy of a distributed firewall is defined centrally, and enforcement of the policy takes place at each endpoint (hosts, routers, etc.). Centralized management is the ability to populate servers and end-user machines, and to configure and "push out" consistent security policies, which helps to maximize limited resources. The ability to gather reports and maintain updates centrally makes distributed security practical. This feature of distributed firewalls helps in two ways. First, remote end-user machines can be secured. Second, critical servers on the network can be secured, preventing intrusion by malicious code and "jailing" other such code by not letting the protected server be used as a launchpad for expanded attacks.

Policy Transmission System

How the policy, or set of security rules, is distributed varies with the implementation: it can either be pushed directly to end systems or pulled by them when necessary.

Pull technique

In the pull technique, a host contacts the central management server while booting up to check whether the server is up and active. The host then registers with the central management server and requests the policies it should implement, and the server provides the host with its security policies.
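
A small sketch of this pull sequence follows, assuming a hypothetical ManagementServer interface with ping, register, and policy_for operations; the method names and the in-memory policy store are illustrative.

    # Pull technique sketch: on boot, the host checks that the management server
    # is reachable, registers itself, and downloads the policy it should enforce.

    class ManagementServer:
        def __init__(self, policies):
            self.policies = policies           # host name -> list of rules
            self.registered = set()

        def ping(self):
            return True                        # "up and active"

        def register(self, host_id):
            self.registered.add(host_id)

        def policy_for(self, host_id):
            return self.policies.get(host_id, [{"action": "deny", "match": "*"}])

    class Host:
        def __init__(self, host_id):
            self.host_id = host_id
            self.rules = []

        def boot(self, server):
            if not server.ping():              # is the management server alive?
                return False                   # keep the last known policy
            server.register(self.host_id)      # announce ourselves
            self.rules = server.policy_for(self.host_id)   # pull the policy
            return True

    server = ManagementServer({"web-01": [{"action": "allow", "match": "tcp/80"}]})
    host = Host("web-01")
    host.boot(server)
    print(host.rules)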

Push Technique

The push technique is used when policies are updated on the central-management side by the network administrator and the hosts have to be updated immediately. Pushing ensures that the hosts always hold the current policies. The policy language defines which inbound and outbound connections on any component of the network policy domain are allowed, and can affect policy decisions on any layer of the network, whether rejecting or passing certain packets or enforcing policies at the application layer of the OSI stack.
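
A corresponding sketch of the push technique is shown below; the PolicyStore and Endpoint classes and the rule fields (direction, service, action) are illustrative assumptions.

    # Push technique sketch: when the administrator edits the policy, the
    # management side immediately forwards the new rules to every registered host.

    class Endpoint:
        def __init__(self, name):
            self.name = name
            self.rules = []

        def apply_policy(self, rules):
            self.rules = rules                 # replace the local rule set at once

    class PolicyStore:
        def __init__(self):
            self.rules = []
            self.endpoints = []

        def register(self, endpoint):
            self.endpoints.append(endpoint)

        def update(self, new_rules):
            self.rules = new_rules
            for ep in self.endpoints:          # push: no waiting for the next pull
                ep.apply_policy(new_rules)

    store = PolicyStore()
    a = Endpoint("host-a")
    b = Endpoint("host-b")
    store.register(a)
    store.register(b)
    store.update([{"direction": "inbound", "service": "ssh", "action": "allow"}])
    print(a.rules == b.rules == store.rules)   # True: hosts updated immediately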

Host-end Implementation

Conventional firewalls rely on controlling entry points to function, or more precisely, on the assumption that everyone on one side of the entry point (the firewall) is to be trusted, and that anyone on the other side is, at least potentially, an enemy. Distributed firewalls work by admitting only essential traffic into the machine they protect and prohibiting other types of traffic to prevent unwanted intrusions. The security policies transmitted from the central management server also have to be implemented by the host. The host-end component of the distributed firewall does not provide administrative control over the implementation of the policies; the host simply allows or blocks traffic based on the security rules it has implemented.
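
As a sketch of host-end enforcement under this default-deny stance, the following assumes an illustrative set of "essential" services and a simplified packet representation.

    # Host-end enforcement sketch: only traffic for essential services is let in;
    # everything else is dropped by default. The service list is illustrative.

    ESSENTIAL = {("tcp", 22), ("tcp", 443)}    # e.g. ssh and https only

    def admit(packet):
        """Return True only for traffic to an explicitly allowed service."""
        return (packet["proto"], packet["dst_port"]) in ESSENTIAL

    print(admit({"proto": "tcp", "dst_port": 443}))   # True
    print(admit({"proto": "udp", "dst_port": 53}))    # False: not essential here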

End-to-end Encryption

End-to-end encryption is a threat to conventional firewalls, since the firewall generally does not have the keys needed to see through the encryption. A further limitation is an artifact of firewall deployment: internal traffic that is not seen by the firewall cannot be filtered, so internal users can mount attacks on other users and networks without the firewall being able to intervene. Large networks today also tend to have a large number of entry points, and many sites employ internal firewalls to provide some form of compartmentalization. This makes administration particularly difficult, both from a practical point of view and with regard to policy consistency, since no unified and comprehensive management mechanism exists. Distributed firewalls instead rely on end-to-end IPsec. [1] IPsec is a protocol suite, standardized by the IETF, which provides network-layer security services such as packet confidentiality, authentication, data integrity, replay protection, and automated key management. With end-to-end IPsec, each incoming packet is associated with a certificate, and the access granted to that packet is determined by the rights granted to that certificate. [1] If the certificate name does not match, or if there is no IPsec protection, the packet is dropped as unauthorized. Because access rights in such a distributed firewall are tied to certificates, they can be limited by changing the set of certificates that is accepted: only hosts with newer certificates are then considered to be "inside", and a machine on which the change is not installed has fewer privileges. [1]
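
The certificate-based decision can be sketched as follows; the rights table, the certificate names, and the assumption that IPsec has already verified the certificate are illustrative.

    # Sketch of certificate-based access: rights are tied to the verified
    # certificate carried with the packet, not to its source address.

    RIGHTS = {
        "alice@example.com": {"smtp", "imap"},
        "backup@example.com": {"ssh"},
    }
    ACCEPTED = set(RIGHTS)                     # shrink this set to revoke access

    def authorize(packet):
        cert = packet.get("verified_cert")     # None if there was no IPsec protection
        if cert is None or cert not in ACCEPTED:
            return False                       # unauthorized: drop
        return packet["service"] in RIGHTS[cert]

    print(authorize({"verified_cert": "alice@example.com", "service": "smtp"}))  # True
    print(authorize({"service": "smtp"}))                                        # False: no IPsec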

Network Topology

Distributed firewalls can protect hosts that are not within a topological boundary. System-management packages are used to administer individual machines, so security administrators can define security policy in terms of host identifiers, and the policy can be enforced by each individual host. A conventional firewall can only enforce a policy on traffic that traverses it, so traffic exchanged among nodes in the protected network cannot be controlled; this gives an attacker who is already an insider, or who can somehow bypass the firewall, the opportunity to establish a new, unauthorized entry point to the network without the administrator's knowledge and consent. Protocols such as RealAudio are also difficult for conventional firewalls to process, because the firewall lacks knowledge that is readily available at the endpoints. [1] Moreover, with increasing line speeds and the more computation-intensive protocols that a firewall must support, traditional firewalls tend to become congestion points. This gap between processing and networking speeds is likely to widen: even though computers (and hence firewalls) are getting faster, the combination of more complex protocols and the tremendous increase in the amount of data that must be passed through the firewall has outpaced, and will likely continue to outpace, Moore's law.

Effectiveness

Service exposure and port scanning

Distributed firewalls are excellent at rejecting connection requests for inappropriate services. They typically drop such requests at the host; alternatively, they may send back a response requesting that the connection be authenticated, which in turn reveals the existence of the host. Unlike conventional firewalls built on pure packet filters, which cannot reject some "stealth scans" very well, a distributed firewall can reassemble the packets from a port scanner and then reject them.
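
A sketch of the two host-side responses to a probe for an unoffered service; the port numbers and the optional authentication challenge are illustrative.

    # Probe handling sketch: silently drop requests for services this host does
    # not offer; an optional authentication challenge would reveal the host.

    OFFERED = {80, 443}

    def handle_syn(dst_port, reply_with_auth_challenge=False):
        if dst_port in OFFERED:
            return "accept"
        if reply_with_auth_challenge:
            return "auth-required"             # discloses the host's existence
        return "drop"                          # stealthy default

    print(handle_syn(22))        # drop: the scanner learns nothing
    print(handle_syn(443))       # accept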

IP address spoofing

These attacks can be dealt with at the host by a distributed firewall with corresponding rules for discarding packets that claim to originate from inside the network policy domain. Distributed firewalls can use cryptographic mechanisms to prevent attacks based on forged source addresses, under the assumption that the trusted repository containing all necessary credentials has not itself been compromised.
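
A minimal anti-spoofing sketch, assuming an illustrative internal address range and treating a packet as forged when it claims an internal source address but carries no cryptographically verified identity.

    # Anti-spoofing sketch: discard packets whose claimed source address belongs
    # to the internal policy domain but which lack a verified credential.

    import ipaddress

    INTERNAL = ipaddress.ip_network("10.0.0.0/8")   # illustrative policy domain

    def spoofed(packet):
        src = ipaddress.ip_address(packet["src"])
        # An "internal" source address with no cryptographically verified
        # identity is treated as forged and the packet is discarded.
        return src in INTERNAL and packet.get("verified_cert") is None

    print(spoofed({"src": "10.1.2.3"}))                                   # True
    print(spoofed({"src": "10.1.2.3", "verified_cert": "host-a.corp"}))   # False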

Malicious software

The distributed firewall's framework and policy language, which allow for policy decisions at the application level, can counter a wide variety of threats residing in the application and intermediate layers of communication traffic. In complex, resource-consuming situations where decisions must be made about code such as Java, distributed firewalls can mitigate threats on the condition that the contents of such communication packets can be interpreted semantically by the policy-verifying mechanisms. Stateful inspection of packets proves easy to adapt to these requirements and allows for finer granularity in decision making. Unlike with conventional firewalls, policy enforcement is also not compromised when malicious code is completely disguised from the screening unit at the network perimeter through the use of virtual private networks and enciphered communication traffic.
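
A minimal stateful-inspection sketch: inbound packets are admitted only if they belong to a connection the host itself initiated, or if an application-level policy callback explicitly allows them. The connection-tracking key and field names are illustrative.

    # Stateful inspection sketch: track connections the host initiates and admit
    # inbound packets only for those flows or for explicit policy exceptions.

    established = set()                        # (remote_addr, remote_port, local_port)

    def outbound(packet):
        established.add((packet["dst"], packet["dst_port"], packet["src_port"]))

    def inbound_allowed(packet, policy_allows):
        key = (packet["src"], packet["src_port"], packet["dst_port"])
        return key in established or policy_allows(packet)

    outbound({"dst": "203.0.113.7", "dst_port": 80, "src_port": 40000})
    print(inbound_allowed({"src": "203.0.113.7", "src_port": 80, "dst_port": 40000},
                          lambda p: False))    # True: reply to our own connection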

Intrusion detection

Distributed firewalls can detect attempted intrusions, but may have difficulty with probe collection: each individual host has to notice probes and forward them to some central location for processing and correlation. The former problem is not hard, since many hosts already log such attempts. The collection is more problematic, especially at times of poor connectivity to the central site. There is also the risk of coordinated attacks that, in effect, cause a denial-of-service attack against the central machine.
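
A sketch of host-side probe reporting with local buffering for times of poor connectivity to the central site; the Collector interface is an illustrative stand-in for the central correlation point.

    # Probe reporting sketch: log probes locally and forward them to a central
    # collector; buffer them whenever the collector is unreachable.

    class Collector:
        def __init__(self):
            self.events = []
            self.up = True

        def submit(self, event):
            if not self.up:
                raise ConnectionError("collector unreachable")
            self.events.append(event)

    class ProbeReporter:
        def __init__(self, collector):
            self.collector = collector
            self.backlog = []

        def report(self, event):
            self.backlog.append(event)
            try:
                while self.backlog:                    # flush oldest first
                    self.collector.submit(self.backlog[0])
                    self.backlog.pop(0)
            except ConnectionError:
                pass                                   # keep buffering until it is back

    c = Collector()
    r = ProbeReporter(c)
    c.up = False
    r.report({"src": "198.51.100.9", "port": 22})      # buffered
    c.up = True
    r.report({"src": "198.51.100.9", "port": 23})      # flushes both events
    print(len(c.events), len(r.backlog))               # 2 0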

Insider attacks

A distributed firewall's independence from topological constraints supports the enforcement of policies whether hosts are members or outsiders of the overall policy domain, since it bases its decisions on authentication mechanisms that are not inherent characteristics of the network's layout. Moreover, the compromise of an endpoint, whether by a legitimate user or an intruder, does not weaken the overall network in a way that leads directly to the compromise of other machines, given that the deployment of virtual private networks prevents the sniffing of communication traffic in which the attacked machine is not involved. On the endpoint itself, however, it must be assumed that once a machine has been taken over by an adversary, the policy-enforcement mechanisms themselves may be broken. Backdoors can be installed on such a machine quite easily once the security mechanisms are flawed, and in the absence of a perimeter firewall there is no trusted entity that might prevent arbitrary traffic from entering or leaving the compromised host. Additionally, tools can be used that allow the tunneling of another application's communication and cannot be blocked without proper knowledge of the decrypting credentials; moreover, once an attack has been performed successfully, the verifying mechanisms of the machine themselves can no longer be trusted.

User Cooperation

At first glance, the biggest weakness of distributed firewalls is their greater susceptibility to a lack of cooperation by users. Distributed firewalls can reduce the threat of actual attacks by insiders simply by making it easier to set up smaller groups of users: access to a file server can be restricted to only those users who need it, rather than letting anyone inside the company have access. It is also worth expending some effort to prevent the casual subversion of policies. Policies could be digitally signed and verified by a frequently changing key kept in an awkward-to-replace location. For more stringent protection, policy enforcement can be incorporated into a tamper-resistant network card.
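
A sketch of signed-policy verification follows. HMAC is used here only to keep the example within the Python standard library; the digital signatures suggested in the text would use public-key cryptography instead, and the key shown is purely illustrative.

    # Signed-policy sketch: the management side signs each policy file and the
    # host refuses any policy whose signature does not verify.

    import hmac, hashlib

    def sign(policy_bytes, key):
        return hmac.new(key, policy_bytes, hashlib.sha256).hexdigest()

    def verify_and_load(policy_bytes, signature, key):
        if not hmac.compare_digest(sign(policy_bytes, key), signature):
            raise ValueError("policy rejected: bad signature")
        return policy_bytes.decode()

    KEY = b"rotate-me-frequently"               # illustrative; rotated often in practice
    policy = b"allow cert=*.example.com service=http"
    sig = sign(policy, KEY)
    print(verify_and_load(policy, sig, KEY))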

References

  1. Bellovin, Steven M., "Distributed Firewalls", ;login:, November 1999, pp. 39–47. https://www.cs.columbia.edu/~smb/papers/distfw.pdf
