Client honeypot

Honeypots are security devices whose value lies in being probed and compromised. Traditional honeypots are servers (or devices that expose server services) that wait passively to be attacked. Client honeypots are active security devices in search of malicious servers that attack clients. The client honeypot poses as a client and interacts with the server to examine whether an attack has occurred. Often the focus of client honeypots is on web browsers, but any client that interacts with servers can be part of a client honeypot (for example FTP, email, or SSH clients).

There are several terms used to describe client honeypots. Besides client honeypot, which is the generic classification, honeyclient is the other term that is generally used and accepted. There is a subtlety here, however: "honeyclient" can also refer to the first known open source client honeypot implementation, HoneyClient (see below), although the intended meaning is usually clear from the context.

Architecture

A client honeypot is composed of three components. The first component, a queuer, is responsible for creating a list of servers for the client to visit. This list can be created, for example, through crawling. The second component is the client itself, which is able to make requests to the servers identified by the queuer. After the interaction with the server has taken place, the third component, an analysis engine, is responsible for determining whether an attack has taken place on the client honeypot.
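
The following is a minimal Python sketch of how these three components might fit together; the function names and the hash-based check are purely illustrative and not taken from any particular implementation.

    # Illustrative sketch of the three client honeypot components; all names
    # and the hash check are hypothetical, not from a specific implementation.
    import hashlib
    import urllib.request

    def queuer(seed_urls):
        """Queuer: produce the list of servers for the client to visit."""
        # A real queuer might crawl the web or harvest URLs from spam traps;
        # here it simply returns a fixed seed list.
        return list(seed_urls)

    def client(url):
        """Client: interact with the server and return the raw response."""
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read()

    def analysis_engine(response_body):
        """Analysis engine: decide whether the interaction looks like an attack.

        This toy version only compares the response against known-bad content
        hashes; real engines inspect system state or response content.
        """
        known_bad_hashes = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder
        return hashlib.md5(response_body).hexdigest() in known_bad_hashes

    if __name__ == "__main__":
        for url in queuer(["http://example.com/"]):
            body = client(url)
            print(url, "malicious" if analysis_engine(body) else "apparently benign")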

In addition to these components, client honeypots are usually equipped with some sort of containment strategy to prevent successful attacks from spreading beyond the client honeypot. This is usually achieved through the use of firewalls and virtual machine sandboxes.

Analogous to traditional server honeypots, client honeypots are mainly classified by their interaction level, high or low, which denotes the level of functional interaction the server can utilize on the client honeypot. In addition, there are newer hybrid approaches, which combine both high and low interaction detection techniques.

High interaction

High interaction client honeypots are fully functional systems comparable to real systems with real clients. As such, no functional limitations (besides the containment strategy) exist on high interaction client honeypots. Attacks on high interaction client honeypots are detected via inspection of the state of the system after a server has been interacted with. The detection of changes to the client honeypot may indicate the occurrence of an attack that has exploited a vulnerability of the client. An example of such a change is the presence of a new or altered file.
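
As a rough illustration of this state-based approach, the following Python sketch snapshots a directory tree before and after a server is visited and reports any new or altered files; the monitored path in the usage comment is a placeholder.

    # Hedged sketch of state-based attack detection: snapshot a directory tree
    # before the server is visited, snapshot it again afterwards, and report
    # any new or altered files. The monitored path is a placeholder.
    import hashlib
    from pathlib import Path

    def snapshot(root):
        """Map every file under `root` to a hash of its contents."""
        state = {}
        for path in Path(root).rglob("*"):
            if path.is_file():
                state[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
        return state

    def detect_changes(before, after):
        """Return files that appeared or changed between the two snapshots."""
        new_files = [p for p in after if p not in before]
        altered = [p for p in after if p in before and after[p] != before[p]]
        return new_files, altered

    # Usage: snapshot, drive a real browser to the suspect URL, then compare.
    # before = snapshot("/home/honeypot")
    # ... visit the server ...
    # after = snapshot("/home/honeypot")
    # print(detect_changes(before, after))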

High interaction client honeypots are very effective at detecting unknown attacks on clients. However, the tradeoff for this accuracy is a performance hit from the amount of system state that has to be monitored to make an attack assessment. The detection mechanism is also prone to various forms of evasion by the exploit. For example, an attack could delay the exploit from immediately triggering (time bombs) or could trigger only upon a particular set of conditions or actions (logic bombs). Since no immediate, detectable state change occurs, the client honeypot is likely to incorrectly classify the server as safe even though it did successfully perform its attack on the client. Finally, if the client honeypot is running in a virtual machine, an exploit may try to detect the presence of the virtual environment and refrain from triggering or behave differently.

Capture-HPC

Capture is a high interaction client honeypot developed by researchers at Victoria University of Wellington, New Zealand. Capture differs from existing client honeypots in several ways. First, it is designed to be fast: state changes are detected using an event-based model, allowing the honeypot to react to them as they occur. Second, Capture is designed to be scalable: a central Capture server is able to control numerous clients across a network. Third, Capture is designed as a framework that allows different clients to be utilized. The initial version of Capture supported Internet Explorer, but the current version supports all major browsers (Internet Explorer, Firefox, Opera, Safari) as well as other HTTP-aware client applications, such as office applications and media players.
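
Capture's own monitors are kernel-level Windows components; purely as a loose illustration of event-based (rather than snapshot-based) state monitoring, the following Python sketch logs file system changes as they occur using the third-party watchdog package.

    # Loose illustration of event-based state monitoring (not Capture's actual
    # kernel-level monitor). Requires the third-party "watchdog" package.
    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    class StateChangeLogger(FileSystemEventHandler):
        """Report file system changes as they happen, rather than by diffing
        snapshots after the fact."""

        def on_created(self, event):
            print("created:", event.src_path)

        def on_modified(self, event):
            print("modified:", event.src_path)

    if __name__ == "__main__":
        observer = Observer()
        observer.schedule(StateChangeLogger(), path=".", recursive=True)
        observer.start()      # events are delivered as they occur
        try:
            time.sleep(30)    # window during which the client visits servers
        finally:
            observer.stop()
            observer.join()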

HoneyClient

HoneyClient is a web browser based (IE/Firefox) high interaction client honeypot designed by Kathy Wang in 2004 and subsequently developed at MITRE. It was the first open source client honeypot and is a mix of Perl, C++, and Ruby. HoneyClient is state-based and detects attacks on Windows clients by monitoring files, process events, and registry entries. It has integrated the Capture-HPC real-time integrity checker to perform this detection. HoneyClient also contains a crawler, so it can be seeded with a list of initial URLs from which to start and can then continue to traverse web sites in search of client-side malware.

HoneyMonkey (dead since 2010)

HoneyMonkey is a web browser based (IE) high interaction client honeypot implemented by Microsoft in 2005. It is not available for download. HoneyMonkey is state-based and detects attacks on clients by monitoring files, the registry, and processes. A unique characteristic of HoneyMonkey is its layered approach to interacting with servers in order to identify zero-day exploits. HoneyMonkey initially crawls the web with a vulnerable configuration. Once an attack has been identified, the server is re-examined with a fully patched configuration. If the attack is still detected, one can conclude that it utilizes an exploit for which no patch has been publicly released and is therefore particularly dangerous.
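
The layered re-examination workflow can be summarized with the following hedged Python sketch; visit_and_detect() is a hypothetical stand-in for a full high interaction honeypot run and is not part of HoneyMonkey.

    # Hedged sketch of the layered re-examination idea described above;
    # visit_and_detect() is a hypothetical stand-in for a full high
    # interaction honeypot run and is not part of HoneyMonkey.
    def visit_and_detect(url, patch_level):
        """Visit `url` in a VM at the given patch level and return True if
        unauthorized state changes are detected. (Placeholder.)"""
        raise NotImplementedError

    def classify(url):
        if not visit_and_detect(url, patch_level="unpatched"):
            return "no exploit observed"
        # The attack succeeded against the vulnerable configuration; retry on
        # a fully patched system to see whether a public patch already exists.
        if visit_and_detect(url, patch_level="fully patched"):
            return "likely zero-day exploit"
        return "exploit against a known, patched vulnerability"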

SHELIA (dead since 2009)

Shelia is a high interaction client honeypot developed by Joan Robert Rocaspana at Vrije Universiteit Amsterdam. It integrates with an email reader and processes each email it receives (URLs and attachments). Depending on the type of URL or attachment received, it opens a different client application (e.g. a browser, office application, etc.). It monitors whether executable instructions are executed in the data area of memory, which would indicate that a buffer overflow exploit has been triggered. With such an approach, SHELIA is not only able to detect exploits, but can actually prevent exploits from triggering.

UW Spycrawler

The Spycrawler, developed at the University of Washington by Moshchuk et al. in 2005, is another browser based (Mozilla) high interaction client honeypot. This client honeypot is not available for download. The Spycrawler is state-based and detects attacks on clients by monitoring files, processes, the registry, and browser crashes. Spycrawler's detection mechanism is event-based. Further, it accelerates the passage of time in the virtual machine the Spycrawler operates in to overcome (or rather reduce the impact of) time bombs.

Web Exploit Finder

WEF is an implementation of automatic drive-by download detection in a virtualized environment, developed by Thomas Müller, Benjamin Mack and Mehmet Arziman, three students from the Hochschule der Medien (HdM) in Stuttgart, during the summer term of 2006. WEF can be used as an active HoneyNet with a complete virtualization architecture underneath for rollbacks of compromised virtualized machines.

Low interaction

Low interaction client honeypots differ from high interaction client honeypots in that they do not utilize an entire real system, but rather use lightweight or simulated clients to interact with the server (in the browser world, they are similar to web crawlers). Responses from servers are examined directly to assess whether an attack has taken place. This could be done, for example, by examining the response for the presence of malicious strings.
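
A minimal Python sketch of this low interaction approach is shown below: a lightweight client fetches the page and scans the response against a small set of patterns. The signature list is a stand-in, not a real rule set.

    # Minimal sketch of low interaction detection: fetch a page with a
    # lightweight client and scan the response for known-malicious patterns.
    # The signature list is a stand-in, not a real rule set.
    import re
    import urllib.request

    SIGNATURES = [
        re.compile(rb"eval\(unescape\("),            # common obfuscation idiom
        re.compile(rb"document\.write\(unescape\("),
    ]

    def scan(url):
        request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(request, timeout=10) as response:
            body = response.read()
        return [sig.pattern for sig in SIGNATURES if sig.search(body)]

    # matches = scan("http://example.com/")
    # print("suspicious" if matches else "no known signatures matched")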

Low interaction client honeypots are easier to deploy and operate than high interaction client honeypots and also perform better. However, they are likely to have a lower detection rate since attacks have to be known to the client honeypot in order for it to detect them; new attacks are likely to go unnoticed. They also suffer from the problem of evasion by exploits, which may be exacerbated due to their simplicity, thus making it easier for an exploit to detect the presence of the client honeypot.

HoneyC

HoneyC is a low interaction client honeypot developed at Victoria University of Wellington by Christian Seifert in 2006. HoneyC is a platform independent open source framework written in Ruby. It currently concentrates on driving a web browser simulator to interact with servers. Malicious servers are detected by statically examining the web server's response for malicious strings through the use of Snort signatures.

Monkey-Spider (dead since 2008)

Monkey-Spider is a low-interaction client honeypot initially developed at the University of Mannheim by Ali Ikinci. Monkey-Spider is a crawler-based client honeypot that initially utilizes anti-virus solutions to detect malware. It is claimed to be fast and expandable with other detection mechanisms. The work started as a diploma thesis and has been continued and released as free software under the GPL.

PhoneyC (dead since 2015)

PhoneyC is a low-interaction client honeypot developed by Jose Nazario. PhoneyC mimics legitimate web browsers and can understand dynamic content by de-obfuscating malicious content for detection. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques.

SpyBye

SpyBye is a low interaction client honeypot developed by Niels Provos. SpyBye allows a web master to determine whether a web site is malicious by applying a set of heuristics and scanning its content against the ClamAV engine.

Thug

Thug is a low-interaction client honeypot developed by Angelo Dell'Aera. Thug emulates the behaviour of a web browser and is focused on the detection of malicious web pages. The tool uses the Google V8 JavaScript engine and implements its own Document Object Model (DOM). The most important and unique features of Thug are the ActiveX controls handling module (vulnerability module) and its static and dynamic analysis capabilities (using an abstract syntax tree and the libemu shellcode analyser). Thug is written in Python and released under the GNU General Public License.

YALIH

YALIH (Yet Another Low Interaction Honeyclient) is a low interaction client honeypot developed by Masood Mansoori from the Honeynet chapter of Victoria University of Wellington, New Zealand, and designed to detect malicious websites through signature and pattern matching techniques. YALIH can collect suspicious URLs from malicious website databases, the Bing API, and inbox and SPAM folders via the POP3 and IMAP protocols. It can perform JavaScript extraction, de-obfuscation and de-minification of scripts embedded within a website, and can emulate referrers and browser agents as well as handle redirection, cookies and sessions. Its visitor agent is capable of fetching a website from multiple locations to bypass geo-location and IP cloaking attacks. YALIH can also generate automated signatures to detect variations of an attack. YALIH is available as an open source project.
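
Purely as a rough illustration of collecting candidate URLs from a mailbox over IMAP, in the spirit of YALIH's collectors (not its actual code), the following Python sketch uses the standard imaplib and email modules; the host, credentials and folder name are placeholders.

    # Rough illustration of harvesting candidate URLs from a mailbox over
    # IMAP, in the spirit of YALIH's collectors (not its actual code).
    # The host, credentials and folder name are placeholders.
    import email
    import imaplib
    import re

    URL_RE = re.compile(r"https?://[^\s\"'<>]+")

    def collect_urls(host, user, password, folder="INBOX"):
        urls = set()
        with imaplib.IMAP4_SSL(host) as imap:
            imap.login(user, password)
            imap.select(folder, readonly=True)
            _, data = imap.search(None, "ALL")
            for num in data[0].split():
                _, msg_data = imap.fetch(num, "(RFC822)")
                message = email.message_from_bytes(msg_data[0][1])
                for part in message.walk():
                    if part.get_content_type() in ("text/plain", "text/html"):
                        payload = part.get_payload(decode=True) or b""
                        urls.update(URL_RE.findall(payload.decode(errors="ignore")))
        return sorted(urls)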

miniC

miniC is a low interaction client honeypot based on the wget retriever and the YARA engine. It is designed to be light, fast and suitable for the retrieval of a large number of websites. miniC allows the referrer, user agent, accept-language and a few other variables to be set and simulated. miniC was designed at the New Zealand Honeynet chapter of Victoria University of Wellington.
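
As an illustrative sketch of the same idea (not miniC's actual code, which is built on wget), the following Python example fetches a page with simulated request headers and scans the body with a toy YARA rule using the third-party yara-python package.

    # Illustrative sketch (not miniC's actual code, which is built on wget):
    # fetch a page with simulated request headers and scan the body with a
    # toy YARA rule. Requires the third-party "yara-python" package.
    import urllib.request

    import yara

    RULE = yara.compile(source=r"""
    rule suspicious_iframe {
        strings:
            $a = "<iframe" nocase
            $b = "visibility:hidden" nocase
        condition:
            all of them
    }
    """)

    def fetch_and_scan(url):
        request = urllib.request.Request(url, headers={
            "User-Agent": "Mozilla/5.0",            # simulated browser agent
            "Referer": "https://www.google.com/",   # simulated referrer
            "Accept-Language": "en-US,en;q=0.9",
        })
        with urllib.request.urlopen(request, timeout=10) as response:
            body = response.read()
        return RULE.match(data=body)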

Hybrid client honeypots

Hybrid client honeypots combine both low and high interaction client honeypots to gain from the advantages of both approaches.
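
A hedged sketch of such a hybrid pipeline is shown below: a cheap low interaction scan filters the URL stream, and only suspicious URLs are forwarded to an expensive high interaction honeypot for confirmation. Both helper functions are hypothetical placeholders.

    # Hedged sketch of a hybrid pipeline: a cheap low interaction scan filters
    # the URL stream and only suspicious URLs are forwarded to an expensive
    # high interaction honeypot. Both helpers are hypothetical placeholders.
    def low_interaction_scan(url):
        """Fast signature/heuristic check; cheap but blind to unknown attacks."""
        raise NotImplementedError

    def high_interaction_visit(url):
        """Drive a real browser in a VM and diff system state; slow but thorough."""
        raise NotImplementedError

    def classify(urls):
        results = {}
        for url in urls:
            if not low_interaction_scan(url):
                results[url] = "benign (low interaction)"
            elif high_interaction_visit(url):
                results[url] = "malicious (confirmed by high interaction)"
            else:
                results[url] = "suspicious but unconfirmed"
        return results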

HoneySpider (dead since 2013)

The HoneySpider Network is a hybrid client honeypot developed as a joint venture between NASK/CERT Polska, GOVCERT.NL[1] and SURFnet.[2] The project's goal is to develop a complete client honeypot system, based on existing client honeypot solutions and a crawler designed especially for the bulk processing of URLs.

Related Research Articles

In cryptography and computer security, a man-in-the-middle (MITM) attack, or on-path attack, is a cyberattack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other, as the attacker has inserted themselves between the two user parties.

Transport Layer Security (TLS) is a cryptographic protocol designed to provide communications security over a computer network, such as the Internet. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.

Cross-site scripting (XSS) is a type of security vulnerability that can be found in some web applications. XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy. During the second half of 2007, XSSed documented 11,253 site-specific cross-site vulnerabilities, compared to 2,134 "traditional" vulnerabilities documented by Symantec. XSS effects vary in range from petty nuisance to significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner network.

<span class="mw-page-title-main">Honeypot (computing)</span> Computer security mechanism

In computer terminology, a honeypot is a computer security mechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of data that appears to be a legitimate part of the site which contains information or resources of value to attackers. It is actually isolated, monitored, and capable of blocking or analyzing the attackers. This is similar to police sting operations, colloquially known as "baiting" a suspect.

Inline linking is the use of a linked object, often an image, on one site by a web page belonging to a second site. One site is said to have an inline link to the other site where the object is located.

Network security consists of the policies, processes and practices adopted to prevent, detect and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, and others which might be open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing operations being done. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.

<span class="mw-page-title-main">Botnet</span> Collection of compromised internet-connected devices controlled by a third party

A botnet is a group of Internet-connected devices, each of which runs one or more bots. Botnets can be used to perform distributed denial-of-service (DDoS) attacks, steal data, send spam, and allow the attacker to access the device and its connection. The owner can control the botnet using command and control (C&C) software. The word "botnet" is a portmanteau of the words "robot" and "network". The term is usually used with a negative or malicious connotation.

Code injection is a class of computer security exploit in which vulnerable computer programs or system processes fail to correctly handle external data, such as user input, leading to the program misinterpreting the data as a command that should be executed. An attacker utilizing this method thereby "injects" code into the program while it is running. Successful exploitation of a code injection vulnerability can result in data breaches, access to restricted or critical computer systems, and the spread of malware.

In computer security, a drive-by download is the unintended download of software, typically malicious software. The term "drive-by download" usually refers to a download which was authorized by a user without understanding what is being downloaded, such as in the case of a Trojan horse. In other cases, the term may simply refer to a download which occurs without a user's knowledge. Common types of files distributed in drive-by download attacks include computer viruses, spyware, or crimeware.

HoneyMonkey, short for Strider HoneyMonkey Exploit Detection System, is a Microsoft Research honeypot. The implementation uses a network of computers to crawl the World Wide Web searching for websites that use browser exploits to install malware on the HoneyMonkey computer. A snapshot of the memory, executables and registry of the honeypot computer is recorded before crawling a site. After visiting the site, the state of memory, executables, and registry is recorded and compared to the previous snapshot. The changes are analyzed to determine if the visited site installed any malware onto the client honeypot computer.

The Honeynet Project is an international cybersecurity non-profit research organization that investigates new cyber attacks and develops open-source tools to help improve Internet security by tracking hackers' behavioral patterns.

<span class="mw-page-title-main">Fast flux</span> DNS evasion technique against origin server fingerprinting.

Fast flux is a domain name system (DNS) based evasion technique used by cyber criminals to hide phishing and malware delivery websites behind an ever-changing network of compromised hosts acting as reverse proxies to the backend botnet master—a bulletproof autonomous system. It can also refer to the combination of peer-to-peer networking, distributed command and control, web-based load balancing and proxy redirection used to make malware networks more resistant to discovery and counter-measures.

A web threat is any threat that uses the World Wide Web to facilitate cybercrime. Web threats use multiple types of malware and fraud, all of which utilize HTTP or HTTPS protocols, but may also employ other protocols and components, such as links in email or IM, or malware attachments or on servers that access the Web. They benefit cybercriminals by stealing information for subsequent sale and help absorb infected PCs into botnets.

Mobile security, or mobile device security, is the protection of smartphones, tablets, and laptops from threats associated with wireless computing. It has become increasingly important in mobile computing. The security of personal and business information now stored on smartphones is of particular concern.

Cross-site request forgery, also known as one-click attack or session riding and abbreviated as CSRF or XSRF, is a type of malicious exploit of a website or web application where unauthorized commands are submitted from a user that the web application trusts. There are many ways in which a malicious website can transmit such commands; specially-crafted image tags, hidden forms, and JavaScript fetch or XMLHttpRequests, for example, can all work without the user's interaction or even knowledge. Unlike cross-site scripting (XSS), which exploits the trust a user has for a particular site, CSRF exploits the trust that a site has in a user's browser. In a CSRF attack, an innocent end user is tricked by an attacker into submitting a web request that they did not intend. This may cause actions to be performed on the website that can include inadvertent client or server data leakage, change of session state, or manipulation of an end user's account.

Content Security Policy (CSP) is a computer security standard introduced to prevent cross-site scripting (XSS), clickjacking and other code injection attacks resulting from execution of malicious content in the trusted web page context. It is a Candidate Recommendation of the W3C working group on Web Application Security, widely supported by modern web browsers. CSP provides a standard method for website owners to declare approved origins of content that browsers should be allowed to load on that website—covered types are JavaScript, CSS, HTML frames, web workers, fonts, images, embeddable objects such as Java applets, ActiveX, audio and video files, and other HTML5 features.

Endpoint security or endpoint protection is an approach to the protection of computer networks that are remotely bridged to client devices. The connection of endpoint devices such as laptops, tablets, mobile phones, and other wireless devices to corporate networks creates attack paths for security threats. Endpoint security attempts to ensure that such devices follow compliance to standards.

Browser isolation is a cybersecurity model which aims to physically isolate an internet user's browsing activity away from their local networks and infrastructure. Browser isolation technologies approach this model in different ways, but they all seek to achieve the same goal, effective isolation of the web browser and a user's browsing activity as a method of securing web browsers from browser-based security exploits, as well as web-borne threats such as ransomware and other malware. When a browser isolation technology is delivered to its customers as a cloud hosted service, this is known as remote browser isolation (RBI), a model which enables organizations to deploy a browser isolation solution to their users without managing the associated server infrastructure. There are also client side approaches to browser isolation, based on client-side hypervisors, which do not depend on servers in order to isolate their users browsing activity and the associated risks, instead the activity is virtually isolated on the local host machine. Client-side solutions break the security through physical isolation model, but they do allow the user to avoid the server overhead costs associated with remote browser isolation solutions.

A web shell is a shell-like interface that enables a web server to be remotely accessed, often for the purposes of cyberattacks. A web shell is unique in that a web browser is used to interact with it.

References

  1. NCSC (14 May 2013). "Nationaal Cyber Security Centrum - NCSC". www.ncsc.nl.
  2. "SURF is de ICT-coöperatie van onderwijs en onderzoek". SURF.nl.
