Parasitic computing

Parasitic computing is a technique in which a program, through normal authorized interactions with another program, induces that program to perform complex computations on its behalf, without exploiting any vulnerability to execute attacker-supplied code. It is, in a sense, a security exploit, in that the program implementing the parasitic computing has no authority to consume the resources made available to the other program.

It was first proposed by Albert-László Barabási, Vincent W. Freeh, Hawoong Jeong and Jay B. Brockman of the University of Notre Dame, Indiana, USA, in 2001.[1] The example given by the original paper involves two computers communicating over the Internet under the disguise of a standard communications session. The first computer attempts to solve a large and extremely difficult 3-SAT problem, which it has decomposed into a considerable number of smaller sub-problems. Each sub-problem is encoded as a relation between a packet and a checksum, such that whether the checksum is correct is also the answer to that sub-problem. The packet and checksum are then sent to a second computer. As part of receiving the packet and deciding whether it is valid and well-formed, this computer computes a checksum of the packet and compares it to the provided checksum. If the checksum is invalid, it requests a new packet from the original computer. The original computer now knows the answer to that sub-problem based on the second computer's response, and can transmit a fresh packet embodying a different sub-problem. Eventually, all the sub-problems are answered and the final answer is easily calculated.
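As a rough illustration of the idea, the sketch below shows how a standard checksum check can be made to answer a sub-problem. It is not the encoding used in the original paper: the one-clause-per-probe layout and all names (`make_probe`, `receiver_accepts`, and so on) are invented for this example. The parasite encodes the three literal values of one clause, under one candidate assignment, as 16-bit payload words, and presets the checksum field to the value that is correct only when every literal is 0; the receiver's routine checksum verification then reports, in effect, whether the clause is falsified.

```python
# Toy sketch of checksum-based parasitic computing. This is NOT the exact
# encoding from the original paper; the probe layout and names are invented.

def ones_complement_sum(words):
    """16-bit ones' complement sum, as used by the Internet checksum (RFC 1071)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return total

def make_probe(clause, assignment):
    """Encode the clause's three literal values as 16-bit payload words.
    The checksum field is preset to the value expected when all literals
    evaluate to 0, i.e. when the clause is falsified."""
    payload = [(assignment[var] ^ neg) & 0xFFFF for var, neg in clause]
    checksum = 0xFFFF  # ones' complement of an all-zero payload sum
    return payload, checksum

def receiver_accepts(payload, checksum):
    """What the target's network stack does anyway: verify that the payload
    plus the checksum field sums to 0xFFFF, dropping the packet otherwise."""
    return ones_complement_sum(payload + [checksum]) == 0xFFFF

# Clause (x0 OR NOT x1 OR x2), written as (variable index, negated?) pairs.
clause = [(0, False), (1, True), (2, False)]
for bits in range(8):  # the parasite enumerates candidate assignments
    assignment = [(bits >> i) & 1 for i in range(3)]
    payload, checksum = make_probe(clause, assignment)
    # A rejected probe (checksum mismatch) means at least one literal was
    # true, so the clause is satisfied under this assignment.
    satisfied = not receiver_accepts(payload, checksum)
    print(assignment, "satisfies the clause:", satisfied)
```

Each probe thus leaks a single yes/no answer through the receiver's accept-or-reject behaviour; a practical scheme would have to pack far more of the formula into each packet for the exchange to be worthwhile.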

The example is based on an exploit of the Transmission Control Protocol (TCP), used for Internet connections, so the target computer is unaware that it has performed computation for the benefit of the other computer, or indeed that anything has occurred beyond a normal TCP/IP session.

The proof of concept is obviously extremely inefficient, as the amount of computation needed merely to send the packets in the first place easily exceeds the computation leeched from the other program; the 3-SAT problem would be solved much more quickly if simply analyzed locally. In addition, in practice packets would occasionally have to be retransmitted when real checksum errors and network problems occur. Nevertheless, parasitic computing at the level of checksums demonstrates the concept. The authors suggest that as one moves up the application stack, there might come a point where the parasite obtains a net computational gain; perhaps interesting problems could be broken down into queries of complex cryptographic protocols using public keys. If there were a net gain, one could in theory use a number of control nodes through which many hosts on the Internet would form a distributed computing network of which they are completely unaware.

Students at the University of Applied Sciences in Bern, Switzerland, extended this concept into a programmable virtual machine in 2002.[2]

Related Research Articles

The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address. For example, an error is indicated when a requested service is not available or when a host or router cannot be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications.

The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.

The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.

In computer networking, the User Datagram Protocol (UDP) is one of the core communication protocols of the Internet protocol suite used to send messages to other hosts on an Internet Protocol (IP) network. Within an IP network, UDP does not require prior communication to set up communication channels or data paths.

A datagram is a basic transfer unit associated with a packet-switched network. Datagrams are typically structured in header and payload sections. Datagrams provide a connectionless communication service across a packet-switched network. The delivery, arrival time, and order of arrival of datagrams need not be guaranteed by the network.

Network address translation (NAT) is a method of mapping an IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device. The technique was initially used to bypass the need to assign a new address to every host when a network was moved, or when the upstream Internet service provider was replaced but could not route the network's address space. It has become a popular and essential tool in conserving global address space in the face of IPv4 address exhaustion. One Internet-routable IP address of a NAT gateway can be used for an entire private network.

The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency of the United States Department of Defense.

Bob Kahn is an American electrical engineer who, along with Vint Cerf, first proposed the Transmission Control Protocol (TCP) and the Internet Protocol (IP), the fundamental communication protocols at the heart of the Internet.

In computer networking, the Datagram Congestion Control Protocol (DCCP) is a message-oriented transport layer protocol. DCCP implements reliable connection setup, teardown, Explicit Congestion Notification (ECN), congestion control, and feature negotiation. The IETF published DCCP as RFC 4340, a proposed standard, in March 2006. RFC 4336 provides an introduction.

Albert-László Barabási is a Romanian-born Hungarian-American physicist, best known for his discoveries in network science and network medicine.

A ping of death is a type of attack on a computer system that involves sending a malformed or otherwise malicious ping to a computer. In this attack, a host sends hundreds of oversized or malformed ping requests to another host in an attempt to take it offline or to keep it preoccupied responding with ICMP Echo replies.

The PARC Universal Packet was one of the two earliest internetworking protocol suites; it was created by researchers at Xerox PARC in the mid-1970s. The entire suite provided routing and packet delivery, as well as higher-level functions such as a reliable byte stream, along with numerous applications.

Retransmission, essentially identical with automatic repeat request (ARQ), is the resending of packets which have been either damaged or lost. Retransmission is one of the basic mechanisms used by protocols operating over a packet switched computer network to provide reliable communication.

UDP-Lite is a connectionless protocol that allows a potentially damaged data payload to be delivered to an application rather than being discarded by the receiving station. This is useful as it allows decisions about the integrity of the data to be made in the application layer, where the significance of the bits is understood. UDP-Lite is described in RFC 3828.

Packet injection, in computer networking, is the process of interfering with an established network connection by constructing packets that appear to be part of the normal communication stream. Packet injection allows an unknown third party to disrupt or intercept packets between the consenting communicating parties, which can degrade or block users' ability to use certain network services or protocols. Packet injection is commonly used in man-in-the-middle attacks and denial-of-service attacks.

The Internet checksum, also called the IPv4 header checksum, is a checksum used in version 4 of the Internet Protocol (IPv4) to detect corruption in the header of IPv4 packets. It is carried in the IP packet header, and represents the 16-bit ones' complement of the ones' complement sum of the header's 16-bit words.
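To make the verification step concrete, here is a minimal sketch of how a receiver checks this field; the 20 sample header bytes are a common textbook example assumed for illustration, not data taken from this article:

```python
# Minimal sketch: verify the checksum of a sample 20-byte IPv4 header.
# The header bytes are a standard textbook example (stored checksum 0xb861),
# assumed here purely for illustration.
header = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")

def internet_checksum_ok(header: bytes) -> bool:
    """Sum the header as 16-bit big-endian words (checksum field included),
    fold any carry bits back into the low 16 bits, and accept the header
    if the resulting ones' complement sum is 0xFFFF."""
    total = 0
    for i in range(0, len(header), 2):
        total += int.from_bytes(header[i:i + 2], "big")
    while total >> 16:  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF

print(internet_checksum_ok(header))  # True: the stored checksum matches
```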

A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.

The Protocol Wars were a long-running debate in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet–OSI Standards War in the 1980s and early 1990s, which was ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s when it became the dominant protocol suite through rapid adoption of the Internet.

References

  1. "Parasitic Computing". Archived from the original on 3 October 2001.
  2. "Diploma thesis on Parasitic Computing". Archived from the original on 2013-11-24. Retrieved 2014-11-17.