The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes, such as gateways and routers, which exist to establish the network, may implement these features to improve efficiency but cannot guarantee end-to-end correctness.
The essence of what would later be called the end-to-end principle was contained in the work of Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s. [1] The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark. [2] [lower-alpha 1] The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation, and noteworthy formulations of the principle can also be found before the seminal 1981 Saltzer, Reed, and Clark paper. [3]
A basic premise of the principle is that the payoffs from adding to the communication subsystem features required by the end application quickly diminish, because the end hosts have to implement those functions themselves anyway to guarantee correctness. [lower-alpha 2] Implementing a specific function incurs some resource penalties regardless of whether the function is used, and implementing it in the network imposes those penalties on all clients, whether they need the function or not.
The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgments and retransmissions (referred to as PAR, positive acknowledgment with retransmission, or ARQ, automatic repeat request). [lower-alpha 3] Put differently, it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes, [lower-alpha 4] especially when the latter are beyond the control of, and not accountable to, the former. [lower-alpha 5] Positive end-to-end acknowledgments with infinite retries can obtain arbitrarily high reliability from any network with a nonzero probability of successfully transmitting data from one end to another. [lower-alpha 6]
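The retry argument can be made concrete with a small sketch. The following is a minimal illustration, not taken from the cited papers, of stop-and-wait retransmission at an end host; the unreliable_send function and its loss_rate parameter are hypothetical stand-ins for a lossy network path:

```python
import random

def unreliable_send(payload, loss_rate=0.3):
    """Hypothetical lossy path: returns True if the packet and its ACK both got through."""
    return random.random() > loss_rate

def send_reliably(payload, max_tries=None):
    """Stop-and-wait retransmission at the end host: resend until acknowledged.
    With a nonzero per-attempt success probability, the chance of eventual
    delivery approaches 1 as the number of retries grows."""
    attempt = 0
    while max_tries is None or attempt < max_tries:
        attempt += 1
        if unreliable_send(payload):  # packet delivered and acknowledgment received
            return attempt
        # timeout with no acknowledgment: retransmit on the next pass
    raise TimeoutError(f"gave up after {attempt} attempts")

print("delivered after", send_reliably(b"example payload"), "attempt(s)")
```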
The end-to-end principle does not extend to functions beyond end-to-end error control and correction, and security. For example, no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. In a 2001 paper, Blumenthal and Clark note: "[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the endpoints; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place." [7] : 80
The end-to-end principle is closely related, and sometimes seen as a direct precursor, to the principle of net neutrality. [8]
In the 1960s, Paul Baran and Donald Davies, in their pre-ARPANET elaborations of networking, made comments about reliability. Baran's 1964 paper states: "Reliability and raw error rates are secondary. The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist." [9] : 5 Going further, Davies captured the essence of the end-to-end principle; in his 1967 paper, he stated that users of the network will provide themselves with error control: "It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated." [10] : 2.3
The ARPANET was the first large-scale general-purpose packet switching network – implementing several of the concepts previously articulated by Baran and Davies. [11] [12]
Davies built a local-area network with a single packet switch and worked on the simulation of wide-area datagram networks. [13] [14] [15] Building on these ideas, and seeking to improve on the implementation in the ARPANET, [15] Louis Pouzin's CYCLADES network was the first to implement datagrams in a wide-area network and make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. [1] Concepts implemented in this network feature in the TCP/IP architecture. [16]
The ARPANET demonstrated several important aspects of the end-to-end principle.
Internet Protocol (IP) is a connectionless datagram service with no delivery guarantees. On the Internet, IP is used for nearly all communications. End-to-end acknowledgment and retransmission is the responsibility of the connection-oriented Transmission Control Protocol (TCP) which sits on top of IP. The functional split between IP and TCP exemplifies the proper application of the end-to-end principle to transport protocol design.
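As an informal illustration of this split, not drawn from the cited sources, the sketch below contrasts a best-effort datagram socket with a TCP stream socket; the destination addresses are placeholders, and the TCP part assumes network access to example.org:

```python
import socket

# Connectionless, best-effort delivery (a datagram service in the spirit of IP/UDP):
# each sendto() emits one datagram, and nothing in the network retransmits it.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best-effort datagram", ("127.0.0.1", 9999))  # placeholder destination
udp.close()

# Connection-oriented, reliable delivery (TCP): the end hosts' TCP stacks
# acknowledge and retransmit segments until the byte stream arrives intact and in order.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.org", 80))
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(tcp.recv(200))
tcp.close()
```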
An example of the end-to-end principle is that of an arbitrarily reliable file transfer between two endpoints in a distributed network of a varying, nontrivial size: [3] The only way two endpoints can obtain a completely reliable transfer is by transmitting and acknowledging a checksum for the entire data stream; in such a setting, lesser checksum and acknowledgment (ACK/NACK) protocols are justified only for the purpose of optimizing performance – they are useful to the vast majority of clients, but are not enough to fulfill the reliability requirement of this particular application. A thorough checksum is hence best done at the endpoints, and the network maintains a relatively low level of complexity and reasonable performance for all clients. [3]
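A minimal sketch of this end-to-end check, assuming SHA-256 as the whole-file checksum (the cited argument does not prescribe a particular algorithm, and the function names here are illustrative), might look like:

```python
import hashlib

def checksum_of_file(path, chunk_size=1 << 20):
    """Compute a digest over the complete file contents; only the endpoints see all of it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def transfer_succeeded(received_path, checksum_from_sender):
    """The receiver recomputes the checksum end to end; a mismatch triggers a retransfer."""
    return checksum_of_file(received_path) == checksum_from_sender
```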
The most important limitation of the end-to-end principle is that its basic premise, placing functions in the application endpoints rather than in the intermediary nodes, is not trivial to implement.
An example of the limitations of the end-to-end principle exists in mobile devices, for instance with Mobile IPv6. [27] Pushing service-specific complexity to the endpoints can cause issues with mobile devices if the device has unreliable access to network channels. [28]
Further problems can be seen with a decrease in network transparency from the addition of network address translation (NAT), which IPv4 relies on to combat address exhaustion. [29] With the introduction of IPv6, users once again have unique identifiers, allowing for true end-to-end connectivity. Unique identifiers may be based on a physical address, or can be generated randomly by the host. [30]
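As a hedged illustration of those two options (the helper names below are invented, and the 2001:db8::/32 prefix is the reserved documentation prefix), an IPv6 address can be formed from a 64-bit routing prefix plus an interface identifier derived from the hardware address or chosen at random by the host:

```python
import ipaddress
import secrets

def eui64_interface_id(mac: str) -> int:
    """Modified EUI-64: expand a 48-bit MAC to 64 bits and flip the universal/local bit."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    return int.from_bytes(eui64, "big")

def random_interface_id() -> int:
    """Privacy-style identifier: 64 bits chosen randomly by the host."""
    return secrets.randbits(64)

prefix = ipaddress.IPv6Network("2001:db8:1234:5678::/64")  # documentation prefix
for iid in (eui64_interface_id("00:1a:2b:3c:4d:5e"), random_interface_id()):
    print(ipaddress.IPv6Address(int(prefix.network_address) | iid))
```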
The end-to-end principle advocates pushing coordination-related functionality ever higher, ultimately into the application layer. The premise is that application-level information enables flexible coordination between the application endpoints and yields better performance because the coordination is exactly what the application needs. This leads to the idea of modeling each application via its own application-specific protocol that supports the desired coordination between its endpoints while assuming only a simple lower-layer communication service. Broadly, this idea is known as application semantics, that is, the meaning carried by application messages.
Multiagent systems offer approaches based on application semantics that make it possible to implement distributed applications conveniently without requiring message ordering and delivery guarantees from the underlying communication services. A basic idea in these approaches is to model the coordination between application endpoints via an information protocol [31] and then implement the endpoints (agents) based on the protocol. Information protocols can be enacted over lossy, unordered communication services. A middleware based on information protocols and the associated programming model abstracts away message reception from the underlying network and enables endpoint programmers to focus on the business logic for sending messages.
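A minimal sketch of this style, assuming a toy order-processing protocol (the message fields, step names, and helper functions below are invented for illustration and are not the cited middleware), keys endpoint state on the information in the messages themselves, so the endpoint behaves correctly even when the transport delivers messages late, duplicated, or out of order:

```python
# State of each order, keyed by an identifier carried in every message.
orders = {}

def on_message(msg):
    """msg is a dict carrying complete information, e.g. {'order_id': 7, 'step': 'pay'}."""
    state = orders.setdefault(msg["order_id"], {})
    if msg["step"] in state:
        return                       # duplicate delivery: information already integrated
    state[msg["step"]] = msg         # integrate regardless of arrival order
    if {"quote", "accept", "pay"}.issubset(state):
        ship(msg["order_id"])        # act once all required information is known

def ship(order_id):
    print("shipping order", order_id)

# Deliveries may arrive in any order and may be duplicated:
for m in [{"order_id": 7, "step": "pay"},
          {"order_id": 7, "step": "quote"},
          {"order_id": 7, "step": "pay"},
          {"order_id": 7, "step": "accept"}]:
    on_message(m)
```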
Internetworking is the practice of interconnecting multiple computer networks, such that any pair of hosts in the connected networks can exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.
A datagram is a basic transfer unit associated with a packet-switched network. Datagrams are typically structured in header and payload sections. Datagrams provide a connectionless communication service across a packet-switched network. The delivery, arrival time, and order of arrival of datagrams need not be guaranteed by the network.
In telecommunications, packet switching is a method of grouping data into short messages in fixed format, i.e. packets, that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.
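To make the header/payload split concrete, here is a toy packet format (not any real protocol; the field layout is invented for illustration) packed and parsed with Python's struct module:

```python
import struct

# Illustrative header: 2-byte source port, 2-byte destination port, 2-byte payload length.
HEADER = struct.Struct("!HHH")

def make_packet(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Prepend the fixed-size header to the payload."""
    return HEADER.pack(src_port, dst_port, len(payload)) + payload

def parse_packet(packet: bytes):
    """Split a packet back into its header fields and payload."""
    src, dst, length = HEADER.unpack_from(packet)
    return src, dst, packet[HEADER.size:HEADER.size + length]

pkt = make_packet(5000, 80, b"hello")
print(parse_packet(pkt))
```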
The Advanced Research Projects Agency Network (ARPANET) was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency of the United States Department of Defense.
The Network Control Protocol (NCP) was a communication protocol for a computer network in the 1970s and early 1980s. It provided the transport layer of the protocol stack running on host computers of the ARPANET, the predecessor to the modern Internet.
Donald Watts Davies was a Welsh computer scientist and Internet pioneer who was employed at the UK National Physical Laboratory (NPL).
The CYCLADES computer network was a French research network created in the early 1970s. It was one of the pioneering networks experimenting with the concept of packet switching and, unlike the ARPANET, was explicitly designed to facilitate internetworking.
The Interface Message Processor (IMP) was the packet switching node used to interconnect participant networks to the ARPANET from the late 1960s to 1989. It was the first generation of gateways, which are known today as routers. An IMP was a ruggedized Honeywell DDP-516 minicomputer with special-purpose interfaces and software. In later years the IMPs were made from the non-ruggedized Honeywell 316, which could handle two-thirds of the communication traffic at approximately one-half the cost. An IMP required a connection to a host computer via a special bit-serial interface, defined in BBN Report 1822. The IMP software and the ARPA network communications protocol running on the IMPs were discussed in RFC 1, the first of a series of standardization documents published by what later became the Internet Engineering Task Force (IETF).
In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, which is the term used by the ITU and ATM Forum.
Larry Roberts was an American computer scientist and Internet pioneer.
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
IEEE Internet Award is a Technical Field Award established by the IEEE in June 1999. The award is sponsored by Nokia Corporation. It may be presented annually to an individual or up to three recipients, for exceptional contributions to the advancement of Internet technology for network architecture, mobility and/or end-use applications. Awardees receive a bronze medal, certificate, and honorarium.
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.
The NPL network, or NPL Data Communications Network, was a local area computer network operated by a team from the National Physical Laboratory (NPL) in London that pioneered the concept of packet switching.
SATNET, also known as the Atlantic Packet Satellite Network, was an early satellite network that formed an initial segment of the Internet. It was implemented by BBN Technologies under the direction of ARPA.
The International Network Working Group (INWG) was a group of prominent computer science researchers in the 1970s who studied and developed standards and protocols for interconnection of computer networks. Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, its goal was to develop an international standard protocol for internetworking. INWG became a subcommittee of the International Federation for Information Processing (IFIP) the following year. Concepts developed by members of the group contributed to the Protocol for Packet Network Intercommunication proposed by Vint Cerf and Bob Kahn in 1974 and the Transmission Control Protocol and Internet Protocol (TCP/IP) that emerged later.
The Protocol Wars were a long-running debate in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet–OSI Standards War in the 1980s and early 1990s, which was ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s when it became the dominant protocol suite through rapid adoption of the Internet.
David Corydon Walden was an American computer scientist and Internet pioneer who contributed to the engineering development of the ARPANET, a precursor of the modern Internet. He specifically contributed to the Interface Message Processor, which was the packet switching node for the ARPANET. Walden was a contributor to IEEE Computer Society's Annals of the History of Computing and was a member of the TeX Users Group.
This idea of net neutrality...[Lawrence Lessig] used to call the principle e2e, for end to end
Historians credit seminal insights to Welsh scientist Donald W. Davies and American engineer Paul Baran
Aside from the technical problems of interconnecting computers with communications circuits, the notion of computer networks had been considered in a number of places from a theoretical point of view. Of particular note was work done by Paul Baran and others at the Rand Corporation in a study "On Distributed Communications" in the early 1960's. Also of note was work done by Donald Davies and others at the National Physical Laboratory in England in the mid-1960's. ... Another early major network development which affected development of the ARPANET was undertaken at the National Physical Laboratory in Middlesex, England, under the leadership of D. W. Davies.
Simulation work on packet networks was also undertaken by the NPL group.
Pouzin returned to his task of designing a simpler packet switching network than Arpanet. ... [Davies] had done some simulation of [wide-area] datagram networks, although he had not built any, and it looked technically viable.
In the early 1970s Mr Pouzin created an innovative data network that linked locations in France, Italy and Britain. Its simplicity and efficiency pointed the way to a network that could connect not just dozens of machines, but millions of them. It captured the imagination of Dr Cerf and Dr Kahn, who included aspects of its design in the protocols that now power the internet.