Datagram


A datagram is a basic transfer unit associated with a packet-switched network. Datagrams are typically structured in header and payload sections. Datagrams provide a connectionless communication service across a packet-switched network. The delivery, arrival time, and order of arrival of datagrams need not be guaranteed by the network.


History

In the early 1970s, the term datagram was created by combining the words data and telegram by the CCITT rapporteur on packet switching, [1] Halvor Bothner-By. [2] [3] While the word was new, the concept already had a long history.

In 1964, Paul Baran described, in a RAND Corporation report, a hypothetical military network designed to survive a nuclear attack. Small standardized message blocks, bearing source and destination addresses, were stored and forwarded in the computer nodes of a highly redundant meshed network. [4] "The network user who has called up a virtual connection to an end station and has transmitted messages ... might also view the system as a black box providing an apparent circuit connection".

In 1967, Donald Davies published a seminal article in which he introduced the packet and packet switching. [5] His proposed core network was similar to the one proposed by Paul Baran, though developed independently. He assumed that "all users of the network will provide themselves with some kind of error control", and his target was a "common-carrier communication network". To support remote access to computer services from user terminals, whose traffic was at that time transmitted character by character, he included, at the network periphery, interface computers that converted character flows into packet flows and vice versa.

In 1970, Lawrence Roberts and Barry D. Wessler published an article about the ARPANET, the first multi-node packet-switching network. [6] An accompanying paper described its switching nodes (the IMPs) and its packet formats. [7] The network core performed datagram switching as in Baran's and Davies' model, but the service offered to hosts by the network was connection oriented. [8] [9] A reliable message transfer service was thus offered to user computers, greatly simplifying the network design. This made the ARPANET what would come to be called a virtual circuit network. [10]

Roberts presented the idea of packet switching to communications professionals and was met with anger and hostility. Before the ARPANET was operating, they argued that the router buffers would quickly run out. After the ARPANET was operating, they argued that packet switching would never be economical without government subsidy. Baran faced the same rejection and thus failed to convince the military to construct a packet-switching network. [11]

In 1973, Louis Pouzin presented his design for CYCLADES, the first large-scale network implementing the pure Davies datagram model. [12] The CYCLADES team was thus the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service [13] while applying the end-to-end principle over a network service that may produce non-negligible datagram losses and reordering. [14] Although Pouzin's concern "in a first stage is not to make breakthrough [sic] in packet switching technology, but to build a reliable communications tool for Cyclades", [12] two members of his team, Hubert Zimmermann and Gérard Le Lann, made significant contributions to the design of the Internet's TCP, which Vint Cerf, its main designer, acknowledged. [15]

In 1981, the Defense Advanced Research Projects Agency (DARPA) issued the first specification of the Internet Protocol (IP). It introduced a major evolution of the datagram concept: fragmentation. [16] With fragmentation, some parts of the global network may use large packet sizes (typically local area networks, to minimize processing overhead), while others may impose smaller packet sizes (typically wide area networks, to minimize response time). Network nodes may fragment a datagram into several smaller packets, which the destination reassembles.
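
The following sketch illustrates the idea (a simplification of IPv4-style fragmentation; real IP fragmentation also copies header fields, recomputes checksums, and handles options): a payload too large for a link's maximum packet size is split into fragments, each carrying an offset and a more-fragments flag that the destination uses to reassemble the original datagram.

```python
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Split a datagram payload into fragments that fit the given MTU.

    Simplified sketch of IPv4-style fragmentation: each fragment records
    its offset in 8-byte units and a more-fragments (MF) flag.
    """
    # Payload bytes available per fragment; offsets must be multiples of 8.
    max_data = (mtu - header_len) // 8 * 8
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more_fragments = (offset + len(chunk)) < len(payload)
        fragments.append({
            "offset_units": offset // 8,   # fragment offset field, in 8-byte units
            "mf": more_fragments,          # more-fragments flag
            "data": chunk,
        })
        offset += len(chunk)
    return fragments

# A 4000-byte datagram crossing a link with a 1500-byte MTU yields three fragments.
print([(f["offset_units"], f["mf"], len(f["data"])) for f in fragment(b"x" * 4000, 1500)])
```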

In 1999, the Internet Engineering Task Force (IETF) sanctioned the use of the already widely deployed network address translation (NAT), whereby each public address can be shared by several private devices. [17] With it, the impending exhaustion of Internet addresses was delayed, leaving enough time to introduce IPv6, the new generation of the Internet Protocol supporting longer addresses. The initial principle of full end-to-end network transparency to datagrams was thereby relaxed: NAT nodes have to maintain per-connection state, which makes them in part connection oriented.
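
The per-connection state kept by a NAT node can be pictured as a translation table. The sketch below is purely illustrative (the addresses and port range are invented, and real NAT implementations also track protocols, timeouts, and address pools); it shows the core mapping that lets several private devices share one public address.

```python
class SimpleNat:
    """Illustrative NAT table: maps (private IP, private port) pairs to
    distinct public ports on a single shared public address."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000             # arbitrary start of the public port range
        self.out = {}                      # (private_ip, private_port) -> public_port
        self.back = {}                     # public_port -> (private_ip, private_port)

    def translate_outgoing(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.out:            # first packet of a connection creates state
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out[key]

    def translate_incoming(self, public_port: int):
        return self.back[public_port]      # restore the original private endpoint

nat = SimpleNat("203.0.113.7")
print(nat.translate_outgoing("192.168.0.10", 5000))   # ('203.0.113.7', 40000)
print(nat.translate_outgoing("192.168.0.11", 5000))   # ('203.0.113.7', 40001)
print(nat.translate_incoming(40001))                  # ('192.168.0.11', 5000)
```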

In 2015, the IETF upgraded its informational 1998 recommendation (RFC 2309) that datagram switching nodes perform active queue management (AQM) into a stronger and more detailed best current practice, published as RFC 7567. While the initial datagram queueing model was simple to implement and needed no more tuning than setting queue lengths, support for more sophisticated and parametrized mechanisms (RED, ECN, etc.) was found necessary "to improve and preserve Internet performance". Further research on the subject was also called for, with a list of identified items. [18]
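
As an indication of the parametrization involved, the sketch below computes the packet drop probability used by Random Early Detection (RED) in a simplified form (threshold and probability values are arbitrary examples; the published algorithm also smooths the queue length with a moving average and adjusts the probability by the count of packets since the last drop).

```python
def red_drop_probability(avg_queue: float,
                         min_th: float = 5.0,
                         max_th: float = 15.0,
                         max_p: float = 0.1) -> float:
    """Simplified RED: probability of dropping (or marking) an arriving
    packet as a function of the averaged queue length.

    Below min_th nothing is dropped; between the thresholds the probability
    grows linearly up to max_p; above max_th every packet is dropped.
    The threshold and max_p values here are illustrative only.
    """
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

for q in (2, 7, 12, 20):
    print(q, red_drop_probability(q))
```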

Definition

The term datagram is defined as follows: [19]

"A self-contained, independent entity of data carrying sufficient information to be routed from the source to the destination computer without reliance on earlier exchanges between this source and destination computer and the transporting network."

RFC 1594

A datagram needs to be self-contained without reliance on earlier exchanges because there is no connection of fixed duration between the two communicating points as there is, for example, in most voice telephone conversations. [20]

Datagram service is often compared to a mail delivery service: the user only provides the destination address but receives no guarantee of delivery and no confirmation upon successful receipt. Datagram service is therefore considered unreliable. A datagram service routes each datagram without first creating a predetermined path, and is therefore considered connectionless. No consideration is given to the order in which datagrams are sent or received. In fact, datagrams belonging to the same group can travel along different paths and arrive at the destination in a different order. [21]
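
This connectionless, best-effort behaviour is what a datagram socket exposes to applications. A minimal sketch, assuming Python's standard socket module (the destination 192.0.2.1 and port 9 are placeholders, a documentation address and the discard service): each datagram is handed to the network with a single call, no connection is set up first, and the sender receives no confirmation of delivery or ordering.

```python
import socket

# A datagram socket needs no connection setup: sendto() hands each datagram
# to the network individually, with only a destination address attached.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    payload = f"datagram {i}".encode()
    sock.sendto(payload, ("192.0.2.1", 9))   # placeholder destination
    # No acknowledgement is returned: delivery, timing and ordering are not guaranteed.
sock.close()
```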

Structure

Each datagram has two components, a header and a data payload. The header contains the information sufficient for routing from the originating equipment to the destination without relying on prior exchanges between the equipment and the network. Headers may include source and destination addresses as well as type and length fields. The payload is the data to be transported. The process of nesting a data payload in a tagged header is called encapsulation.
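
A minimal sketch of this encapsulation (the four-field header layout is invented purely for illustration and does not correspond to any real protocol): a small tagged header is packed in front of the payload and unpacked again on receipt.

```python
import struct

# Hypothetical 12-byte header: source address, destination address,
# payload type and payload length, packed in network byte order.
HEADER = struct.Struct("!IIHH")

def encapsulate(src: int, dst: int, ptype: int, payload: bytes) -> bytes:
    """Prefix the payload with a tagged header (encapsulation)."""
    return HEADER.pack(src, dst, ptype, len(payload)) + payload

def decapsulate(datagram: bytes):
    """Split a datagram back into its header fields and payload."""
    src, dst, ptype, length = HEADER.unpack_from(datagram)
    return src, dst, ptype, datagram[HEADER.size:HEADER.size + length]

dgram = encapsulate(0x0A000001, 0x0A000002, 1, b"hello")
print(decapsulate(dgram))   # (167772161, 167772162, 1, b'hello')
```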

Examples

Datagram nomenclature

OSI layer   Name
Layer 4     TCP segment
Layer 3     Network packet
Layer 2     Ethernet frame (IEEE 802.3), Wireless LAN frame (IEEE 802.11)
Layer 1     Chip (CDMA)

Internet Protocol

The Internet Protocol (IP) defines standards for several types of datagrams. The internet layer is a datagram service provided by IP. For example, UDP is carried over the datagram service of the internet layer. IP is an entirely connectionless, best-effort, unreliable message delivery service. TCP is a higher-level protocol running on top of IP that provides a reliable, connection-oriented service.
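
The difference is visible directly in the socket API. The sketch below uses placeholder endpoints (192.0.2.1 with the discard port for UDP, example.org port 80 for TCP): a TCP sender must establish a connection before transferring any data, while a UDP sender simply addresses each datagram.

```python
import socket

# UDP: connectionless datagram service layered directly on IP.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"one datagram", ("192.0.2.1", 9))    # placeholder address; no handshake, no guarantee

# TCP: connection-oriented, reliable byte stream built on top of the same IP datagrams.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.org", 80))                 # three-way handshake before any data
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")
print(tcp.recv(128))                             # delivery is acknowledged and ordered
tcp.close()
udp.close()
```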


Related Research Articles

History of the Internet

The history of the Internet has its origin in the efforts of scientists and engineers to build and interconnect computer networks. The Internet Protocol Suite, the set of rules used to communicate between networks and devices on the Internet, arose from research and development in the United States and involved international collaboration, particularly with researchers in the United Kingdom and France.

Internetworking is the practice of interconnecting multiple computer networks, such that any pair of hosts in the connected networks can exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an internetwork, or simply an internet.

The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address. For example, an error is indicated when a requested service is not available or when a host or router cannot be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications.

IEEE 802.2 is the original name of the ISO/IEC 8802-2 standard which defines logical link control (LLC) as the upper portion of the data link layer of the OSI Model. The original standard developed by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the American National Standards Institute (ANSI) was adopted by the International Organization for Standardization (ISO) in 1998, but it remains an integral part of the family of IEEE 802 standards for local and metropolitan networks.

Internet Protocol version 4: Fourth version of the Internet Protocol

Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor.

The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet.

The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.

In computer networking, the maximum transmission unit (MTU) is the size of the largest protocol data unit (PDU) that can be communicated in a single network layer transaction. The MTU relates to, but is not identical to, the maximum frame size that can be transported on the data link layer, e.g., an Ethernet frame.

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.

In computer networking, the User Datagram Protocol (UDP) is one of the core communication protocols of the Internet protocol suite used to send messages to other hosts on an Internet Protocol (IP) network. Within an IP network, UDP does not require prior communication to set up communication channels or data paths.

In telecommunications, packet switching is a method of grouping data into packets that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide.

In computing, Internet Protocol Security (IPsec) is a secure network protocol suite that authenticates and encrypts packets of data to provide secure encrypted communication between two computers over an Internet Protocol network. It is used in virtual private networks (VPNs).

Transport layer: Layer in the OSI and TCP/IP models providing host-to-host communication services for applications

In computer networking, the transport layer is a conceptual division of methods in the layered architecture of protocols in the network stack in the Internet protocol suite and the OSI model. The protocols of this layer provide end-to-end communication services for applications. It provides services such as connection-oriented communication, reliability, flow control, and multiplexing.

The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes, such as gateways and routers, that exist to establish the network, may implement these to improve efficiency but cannot guarantee end-to-end correctness.

The CYCLADES computer network was a French research network created in the early 1970s. It was one of the pioneering networks experimenting with the concept of packet switching and, unlike the ARPANET, was explicitly designed to facilitate internetworking.

In computer networking, a port or port number is a number assigned to uniquely identify a connection endpoint and to direct data to a specific service. At the software level, within an operating system, a port is a logical construct that identifies a specific process or a type of network service. A port at the software level is identified for each transport protocol and address combination by the port number assigned to it. The most common transport protocols that use port numbers are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP); those port numbers are 16-bit unsigned numbers.

The internet layer is a group of internetworking methods, protocols, and specifications in the Internet protocol suite that are used to transport network packets from the originating host across network boundaries, if necessary, to the destination host specified by an IP address. The internet layer derives its name from its function of facilitating internetworking, which is the concept of connecting multiple networks with each other through gateways.

An IPv6 packet is the smallest message entity exchanged using Internet Protocol version 6 (IPv6). Packets consist of control information for addressing and routing and a payload of user data. The control information in IPv6 packets is subdivided into a mandatory fixed header and optional extension headers. The payload of an IPv6 packet is typically a datagram or segment of the higher-level transport layer protocol, but may be data for an internet layer or link layer instead.

A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.

The Protocol Wars were a long-running debate in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet–OSI Standards War in the 1980s and early 1990s, which was ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s when it became the dominant protocol through rapid adoption of the Internet.

References

  1. "The CCITT studies packet switching as part of public data network development".
  2. Rémi Després (November 2010). "X.25 virtual circuits — Transpac in France — Pre-Internet data networking". IEEE Communications Magazine. 48 (10). doi:10.1109/MCOM.2010.5621965.
  3. "Comment j'ai inventé le Datagramme" (in French). Archived from the original on 2019-02-28.
  4. "On distributed communications networks" (PDF). Archived from the original (PDF) on 2016-10-26.
  5. "A digital communication network for computers giving rapid response at remote terminals" (PDF). Archived (PDF) from the original on 2022-10-09.
  6. Lawrence Roberts; Barry D. Wessler (1970). "Computer network development to achieve resource sharing". Proceedings of the May 5-7, 1970, Spring Joint Computer Conference (AFIPS '70). p. 543. doi:10.1145/1476936.1477020. S2CID 9343511.
  7. Frank E. Heart; R. E. Kahn; Severo M. Ornstein; William R. Crowther; David C. Walden (1970). "The interface message processor for the ARPA computer network". Proceedings of the May 5-7, 1970, Spring Joint Computer Conference (AFIPS '70). p. 551. doi:10.1145/1476936.1477021. S2CID 9647377.
  8. "INTERFACE MESSAGE PROCESSOR: Specifications for the Interconnection of a Host" (PDF). January 2014. "Three parameters uniquely specify a connection between source and destination Hosts." "The destination IMP returns a positive acknowledgment for receipt of the message to the source IMP, which in turn passes this acknowledgment to the source Host." "Each link is unidirectional and is controlled by the network so that no more than one message at a time may be sent over it."
  9. Pelkey, James. "8.4 Transmission Control Protocol (TCP) 1973-1976". Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968–1988. Arpanet had its deficiencies, however, for it was neither a true datagram network nor did it provide end-to-end error correction.
  10. "An Interview with LOUIS POUZIN Conducted by Andrew L. Russell" (PDF). April 2012. "Arpanet was virtual circuit." "essentially a virtual circuit service using internal datagram"
  11. Roberts, L. (1988-01-01), "The arpanet and computer networks", A history of personal workstations, New York, NY, USA: Association for Computing Machinery, pp. 141–172, doi:10.1145/61975.66916, ISBN 978-0-201-11259-7, retrieved 2023-11-30.
  12. Pouzin, Louis. "Presentation and major design aspects of the Cyclades network". Archived from the original on September 27, 2007.
  13. Extending TCP for Transactions -- Concepts. doi:10.17487/RFC1379. RFC 1379.
  14. Bennett, Richard (September 2009). "Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate" (PDF). Information Technology and Innovation Foundation. pp. 7, 11. Retrieved 11 September 2017.
  15. V. Cerf; Y. Dalal; C. Sunshine (December 1974). Specification of Internet Transmission Control Program. Network Working Group. doi:10.17487/RFC0675. RFC 675. Obsolete. Obsoleted by RFC 7805. NIC 2. INWG 72.
  16. J. Postel, ed. (September 1981). Internet Protocol - DARPA Internet Program Protocol Specification. IETF. doi:10.17487/RFC0791. STD 5. RFC 791. IEN 128, 123, 111, 80, 54, 44, 41, 28, 26. Internet Standard 5. Obsoletes RFC 760. Updated by RFC 1349, 2474 and 6864.
  17. P. Srisuresh; M. Holdrege (August 1999). IP Network Address Translator (NAT) Terminology and Considerations. Network Working Group. doi:10.17487/RFC2663. RFC 2663. Informational.
  18. F. Baker; G. Fairhurst, eds. (July 2015). IETF Recommendations Regarding Active Queue Management. Internet Engineering Task Force. doi:10.17487/RFC7567. ISSN 2070-1721. BCP 197. RFC 7567. Best Current Practice. Obsoletes RFC 2309.
  19. A. Marine; J. Reynolds; G. Malkin (March 1994). FYI on Questions and Answers - Answers to Commonly Asked "New Internet User" Questions. Network Working Group. doi:10.17487/RFC1594. FYI 4. RFC 1594. Obsolete. Obsoleted by RFC 2664. Obsoletes RFC 1325.
  20. Tanenbaum, Andrew S.; Wetherall, David J. (2011). Computer networks, fifth edition. Pearson. p. 59. ISBN   978-0-13-255317-9.
  21. Packet Reordering Metrics. November 2006. doi:10.17487/RFC4737. RFC 4737.