Robustness principle

In computing, the robustness principle is a design guideline for software that states: "be conservative in what you do, be liberal in what you accept from others". It is often reworded as: "be conservative in what you send, be liberal in what you accept". The principle is also known as Postel's law, after Jon Postel, who used the wording in an early specification of TCP. [1]

In other words, programs that send messages to other machines (or to other programs on the same machine) should conform completely to the specifications, but programs that receive messages should accept non-conformant input as long as the meaning is clear.

Among programmers, the principle is also known in the form: to produce compatible functions, be contravariant in the input type and covariant in the output type.
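The correspondence with variance can be made concrete through function subtyping. The sketch below uses Python type hints; the class names Animal and Dog are illustrative assumptions, not part of any cited source.

    from typing import Callable

    class Animal: ...
    class Dog(Animal): ...

    def run(handler: Callable[[Dog], Animal]) -> Animal:
        return handler(Dog())

    # Liberal in what it accepts (contravariant input): a handler that
    # can process any Animal certainly handles a Dog.
    def liberal_in(x: Animal) -> Animal:
        return x

    # Conservative in what it sends (covariant output): a handler that
    # always returns a Dog still satisfies a caller expecting an Animal.
    def conservative_out(x: Dog) -> Dog:
        return x

    run(liberal_in)        # type-checks: the input type may widen
    run(conservative_out)  # type-checks: the output type may narrow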

Interpretation

RFC 1122 (1989) expanded on Postel's principle by recommending that programmers "assume that the network is filled with malevolent entities that will send in packets designed to have the worst possible effect". [2] Protocols should allow new codes to be added to existing fields in future versions by accepting messages with unknown codes (possibly logging them). Programmers should avoid sending messages with "legal but obscure protocol features" that might expose deficiencies in receivers, and design their code "not just to survive other misbehaving hosts, but also to cooperate to limit the amount of disruption such hosts can cause to the shared communication facility". [3]
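A common embodiment of this advice is a parser that logs and skips options it does not recognize. The following is a minimal sketch in Python, assuming a hypothetical type-length-value framing and code registry; it is an illustration, not a format defined by RFC 1122.

    import logging

    KNOWN_CODES = {1: "echo", 2: "timestamp"}  # hypothetical code registry

    def parse_options(data: bytes) -> list[tuple[str, bytes]]:
        """Parse hypothetical type-length-value options, tolerating unknown codes."""
        options, i = [], 0
        while i + 2 <= len(data):
            code, length = data[i], data[i + 1]
            value = data[i + 2 : i + 2 + length]
            if code in KNOWN_CODES:
                options.append((KNOWN_CODES[code], value))
            else:
                # Be liberal in what you accept: log the unknown code and
                # move on, so codes added in future protocol versions do
                # not break older receivers.
                logging.info("ignoring unknown option code %d", code)
            i += 2 + length
        return options

A receiver built this way keeps working when a newer sender introduces option codes it has never seen, which is the forward-compatibility property RFC 1122 recommends.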

Criticism

In 2001, Marshall Rose characterized several deployment problems when applying Postel's principle in the design of a new application protocol. [4] For example, a defective implementation that sends non-conforming messages might be used only with implementations that tolerate those deviations from the specification until, possibly several years later, it is connected with a less tolerant application that rejects its messages. In such a situation, identifying the problem is often difficult, and deploying a solution can be costly. Rose therefore recommended "explicit consistency checks in a protocol ... even if they impose implementation overhead".
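Rose's recommendation points in the opposite direction: validate eagerly and reject non-conformant input, so a defective sender is discovered on first contact rather than years later. A strict counterpart to the tolerant parser sketched above might look like this (again a sketch, reusing the same assumed framing):

    KNOWN_CODES = {1: "echo", 2: "timestamp"}  # same hypothetical registry as above

    def parse_options_strict(data: bytes) -> list[tuple[str, bytes]]:
        """Strict variant: fail fast on malformed or unknown input."""
        options, i = [], 0
        while i < len(data):
            if i + 2 > len(data):
                raise ValueError(f"truncated option header at offset {i}")
            code, length = data[i], data[i + 1]
            if code not in KNOWN_CODES:
                raise ValueError(f"unknown option code {code}")
            if i + 2 + length > len(data):
                raise ValueError(f"option {code} overruns the message")
            options.append((KNOWN_CODES[code], data[i + 2 : i + 2 + length]))
            i += 2 + length
        return options

The consistency checks cost a few extra comparisons per message, which is the "implementation overhead" Rose argued is worth paying.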

From 2015 to 2018, in a series of Internet Drafts, Martin Thomson argued that Postel's robustness principle actually leads to a lack of robustness, including in security: [5]

A flaw can become entrenched as a de facto standard. Any implementation of the protocol is required to replicate the aberrant behavior, or it is not interoperable. This is both a consequence of applying the robustness principle, and a product of a natural reluctance to avoid fatal error conditions. Ensuring interoperability in this environment is often referred to as aiming to be "bug for bug compatible".

In 2018, a paper on privacy-enhancing technologies by Florentin Rochet and Olivier Pereira showed how to exploit Postel's robustness principle inside the Tor routing protocol to compromise the anonymity of onion services and Tor clients. [6]

Related Research Articles

The Dynamic Host Configuration Protocol (DHCP) is a network management protocol used on Internet Protocol (IP) networks for automatically assigning IP addresses and other communication parameters to devices connected to the network using a client–server architecture.

HTTP Application protocol for distributed, collaborative, hypermedia information systems

The Hypertext Transfer Protocol (HTTP) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.

In computer network engineering, an Internet Standard is a normative specification of a technology or methodology applicable to the Internet. Internet Standards are created and published by the Internet Engineering Task Force (IETF). They allow interoperation of hardware and software from different sources, which allows internets to function. As the Internet became global, Internet Standards became the lingua franca of worldwide communications.

The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address. For example, an error is indicated when a requested service is not available or when a host or router cannot be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications.

Internet Protocol version 4 Fourth version of the Internet Protocol

Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor.

IPv6 Version 6 of the Internet Protocol

Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion, and was intended to replace IPv4. In December 1998, IPv6 became a Draft Standard for the IETF, which subsequently ratified it as an Internet Standard on 14 July 2017.

The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.

ping (networking utility) Network utility used to test the reachability of a host

ping is a computer network administration software utility used to test the reachability of a host on an Internet Protocol (IP) network. It is available for virtually all operating systems that have networking capability, including most embedded network administration software.

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.

A datagram is a basic transfer unit associated with a packet-switched network. Datagrams are typically structured in header and payload sections. Datagrams provide a connectionless communication service across a packet-switched network. The delivery, arrival time, and order of arrival of datagrams need not be guaranteed by the network.

Network Time Protocol Standard protocol for synchronizing time across devices

The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware.

The Network Control Protocol (NCP) was a communication protocol for a computer network in the 1970s and early 1980s. It provided the transport layer of the protocol stack running on host computers of the ARPANET, the predecessor to the modern Internet.

The Internet Printing Protocol (IPP) is a specialized communication protocol for communication between client devices and printers. It allows clients to submit one or more print jobs to the network-attached printer or print server, and perform tasks such as querying the status of a printer, obtaining the status of print jobs, or cancelling individual print jobs.

This article lists communication protocols that are designed for file transfer over a telecommunications network.

The internet layer is a group of internetworking methods, protocols, and specifications in the Internet protocol suite that are used to transport network packets from the originating host across network boundaries and, if necessary, to the destination host specified by an IP address. The internet layer derives its name from its function of facilitating internetworking, which is the concept of connecting multiple networks with each other through gateways.

SMTP Authentication, often abbreviated SMTP AUTH, is an extension of the Simple Mail Transfer Protocol (SMTP) whereby a client may log in using any authentication mechanism supported by the server. It is mainly used by submission servers, where authentication is mandatory.

A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.

In computer networking, the link layer is the lowest layer in the Internet protocol suite, the networking architecture of the Internet. The link layer is the group of methods and communications protocols confined to the link that a host is physically connected to. The link is the physical and logical network component used to interconnect hosts or nodes in the network and a link protocol is a suite of methods and standards that operate only between adjacent network nodes of a network segment.

References

  1. Postel, Jon, ed. (January 1980). Transmission Control Protocol. IETF. doi:10.17487/RFC0761. RFC 761. Retrieved June 9, 2014.
  2. Braden, R., ed. (October 1989). Requirements for Internet Hosts: Communication Layers. IETF. doi:10.17487/RFC1122. RFC 1122. Retrieved June 9, 2014.
  3. Wilde, Erik (2012) [1999]. Wilde's WWW: Technical Foundations of the World Wide Web. Springer-Verlag. p. 26. doi:10.1007/978-3-642-95855-7. ISBN 978-3-642-95855-7. S2CID 19897299.
  4. Rose, M. (November 2001). On the Design of Application Protocols. IETF. doi:10.17487/RFC3117. RFC 3117. Retrieved June 9, 2014.
  5. Thomson, Martin (May 2019). The Harmful Consequences of the Robustness Principle. IETF. Retrieved October 4, 2019.
  6. Rochet, Florentin; Pereira, Olivier (2018). "Dropping on the Edge: Flexibility and Traffic Confirmation in Onion Routing Protocols" (PDF). Proceedings of the Privacy Enhancing Technologies Symposium. 2018 (2). De Gruyter Open: 27–46. doi:10.1515/popets-2018-0011. ISSN 2299-0984.