Protocol ossification

Protocol ossification is the loss of flexibility, extensibility and evolvability of network protocols. This is largely due to middleboxes that are sensitive to the wire image of the protocol, and which can interrupt or interfere with messages that are valid but which the middlebox does not correctly recognise. This is a violation of the end-to-end principle. Secondary causes include inflexibility in endpoint implementations of protocols.

Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult.

Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties.

History

Significant ossification had set in on the Internet by 2005, and analyses of the problem were published in that year; [1] Ammar (2018) suggests that ossification was a consequence of the Internet attaining global scale and becoming the primary communication network. [2]

Multipath TCP was the first extension to a core Internet protocol to deeply confront protocol ossification during its design. [3]

The IETF created the Transport Services (taps) working group in 2014. [4] It has a mandate to mitigate ossification at the transport protocol layer. [5]

QUIC is the first IETF transport protocol to deliberately minimise its wire image to avoid ossification. [6]

The Internet Architecture Board identified design considerations around the exposure of protocol information to network elements as a "developing field" in 2023. [7]

Causes

The primary cause of protocol ossification is middlebox interference, [8] which violates the end-to-end principle. [9] Middleboxes may entirely block unknown protocols or unrecognised extensions to known protocols, interfere with extension or feature negotiation, or perform more invasive modification of protocol metadata. [10] Not all middlebox modifications are necessarily ossifying; those that are potentially harmful are disproportionately located towards the network edge. [11] Middleboxes are deployed unilaterally by network operators to solve specific problems, [12] including performance optimisation, security requirements (e.g., firewalls), network address translation, or enhancing control of networks. [13] These deployments provide localised short-term utility but degrade the global long-term evolvability of the Internet, in a manifestation of the tragedy of the commons. [12]

Changes to a protocol must be tolerated by all on-path intermediaries; if wide deployment of the change across the Internet is desired, this extends to a large portion of the intermediaries on the Internet. A middlebox must tolerate widely-used protocols as they were used at the time of its deployment, but is liable not to tolerate new protocols or changes to existing ones, creating a vicious cycle: novel wire images cannot gain wide enough deployment to make middleboxes tolerate them across the entire Internet. [9] Even if all participants tolerate a protocol, its use is not guaranteed: in the absence of a negotiation or discovery mechanism, the endpoints may default to a protocol that is considered more reliable. [14]

Beyond middleboxes, ossification can also be caused by insufficient flexibility within the endpoint's implementation. Operating system kernels are slow to change and deploy, [14] and protocols implemented in hardware can also inappropriately fix protocol details. [15] A widely-used API that makes assumptions about the operation of underlying protocols can hinder the deployment of protocols that do not share those assumptions. [9]

Prevention and remediation

The Internet Architecture Board recommended in 2019 that implicit signals to observers should be replaced with signals deliberately intended for the consumption of those observers, and signals not intended for their consumption should not be available to them (e.g., by encryption); and also that the protocol metadata should be integrity protected so that it cannot be modified by middleboxes. [16] However, even fully encrypted metadata may not entirely prevent ossification in the network, as the wire image of a protocol can still show patterns that come to be relied upon. [17] Network operators use metadata for a variety of benign management purposes, [18] and Internet research is also informed by data gathered from protocol metadata; [19] a protocol's designer must balance ossification resistance against observability for operational or research needs. [17] Arkko et al. (2023) provides further guidance on these considerations: disclosure of information by a protocol to the network should be intentional, [20] performed with the agreement of both recipient and sender, [21] authenticated to the degree possible and necessary, [22] only acted upon to the degree of its trustworthiness, [23] and minimised and provided to a minimum number of entities. [24] [25]

Active use of extension points is required if they are not to ossify. [26] Reducing the number of extension points, documenting the invariants that protocol participants can rely on (as opposed to incidental details that must not be relied upon), and promptly detecting issues in deployed systems can help ensure active use. [27] However, even active use may exercise only a narrow portion of the protocol, and ossification can still occur in the parts that remain invariant in practice despite theoretical variability. [28] [29] "Greasing" an extension point, where some implementations indicate support for non-existent extensions, can ensure that real but unrecognised extensions are tolerated (cf. chaos engineering). [30] HTTP headers are an example of an extension point that has successfully avoided significant ossification, as participants generally ignore unrecognised headers. [31]
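The greasing technique can be illustrated with a short Python sketch. The extension code points and reserved "grease" values below are hypothetical (real registries, such as the TLS GREASE values of RFC 8701, differ in detail):

```python
import random

# Hypothetical registry: real extension code points exist alongside a set
# of reserved "grease" values that will never be assigned a meaning
# (cf. the TLS GREASE values of RFC 8701).
GREASE_VALUES = [0x0A0A, 0x1A1A, 0x2A2A, 0x3A3A]

def build_extension_list(real_extensions):
    """Offer the real extensions plus one randomly chosen grease value,
    so peers and middleboxes are continually exercised with unknown codes."""
    offered = list(real_extensions)
    offered.insert(random.randrange(len(offered) + 1),
                   random.choice(GREASE_VALUES))
    return offered

def negotiate(offered, supported):
    """A tolerant peer simply ignores code points it does not recognise."""
    return [ext for ext in offered if ext in supported]
```

A peer that aborts on an unknown code point fails immediately against greased senders, so its intolerance is detected long before any real extension comes to depend on the extension point.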

A new protocol may be designed to mimic the wire image of an existing ossified protocol; [32] alternatively, a new protocol may be encapsulated within an existing, tolerated protocol. A disadvantage of encapsulation is that there is typically overhead and redundant work (e.g., outer checksums made redundant by inner integrity checks). [33]
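The redundant work introduced by encapsulation can be seen in a toy Python sketch, assuming a hypothetical framing in which both the inner and the outer layer carry a CRC-32 check:

```python
import zlib

def encapsulate(inner_packet: bytes) -> bytes:
    """Wrap a new protocol's packet inside a tolerated outer protocol's
    framing. The inner layer already carries its own integrity check,
    so the outer checksum is redundant work (hypothetical CRC-32 framing)."""
    inner = inner_packet + zlib.crc32(inner_packet).to_bytes(4, "big")
    outer_header = len(inner).to_bytes(2, "big")           # outer framing
    outer_checksum = zlib.crc32(outer_header + inner).to_bytes(4, "big")
    return outer_header + inner + outer_checksum

packet = encapsulate(b"hello")
overhead = len(packet) - len(b"hello")  # 2 + 4 + 4 = 10 bytes per packet
```

Every packet pays the framing and second checksum, and both layers must be computed and verified even though one check would suffice.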

Besides middleboxes, other sources of ossification can also be resisted. User-space implementation of protocols can lead to more rapid evolution. If the new protocol is encapsulated in UDP, then user-space implementation is possible. [34] [35] Where support for protocols is uncertain, participants may simultaneously try alternative protocols, at the cost of increasing the amount of data sent. [36]
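The simultaneous-attempt strategy (analogous to "happy eyeballs"-style connection racing) can be sketched in Python with hypothetical connect functions standing in for, say, a UDP-encapsulated protocol raced against a TCP fallback:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical connection attempts; the sleeps simulate network latency.
def try_new_protocol():
    time.sleep(0.05)
    raise ConnectionError("UDP blocked by a middlebox on this path")

def try_fallback_protocol():
    time.sleep(0.01)
    return "tcp-connection"

def race(attempts):
    """Start all attempts concurrently; return the first one to succeed.
    The cost is the extra data sent by the attempts that are discarded."""
    with ThreadPoolExecutor(max_workers=len(attempts)) as pool:
        futures = [pool.submit(fn) for fn in attempts]
        for future in as_completed(futures):
            try:
                return future.result()
            except Exception:
                continue
    raise ConnectionError("all attempts failed")
```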

With sufficient effort and coordination, ossification can be directly reversed. A flag day, where protocol participants make changes in concert, can break the vicious cycle and establish active use. This approach was used to deploy EDNS, which had formerly not been tolerated by servers. [37]

Examples

The Transmission Control Protocol has suffered from ossification. [38] One measurement found that a third of paths across the Internet encounter at least one intermediary that modifies TCP metadata, and 6.5% of paths encounter harmful ossifying effects from intermediaries. [39] Extensions to TCP have been affected: the design of MPTCP was constrained by middlebox behaviour, [3] [40] and the deployment of TCP Fast Open has been likewise hindered. [41] [38]

The Stream Control Transmission Protocol has seen little deployment on the Internet due to intolerance from middleboxes, [9] and because the very widespread BSD sockets API fits its capabilities poorly. [42] In practice, TCP and UDP are the only usable Internet transport protocols. [43]

Transport Layer Security (TLS) has experienced ossification. TLS was the original context for the introduction of greasing extension points. TLS 1.3, as originally designed, proved undeployable on the Internet: middleboxes had ossified the protocol's version parameter. This was discovered late in the protocol design process, during experimental deployments by web browsers. As a result, version 1.3 mimics the wire image of version 1.2. [44]
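The mimicry is visible in the version fields of a TLS 1.3 ClientHello, sketched here in simplified Python (field names follow RFC 8446; the structure is heavily abbreviated):

```python
# Version code points as (major, minor) pairs, as they appear on the wire.
TLS_1_2 = (3, 3)   # 0x0303
TLS_1_3 = (3, 4)   # 0x0304

def client_hello_version_fields():
    """TLS 1.3 pins the wire-visible legacy_version field at the value
    middleboxes expect (1.2); the real version is carried in the
    supported_versions extension, which ossified middleboxes do not parse.
    (Simplified: a real ClientHello contains many more fields.)"""
    return {
        "legacy_version": TLS_1_2,                  # what intermediaries see
        "supported_versions": [TLS_1_3, TLS_1_2],   # actual negotiation input
    }
```

To an intermediary that inspects only the legacy field, a 1.3 handshake is indistinguishable from a 1.2 handshake.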

QUIC has been specifically designed to be deployable, evolvable and to have anti-ossification properties; [45] it is the first IETF transport protocol to deliberately minimise its wire image for these ends. [6] It is greased, [30] it has protocol invariants explicitly specified, [46] it is encapsulated in UDP, and its protocol metadata is encrypted. [45] Still, applications using QUIC must be prepared to fall back to other protocols, as UDP is blocked by some middleboxes. [47]
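The explicitly specified invariants are the only fields a network element may parse; a simplified Python sketch of extracting them from a long-header packet, per the QUIC invariants of RFC 8999 (bounds checking omitted):

```python
def parse_quic_invariants(packet: bytes):
    """Parse only the fields that QUIC's invariants (RFC 8999) guarantee
    in a long-header packet: the header-form bit, the version, and the
    two connection IDs. Everything beyond them is version-specific and
    largely encrypted, so intermediaries cannot come to rely on it."""
    if not packet[0] & 0x80:
        raise ValueError("short header: only the header-form bit is invariant")
    version = int.from_bytes(packet[1:5], "big")
    dcid_len = packet[5]
    dcid = packet[6:6 + dcid_len]
    pos = 6 + dcid_len
    scid_len = packet[pos]
    scid = packet[pos + 1:pos + 1 + scid_len]
    return version, dcid, scid
```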


References

  1. Ammar 2018, pp. 57–58.
  2. Ammar 2018, p. 59.
  3. Raiciu et al. 2012, p. 1.
  4. "Transport Services (taps) Group history". IETF.
  5. "Transport Services charter-ietf-taps-02". IETF.
  6. Trammell & Kuehlewind 2019, p. 2.
  7. Arkko et al. 2023, 3. Further Work.
  8. Papastergiou et al. 2017, p. 619.
  9. Papastergiou et al. 2017, p. 620.
  10. Edeline & Donnet 2019, p. 171.
  11. Edeline & Donnet 2019, pp. 173–175.
  12. Edeline & Donnet 2019, p. 169.
  13. Honda et al. 2011, p. 1.
  14. Papastergiou et al. 2017, p. 621.
  15. Corbet 2015.
  16. Hardie 2019, pp. 7–8.
  17. Fairhurst & Perkins 2021, 7. Conclusions.
  18. Fairhurst & Perkins 2021, 2. Current Uses of Transport Headers within the Network.
  19. Fairhurst & Perkins 2021, 3. Research, Development, and Deployment.
  20. Arkko et al. 2023, 2.1. Intentional Distribution.
  21. Arkko et al. 2023, 2.2. Control of the Distribution of Information.
  22. Arkko et al. 2023, 2.3. Protecting Information and Authentication.
  23. Arkko et al. 2023, 2.5. Limiting Impact of Information.
  24. Arkko et al. 2023, 2.4. Minimize Information.
  25. Arkko et al. 2023, 2.6. Minimum Set of Entities.
  26. Thomson & Pauly 2021, 3. Active Use.
  27. Thomson & Pauly 2021, 4. Complementary Techniques.
  28. Thomson & Pauly 2021, 3.1. Dependency Is Better.
  29. Trammell & Kuehlewind 2019, p. 7.
  30. Thomson & Pauly 2021, 3.3. Falsifying Active Use.
  31. Thomson & Pauly 2021, 3.4. Examples of Active Use.
  32. Papastergiou et al. 2017, p. 623.
  33. Papastergiou et al. 2017, pp. 623–624.
  34. Papastergiou et al. 2017, p. 630.
  35. Corbet 2016.
  36. Papastergiou et al. 2017, p. 629.
  37. Thomson & Pauly 2021, 3.5. Restoring Active Use.
  38. Thomson & Pauly 2021, A.5. TCP.
  39. Edeline & Donnet 2019, pp. 175–176.
  40. Hesmans et al. 2013, p. 1.
  41. Rybczyńska 2020.
  42. Papastergiou et al. 2017, p. 627.
  43. McQuistin, Perkins & Fayed 2016, p. 1.
  44. Sullivan 2017.
  45. Corbet 2018.
  46. Thomson 2021, 2. Fixed Properties of All QUIC Versions.
  47. Kühlewind & Trammell 2022, 2. The Necessity of Fallback.
