HTTP pipelining

Time diagram of a non-pipelined vs. a pipelined connection

HTTP pipelining is a feature of HTTP/1.1 that allows multiple HTTP requests to be sent over a single TCP connection without waiting for the corresponding responses. [1] HTTP/1.1 requires servers to respond to pipelined requests correctly, returning non-pipelined but valid responses even if the server does not itself support pipelining. Despite this requirement, many legacy HTTP/1.1 servers do not handle pipelining correctly, which has forced most HTTP clients to leave it unused.
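
On the wire, pipelining simply means writing several requests before reading any response. The following minimal sketch (in Python, over a raw socket) sends two GET requests back to back on one TCP connection; the host example.com and the path /style.css are purely illustrative, and a server that does not support pipelining may still answer the requests one at a time.

    # A minimal sketch of HTTP pipelining over a raw TCP socket.
    # The host and paths are illustrative, not from any real deployment.
    import socket

    HOST = "example.com"

    with socket.create_connection((HOST, 80)) as sock:
        # Send two requests back to back, without waiting for the
        # first response to arrive.
        requests = (
            f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
            f"GET /style.css HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
        )
        sock.sendall(requests.encode("ascii"))

        # The server must return the responses in the same order the
        # requests were sent; here we read until the connection closes.
        raw = b""
        while chunk := sock.recv(4096):
            raw += chunk

    print(raw.decode("latin-1", errors="replace")[:500])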

The technique was superseded by multiplexing via HTTP/2, [2] which is supported by most modern browsers. [3]

In HTTP/3, multiplexing is accomplished via QUIC, which replaces TCP. This further reduces loading time, as there is no head-of-line blocking even if some packets are lost.

Motivation and limitations

The pipelining of requests results in a dramatic improvement [4] in the loading times of HTML pages, especially over high-latency connections such as satellite Internet links. The speedup is less apparent on broadband connections because a limitation of HTTP/1.1 still applies: the server must send its responses in the same order the requests were received, so the entire connection remains first-in-first-out [1] and head-of-line (HOL) blocking can occur.

The asynchronous operation of HTTP/2 and SPDY is a solution to this problem. [5] By 2017 most browsers supported HTTP/2 by default, which uses multiplexing instead. [2]

Non-idempotent requests, such as POST, should not be pipelined. [6] Read requests like GET and HEAD can always be pipelined. A sequence of other idempotent requests, such as PUT and DELETE, can be pipelined or not depending on whether the requests in the sequence depend on the effects of the others. [1]
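
As an illustration of these rules only (the function and its "independent" flag are inventions for this sketch, not any standard API), a client might decide whether a sequence of request methods is safe to pipeline like this:

    # Illustrative helper applying the rules above; names are invented.
    ALWAYS_SAFE = {"GET", "HEAD"}            # pure reads
    IDEMPOTENT = ALWAYS_SAFE | {"PUT", "DELETE", "OPTIONS", "TRACE"}

    def may_pipeline(methods, independent=False):
        """True if a sequence of HTTP methods may be pipelined."""
        if not all(m in IDEMPOTENT for m in methods):
            return False        # e.g. POST is non-idempotent: never pipeline
        if all(m in ALWAYS_SAFE for m in methods):
            return True         # GET/HEAD sequences can always be pipelined
        return independent      # PUT/DELETE: only if they don't interact

    assert may_pipeline(["GET", "HEAD", "GET"])
    assert not may_pipeline(["GET", "POST"])
    assert may_pipeline(["PUT", "DELETE"], independent=True)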

HTTP pipelining requires both the client and the server to support it. HTTP/1.1-conforming servers are required to produce valid responses to pipelined requests, but are not required to actually process the requests concurrently. [7]
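
The server-side obligation can be sketched as follows: read however many requests have been queued on the connection and answer them strictly in arrival order. This is a deliberately minimal sketch; it parses headers naively and ignores request bodies, so it only handles bodiless requests such as GET.

    # Minimal sketch of a server honoring pipelined requests in FIFO order.
    import socketserver

    class PipelinedHandler(socketserver.StreamRequestHandler):
        def handle(self):
            while True:
                request_line = self.rfile.readline()
                if not request_line:
                    break                   # client closed the connection
                # Skip the header block of this request (no body handling).
                while self.rfile.readline() not in (b"\r\n", b"\n", b""):
                    pass
                body = b"ok\n"
                # Responses are written in the order the requests arrived,
                # as HTTP/1.1 requires, even if nothing is processed
                # concurrently.
                self.wfile.write(
                    b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                    % (len(body), body)
                )

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(
            ("127.0.0.1", 8080), PipelinedHandler
        ) as srv:
            srv.serve_forever()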

Most pipelining problems occur in HTTP intermediate nodes, i.e. hop-by-hop in proxy servers, and especially in transparent proxy servers: if any proxy along the HTTP chain does not handle pipelined requests properly, nothing works as it should. [8]

Using pipelining with HTTP proxy servers is usually not recommended, also because head-of-line blocking may significantly slow down a proxy's responses (the responses must be sent in the same order in which the requests were received). [1] [9]

Example: a client sends four pipelined GET requests to a proxy through a single connection. If the first is not in the proxy's cache, the proxy must forward that request to the destination web server; even if the following three requests are found in its cache, the proxy has to wait for the web server's response, send it to the client, and only then deliver the three cached responses.

If the client instead opens four connections to the proxy and sends one GET request per connection (without pipelining), the proxy can send the three cached responses to the client in parallel before the server's response is received, decreasing the overall completion time (the requests are served in parallel with no head-of-line blocking). [10] The same advantage exists in HTTP/2 multiplexed streams.
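
A sketch of that alternative, using one connection per request so that cache hits are not held up behind a slow upstream fetch; the proxy host proxy.example, the origin origin.example, and the paths are placeholders, not real endpoints:

    # One request per connection through a forward HTTP proxy, so no
    # response can be blocked behind another (no head-of-line blocking).
    import concurrent.futures
    import http.client

    PATHS = ["/slow-miss", "/cached-1", "/cached-2", "/cached-3"]

    def fetch(path):
        conn = http.client.HTTPConnection("proxy.example", 3128)
        # A forward proxy expects the absolute URL in the request line.
        conn.request("GET", "http://origin.example" + path)
        resp = conn.getresponse()
        size = len(resp.read())
        conn.close()
        return path, resp.status, size

    # The three cache hits can complete while the miss is still in
    # flight to the origin server.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        for path, status, size in pool.map(fetch, PATHS):
            print(path, status, size)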

Implementation status

Pipelining was introduced in HTTP/1.1 and was not present in HTTP/1.0. [11]

There have long been complaints about browsers, proxy servers, etc. not working well with pipelined requests and responses, to the point that for many years (at least until 2011) software developers, engineers, and web experts tried to summarize the various kinds of problems they observed, to fix them, and to give advice on how to deal with pipelining on the open Web. [8]

Implementation in web browsers

Of all the major browsers, only Opera had a fully working implementation that was enabled by default. In other browsers HTTP pipelining was disabled or not implemented. [5]

Implementation in web proxy servers

Most HTTP proxies do not pipeline outgoing requests. [21]

Some HTTP proxies, including transparent HTTP proxies, may manage pipelined requests very badly (e.g. by mixing up the order of pipelined responses). [22]

Some versions of the Squid web proxy will pipeline up to two outgoing requests. This functionality is disabled by default and needs to be manually enabled, for "bandwidth management and access logging reasons". [23] Squid also supports multiple pipelined requests from clients.
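
The directive involved is pipeline_prefetch (see [23]). A minimal squid.conf sketch follows; note that the accepted argument has varied between releases (a boolean in older versions, a read-ahead queue depth in newer ones), so the documentation for the installed version should be consulted.

    # squid.conf sketch: opt in to pipelining despite the default-off policy.
    # Older Squid releases:  pipeline_prefetch on
    # Newer releases take the number of requests to read ahead, e.g.:
    pipeline_prefetch 1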

The Polipo proxy pipelines outgoing requests. [24]

Tempesta FW, an open source application delivery controller, [25] also pipelines requests to backend servers. [26]

Other implementations

The libwww library, made by the World Wide Web Consortium (W3C), has supported pipelining since version 5.1, released on 18 February 1997. [27]

Other application development libraries that support HTTP pipelining include:

Some other applications currently exploiting pipelining are:

Testing tools which support HTTP pipelining include:

See also


References

  1. Fielding, R.; Reschke, J., eds. (2014). "Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing: Pipelining". ietf.org. doi:10.17487/RFC7230. Retrieved 2014-07-24.
  2. "Connection management in HTTP/1.x". MDN Web Docs. Retrieved 2018-03-19.
  3. "HTTP2 browser support". Retrieved March 9, 2017.
  4. Nielsen, Henrik Frystyk; Gettys, Jim; Baird-Smith, Anselm; Prud'hommeaux, Eric; Lie, Håkon Wium; Lilley, Chris (24 June 1997). "Network Performance Effects of HTTP/1.1, CSS1, and PNG". World Wide Web Consortium. Retrieved 14 January 2010.
  5. Willis, Nathan (18 November 2009). "Reducing HTTP latency with SPDY". LWN.net.
  6. "Connections". w3.org.
  7. "HTTP/1.1 Pipelining FAQ".
  8. Nottingham, Mark (March 14, 2011). "Making HTTP Pipelining Usable on the Open Web". Retrieved October 16, 2021.
  9. "Windows Internet Explorer 8 Expert Zone Chat (August 14, 2008)". Microsoft. August 14, 2008. Archived from the original on December 4, 2010. Retrieved May 10, 2012.
  10. Fielding, R.; Reschke, J., eds. (2014). "Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing: Concurrency". ietf.org. doi:10.17487/RFC7230. Retrieved 2014-07-24.
  11. "Key Differences between HTTP/1.0 and HTTP/1.1". Archived from the original on 2016-04-24. Retrieved 2016-04-16.
  12. "Internet Explorer and Connection Limits". IEBlog. Retrieved 2016-11-14.
  13. "Pipelining Network". MozillaZine.
  14. Cheah Chu Yeow (2005). Firefox Secrets. p. 180. ISBN 0-9752402-4-2.
  15. "Bug 264354: Enable HTTP pipelining by default". Mozilla. Retrieved September 16, 2011.
  16. "Source code – nsHttpConnection.cpp". Firefox source code. Mozilla. May 7, 2010. Retrieved December 5, 2010.
  17. "Bug 1340655: Remove H1 Pipeline Support". Mozilla. Retrieved March 22, 2017.
  18. Arian, Emir. Internet Communication: Protocols and related subjects. Retrieved 2021-10-16.
  19. "HTTP Pipelining". The Chromium Projects.
  20. "HTTP/1 Pipelining support has been removed in Firefox 54". forum.palemoon.org. Retrieved 2018-06-07.
  21. Nottingham, Mark (June 20, 2007). "The State of Proxy Caching". Retrieved May 16, 2009.
  22. Nottingham, Mark (July 11, 2011). "What proxies must do". Retrieved October 16, 2021.
  23. "squid: pipeline_prefetch configuration directive". Squid. November 9, 2009. Retrieved December 1, 2009.
  24. Chroboczek, Juliusz (September 18, 2009). "Polipo — a caching web proxy". Retrieved November 12, 2009.
  25. "Tempesta FW — a Linux Application Delivery Controller". GitHub. Retrieved March 29, 2018.
  26. "Servers: Tempesta's side - tempesta-tech/tempesta Wiki". Tempesta Technologies Inc. August 1, 2017. Retrieved March 29, 2018.
  27. Kahan, José (June 7, 2002). "Change History of libwww". World Wide Web Consortium. Retrieved August 3, 2010.
  28. "Using HTTP::Async for Parallel HTTP Requests (Colin Bradford)" (PDF). Archived from the original (PDF) on 2012-03-10. Retrieved 2010-08-03.
  29. "System.Net.HttpWebRequest & pipelining".
  30. "QNetworkRequest Class Reference". Nokia Qt documentation. Archived 2009-12-22 at the Wayback Machine.
  31. "Pipelined HTTP GET utility".
  32. "Curl pipelining explanation". Curl developer documentation. Archived 2012-06-27 at the Wayback Machine.
  33. "Curl pipelining removal announcement". Archived 2021-02-05 at the Wayback Machine.
  34. Pilato, C. Michael; Collins-Sussman, Ben; Fitzpatrick, Brian W. (2008). Version Control with Subversion. O'Reilly Media. p. 238. ISBN 978-0-596-51033-6.
  35. Erenkrantz, Justin R. (2007). "Subversion: Powerful New Toys" (PDF).
  36. "HTTP/HTTPS messages". Microsoft TechNet. January 21, 2005.
  37. "How CICS Web support handles pipelining".
  38. "HTTP Website". Archived from the original on 2012-06-08. Retrieved 2010-10-01.