TCP tuning

TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. [1] However, blindly applying tuning instructions without understanding their consequences can hurt performance as well.

Network and system characteristics

Bandwidth-delay product (BDP)

Bandwidth-delay product (BDP) is a term primarily used in conjunction with TCP to refer to the amount of data necessary to fill a TCP "path": it is equal to the maximum number of simultaneous bits in transit between the transmitter and the receiver.

High performance networks have very large BDPs. To give a practical example, two nodes communicating over a geostationary satellite link with a round-trip delay time (or round-trip time, RTT) of 0.5 seconds and a bandwidth of 10 Gbit/s can have up to 0.5 s × 10 Gbit/s, i.e., 5 Gbit of unacknowledged data in flight. Despite having much lower latencies than satellite links, even terrestrial fiber links can have very high BDPs because their link capacity is so large. Operating systems and protocols designed when networks were slower were tuned for BDPs orders of magnitude smaller, which limits the performance achievable on modern links.
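
To make the arithmetic concrete, here is a minimal Python sketch of the same calculation (the variable names are illustrative, not from any particular API):

    # Bandwidth-delay product: data in flight = bandwidth x round-trip time
    bandwidth_bps = 10e9   # 10 Gbit/s satellite link
    rtt_s = 0.5            # geostationary round-trip time in seconds

    bdp_bits = bandwidth_bps * rtt_s
    print(f"BDP = {bdp_bits / 1e9:.1f} Gbit "
          f"({bdp_bits / 8 / 2**20:.0f} MiB of buffer to fill the pipe)")
    # prints: BDP = 5.0 Gbit (596 MiB of buffer to fill the pipe)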

Buffers

The original TCP configurations supported TCP receive window size buffers of up to 65,535 (64 KiB - 1) bytes, which was adequate for slow links or links with small RTTs. Larger buffers are required by the high performance options described below.

Buffering is used throughout high performance network systems to handle delays in the system. In general, buffer size needs to scale in proportion to the amount of data "in flight" at any time. For very high performance applications that are not sensitive to network delays, it is possible to interpose large end-to-end buffering delays by placing intermediate data storage points in the system, and then to use automated, scheduled non-real-time data transfers to move the data to its final endpoints.

TCP speed limits

Maximum achievable throughput for a single TCP connection is determined by several factors. One trivial limitation is the maximum bandwidth of the slowest link in the path. But there are also other, less obvious limits on TCP throughput: both bit errors and RTT can constrain the connection, as described below.

Window size

In computer networking, RWIN (TCP Receive Window) is the amount of data that a computer can accept without acknowledging the sender. If the sender has not received an acknowledgement for the first packet it sent, it stops and waits; if this wait exceeds a certain limit, it may retransmit. This is how TCP achieves reliable data transmission.

Even if there is no packet loss in the network, windowing can limit throughput. Because TCP transmits data up to the window size before waiting for acknowledgements, the full bandwidth of the network may not always be used. The limitation caused by window size can be calculated as follows:

Throughput ≤ RWIN / RTT

where RWIN is the TCP Receive Window and RTT is the round-trip time for the path.
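
As a worked example (a sketch with illustrative values: the classic 64 KiB window over a 100 ms path):

    # Window-limited throughput: at most one full window per round trip
    rwin_bytes = 65535   # classic maximum receive window
    rtt_s = 0.100        # 100 ms round-trip time

    throughput_bps = rwin_bytes * 8 / rtt_s
    print(f"Window-limited throughput = {throughput_bps / 1e6:.2f} Mbit/s")
    # prints: Window-limited throughput = 5.24 Mbit/s

No matter how fast the underlying link, such a connection cannot exceed about 5.24 Mbit/s until the window is enlarged.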

At any given time, the window advertised by the receive side of TCP corresponds to the amount of free receive memory it has allocated for this connection. Otherwise it would risk dropping received packets due to lack of space.

The sending side should also allocate the same amount of memory as the receive side for good performance. Even after data has been sent on the network, the sending side must hold it in memory until it has been acknowledged as successfully received, in case it has to be retransmitted. If the receiver is far away, acknowledgments will take a long time to arrive. If the send memory is small, it can saturate and block transmission. A simple computation gives the same optimal send memory size as for the receive memory size given above.
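
A minimal sketch of sizing both socket buffers to the path's BDP using the standard setsockopt() calls (the 100 Mbit/s and 80 ms figures are illustrative; on Linux the kernel doubles the requested value and caps it at net.core.rmem_max / net.core.wmem_max):

    import socket

    # Size both buffers to the bandwidth-delay product so neither side
    # stalls waiting for acknowledgements.
    bdp_bytes = int(100e6 / 8 * 0.080)   # 100 Mbit/s path, 80 ms RTT -> 1 MB

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)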

Packet loss

When packet loss occurs in the network, an additional limit is imposed on the connection. [2] In the case of light to moderate packet loss, when the TCP rate is limited by the congestion avoidance algorithm, the limit can be calculated according to the formula (Mathis, et al.):

Throughput ≤ (MSS / RTT) × (C / √Ploss)

where MSS is the maximum segment size, Ploss is the probability of packet loss, and C is a constant close to 1. If packet loss is so rare that the TCP window becomes regularly fully extended, this formula doesn't apply.
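
Evaluating the formula with illustrative numbers (a sketch; C is taken as 1):

    import math

    # Mathis et al.: throughput <= (MSS / RTT) * (C / sqrt(Ploss)), C ~ 1
    mss_bytes = 1460    # typical Ethernet-derived maximum segment size
    rtt_s = 0.100       # 100 ms round-trip time
    p_loss = 1e-4       # 0.01% packet loss

    limit_bps = (mss_bytes * 8 / rtt_s) / math.sqrt(p_loss)
    print(f"Loss-limited throughput = {limit_bps / 1e6:.1f} Mbit/s")
    # prints: Loss-limited throughput = 11.7 Mbit/s

Even this light loss rate caps the connection at roughly 11.7 Mbit/s, regardless of link capacity.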

TCP options for high performance

A number of extensions have been made to TCP over the years to increase its performance over fast high-RTT links ("long fat networks" or LFNs).

TCP timestamps (RFC 1323) play a double role: they avoid ambiguities due to the 32-bit sequence number field wrapping around, and they allow more precise RTT estimation in the presence of multiple losses per RTT. With those improvements, it becomes reasonable to increase the TCP window beyond 64 KiB, which can be done using the window scaling option (RFC 1323).

The TCP selective acknowledgment option (SACK, RFC 2018) allows a TCP receiver to precisely inform the TCP sender about which segments have been lost. This increases performance on high-RTT links, when multiple losses per window are possible.
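
On a Linux host, whether these extensions are enabled can be checked through the standard sysctl entries under /proc (a minimal sketch, assuming Linux):

    # Query the Linux sysctls controlling window scaling, timestamps
    # (RFC 1323/7323) and selective acknowledgments (RFC 2018);
    # 1 means enabled.
    for knob in ("tcp_window_scaling", "tcp_timestamps", "tcp_sack"):
        with open(f"/proc/sys/net/ipv4/{knob}") as f:
            print(f"{knob} = {f.read().strip()}")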

Path MTU Discovery avoids the need for in-network fragmentation, increasing the performance in the presence of packet loss.

Tuning slow connections

The default IP queue length (txqueuelen on Linux) is 1000 packets, which is generally too large for slow links. Imagine a Wi-Fi base station with a speed of 20 Mbit/s and an average packet size of 750 bytes. How large should the IP queue be? A voice over IP client should be able to transmit a packet every 20 ms. The estimated maximum number of packets in transit would then be:

Estimated buffer size = 20000000 * 0.020 / 8 / 750 ≈ 66 packets

A better queue length would be:

ifconfig wlan0 mtu 1492 txqueuelen 100
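
The same estimate as a small helper function (a sketch; the 20 ms delay budget matches the VoIP packet interval above):

    # Estimate a transmit queue length that keeps queueing delay bounded.
    def queue_len(link_bps: float, delay_budget_s: float, pkt_bytes: int) -> int:
        return int(link_bps * delay_budget_s / 8 / pkt_bytes)

    print(queue_len(20e6, 0.020, 750))   # -> 66 packets, as computed above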

See also

Transmission Control Protocol
Network congestion
FAST TCP
TCP offload engine
Measuring network throughput
TCP congestion control
Nagle's algorithm
TCP Westwood
Packet loss
Bandwidth-delay product
Bandwidth management
Performance-enhancing proxy
TCP window scale option
Scalable TCP
Windows Vista networking technologies
Sliding window protocol
Rate Based Satellite Control Protocol
Bufferbloat
Zeta-TCP
Infineta Systems

References

  1. "High Performance SSH/SCP - HPN-SSH". Psc.edu. Retrieved January 23, 2020.
  2. "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm". Psc.edu. Archived from the original on May 11, 2012. Retrieved January 3, 2017.