Zeta-TCP

Zeta-TCP [1] refers to a set of proprietary Transmission Control Protocol (TCP) algorithms aiming to improve the end-to-end performance of TCP regardless of whether the peer runs Zeta-TCP or any other TCP protocol stack; in other words, it is designed to be compatible with existing TCP algorithms. It was designed and implemented by AppEx Networks Corporation.

Zeta-TCP primarily provides the following improvements:

Congestion avoidance

Most TCP stack implementations use TCP New Reno or one of its variations (such as TCP SACK, RFC 3517) as the congestion avoidance algorithm. The New Reno-based algorithms are loss-based: they treat packet loss as the sole indication of congestion in the network. As the Internet has evolved, this assumption has become increasingly inaccurate. Congestion loss keeps declining as technology advances, while random loss caused by the properties of the medium (e.g., wireless/fading channels), wireline noise and cross-talk, connectivity flaws, software bugs, and so on is relatively increasing. Once congestion is detected (or falsely alarmed), New Reno shrinks the congestion window (CWND) sharply, causing a plunge in the sending rate. This is one of the major reasons that TCP-based applications are often able to use only a fraction of the subscribed bandwidth, especially when the round-trip time (RTT) is large.
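
For illustration, the sketch below (not AppEx code; the parameter values are arbitrary) simulates a purely loss-based sender. Any loss, whether congestive or random, triggers the same multiplicative decrease, which is why random loss on a long-RTT path keeps the window, and hence the throughput, well below the available capacity.

    # Illustrative simulation of a loss-based (New Reno-style) congestion window.
    # Any loss -- congestion-induced or random -- triggers the same multiplicative
    # decrease, which is why random loss on long-RTT paths depresses throughput.
    import random

    def simulate_loss_based(rounds=1000, random_loss_rate=0.01, capacity=500.0):
        cwnd = 1.0           # congestion window, in segments
        ssthresh = capacity  # slow-start threshold
        samples = []
        for _ in range(rounds):
            lost = (cwnd > capacity) or (random.random() < random_loss_rate)
            if lost:
                ssthresh = max(cwnd / 2.0, 2.0)  # multiplicative decrease
                cwnd = ssthresh                  # New Reno fast-recovery target
            elif cwnd < ssthresh:
                cwnd *= 2.0                      # slow start: double per RTT
            else:
                cwnd += 1.0                      # congestion avoidance: +1 MSS per RTT
            samples.append(cwnd)
        return sum(samples) / len(samples)

    print("average cwnd:", simulate_loss_based())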

TCP Vegas and its variations, most notably FAST TCP, base their congestion predictions on RTT measurements alone. Such latency-based algorithms overcome the problems of the loss-based ones and are usually a more realistic reflection of congestion in the network. However, latency-based algorithms have their own limitations.
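
The following sketch shows, in simplified textbook form, how a delay-based sender such as TCP Vegas infers queuing from RTT measurements; the alpha/beta thresholds are the conventional Vegas parameters, and the function is illustrative only.

    # Rough sketch of a Vegas-style delay-based congestion estimate: the difference
    # between expected and actual throughput indicates how many packets are queued.
    def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=2.0, beta=4.0):
        expected = cwnd / base_rtt               # throughput if nothing is queued
        actual = cwnd / current_rtt              # observed throughput this RTT
        queued = (expected - actual) * base_rtt  # estimated packets sitting in queues
        if queued < alpha:
            return cwnd + 1                      # path underused: grow linearly
        if queued > beta:
            return cwnd - 1                      # queues building: back off gently
        return cwnd                              # within target range: hold

    print(vegas_adjust(cwnd=100, base_rtt=0.100, current_rtt=0.105))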

Zeta-TCP addresses the problem by combining the strengths of latency-based and loss-based algorithms. It constantly measures the RTT variation and the loss-rate variation, and from them calculates the likelihood of congestion. CWND backoff schemes are applied according to that likelihood. At the highest likelihood it applies the backoff scheme of New Reno, which has already been proven effective and stable through years of massive deployment.
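
The exact Zeta-TCP formulas are proprietary; the sketch below merely illustrates the idea described above, fusing an RTT-variation signal and a loss-rate-variation signal into a congestion likelihood and scaling the backoff with it, reverting to New Reno-style halving at the highest likelihood. The weights and thresholds are invented for the example.

    # Hypothetical sketch of likelihood-weighted backoff (the real Zeta-TCP
    # algorithm is proprietary; weights and thresholds here are invented).
    def congestion_likelihood(rtt_increase_ratio, loss_rate_increase,
                              w_rtt=0.6, w_loss=0.4):
        # Both inputs normalized to [0, 1]; returns a likelihood in [0, 1].
        return min(1.0, w_rtt * rtt_increase_ratio + w_loss * loss_rate_increase)

    def backoff(cwnd, likelihood):
        if likelihood >= 0.9:
            return cwnd * 0.5                        # highest likelihood: New Reno-style halving
        if likelihood >= 0.5:
            return cwnd * (1.0 - 0.5 * likelihood)   # graded, gentler reduction
        return cwnd                                  # likely random loss: keep the window

    print(backoff(100.0, congestion_likelihood(0.8, 0.9)))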

Loss detection

Packet losses in real network environments are rarely spread evenly; rather, they tend to occur close together. The TCP-related RFCs (New Reno, SACK, etc.) explicitly define how the first loss can be detected with high confidence. However, detection of the losses that follow, after TCP has entered Fast Recovery with SACK permitted, is inefficient in RFC 3517. Some popular operating systems ship their own implementations that are equally suboptimal.

Zeta-TCP provides a simple algorithm that calculates the loss probability of every unACK'd/unSACK'd packet. A packet is retransmitted only when its loss probability exceeds a certain threshold, and the same rule governs the retransmission decision for every packet. Zeta-TCP therefore minimizes the number of retransmitted packets, further improving bandwidth utilization. Lab tests have confirmed that Zeta-TCP retransmits far fewer packets than other TCP implementations under the same loss rate.
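
As a hypothetical illustration of such per-packet scoring (the whitepaper does not give the formula), the sketch below assigns each unSACK'd segment a loss probability that grows with the number of later segments already SACKed, in the spirit of RFC 3517's DupThresh, and retransmits only above a threshold. All names and constants are invented.

    # Hypothetical per-packet loss scoring (not the proprietary formula): the more
    # later segments have been SACKed past a hole, the more likely the unSACK'd
    # segment is lost; the recent loss rate biases the estimate upward.
    def loss_probability(sacked_above, observed_loss_rate):
        # sacked_above: count of SACKed segments sent after this one.
        # observed_loss_rate: recent loss rate on the connection, in [0, 1].
        reordering_tolerance = 3                    # classic DupThresh
        base = sacked_above / (reordering_tolerance + sacked_above)
        return min(1.0, base + observed_loss_rate * (1.0 - base))

    def should_retransmit(sacked_above, observed_loss_rate, threshold=0.5):
        return loss_probability(sacked_above, observed_loss_rate) >= threshold

    print(should_retransmit(sacked_above=4, observed_loss_rate=0.02))  # True
    print(should_retransmit(sacked_above=1, observed_loss_rate=0.02))  # False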

Zeta-TCP also includes a mechanism to detect a packet loss accurately at the earliest possible moment, as soon as it suspects a loss is likely. Early detection can usually save one or two RTTs on retransmission.

Reverse control

While the other algorithms focus on accelerating outgoing traffic, Reverse Control attempts to address the incoming side. Reverse Control monitors the quality of connections carrying inbound data and, when the connection quality is good, hints the peer to transmit as fast as it can. The algorithm also tries to distinguish real packet losses from other abnormal situations more accurately, in order to avoid triggering unnecessary fast recoveries.

Reverse-Controlled inbound acceleration is heuristic in that the inbound speed is ultimately controlled by the sender, i.e., the peer; Zeta-TCP can only hint the peer to send faster. Some TCP stacks take the hint and others do not. Often the sending side (e.g., a content server) has a rate-limiting mechanism that caps the acceleration.

In addition to acceleration, Reverse Control can also limit the inbound rate. Unlike acceleration, slowing inbound traffic is effective and accurate because it uses the TCP flow-control mechanism. The inbound rate limiting of Zeta-TCP lays the foundation for the inbound flow control of AppEx IPEQ. [2]
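
The flow-control relationship this relies on is standard TCP behavior: a sender can have at most one advertised receive window in flight per round trip, so capping the advertised window caps the inbound rate. The arithmetic below is illustrative only and is not AppEx's implementation.

    # A receiver cannot be sent more than one advertised window per round trip,
    # so capping the advertised window caps the inbound rate.
    def receive_window_for_rate(target_rate_bps, rtt_seconds, mss=1460):
        window_bytes = target_rate_bps / 8.0 * rtt_seconds    # bytes per RTT
        # Round up to a whole segment so the sender is not stalled short of an MSS.
        segments = max(1, int(-(-window_bytes // mss)))
        return segments * mss

    # Cap an inbound flow at roughly 10 Mbit/s over a 60 ms path:
    print(receive_window_for_rate(10_000_000, 0.060))   # ~75 KB advertised window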

Implementation

At the time of writing, Zeta-TCP has been implemented as software modules for Linux (a Netfilter kernel module), Microsoft Windows 10 down to XP and the related Windows Server versions (NDIS IM filter/NDIS LWF), and Windows CE. AppEx does not modify the protocol stack; instead it intercepts TCP flows and applies its algorithms on the fly. This non-intrusive implementation is intended to ease adoption. The drawback is added processing overhead, but the overhead is negligible compared with the performance gains.

Related Research Articles

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport Layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.

Traffic shaping is a bandwidth management technique used on computer networks which delays some or all datagrams to bring them into compliance with a desired traffic profile. Traffic shaping is used to optimize or guarantee performance, improve latency, or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing, the distinct but related practice of packet dropping and packet marking.

Explicit Congestion Notification (ECN) is an extension to the Internet Protocol and to the Transmission Control Protocol and is defined in RFC 3168 (2001). ECN allows end-to-end notification of network congestion without dropping packets. ECN is an optional feature that may be used between two ECN-enabled endpoints when the underlying network infrastructure also supports it.

Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

FAST TCP is a TCP congestion avoidance algorithm especially targeted at long-distance, high-latency links, developed at the Netlab, California Institute of Technology, and commercialized by FastSoft, which was acquired by Akamai Technologies in 2012.

Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
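
A compact sketch of the additive-increase/multiplicative-decrease rule described above, in its textbook form rather than any particular stack's code:

    # Textbook AIMD update: grow the window by one segment per RTT without loss,
    # halve it when loss is detected.
    def aimd_update(cwnd, loss_detected, add=1.0, mult=0.5):
        return max(1.0, cwnd * mult) if loss_detected else cwnd + add

    cwnd = 10.0
    for loss in [False, False, True, False]:
        cwnd = aimd_update(cwnd, loss)   # 10 -> 11 -> 12 -> 6 -> 7
    print(cwnd)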

TCP Vegas is a TCP congestion avoidance algorithm that emphasizes packet delay, rather than packet loss, as a signal to help determine the rate at which to send packets. It was developed at the University of Arizona by Lawrence Brakmo and Larry L. Peterson and introduced in 1994.

TCP Westwood (TCPW) is a sender-side-only modification to TCP New Reno that is intended to better handle large bandwidth-delay product paths, with potential packet loss due to transmission or other errors, and with dynamic load.

TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well.

Retransmission, essentially identical to automatic repeat request (ARQ), is the resending of packets which have been either damaged or lost. Retransmission is one of the basic mechanisms used by protocols operating over a packet-switched computer network to provide reliable communication.

Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent.

BIC TCP is one of the congestion control algorithms that can be used for Transmission Control Protocol (TCP). BIC is optimized for high-speed networks with high latency: so-called "long fat networks". For these networks, BIC has a significant advantage over previous congestion control schemes in correcting for severely underutilized bandwidth.

Bandwidth management is the process of measuring and controlling the communications on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).

H-TCP is another implementation of TCP with an optimized congestion control algorithm for high speed networks with high latency. It was created by researchers at the Hamilton Institute in Ireland.

In computing, Microsoft's Windows Vista and Windows Server 2008 introduced in 2007/2008 a new networking stack named Next Generation TCP/IP stack, to improve on the previous stack in several ways. The stack includes a native implementation of IPv6, as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host model and features an infrastructure to enable more modular components that one can dynamically insert and remove.

Karn's algorithm addresses the problem of getting accurate estimates of the round-trip time for messages when using the Transmission Control Protocol (TCP) in computer networking. The algorithm, also sometimes termed the Karn-Partridge algorithm, was proposed in a paper by Phil Karn and Craig Partridge in 1987.
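
A short sketch of Karn's rule combined with the standard SRTT/RTTVAR estimator of RFC 6298 (illustrative and simplified):

    # Karn's rule: RTT samples from retransmitted segments are discarded because
    # it is ambiguous which transmission the ACK belongs to; the RTO is backed
    # off exponentially on each retransmission (constants per RFC 6298).
    class RttEstimator:
        def __init__(self):
            self.srtt = None
            self.rttvar = None
            self.rto = 1.0                       # initial RTO, seconds

        def on_ack(self, rtt_sample, was_retransmitted):
            if was_retransmitted:
                return                           # Karn's rule: ignore ambiguous sample
            if self.srtt is None:
                self.srtt, self.rttvar = rtt_sample, rtt_sample / 2.0
            else:
                self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
                self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
            self.rto = max(1.0, self.srtt + 4.0 * self.rttvar)

        def on_retransmit(self):
            self.rto = min(self.rto * 2.0, 60.0)  # exponential backoff, capped

    est = RttEstimator()
    est.on_ack(0.2, was_retransmitted=False)
    est.on_retransmit()
    est.on_ack(0.5, was_retransmitted=True)       # ignored under Karn's rule
    print(round(est.rto, 3))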

CUBIC is a network congestion avoidance algorithm for TCP which can achieve high bandwidth connections over networks more quickly and reliably in the face of high latency than earlier algorithms. It helps optimize long fat networks.
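
The cubic growth function that gives the algorithm its name, as specified in RFC 8312, can be written as follows (W_max is the window size at the last congestion event; C and beta are the RFC's default constants):

    # CUBIC window growth (RFC 8312): after a loss the window drops to
    # beta * W_max, then grows along a cubic curve that plateaus near W_max
    # before probing beyond it.
    def cubic_window(t, w_max, c=0.4, beta=0.7):
        # t: seconds since the last congestion event; w_max in segments.
        k = ((w_max * (1.0 - beta)) / c) ** (1.0 / 3.0)
        return c * (t - k) ** 3 + w_max

    w_max = 100.0
    for t in (0.0, 2.0, 4.0, 6.0):
        print(t, round(cubic_window(t, w_max), 1))   # 70.0, ~95.6, ~100.0, ~102.3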

Micro Transport Protocol or μTP is an open UDP-based variant of the BitTorrent peer-to-peer file sharing protocol intended to mitigate poor latency and other congestion control problems found in conventional BitTorrent over TCP, while providing reliable, ordered delivery.

Bufferbloat is a cause of high latency and jitter in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation, as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.

QUIC is a general-purpose transport layer network protocol initially designed by Jim Roskind at Google, implemented and deployed in 2012, announced publicly in 2013 as experimentation broadened, and described at an IETF meeting. QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. Microsoft Edge and Firefox support it. Safari implements the protocol, although it is not enabled by default.

References

  1. "Whitepaper: Zeta-TCP - Intelligent, Adaptive, Asymmetric TCP Acceleration" (PDF).
  2. "Whitepaper: AppEx IPEQ (IP End-to-End QoS)" (PDF).