BIC TCP

BIC TCP (Binary Increase Congestion control) is one of the congestion control algorithms that can be used for the Transmission Control Protocol (TCP). BIC is optimized for high-speed networks with high latency: so-called "long fat networks". For these networks, BIC has a significant advantage over previous congestion control schemes in correcting for severely underutilized bandwidth.[1]

BIC implements a unique congestion window (cwnd) algorithm. The algorithm searches for the maximum cwnd in three phases: binary search increase, additive increase, and slow start. When a loss event occurs, BIC uses multiplicative decrease to correct the cwnd.[2]

BIC TCP is implemented and used by default in Linux kernels 2.6.8 and above. The default was changed to CUBIC TCP in kernel version 2.6.19.

Algorithm

Define the following variables:

 Smax:    the maximum increment
 Smin:    the minimum increment
 wmax:    the maximum window size
 β:       multiplicative window decrease factor
 cwnd:    congestion window size
 bic_inc: window increment per RTT (round-trip time)

At every RTT interval, update cwnd as follows:

If no packets are dropped, the congestion window (cwnd) increases in one of three modes: binary search increase, additive increase, or slow start. At each step, exactly one of these determines the increment.

One step of increasing cwnd:

 if (cwnd < wmax)         // binary search OR additive
     bic_inc = (wmax - cwnd) / 2;
 else                     // slow start OR additive
     bic_inc = cwnd - wmax;

 if (bic_inc > Smax)      // additive
     bic_inc = Smax;
 else if (bic_inc < Smin) // binary search OR slow start
     bic_inc = Smin;

 cwnd = cwnd + (bic_inc / cwnd);
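
In effect, when cwnd is far below wmax the candidate increment (wmax − cwnd)/2 is large and is capped at Smax, giving additive increase; as cwnd approaches wmax, repeatedly halving the remaining distance behaves like a binary search; and once cwnd exceeds wmax, the window probes for a new maximum, growing slowly at first (at least Smin) and then more quickly, which corresponds to the slow start phase above.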

If one or more packets are dropped, cwnd is reduced by multiplicative decrease, using the factor β: cwnd is decreased by (100×β)%. When two flows share a link, one with a large cwnd and the other with a small cwnd, fast convergence decreases the larger flow's wmax at a greater rate than the smaller flow's, allowing the two flows to converge more quickly as they subsequently increase their cwnds.

One step of decreasing cwnd:

 if (cwnd < wmax)    // fast convergence
     wmax = cwnd * (2 - β) / 2;
 else
     wmax = cwnd;

 cwnd = cwnd * (1 - β);
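
For illustration, the increase and decrease steps above can be combined into a small, self-contained program. The following C sketch is only a demonstration under assumed values: the struct and function names (bic_state, bic_on_ack, bic_on_loss) and the constants chosen for Smax, Smin, and β are hypothetical, not the Linux kernel implementation. The per-RTT increment bic_inc is applied in per-ACK steps of bic_inc/cwnd, as in the pseudocode above.

 #include <stdio.h>

 /* Illustrative sketch only: names and constants below are assumptions
  * for demonstration, not the Linux kernel BIC implementation. */
 #define SMAX 32.0   /* maximum increment per RTT (assumed value)        */
 #define SMIN 0.01   /* minimum increment per RTT (assumed value)        */
 #define BETA 0.125  /* multiplicative decrease factor (assumed value)   */

 struct bic_state {
     double cwnd;    /* congestion window, in segments */
     double wmax;    /* window size at the last loss event */
 };

 /* Called once per acknowledged segment: grow cwnd toward (or past) wmax. */
 static void bic_on_ack(struct bic_state *s)
 {
     double bic_inc;

     if (s->cwnd < s->wmax)               /* binary search OR additive */
         bic_inc = (s->wmax - s->cwnd) / 2.0;
     else                                 /* slow start OR additive */
         bic_inc = s->cwnd - s->wmax;

     if (bic_inc > SMAX)                  /* additive */
         bic_inc = SMAX;
     else if (bic_inc < SMIN)             /* binary search OR slow start */
         bic_inc = SMIN;

     s->cwnd += bic_inc / s->cwnd;        /* per-ACK share of the per-RTT increment */
 }

 /* Called on packet loss: update wmax (with fast convergence) and back off. */
 static void bic_on_loss(struct bic_state *s)
 {
     if (s->cwnd < s->wmax)               /* fast convergence */
         s->wmax = s->cwnd * (2.0 - BETA) / 2.0;
     else
         s->wmax = s->cwnd;

     s->cwnd *= (1.0 - BETA);
 }

 int main(void)
 {
     struct bic_state s = { .cwnd = 10.0, .wmax = 100.0 };

     for (int rtt = 0; rtt < 30; rtt++) {
         /* Approximate one RTT as cwnd acknowledgements. */
         int acks = (int)s.cwnd;
         for (int i = 0; i < acks; i++)
             bic_on_ack(&s);
         if (rtt == 20)                   /* inject a single loss event */
             bic_on_loss(&s);
         printf("RTT %2d: cwnd = %6.2f, wmax = %6.2f\n", rtt, s.cwnd, s.wmax);
     }
     return 0;
 }

The main loop approximates one RTT as cwnd acknowledgements and injects a single loss event, which is enough to see the binary search toward wmax, the additive cap, and the multiplicative back-off in the printed trace.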

Related Research Articles

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport Layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.

Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

FAST TCP is a TCP congestion avoidance algorithm especially targeted at long-distance, high latency links, developed at the Netlab, California Institute of Technology and now being commercialized by FastSoft. FastSoft was acquired by Akamai Technologies in 2012.

Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.

TCP Vegas is a TCP congestion avoidance algorithm that emphasizes packet delay, rather than packet loss, as a signal to help determine the rate at which to send packets. It was developed at the University of Arizona by Lawrence Brakmo and Larry L. Peterson and introduced in 1994.

TCP Westwood (TCPW) is a sender-side-only modification to TCP New Reno that is intended to better handle large bandwidth-delay product paths, with potential packet loss due to transmission or other errors, and with dynamic load.

TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well.

The additive-increase/multiplicative-decrease (AIMD) algorithm is a feedback control algorithm best known for its use in TCP congestion control. AIMD combines linear growth of the congestion window when there is no congestion with an exponential reduction when congestion is detected. Multiple flows using AIMD congestion control will eventually converge to an equal usage of a shared link. The related schemes of multiplicative-increase/multiplicative-decrease (MIMD) and additive-increase/additive-decrease (AIAD) do not reach stability.

HighSpeed TCP (HSTCP) is a congestion control algorithm defined in RFC 3649 for the Transmission Control Protocol (TCP). Standard TCP performs poorly in networks with a large bandwidth-delay product and is unable to fully utilize available bandwidth. HSTCP makes minor modifications to standard TCP's congestion control mechanism to overcome this limitation.

UDP-based Data Transfer Protocol (UDT) is a high-performance data transfer protocol designed for transferring large volumetric datasets over high-speed wide area networks. Such settings are typically disadvantageous for the more common TCP protocol.

Compound TCP (CTCP) is a Microsoft algorithm that was introduced as part of the Windows Vista and Windows Server 2008 TCP stack. It is designed to aggressively adjust the sender's congestion window to optimize TCP for connections with large bandwidth-delay products while trying not to harm fairness. It is also available for Linux, as well as for Windows XP and Windows Server 2003 via a hotfix.

H-TCP is another implementation of TCP with an optimized congestion control algorithm for high speed networks with high latency. It was created by researchers at the Hamilton Institute in Ireland.

A type of Transmission Control Protocol designed to provide much higher throughput and scalability.

TCP acceleration is the name of a series of techniques for achieving better throughput on a network connection than standard TCP achieves, without modifying the end applications. It is an alternative or a supplement to TCP tuning.

TCP-Illinois is a variant of the TCP congestion control protocol, developed at the University of Illinois at Urbana–Champaign. It is especially targeted at high-speed, long-distance networks. A sender-side modification to the standard TCP congestion control algorithm, it achieves a higher average throughput than standard TCP, allocates network resources as fairly as standard TCP, is compatible with standard TCP, and provides incentives for TCP users to switch.

CUBIC is a network congestion avoidance algorithm for TCP which can achieve high bandwidth connections over networks more quickly and reliably in the face of high latency than earlier algorithms. It helps optimize long fat networks.

TCP-Friendly Rate Control (TFRC) is a congestion control mechanism designed for unicast flows operating in an Internet environment and competing with TCP traffic. The goal is to compete fairly with TCP traffic on medium timescales, but to be much less variable than TCP on short timescales.

Raj Jain

Raj Jain is a professor of Computer Science and Engineering in the Washington University School of Engineering and Applied Science at Washington University in St. Louis, Missouri.

Zeta-TCP refers to a set of proprietary Transmission Control Protocol (TCP) algorithms aimed at improving the end-to-end performance of TCP regardless of whether the peer is running Zeta-TCP or any other TCP protocol stack; in other words, it is designed to be compatible with existing TCP algorithms. It was designed and implemented by AppEx Networks Corporation.

In computer networking, delay-gradient congestion control refers to a class of congestion control algorithms, which react to the differences in round-trip delay time (RTT), as opposed to classical congestion control methods, which react to packet loss or an RTT threshold being exceeded. Such algorithms include CAIA Delay-Gradient (CDG) and TIMELY.

References

  1. "BIC FAQ". www4.ncsu.edu. Retrieved December 25, 2018.
  2. "Binary increase congestion control (BIC) for fast long-distance networks - IEEE Conference Publication". doi:10.1109/INFCOM.2004.1354672. S2CID   11750446.Cite journal requires |journal= (help)