Bottleneck (network)

In a communication network, max-min fairness of the network is sometimes desired, usually in contrast to a basic first-come, first-served policy. With max-min fairness, the data rate of a flow between any two nodes can be increased only at the expense of flows with an equal or higher rate. To put it another way, in case of network congestion any data flow is only impacted by smaller or equal flows.

In this context, a bottleneck link for a given data flow is a link that is fully utilized (saturated) and on which, among all the flows sharing it, the given data flow achieves the maximum data rate. [1] Note that this definition is substantially different from the common meaning of a bottleneck. Also note that this definition does not forbid a single link from being a bottleneck for multiple flows.

A data rate allocation is max-min fair if and only if every data flow in the network has at least one bottleneck link. This equivalence is central to understanding network efficiency and fairness, as it ensures that no single flow can monopolize network resources to the detriment of others.
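
One standard way to compute the max-min fair allocation, and with it each flow's bottleneck link, is the well-known progressive-filling (water-filling) procedure: grow all rates at the same pace and freeze a flow as soon as one of its links saturates. The Python sketch below assumes a simplified model in which a flow is described only by the set of links it traverses; the function and variable names (progressive_filling, links, flows) are illustrative and not taken from the reference.

def progressive_filling(links, flows):
    """Grow all active flow rates at the same pace; when a link saturates,
    freeze every active flow crossing it.  The link a flow is frozen on is
    (one of) its bottleneck links, and the final allocation is max-min fair.
    `links` maps link name -> capacity, `flows` maps flow name -> list of
    links the flow traverses."""
    rate = {f: 0.0 for f in flows}
    bottleneck = {f: None for f in flows}
    spare = dict(links)                      # remaining capacity per link
    active = set(flows)
    while active:
        def headroom(link):                  # largest equal increment this link allows
            users = sum(1 for f in active if link in flows[f])
            return spare[link] / users if users else float("inf")
        link, inc = min(((l, headroom(l)) for l in links), key=lambda x: x[1])
        for f in active:
            rate[f] += inc
        for l in links:
            spare[l] -= inc * sum(1 for f in active if l in flows[f])
        for f in [f for f in active if link in flows[f]]:
            bottleneck[f] = link             # this flow is now frozen here
            active.remove(f)
    return rate, bottleneck

# Flow C crosses both links, A uses only l1 (capacity 1), B only l2 (capacity 10):
# progressive_filling({"l1": 1.0, "l2": 10.0},
#                     {"A": ["l1"], "B": ["l2"], "C": ["l1", "l2"]})
# -> rates A=0.5, B=9.5, C=0.5; A and C are bottlenecked on l1, B on l2.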

Bottleneck links are significant in network design and management because they determine the maximum throughput of a network. Identifying and managing bottlenecks is crucial for maintaining optimal performance in networked systems. Strategies to mitigate the impact of bottleneck links include increasing the capacity of the bottleneck link, optimizing traffic management, and using load-balancing techniques to distribute data flows across multiple paths.

Related Research Articles

Network throughput refers to the rate of message delivery over a communication channel, such as Ethernet or packet radio, in a communication network. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in data packets per second or data packets per time slot.

Frame Relay: Wide area network technology

Frame Relay is a standardized wide area network (WAN) technology that specifies the physical and data link layers of digital telecommunications channels using a packet switching methodology. Originally designed for transport across Integrated Services Digital Network (ISDN) infrastructure, it may be used today in the context of many other network interfaces.

In computer science and optimization theory, the max-flow min-cut theorem states that in a flow network, the maximum amount of flow passing from the source to the sink is equal to the total weight of the edges in a minimum cut, i.e., the smallest total weight of the edges which if removed would disconnect the source from the sink.
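
As a concrete illustration of the theorem, the sketch below (a didactic Python example, not production code) computes a maximum flow with breadth-first augmenting paths, i.e. the Edmonds-Karp variant of Ford-Fulkerson, and then reads a minimum cut off the final residual graph; on the small example network both values come out equal, as the theorem guarantees.

from collections import deque

def max_flow_min_cut(capacity, s, t):
    """`capacity` is a dict {u: {v: capacity}} of directed edges.
    Returns (maximum flow value, weight of a minimum s-t cut)."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)   # reverse residual edges
    flow_value = 0
    while True:
        # breadth-first search for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        # find the path's bottleneck capacity, then augment along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow_value += bottleneck
    # nodes still reachable from s in the residual graph form the source side of a min cut
    reachable, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v, cap in residual[u].items():
            if cap > 0 and v not in reachable:
                reachable.add(v)
                stack.append(v)
    cut = sum(capacity[u][v] for u in reachable if u in capacity
              for v in capacity[u] if v not in reachable)
    return flow_value, cut

capacity = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}}
print(max_flow_min_cut(capacity, "s", "t"))   # (4, 4): max flow equals min cut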

Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

Flow network: Directed graph where edges have a capacity

In graph theory, a flow network is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes and the edges are called arcs. A flow must satisfy the restriction that the amount of flow into a node equals the amount of flow out of it, unless it is a source, which has only outgoing flow, or sink, which has only incoming flow. A network can be used to model traffic in a computer network, circulation with demands, fluids in pipes, currents in an electrical circuit, or anything similar in which something travels through a network of nodes.
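
The two defining constraints, edge flows bounded by capacities and flow conservation at every node other than the source and the sink, are easy to check mechanically. A minimal sketch, assuming edges are stored as (tail, head) pairs with non-negative flows (names such as is_valid_flow are illustrative):

def is_valid_flow(capacity, flow, source, sink):
    """Check the flow-network constraints: (1) 0 <= flow on every edge <= its
    capacity, and (2) flow conservation at every node except source and sink.
    `capacity` and `flow` are dicts keyed by (u, v) edge tuples."""
    for edge, f in flow.items():
        if f < 0 or f > capacity.get(edge, 0):
            return False                                  # capacity violated
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    for n in nodes - {source, sink}:
        inflow = sum(f for (u, v), f in flow.items() if v == n)
        outflow = sum(f for (u, v), f in flow.items() if u == n)
        if inflow != outflow:
            return False                                  # conservation violated
    return True

# s -> a -> t carrying 3 units is a valid flow in this tiny network.
capacity = {("s", "a"): 5, ("a", "t"): 4}
print(is_valid_flow(capacity, {("s", "a"): 3, ("a", "t"): 3}, "s", "t"))  # True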

Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
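
The shape of the congestion window under AIMD can be seen from a toy round-by-round trace. The sketch below is deliberately simplified, one step per round trip with slow start, additive increase and a halving on each assumed loss event; it ignores timeouts, fast retransmit and acknowledgement clocking, so it illustrates the scheme rather than reproducing any real TCP implementation.

def aimd_trace(rounds, loss_rounds, mss=1, ssthresh=8):
    """Toy additive-increase/multiplicative-decrease trace of a congestion
    window (cwnd), one round trip per step."""
    cwnd = mss
    trace = []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in loss_rounds:              # multiplicative decrease on loss
            ssthresh = max(cwnd // 2, mss)
            cwnd = ssthresh
        elif cwnd < ssthresh:               # slow start: exponential growth
            cwnd *= 2
        else:                               # congestion avoidance: +1 MSS per RTT
            cwnd += mss
    return trace

print(aimd_trace(rounds=20, loss_rounds={10, 15}))  # the familiar sawtooth pattern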

Bottleneck (engineering): Phenomenon in engineering

In engineering, a bottleneck is a phenomenon by which the performance or capacity of an entire system is severely limited by a single component. The component is sometimes called a bottleneck point. The term is metaphorically derived from the neck of a bottle, where the flow speed of the liquid is limited by its neck.

Network calculus is "a set of mathematical results which give insights into man-made systems such as concurrent programs, digital circuits and communication networks." Network calculus gives a theoretical framework for analysing performance guarantees in computer networks. As traffic flows through a network it is subject to constraints imposed by the system components, for example link capacity, traffic shapers (leaky buckets), congestion control, and background traffic.
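
As a small worked example of the kind of guarantee network calculus yields: a flow constrained by a token-bucket arrival curve alpha(t) = b + r*t, served by a rate-latency service curve beta(t) = R*max(t - T, 0) with r <= R, has the textbook delay bound T + b/R and backlog bound b + r*T. The sketch below simply evaluates these formulas; the numeric values in the example are hypothetical.

def rate_latency_bounds(r, b, R, T):
    """Classic network-calculus bounds for a token-bucket flow (rate r, bucket b)
    crossing a rate-latency server (rate R, latency T)."""
    if r > R:
        raise ValueError("stability requires arrival rate r <= service rate R")
    delay_bound = T + b / R          # worst-case delay
    backlog_bound = b + r * T        # worst-case buffered data
    return delay_bound, backlog_bound

# 1 Mbit bucket, 5 Mbit/s arrival rate, 10 Mbit/s server with 2 ms latency.
print(rate_latency_bounds(r=5e6, b=1e6, R=10e6, T=0.002))
# -> (0.102, 1010000.0): at most 102 ms of delay and about 1.01 Mbit of backlog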

Fairness measures or metrics are used in network engineering to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness.
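
One widely used example is Jain's fairness index, the square of the sum of the allocations divided by n times the sum of their squares; it equals 1 when all n flows receive the same allocation and falls toward 1/n as a single flow takes everything. A minimal sketch:

def jain_index(rates):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2)."""
    n = len(rates)
    s = sum(rates)
    return s * s / (n * sum(r * r for r in rates)) if s > 0 else 0.0

print(jain_index([10, 10, 10, 10]))   # 1.0  (perfectly fair)
print(jain_index([40, 0, 0, 0]))      # 0.25 (one flow takes everything)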

In mathematical optimization, the push–relabel algorithm is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink.
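
A compact illustration of the generic algorithm on an n-by-n capacity matrix is sketched below; real implementations add node-selection rules (FIFO, highest-label) and gap heuristics for speed, which this didactic version omits.

def push_relabel(capacity, s, t):
    """Generic push-relabel: maintain a preflow (nodes may hold excess) and a
    height label per node; push excess along an admissible residual edge
    (from height h to height h-1), or relabel a node with excess that has no
    admissible outgoing residual edge."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                          # source starts at height n
    for v in range(n):                     # initial preflow: saturate source edges
        flow[s][v] = capacity[s][v]
        flow[v][s] = -capacity[s][v]
        excess[v] = capacity[s][v]

    def residual(u, v):
        return capacity[u][v] - flow[u][v]

    active = [u for u in range(n) if u not in (s, t) and excess[u] > 0]
    while active:
        u = active.pop()
        for v in range(n):
            if excess[u] == 0:
                break
            if residual(u, v) > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], residual(u, v))    # push operation
                flow[u][v] += delta
                flow[v][u] -= delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (s, t) and v not in active:
                    active.append(v)
        if excess[u] > 0:                  # no admissible edge left: relabel
            height[u] = 1 + min(height[v] for v in range(n) if residual(u, v) > 0)
            active.append(u)
    return sum(flow[s])                    # net flow out of the source

# Tiny network with nodes 0=s, 1=a, 2=b, 3=t, written as a capacity matrix.
cap = [[0, 3, 2, 0],
       [0, 0, 0, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(push_relabel(cap, 0, 3))   # 4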

Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes.
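
Deficit round robin is one simple member of this family. In the sketch below, each backlogged flow earns a fixed byte quantum per round and may transmit packets only while it has enough accumulated credit, so a flow sending large packets cannot crowd out a flow sending small ones; the queue contents and quantum in the example are made up for illustration.

from collections import deque

def deficit_round_robin(queues, quantum, rounds):
    """`queues` maps a flow name to a deque of packet sizes (bytes).
    Returns the packets in transmission order as (flow, size) pairs."""
    deficit = {flow: 0 for flow in queues}
    sent = []
    for _ in range(rounds):
        for flow, q in queues.items():
            if not q:
                deficit[flow] = 0            # idle flows accumulate no credit
                continue
            deficit[flow] += quantum
            while q and q[0] <= deficit[flow]:
                pkt = q.popleft()            # send while credit covers the packet
                deficit[flow] -= pkt
                sent.append((flow, pkt))
    return sent

# Flow "a" sends large packets, flow "b" small ones; DRR interleaves them so
# neither flow can monopolize the link.
queues = {"a": deque([1500, 1500, 1500]), "b": deque([200] * 10)}
print(deficit_round_robin(queues, quantum=1500, rounds=3))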

Maximum throughput scheduling is a procedure for scheduling data packets in a packet-switched best-effort network, typically a wireless network, with the goal of maximizing the total throughput of the network, or the system spectral efficiency in a wireless network. This is achieved by giving scheduling priority to the least "expensive" data flows in terms of consumed network resources per transferred amount of information.

Bandwidth management is the process of measuring and controlling the communications on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).

In communication networks, multiplexing and the division of scarce resources, max-min fairness is said to be achieved by an allocation if and only if the allocation is feasible and an attempt to increase the allocation of any participant necessarily results in the decrease in the allocation of some other participant with an equal or smaller allocation.
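
For a single shared resource with known demands, this definition is satisfied by the classic procedure of repeatedly granting the smallest unsatisfied demand and redistributing the leftover capacity equally among the remaining participants. A minimal sketch with hypothetical capacity and demand values:

def max_min_share(capacity, demands):
    """Max-min fair split of one resource of `capacity` among flows with the
    given `demands`.  Flows whose demand is below the current fair share get
    their full demand; freed capacity is shared among the remaining flows."""
    alloc = {}
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        i = remaining[0]
        if demands[i] <= share:
            alloc[i] = demands[i]            # satisfied flow releases capacity
            cap -= demands[i]
            remaining.pop(0)
        else:
            for j in remaining:              # everyone left gets an equal share
                alloc[j] = share
            remaining = []
    return [alloc[i] for i in range(len(demands))]

# Capacity 10 shared by demands 2, 4 and 10 -> allocation [2, 4, 4]
print(max_min_share(10, [2, 4, 10]))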

Three-phase traffic theory: Theory of traffic flow

Three-phase traffic theory is a theory of traffic flow developed by Boris Kerner between 1996 and 2002. It focuses mainly on explaining the physics of traffic breakdown and the resulting congested traffic on highways. Kerner describes three phases of traffic, while the classical theories based on the fundamental diagram of traffic flow have two phases: free flow and congested traffic. Kerner's theory divides congested traffic into two distinct phases, synchronized flow and wide moving jam, bringing the total number of phases to three: free flow, synchronized flow, and wide moving jam.

Cooperative diversity is a cooperative multiple-antenna technique for improving or maximising total network channel capacity for any given set of bandwidths. It exploits user diversity by decoding the combination of the relayed signal and the direct signal in wireless multihop networks. A conventional single-hop system uses direct transmission: the receiver decodes the information based only on the direct signal and treats the relayed signal as interference, whereas cooperative diversity treats the other signal as a contribution and decodes the information from the combination of the two signals. Cooperative diversity can therefore be seen as antenna diversity that uses distributed antennas belonging to each node in a wireless network. User cooperation is a closely related notion; it additionally assumes that each user relays the other user's signal, while cooperative diversity can also be achieved by multi-hop relay networking systems.

Multipath routing is a routing technique simultaneously using multiple alternative paths through a network. This can yield a variety of benefits such as fault tolerance, increased bandwidth, and improved security.

Bufferbloat is high latency and jitter in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation, as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.

Traffic congestion reconstruction with Kerner's three-phase theory

Vehicular traffic can be either free or congested. Traffic occurs in time and space, i.e., it is a spatiotemporal process. However, usually traffic can be measured only at some road locations. For efficient traffic control and other intelligent transportation systems, the reconstruction of traffic congestion is necessary at all other road locations at which traffic measurements are not available. Traffic congestion can be reconstructed in space and time based on Boris Kerner’s three-phase traffic theory with the use of the ASDA and FOTO models introduced by Kerner. Kerner's three-phase traffic theory and, respectively, the ASDA/FOTO models are based on some common spatiotemporal features of traffic congestion observed in measured traffic data.

Commelec is a framework that provides distributed and real-time control of electrical grids by using explicit setpoints for active/reactive power absorptions/injections. It is based on the joint-operation of communication and electricity systems. Commelec has been developed by scientists at École Polytechnique Fédérale de Lausanne, a research institute and university in Lausanne, Switzerland. The Commelec project is part of the SNSF’s National Research Programme “Energy Turnaround”.

References

  1. Le Boudec, Jean-Yves (EPFL Lausanne). "Rate adaptation, Congestion Control and Fairness: A Tutorial" (PDF), November 2005. Archived (PDF) from the original on 6 October 2014. Retrieved 1 October 2014.