Low-latency queuing (LLQ) is a network scheduling feature developed by Cisco to bring strict priority queuing (PQ) to class-based weighted fair queuing (CBWFQ). LLQ allows delay-sensitive traffic (such as voice) to be given preferential treatment over other traffic by allowing it to be dequeued and sent first. [1]
Class-based weighted fair queuing (CBWFQ) was initially released without support for a priority queuing system, so it could not guarantee the delay and jitter (delay variation) requirements of real-time, interactive voice and video conversations. In CBWFQ, the weight for a packet belonging to a specific class is derived from the bandwidth assigned to the class, and this weight in turn determines the order in which packets are sent. All packets are serviced fairly based on weight, and no class of packets may be granted strict priority. This scheme poses problems for voice traffic, which is largely intolerant of delay and especially of variation in delay.
To address this, Cisco released LLQ, which provides strict priority queuing for CBWFQ by enabling the use of a single, strict priority queue within CBWFQ at the class level, so that traffic belonging to a class can be directed to that strict priority queue. Priority status can be given to one or more classes within a policy map. When multiple classes within a single policy map are configured as priority classes, all traffic from these classes is enqueued to the same single, strict priority queue.
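The resulting dequeue behaviour can be illustrated with a minimal Python sketch (an illustrative model only, not Cisco's implementation; the class names and weights are hypothetical). The single strict-priority queue is always drained first; the remaining classes stand in for CBWFQ and share the link by weight.

from collections import deque

class LLQScheduler:
    # Minimal, hypothetical model of LLQ dequeue order (not Cisco's code):
    # the single strict-priority queue is always drained first; the other
    # classes share the link via a smooth weighted selection, standing in
    # for CBWFQ.
    def __init__(self, class_weights):
        self.priority_q = deque()
        self.class_qs = {c: deque() for c in class_weights}
        self.weights = dict(class_weights)
        self.credit = {c: 0 for c in class_weights}

    def enqueue(self, pkt, cls=None, priority=False):
        if priority:
            self.priority_q.append(pkt)      # e.g. voice marked as priority
        else:
            self.class_qs[cls].append(pkt)

    def dequeue(self):
        if self.priority_q:                  # strict priority: always first
            return self.priority_q.popleft()
        backlogged = [c for c, q in self.class_qs.items() if q]
        if not backlogged:
            return None
        for c in backlogged:                 # smooth weighted selection
            self.credit[c] += self.weights[c]
        chosen = max(backlogged, key=self.credit.get)
        self.credit[chosen] -= sum(self.weights[c] for c in backlogged)
        return self.class_qs[chosen].popleft()

# Example usage (hypothetical classes):
# s = LLQScheduler({"web": 3, "bulk": 1})
# s.enqueue("voice-pkt", priority=True)
# s.enqueue("web-pkt", cls="web")
# s.dequeue()  -> "voice-pkt"

In a real router the priority queue is also policed to its configured bandwidth during congestion so that priority traffic cannot starve the other classes; that detail is omitted from the sketch.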
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by the American National Standards Institute and ITU-T for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM combines features of circuit switching and packet switching networks by using asynchronous time-division multiplexing. ATM was seen in the 1990s as a competitor to Ethernet and networks carrying IP traffic because, unlike Ethernet, it was faster and designed with quality of service in mind, but it fell out of favor once Ethernet reached speeds of 1 gigabit per second.
Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.
Differentiated services or DiffServ is a computer networking architecture that specifies a mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low latency to critical network traffic such as voice or streaming media while providing best-effort service to non-critical services such as web traffic or file transfers.
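As an illustration, a classifier on a DiffServ node might map the DSCP field of each packet to a forwarding class. The mapping below is a hypothetical sketch (EF, DSCP 46, is commonly used for voice and 0 for best effort), not a complete per-hop-behaviour implementation.

# Hypothetical sketch: map DSCP code points to traffic classes.
# EF (46) is commonly used for voice, AF41 (34) for video, 0 for best effort.
DSCP_TO_CLASS = {46: "expedited", 34: "assured-video", 0: "best-effort"}

def classify(dscp: int) -> str:
    # Return the traffic class for a packet's DSCP value; unknown code
    # points fall back to best effort.
    return DSCP_TO_CLASS.get(dscp, "best-effort")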
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads to only a small increase in network throughput, or even to a reduction in throughput.
Weighted round robin (WRR) is a network scheduler for data flows; it is also used to schedule processes.
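A minimal sketch of classic WRR in Python (packet payloads and weights are illustrative): in each round, queue i may transmit up to weights[i] packets.

from collections import deque

def weighted_round_robin(queues, weights):
    # Sketch of classic WRR: in each round, queue i may send up to
    # weights[i] packets. 'queues' is a list of deques of packets.
    sent = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    sent.append(q.popleft())
    return sent

# Example: with weights (2, 1) the first queue gets roughly twice the service
# of the second:
# weighted_round_robin([deque("AAAA"), deque("BBBB")], (2, 1))
# -> ['A', 'A', 'B', 'A', 'A', 'B', 'B', 'B']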
Deficit Round Robin (DRR), also Deficit Weighted Round Robin (DWRR), is a scheduling algorithm for the network scheduler. DRR is, like weighted fair queuing (WFQ), a packet-based implementation of the ideal Generalized Processor Sharing (GPS) policy. It was proposed by M. Shreedhar and G. Varghese in 1995 as an efficient and fair algorithm.
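A sketch of the core DRR loop, assuming packets are represented simply by their sizes in bytes (an illustration of the published algorithm, not a production scheduler):

from collections import deque

def deficit_round_robin(queues, quanta):
    # Sketch of DRR: each backlogged queue's deficit counter grows by its
    # quantum once per round; the head packet is sent only if its size
    # (in bytes) fits within the accumulated deficit.
    deficits = [0] * len(queues)
    order = []                               # (queue index, packet size) sent
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                continue
            deficits[i] += quanta[i]
            while q and q[0] <= deficits[i]:
                size = q.popleft()
                deficits[i] -= size
                order.append((i, size))
            if not q:
                deficits[i] = 0              # reset, as in the original paper
    return order

# Example (illustrative sizes and quanta):
# deficit_round_robin([deque([300, 300]), deque([1500])], [500, 500])
# -> [(0, 300), (0, 300), (1, 1500)]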
Resilient Packet Ring (RPR), as defined by IEEE standard 802.17, is a protocol designed for the transport of data traffic over optical fiber ring networks. The standard began development in November 2000 and has undergone several amendments since its initial standard was completed in June 2004. The amended standards are 802.17a through 802.17d, the last of which was adopted in May 2011. It is designed to provide the resilience found in SONET and Synchronous Digital Hierarchy networks but, instead of setting up circuit-oriented connections, provides packet-based transmission in order to increase the efficiency of Ethernet and IP services.
ALTQ is the network scheduler for Berkeley Software Distribution. ALTQ provides queueing disciplines, and other components related to quality of service (QoS), required to realize resource sharing. It is most commonly implemented on BSD-based routers. ALTQ is included in the base distribution of FreeBSD, NetBSD, and DragonFly BSD, and was integrated into the pf packet filter of OpenBSD but later replaced by a new queueing subsystem.
Generalized processor sharing (GPS) is an ideal scheduling algorithm for process schedulers and network schedulers. It is related to the fair-queuing principle which groups packets into classes and shares the service capacity between them. GPS shares this capacity according to some fixed weights.
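A one-function sketch of the GPS rate allocation, under the assumption that the link rate and the weight map are given (names are illustrative):

def gps_rates(weights, backlogged, link_rate):
    # Ideal GPS: each backlogged flow i is served at
    # link_rate * weights[i] / (sum of weights of all backlogged flows).
    total = sum(weights[i] for i in backlogged)
    return {i: link_rate * weights[i] / total for i in backlogged}

# Example: with weights {"voice": 2, "web": 1, "bulk": 1}, if only "voice"
# and "web" are backlogged on a 3 Mbit/s link, they receive 2 and 1 Mbit/s:
# gps_rates({"voice": 2, "web": 1, "bulk": 1}, ["voice", "web"], 3_000_000)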
In computer science, an input queue is a collection of processes in storage that are waiting to be brought into memory to run a program. Input queues are mainly used in operating system scheduling, a technique for distributing resources among processes. Input queues apply not only to operating systems (OS), but may also be applied to scheduling inside networking devices. The purpose of scheduling is to ensure that resources are distributed fairly and effectively; therefore, it improves the performance of the system.
RFC 2638 from the IETF defines the Bandwidth Broker (BB) entity in the framework of differentiated services (DiffServ). According to RFC 2638, a Bandwidth Broker is an agent that has some knowledge of an organization's priorities and policies and allocates quality of service (QoS) resources with respect to those policies. In order to achieve an end-to-end allocation of resources across separate domains, the Bandwidth Broker managing a domain has to communicate with its adjacent peers, which allows end-to-end services to be constructed out of purely bilateral agreements. Admission control is one of the main tasks that a Bandwidth Broker has to perform, in order to decide whether an incoming resource reservation request will be accepted or not. Most Bandwidth Brokers use simple admission control modules, although there are also proposals for more sophisticated admission control based on several metrics such as acceptance rate and network utilization. The BB acts as a Policy Decision Point (PDP) in deciding whether to allow or reject a flow, whilst the edge routers act as Policy Enforcement Points (PEPs) to police traffic.
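A deliberately simple sketch of such an admission control check (hypothetical names and units), in the spirit of the basic modules mentioned above:

def admit(requested_kbps, reserved_kbps, capacity_kbps):
    # Simple BB-style admission check: accept a new reservation only if it
    # fits within the remaining capacity of the managed link or domain.
    return reserved_kbps + requested_kbps <= capacity_kbps

# Example: admit(500, 9_000, 10_000) -> True; admit(1_500, 9_000, 10_000) -> False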
Weighted random early detection (WRED) is a queueing discipline for a network scheduler suited for congestion avoidance. It is an extension to random early detection (RED) where a single queue may have several different sets of queue thresholds. Each threshold set is associated with a particular traffic class.
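The per-class drop decision can be sketched as follows (a simplified model of the RED test that WRED applies with class-specific thresholds; parameter names are illustrative):

import random

def red_drop(avg_qlen, min_th, max_th, max_p):
    # RED drop decision, applied per traffic class in WRED: never drop
    # below min_th, always drop above max_th, linear probability between.
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

# WRED keeps a separate (min_th, max_th, max_p) set per class, e.g. lower
# thresholds for low-priority traffic so that it is dropped earlier.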
Bandwidth management is the process of measuring and controlling the communications on a network link, to avoid filling the link to capacity or overfilling the link, which would result in network congestion and poor performance of the network. Bandwidth is described by bit rate and measured in units of bits per second (bit/s) or bytes per second (B/s).
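One common enforcement mechanism for bandwidth management is a token bucket. The sketch below is a minimal, illustrative policer (not tied to any particular product) that marks a packet conformant only if enough tokens have accumulated.

import time

class TokenBucket:
    # Minimal sketch of a token-bucket policer: tokens accrue at 'rate_bps'
    # (converted to bytes per second) up to 'burst_bytes'; a packet is
    # conformant if enough tokens remain to cover its size.
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0           # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, pkt_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

# Example: TokenBucket(1_000_000, 15_000).allow(1500) admits a 1500-byte
# packet against a 1 Mbit/s rate with a 15 kB burst allowance.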
Weighted fair queueing (WFQ) is a network scheduling algorithm. WFQ is both a packet-based implementation of the generalized processor sharing (GPS) policy and a natural extension of fair queuing (FQ). Whereas FQ shares the link's capacity in equal subparts, WFQ allows schedulers to specify, for each flow, what fraction of the capacity it will be given.
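A simplified sketch of the WFQ idea (the virtual-time handling is deliberately crude here; a faithful implementation tracks the GPS virtual clock more carefully): each packet is stamped with a virtual finish time and packets are sent in increasing finish-time order.

import heapq

class WFQ:
    # Simplified WFQ sketch: each arriving packet is stamped with a virtual
    # finish time F = max(virtual_now, last_finish[flow]) + length / weight,
    # and packets are transmitted in order of increasing finish time.
    def __init__(self, weights):
        self.weights = dict(weights)
        self.last_finish = {f: 0.0 for f in weights}
        self.virtual_now = 0.0
        self.heap = []
        self.seq = 0                         # tie-breaker for the heap

    def enqueue(self, flow, length, pkt):
        start = max(self.virtual_now, self.last_finish[flow])
        finish = start + length / self.weights[flow]
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, pkt))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, pkt = heapq.heappop(self.heap)
        self.virtual_now = finish            # crude virtual-clock update
        return pkt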
The Time-Triggered Ethernet standard defines a fault-tolerant synchronization strategy for building and maintaining synchronized time in Ethernet networks, and outlines mechanisms required for synchronous time-triggered packet switching for critical integrated applications and integrated modular avionics (IMA) architectures. SAE International released SAE AS6802 in November 2011.
ITU-T Y.156sam Ethernet Service Activation Test Methodology is a draft recommendation under study by the ITU-T describing a new testing methodology adapted to the multiservice reality of packet-based networks.
Bufferbloat is high latency and jitter (packet delay variation) in packet-switched networks caused by excess buffering of packets; it can also reduce overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.
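The scale of the problem is easy to estimate with a back-of-the-envelope sketch (illustrative numbers): the worst-case queueing delay a full buffer adds is its size divided by the link's drain rate.

def max_queueing_delay_ms(buffer_bytes, link_rate_bps):
    # Worst-case delay added by a full buffer: buffer size / drain rate.
    return buffer_bytes * 8 / link_rate_bps * 1000

# Illustrative numbers: a 1 MiB buffer draining over a 10 Mbit/s uplink adds
# up to roughly 839 ms of queueing delay, far beyond what VoIP tolerates.
print(max_queueing_delay_ms(1024 * 1024, 10_000_000))  # ~838.9 ms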
ITU-T Y.1564 is an Ethernet service activation test methodology, which is the new ITU-T standard for turning up, installing and troubleshooting Ethernet-based services. It is the only standard test methodology that allows for complete validation of Ethernet service-level agreements (SLAs) in a single test.
A network scheduler, also called a packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. Several network schedulers are available for different operating systems, implementing many of the existing network scheduling algorithms.
Time-Sensitive Networking (TSN) is a set of standards under development by the Time-Sensitive Networking task group of the IEEE 802.1 working group. The TSN task group was formed in November 2012 by renaming the existing Audio Video Bridging Task Group and continuing its work. The name changed as a result of the extension of the working area of the standardization group. The standards define mechanisms for the time-sensitive transmission of data over deterministic Ethernet networks.