Maximum throughput scheduling

Maximum throughput scheduling is a procedure for scheduling data packets in a packet-switched best-effort network, typically a wireless network, with the aim of maximizing the total throughput of the network, or the system spectral efficiency in a wireless network. This is achieved by giving scheduling priority to the least "expensive" data flows in terms of consumed network resources per transferred amount of information.
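
As a rough sketch of the policy (the flow names and per-bit costs below are invented for illustration), a maximum throughput scheduler can be reduced to sorting the backlogged flows by their resource cost per transferred bit and serving the cheapest first:

```python
def schedule(flows):
    """Return flows in service order, cheapest (cost per bit) first."""
    return sorted(flows, key=lambda f: f["cost_per_bit"])

flows = [
    {"id": "A", "cost_per_bit": 0.2},   # good channel: cheap
    {"id": "B", "cost_per_bit": 1.5},   # poor channel: expensive
    {"id": "C", "cost_per_bit": 0.4},
]

for flow in schedule(flows):
    print(flow["id"])   # A, C, B: flow B is served last and may starve
```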

In advanced packet radio systems, for example the HSDPA 3.5G cellular system, channel-dependent scheduling is used instead of FIFO queuing in order to exploit favourable channel conditions and make the best use of the available radio resources. Maximum throughput scheduling may be tempting in this context, especially in simulations where the throughput of various schemes is compared. However, maximum throughput scheduling is normally not desirable, and channel-dependent scheduling should be used with care, as discussed below.

Cost function in wireless packet radio systems

Example 1: Link adaptation

In a wireless network with link adaptation, and without co-channel interference from nearby wireless networks, the bit rate depends heavily on the carrier-to-noise ratio (CNR), which in turn depends on the attenuation on the link between the transmitter and the receiver, i.e. the path loss. For maximum throughput scheduling, links that are affected by low attenuation should be considered inexpensive and should be given scheduling priority.
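
One way to picture this cost function is the sketch below, under idealized assumptions (a Shannon-capacity rate model and an arbitrary 180 kHz channel bandwidth, both invented for the example): the per-bit cost of each link is derived from its CNR, so the low-path-loss link wins the scheduling decision.

```python
import math

def achievable_rate(cnr_db, bandwidth_hz=180e3):
    """Shannon-capacity estimate of the link bit rate for a given CNR (dB)."""
    cnr_linear = 10 ** (cnr_db / 10)
    return bandwidth_hz * math.log2(1 + cnr_linear)

def cost_per_bit(cnr_db):
    """Cheaper links deliver more bits per consumed resource unit."""
    return 1.0 / achievable_rate(cnr_db)

# The link with low path loss (high CNR) is the least expensive
# and is given scheduling priority.
links = {"near_user": 20.0, "cell_edge_user": 3.0}   # CNR in dB
best = min(links, key=lambda u: cost_per_bit(links[u]))
print(best)  # near_user
```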

Example 2: Spread spectrum

In the uplink of a spread spectrum cellular system, the carrier-to-interference ratio (CIR) is held constant by the power control for all users. For a user that suffers from high path loss, the power control will cause a high interference level to signals from other users. Since there is a maximum allowed interference level in the cell, this will block other, more efficient data flows and reduce the throughput. Consequently, for maximum throughput scheduling, data flows that suffer from high path loss should be considered the most expensive in this case as well.
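
The following sketch illustrates the reasoning (the power-control set point and path-loss figures are invented): with the received power held constant at the base station, the required transmit power, and hence the interference inflicted on the rest of the system, grows directly with the path loss.

```python
TARGET_RX_POWER_DBM = -100.0   # assumed power-control set point

def required_tx_power_dbm(path_loss_db):
    """Transmit power needed to hit the receive target through this path loss."""
    return TARGET_RX_POWER_DBM + path_loss_db

# Higher transmit power means more interference leaked into the cell and
# its neighbours, so under maximum throughput scheduling the high-path-loss
# user is the most expensive and is scheduled last.
users = {"near": 90.0, "edge": 130.0}   # path loss in dB
for name, pl in sorted(users.items(), key=lambda kv: kv[1]):
    print(name, required_tx_power_dbm(pl), "dBm")
```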

Example 3: Dynamic channel allocation

In a wireless network with fast dynamic channel allocation (DCA), on a packet-by-packet or slot-by-slot basis, a user situated in the overlap between the coverage areas of several base stations would cause, or be affected by, interference to or from nearby cells. The DCA algorithm would prevent the nearby cells from using the same frequency channel simultaneously. In this case the cost function would correspond to the number of blocked nearby base station sites.
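
A toy version of this cost function (the site names and coverage sets are hypothetical) simply counts how many base-station sites overlap the user's position, i.e. how many sites the DCA algorithm would have to block while the packet is transmitted:

```python
def dca_cost(user, coverage):
    """Count the sites whose coverage area contains the user."""
    return sum(1 for site_area in coverage.values() if user in site_area)

coverage = {
    "site_1": {"u1", "u2"},
    "site_2": {"u2", "u3"},   # u2 sits in the overlap of sites 1 and 2
    "site_3": {"u3"},
}
# u1 blocks one site, u2 blocks two; u1 is the cheaper user to schedule.
print(dca_cost("u1", coverage), dca_cost("u2", coverage))  # 1 2
```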

Comparison with other resource sharing policies

If there are large differences between the "costs" of the data flows, which is the case especially in wireless networking, resources may be assigned to only one or very few data flows per physical channel in the network. If there are many simultaneously active data flows, the majority of the data flows will have to wait until the least expensive flows have no more data to transfer, and they will suffer from scheduling starvation.

A maximum throughput scheduling policy may be tempting since it would optimize the resource utilization in a given network, but it would be unlikely to maximize profit for the network operator. Customer satisfaction would remain low, since many customers would experience long or permanent service outages.

Proportional fairness would result in lower throughput, but starvation would be avoided.

Max-min fairness would result in even lower throughput, but a higher level of fairness, meaning that the service quality each data flow achieves would be even more stable.

Unlike max-min fair scheduling based on the fair queuing or round robin algorithms, a maximum throughput scheduling algorithm relies on the calculation of a cost function, which in wireless networks may require fast and truthful measurement of the path loss. Proportional fairness based on weighted fair queuing also requires measurement or calculation of the cost function.
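
The contrast between these policies can be made concrete with a toy simulation, sketched below under strong simplifications (two flows with constant, known bit rates, and an exponentially smoothed throughput average for the proportional-fair priority). Maximum throughput scheduling starves the poor-channel flow entirely, while proportional fairness and round robin both keep serving it:

```python
rates = {"good": 10.0, "bad": 1.0}          # assumed constant per-slot bit rates

def max_throughput(avg):
    # Always serve the flow with the highest rate, i.e. the lowest cost per bit.
    return max(rates, key=rates.get)

def proportional_fair(avg):
    # Priority = instantaneous rate / smoothed average throughput.
    return max(rates, key=lambda f: rates[f] / max(avg[f], 1e-9))

def round_robin_factory():
    order = ["good", "bad"]
    def pick(avg):
        order.append(order.pop(0))          # rotate, ignoring channel state
        return order[-1]
    return pick

def simulate(pick, slots=1000, beta=0.01):
    avg = {f: 0.0 for f in rates}           # exponentially smoothed throughput
    served = {f: 0 for f in rates}
    for _ in range(slots):
        f = pick(avg)
        served[f] += 1
        for g in rates:
            avg[g] = (1 - beta) * avg[g] + beta * (rates[g] if g == f else 0.0)
    return served

print(simulate(max_throughput))             # {'good': 1000, 'bad': 0}: starvation
print(simulate(proportional_fair))          # both flows served; with constant
                                            # rates PF tends to equal time shares
print(simulate(round_robin_factory()))      # equal slot shares by construction
```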

Related Research Articles

Asynchronous Transfer Mode: digital telecommunications protocol for voice, video, and data

Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by ANSI and ITU-T for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM provides functionality that uses features of circuit switching and packet switching networks by using asynchronous time-division multiplexing.

Network throughput refers to the rate of successful message delivery over a communication channel, such as Ethernet or packet radio, in a communication network. The data that these messages contain may be delivered over physical or logical links, or through network nodes. Throughput is usually measured in bits per second, and sometimes in data packets per second or data packets per time slot.

Communication channel: physical or logical connection used for transmission of information

A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.

In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.

Round-robin scheduling: algorithm employed by process and network schedulers in computing

Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free. Originally an operating system concept, round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.

Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

A traffic generation model is a stochastic model of the traffic flows or data sources in a communication network, for example a cellular network or a computer network. A packet generation model is a traffic generation model of the packet flows or data sources in a packet-switched network. For example, a web traffic model is a model of the data that is sent or received by a user's web browser. These models are useful during the development of telecommunication technologies, for analysing the performance and capacity of various protocols, algorithms and network topologies.

Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
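
As a hedged illustration of the AIMD component only (a toy trace, not the behaviour of any particular operating system's TCP stack), the congestion window grows by one segment per round trip and is halved whenever a loss is detected:

```python
def aimd_trace(loss_rtts, rtts=20, cwnd=1.0):
    """Congestion window per round trip under a plain AIMD rule."""
    trace = []
    for t in range(rtts):
        if t in loss_rtts:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on loss
        else:
            cwnd += 1.0                 # additive increase per round trip
        trace.append(cwnd)
    return trace

# Losses at round trips 10 and 15 produce the familiar sawtooth shape.
print(aimd_trace(loss_rtts={10, 15}))
```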

TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well.

Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is either caused by errors in data transmission, typically across wireless networks, or network congestion. Packet loss is measured as a percentage of packets lost with respect to packets sent.

Proportional-fair scheduling is a compromise-based scheduling algorithm. It is based upon maintaining a balance between two competing interests: trying to maximize the total throughput of the network while at the same time allowing all users at least a minimal level of service. This is done by assigning each data flow a data rate or a scheduling priority that is inversely proportional to its anticipated resource consumption.

Fairness measures or metrics are used in network engineering to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness.

Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes.

In communication networks, multiplexing and the division of scarce resources, max-min fairness is said to be achieved by an allocation if and only if the allocation is feasible and an attempt to increase the allocation of any participant necessarily results in the decrease in the allocation of some other participant with an equal or smaller allocation.
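
The definition can be made operational with the classic progressive-filling idea, sketched below for a single shared resource (the demands and capacity are made-up figures): everyone's allocation is raised together, freezing a flow once its demand is met, until the capacity is exhausted.

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation of one resource by progressive filling."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)      # raise all active flows equally
        for f in sorted(active, key=demands.get):
            give = min(share, demands[f] - alloc[f])
            alloc[f] += give
            remaining -= give
            if alloc[f] >= demands[f]:       # demand met: freeze this flow
                active.discard(f)
    return alloc

print(max_min_fair({"a": 1, "b": 4, "c": 10}, capacity=9))
# {'a': 1.0, 'b': 4.0, 'c': 4.0}: no flow can gain without taking from a
# flow that already has an equal or smaller allocation
```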

In radio resource management for wireless and cellular networks, channel allocation schemes allocate bandwidth and communication channels to base stations, access points and terminal equipment. The objective is to achieve maximum system spectral efficiency in bit/s/Hz/site by means of frequency reuse, but still assure a certain grade of service by avoiding co-channel interference and adjacent channel interference among nearby cells or networks that share the bandwidth.

Weighted fair queueing (WFQ) is a network scheduling algorithm. WFQ is both a packet-based implementation of the generalized processor sharing (GPS) policy, and a natural extension of fair queuing (FQ). Whereas FQ shares the link's capacity in equal subparts, WFQ allows schedulers to specify, for each flow, which fraction of the capacity will be given.
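
As a minimal numeric illustration (the flow names, weights, and the 10 Mbit/s link are invented), the GPS ideal that WFQ approximates hands each backlogged flow a capacity share proportional to its weight:

```python
def gps_shares(weights, capacity_bps):
    """Capacity share per flow under idealized generalized processor sharing."""
    total = sum(weights.values())
    return {f: capacity_bps * w / total for f, w in weights.items()}

print(gps_shares({"voice": 4, "video": 5, "data": 1}, capacity_bps=10e6))
# voice 4 Mbit/s, video 5 Mbit/s, data 1 Mbit/s
```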

Radio resource management (RRM) is the system level management of co-channel interference, radio resources, and other radio transmission characteristics in wireless communication systems, for example cellular networks, wireless local area networks, wireless sensor systems, and radio broadcasting networks. RRM involves strategies and algorithms for controlling parameters such as transmit power, user allocation, beamforming, data rates, handover criteria, modulation scheme, error coding scheme, etc. The objective is to utilize the limited radio-frequency spectrum resources and radio network infrastructure as efficiently as possible.

Bufferbloat is high latency and jitter in packet-switched networks caused by excessive buffering of packets. Bufferbloat can also cause packet delay variation, as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), audio streaming, online gaming, and even ordinary web browsing.

In queueing theory, a discipline within the mathematical theory of probability, the backpressure routing algorithm is a method for directing traffic around a queueing network that achieves maximum network throughput, which is established using concepts of Lyapunov drift. Backpressure routing considers the situation where each job can visit multiple service nodes in the network. It is an extension of max-weight scheduling where each job visits only a single service node.
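
A minimal flavour of the max-weight decision at a single node might look as follows (the queue levels and rates are illustrative; real backpressure routing operates across the whole network and also chooses next hops): each slot, serve the commodity whose queue-backlog differential times its service rate is largest.

```python
def backpressure_pick(backlog_here, backlog_next, rate):
    """Pick the commodity with the largest backlog differential times rate."""
    def weight(c):
        return max(backlog_here[c] - backlog_next[c], 0) * rate[c]
    return max(backlog_here, key=weight)

print(backpressure_pick(
    backlog_here={"x": 8, "y": 5},
    backlog_next={"x": 6, "y": 0},
    rate={"x": 1.0, "y": 2.0},
))  # 'y': differential 5 * rate 2 beats differential 2 * rate 1
```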

Deterministic Networking (DetNet) is an effort by the IETF DetNet Working Group to study implementation of deterministic data paths for real-time applications with extremely low data loss rates, packet delay variation (jitter), and bounded latency, such as audio and video streaming, industrial automation, and vehicle control.