The hierarchical fair-service curve (HFSC) is a network scheduling algorithm proposed by Ion Stoica, Hui Zhang and T. S. Eugene Ng of Carnegie Mellon University at SIGCOMM 1997. [1] [2]
The authors summarize their contribution as follows: "In this paper, we propose a scheduling algorithm that, to the best of our knowledge, is the first that can simultaneously support (a) hierarchical link-sharing service, (b) guaranteed real-time service with provably tight delay bounds, and (c) decoupled delay and bandwidth allocation (which subsumes priority scheduling). This is achieved by defining and incorporating a fairness property, which is essential for link-sharing, into service-curve-based schedulers, which can decouple the allocation of bandwidth and delay. We call the hierarchical version of the resulting algorithm the Hierarchical Fair Service Curve (H-FSC) algorithm. We analyze the performance of H-FSC and present simulation results to demonstrate its advantages over previously proposed algorithms such as H-PFQ and CBQ. Preliminary experimental results based on a prototype implementation in NetBSD are also presented."
It combines the hierarchical link-sharing approach of class-based queuing (CBQ) with service-curve-based quality-of-service (QoS) guarantees. An implementation of HFSC is available in all operating systems based on the Linux kernel, [3] such as OpenWrt, [4] and also in DD-WRT, NetBSD 5.0, FreeBSD 8.0 and OpenBSD 4.6.
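The central abstraction is the service curve: each class is guaranteed a minimum amount of service as a piecewise-linear function of time, typically specified by an initial slope m1 held for a duration d, followed by a long-term slope m2. The following is a minimal, illustrative Python sketch of that two-piece curve; the class and field names are our own, not taken from the paper or any implementation.

```python
# Minimal sketch of the two-piece linear service curve HFSC
# assigns to each class; names are illustrative, not from the
# paper or any implementation.
from dataclasses import dataclass

@dataclass
class ServiceCurve:
    m1: float  # initial slope, bits/s (controls delay)
    d: float   # duration of the initial slope, seconds
    m2: float  # long-term slope, bits/s (controls bandwidth)

    def guaranteed_service(self, t: float) -> float:
        """Minimum cumulative service (bits) guaranteed by time t."""
        if t <= self.d:
            return self.m1 * t
        return self.m1 * self.d + self.m2 * (t - self.d)

# A concave curve (m1 > m2) front-loads service, giving a class
# low delay without reserving a high long-term rate; this is how
# delay and bandwidth allocation are decoupled.
voip = ServiceCurve(m1=400_000, d=0.02, m2=64_000)
print(voip.guaranteed_service(0.01))  # 4000.0 bits in the first 10 ms
```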
XFS is a high-performance 64-bit journaling file system created by Silicon Graphics, Inc. (SGI) in 1993. It was the default file system in SGI's IRIX operating system starting with version 5.3. XFS was ported to the Linux kernel in 2001; as of June 2014, XFS is supported by most Linux distributions, and Red Hat Enterprise Linux uses it as its default file system.
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.
Network congestion in data networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying more data than it can handle. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads only to a small increase in network throughput, or even to a decrease.
Deficit Round Robin (DRR), also Deficit Weighted Round Robin (DWRR), is a scheduling algorithm for the network scheduler. DRR is, like weighted fair queuing (WFQ), a packet-based implementation of the ideal Generalized Processor Sharing (GPS) policy. It was proposed by M. Shreedhar and G. Varghese in 1995 as an efficient and fair algorithm.
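As a rough illustration of the mechanism, the sketch below (simplified: a fixed quantum, packet sizes in bytes, no arrivals during the run) grows each flow's deficit counter by one quantum per round and lets the flow send while its head packet fits within the deficit.

```python
# Simplified Deficit Round Robin dequeue loop: per-flow deficit
# counters grow by one quantum per round; a flow may send as long
# as the head packet fits in its deficit. Packet sizes in bytes.
from collections import deque

def drr(flows: list[deque], quantum: int = 1500):
    """Yield (flow index, packet size) in DRR service order."""
    deficits = [0] * len(flows)
    while any(flows):
        for i, q in enumerate(flows):
            if not q:
                deficits[i] = 0  # an empty flow keeps no credit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                deficits[i] -= q[0]
                yield i, q.popleft()

# Two flows with equal quanta share the link fairly even though
# their packet sizes differ.
flows = [deque([1200, 1200]), deque([300, 300, 300, 300])]
print(list(drr(flows)))
```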
Class-based queuing (CBQ) is a queuing discipline for the network scheduler that allows traffic to share bandwidth equally, after being grouped by classes. The classes can be based upon a variety of parameters, such as priority, interface, or originating program.
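A hedged sketch of the classification step, using hypothetical packet fields, might look like the following; the resulting classes can then share the link, for example with a round-robin scheme like the one above.

```python
# Hedged sketch of CBQ-style classification with hypothetical
# packet fields; real implementations classify on headers,
# interfaces, marks, etc.
def classify(packet: dict) -> str:
    """Map a packet to a traffic class by simple rules."""
    if packet.get("priority", 0) >= 6:
        return "interactive"
    if packet.get("program") == "backup":
        return "bulk"
    if packet.get("interface") == "wan0":
        return "external"
    return "default"

print(classify({"priority": 7}))        # interactive
print(classify({"program": "backup"}))  # bulk
```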
ALTQ is the network scheduler for Berkeley Software Distribution. ALTQ provides queueing disciplines, and other components related to quality of service (QoS), required to realize resource sharing. It is most commonly implemented on BSD-based routers. ALTQ is included in the base distribution of FreeBSD, NetBSD, and DragonFly BSD, and was integrated into the pf packet filter of OpenBSD but later replaced by a new queueing subsystem.
Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet.
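A toy model of the AIMD window update, with illustrative constants, is sketched below; real stacks add fast retransmit, SACK, pacing, and many other refinements.

```python
# Toy AIMD window update: slow start below ssthresh, additive
# increase above it, multiplicative decrease on loss.
def on_ack(cwnd: float, ssthresh: float) -> float:
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: +1 MSS per ACK
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~+1 MSS per RTT

def on_loss(cwnd: float) -> tuple[float, float]:
    ssthresh = max(cwnd / 2.0, 2.0)  # multiplicative decrease
    return ssthresh, ssthresh        # fast-recovery style restart
                                     # (a timeout would reset cwnd to 1)

cwnd, ssthresh = 1.0, 64.0
for _ in range(10):                  # window doubles roughly per RTT
    cwnd = on_ack(cwnd, ssthresh)
print(cwnd)
```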
TCP Vegas is a TCP congestion avoidance algorithm that emphasizes packet delay, rather than packet loss, as a signal to help determine the rate at which to send packets. It was developed at the University of Arizona by Lawrence Brakmo and Larry L. Peterson and introduced in 1994.
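A sketch of the Vegas idea: compare the expected throughput (assuming no queuing) with the actual throughput, and keep the estimated number of queued packets between two thresholds, commonly called alpha and beta. The threshold values here are illustrative.

```python
# Sketch of TCP Vegas's delay-based adjustment: keep the
# estimated number of packets queued in the network between
# alpha and beta (thresholds in packets, values illustrative).
def vegas_update(cwnd: float, base_rtt: float, rtt: float,
                 alpha: float = 2.0, beta: float = 4.0) -> float:
    expected = cwnd / base_rtt               # packets/s with no queuing
    actual = cwnd / rtt                      # measured packets/s
    queued = (expected - actual) * base_rtt  # packets sitting in queues
    if queued < alpha:
        return cwnd + 1.0   # path underused: grow
    if queued > beta:
        return cwnd - 1.0   # delay rising: back off before loss
    return cwnd

print(vegas_update(cwnd=20, base_rtt=0.100, rtt=0.150))  # backs off to 19.0
```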
Scott J. Shenker is an American computer scientist, and professor of computer science at UC Berkeley. He is also the leader of the Extensible Internet Group at the International Computer Science Institute in Berkeley, California.
Argus, the Audit Record Generation and Utilization System, is the first implementation of network flow monitoring and an ongoing open-source network flow monitoring project. Started by Carter Bullard in 1984 at Georgia Tech and developed for cyber security at Carnegie Mellon University in the early 1990s, Argus has been an important contributor to Internet cyber-security technology over its 30-year history.
OpenVZ is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.
H-TCP is an implementation of TCP with a congestion control algorithm optimized for high-speed networks with high latency. It was created by researchers at the Hamilton Institute in Ireland.
Weighted fair queueing (WFQ) is a network scheduling algorithm. WFQ is both a packet-based implementation of the generalized processor sharing (GPS) policy, and a natural extension of fair queuing (FQ). Whereas FQ shares the link's capacity in equal subparts, WFQ allows schedulers to specify, for each flow, which fraction of the capacity will be given.
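The usual realization assigns each packet a virtual finish time, its size divided by the flow's weight beyond the flow's previous finish time, and transmits the packet with the smallest finish time first. Below is a simplified sketch that assumes all flows stay backlogged, so the full virtual-clock bookkeeping can be elided.

```python
# Simplified WFQ ordering: each flow's packets finish at
# last_finish + size/weight on a virtual clock; the scheduler
# sends the smallest finish time first. Assumes all flows are
# continuously backlogged.
import heapq

def wfq_order(packets):
    """packets: list of (flow_id, size, weight) in arrival order.
    Returns (flow_id, size) pairs in service order."""
    last_finish: dict = {}
    heap = []
    for seq, (flow, size, weight) in enumerate(packets):
        finish = last_finish.get(flow, 0.0) + size / weight
        last_finish[flow] = finish
        heapq.heappush(heap, (finish, seq, flow, size))
    return [(flow, size) for _, _, flow, size in
            (heapq.heappop(heap) for _ in range(len(heap)))]

# Flow "a" has twice the weight of flow "b", so it receives
# roughly twice the capacity while both are backlogged.
print(wfq_order([("a", 100, 2.0), ("b", 100, 1.0), ("a", 100, 2.0)]))
```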
Blue is a scheduling discipline for the network scheduler developed in 1999 by graduate student Wu-chang Feng, working with Professor Kang G. Shin at the University of Michigan and with others at IBM's Thomas J. Watson Research Center.
The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management (SLURM), or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.
Ion Stoica is a Romanian-American computer scientist specializing in distributed systems, cloud computing and computer networking. He is a professor of computer science at the University of California, Berkeley and co-director of AMPLab. He co-founded Conviva and Databricks with other original developers of Apache Spark.
CoDel is an active queue management (AQM) algorithm in network routing, developed by Van Jacobson and Kathleen Nichols and published as RFC 8289. It is designed to overcome bufferbloat in networking hardware, such as routers, by setting limits on the delay network packets experience as they pass through buffers in this equipment. CoDel aims to improve on the overall performance of the random early detection (RED) algorithm by addressing some of its fundamental misconceptions, as perceived by Jacobson, and by being easier to manage.
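A highly simplified sketch of the control law described in RFC 8289 follows: if the time packets spend in the queue (sojourn time) stays above a target for a full interval, the queue starts dropping, with drop spacing shrinking as interval divided by the square root of the drop count. Real implementations track more state, such as re-entry into the dropping state.

```python
# Highly simplified CoDel control law, per the description in
# RFC 8289; constants are the commonly cited defaults.
from math import sqrt

TARGET = 0.005    # 5 ms acceptable standing queue delay
INTERVAL = 0.100  # 100 ms observation window

class CoDelState:
    def __init__(self):
        self.first_above = None  # when sojourn first exceeded TARGET
        self.dropping = False
        self.count = 0           # drops in the current dropping state
        self.next_drop = 0.0

    def should_drop(self, sojourn: float, now: float) -> bool:
        """Called at dequeue with the packet's queue sojourn time."""
        if sojourn < TARGET:
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now
        if not self.dropping and now - self.first_above >= INTERVAL:
            self.dropping = True
            self.count = 1
            self.next_drop = now + INTERVAL / sqrt(self.count)
            return True
        if self.dropping and now >= self.next_drop:
            self.count += 1
            self.next_drop += INTERVAL / sqrt(self.count)
            return True
        return False
```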
A network scheduler, also called packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. There are several network schedulers available for the different operating systems, that implement many of the existing network scheduling algorithms.
Open vSwitch, sometimes abbreviated as OVS, is an open-source implementation of a distributed virtual multilayer switch. The main purpose of Open vSwitch is to provide a switching stack for hardware virtualization environments, while supporting multiple protocols and standards used in computer networks.
Dave Täht is an American computer scientist, musician, lecturer, asteroid exploration advocate, and Internet activist. He is the CEO of TekLibre, LLC.