Leaky bucket

The leaky bucket analogy. Water can be added intermittently to the bucket, which leaks out at a constant rate until empty, and will also overflow when full.

The leaky bucket is an algorithm based on an analogy of how a bucket with a constant leak will overflow if either the average rate at which water is poured in exceeds the rate at which the bucket leaks or if more water than the capacity of the bucket is poured in all at once. It can be used to determine whether some sequence of discrete events conforms to defined limits on their average and peak rates or frequencies, e.g. to limit the actions associated with these events to these rates or delay them until they do conform to the rates. It may also be used to check conformance or limit to an average rate alone, i.e. remove any variation from the average.

It is used in packet-switched computer networks and telecommunications networks in the traffic policing, traffic shaping and scheduling of data transmissions, in the form of packets, [note 1] to defined limits on bandwidth and burstiness (a measure of the variations in the traffic flow).

A version of the leaky bucket, the generic cell rate algorithm, is recommended for Asynchronous Transfer Mode (ATM) networks [1] in usage parameter control (UPC) and network parameter control (NPC) at user–network interfaces or inter-network/network-to-network interfaces, to protect a network from excessive traffic levels on connections routed through it. The generic cell rate algorithm, or an equivalent, may also be used to shape transmissions by a network interface card onto an ATM network.

At least some implementations of the leaky bucket are a mirror image of the token bucket algorithm and will, given equivalent parameters, determine exactly the same sequence of events to conform or not conform to the same limits.

Overview

Two different methods of applying this leaky bucket analogy are described in the literature. [2] [3] [4] [5] These give what appear to be two different algorithms, both of which are referred to as the leaky bucket algorithm and generally without reference to the other method. This has resulted in confusion about what the leaky bucket algorithm is and what its properties are.

In one version the bucket is a counter or variable separate from the flow of traffic or schedule of events. [2] [4] [5] This counter is used only to check that the traffic or events conform to the limits: The counter is incremented as each packet arrives at the point where the check is being made or an event occurs, which is equivalent to the way water is added intermittently to the bucket. The counter is also decremented at a fixed rate, equivalent to the way the water leaks out of the bucket. As a result, the value in the counter represents the level of the water in the bucket. If the counter remains below a specified limit value when a packet arrives or an event occurs, i.e. the bucket does not overflow, that indicates its conformance to the bandwidth and burstiness limits or the average and peak rate event limits. This version is referred to here as the leaky bucket as a meter.
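
As an illustration of this meter version, the following Python sketch keeps a water-level counter that drains at a fixed rate and is topped up by each conforming packet. It is an example only, not taken from the cited sources; the names (LeakyBucketMeter, leak_rate, capacity) are invented for the illustration, and amounts are in arbitrary units of "water".

```python
import time

class LeakyBucketMeter:
    """Leaky bucket as a meter: a counter that fills on arrivals and drains at a fixed rate."""

    def __init__(self, leak_rate, capacity):
        self.leak_rate = leak_rate          # units of water drained per second
        self.capacity = capacity            # bucket depth; overflowing means non-conformance
        self.level = 0.0                    # current water level (the counter)
        self.last_update = time.monotonic()

    def _leak(self):
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last check, never below empty.
        self.level = max(0.0, self.level - (now - self.last_update) * self.leak_rate)
        self.last_update = now

    def conforms(self, amount=1.0):
        """Add `amount` of water and return True if the packet/event conforms;
        otherwise return False and leave the bucket unchanged."""
        self._leak()
        if self.level + amount <= self.capacity:
            self.level += amount
            return True
        return False
```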

In the second version, the bucket is a queue in the flow of traffic. [3] This queue is used to directly control that flow: Packets are entered into the queue as they arrive, equivalent to the water being added to the bucket. These packets are then removed from the queue (first come, first served), usually at a fixed rate, e.g. for onward transmission, equivalent to water leaking from the bucket. This configuration imposes conformance rather than checking it, and where the output is serviced at a fixed rate (and where the packets are all the same length), the resulting traffic stream is necessarily devoid of burstiness or jitter. So in this version, the traffic itself is the analogue of the water passing through the bucket. This version is referred to here as leaky bucket as a queue.

The leaky bucket as a meter is exactly equivalent to (a mirror image of) the token bucket algorithm, i.e. the process of adding water to the leaky bucket exactly mirrors that of removing tokens from the token bucket when a conforming packet arrives, the process of leaking of water from the leaky bucket exactly mirrors that of regularly adding tokens to the token bucket, and the test that the leaky bucket will not overflow is a mirror of the test that the token bucket contains enough tokens and will not underflow. Thus, given equivalent parameters, the two algorithms will see the same traffic as conforming or nonconforming. The leaky bucket as a queue can be seen as a special case of the leaky bucket as a meter. [6]

As a meter

Traffic policing with a leaky bucket as a meter

Jonathan S. Turner is credited [7] with the original description of the leaky bucket algorithm and describes it as follows: "A counter associated with each user transmitting on a connection is incremented whenever the user sends a packet and is decremented periodically. If the counter exceeds a threshold upon being incremented, the network discards the packet. The user specifies the rate at which the counter is decremented (this determines the average bandwidth) and the value of the threshold (a measure of burstiness)". [2] The bucket (analogous to the counter) is, in this case, used as a meter to test the conformance of packets, rather than as a queue to directly control them.

Another description of what is essentially the same meter version of the algorithm, the generic cell rate algorithm, is given by the ITU-T in Recommendation I.371 and in the ATM Forum's UNI Specification. [4] [5] The description, in which the term cell is equivalent to packet in Turner's description, [2] is given by the ITU-T as follows: "The continuous-state leaky bucket can be viewed as a finite capacity bucket whose real-valued content drains out at a continuous rate of 1 unit of content per time unit and whose content is increased by the increment T for each conforming cell... If at a cell arrival the content of the bucket is less than or equal to the limit value τ, then the cell is conforming; otherwise, the cell is non-conforming. The capacity of the bucket (the upper bound of the counter) is (T + τ)". [5] These specifications also state that, due to its finite capacity, if the contents of the bucket at the time conformance is tested is greater than the limit value, and hence the cell is non-conforming, then the bucket is left unchanged; that is, the water is simply not added if it would make the bucket overflow.
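
The ITU-T's continuous-state description translates almost directly into code. The sketch below is illustrative only (the class name and the use of an explicit arrival-time parameter are choices made for the example); it follows the rule above that a non-conforming cell leaves the bucket state unchanged.

```python
class ContinuousStateLeakyBucket:
    """Sketch of the continuous-state leaky bucket described above: the content X
    drains at 1 unit per time unit, each conforming cell adds T, and a cell
    conforms if the content is less than or equal to tau when it arrives."""

    def __init__(self, T, tau):
        self.T = T         # increment per conforming cell (the emission interval)
        self.tau = tau     # limit value
        self.X = 0.0       # bucket content
        self.LCT = None    # arrival time of the last conforming cell

    def cell_arrival(self, ta):
        if self.LCT is None:
            x_prime = 0.0  # first cell: the bucket starts empty
        else:
            # Content after draining for the time since the last conforming cell.
            x_prime = max(0.0, self.X - (ta - self.LCT))
        if x_prime > self.tau:
            return False   # non-conforming: bucket state is left unchanged
        self.X = x_prime + self.T
        self.LCT = ta
        return True
```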

David E. McDysan and Darrel L. Spohn provide a commentary on the description given by the ITU-T/ATM Forum. In this, they state, "In the leaky bucket analogy, the [ATM] cells do not actually flow through the bucket; only the check for conforming admission does". [6] However, uncommonly in the descriptions in the literature, McDysan and Spohn also refer to the leaky bucket algorithm as a queue, going on, "Note that one implementation of traffic shaping is to actually have the cells flow through the bucket". [6]

Traffic shaping with a leaky bucket as a meter

In describing the operation of the ITU-T's version of the algorithm, McDysan and Spohn invoke a "notion commonly employed in queueing theory of a fictional gremlin". [6] This gremlin inspects the level in the bucket and takes action if the level is above the limit value τ: in traffic policing, it pulls open a trap door, which causes the arriving packet to be dropped and stops its water from entering the bucket; in traffic shaping, it pushes up a flap, which delays the arriving packet and prevents it from delivering its water, until the water level in the bucket falls below τ.
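
In code, the difference between policing and shaping with the same meter amounts to what is done with a packet whose water would overflow the bucket: policing drops it, shaping holds it back. The following sketch (invented function and parameter names, and assuming a packet's amount of water never exceeds the bucket capacity) computes the delay a packet must suffer before it conforms.

```python
def shaping_delay(level, last_update, arrival_time, leak_rate, capacity, amount):
    """Return (delay, new_level, new_update_time) for an arriving packet:
    delay is 0 if the packet already conforms, otherwise the time it must be
    held back until its water can be added without overflowing the bucket."""
    # Drain the bucket for the time elapsed since the state was last updated.
    level = max(0.0, level - (arrival_time - last_update) * leak_rate)
    if level + amount <= capacity:
        delay = 0.0
        level += amount
    else:
        # Hold the packet until enough water has leaked for it to fit.
        delay = (level + amount - capacity) / leak_rate
        level = capacity  # the bucket is exactly full when the packet is released
    return delay, level, arrival_time + delay
```

A policing function would instead discard the packet (and leave the bucket unchanged) whenever the computed delay is non-zero.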

The difference between the descriptions given by Turner and the ITU-T/ATM Forum is that Turner's is specific to traffic policing, whereas the ITU-T/ATM Forum description is applicable to both traffic policing and traffic shaping. Also, Turner does not state that the contents of the counter should only be affected by conforming packets, and should only be incremented when this would not cause it to exceed a limit, i.e. Turner does not explicitly state that the bucket's capacity or counter's maximum value is finite. [note 2]

Concept of operation

The concept of operation of the leaky bucket algorithm as a meter, which may be used in either traffic policing or traffic shaping, can be described as follows:

  • A fixed capacity bucket, associated with each virtual connection or user, leaks at a fixed rate.
  • If the bucket is empty, it stops leaking.
  • For a packet to conform, it has to be possible to add a specific amount of water to the bucket: The specific amount added by a conforming packet can be the same for all packets, or can be proportional to the length of the packet.
  • If this amount of water would cause the bucket to exceed its capacity then the packet does not conform and the water in the bucket is left unchanged.

Uses

The leaky bucket as a meter can be used in either traffic shaping or traffic policing. For example, in ATM networks, in the form of the generic cell rate algorithm, it is used to compare the bandwidth and burstiness of traffic on a virtual channel (VC) or virtual path (VP) against the specified limits on the rate at which cells may arrive and the maximum jitter, or variation in inter-arrival intervals, for the VC or VP. In traffic policing, cells that do not conform to these limits (nonconforming cells) may be dropped or may be reduced in priority for downstream traffic management functions to drop if there is congestion. In traffic shaping, cells are delayed until they conform. Traffic policing and traffic shaping are commonly used in UPC and NPC to protect the network against excess or excessively bursty traffic. (See bandwidth management and congestion avoidance.) Traffic shaping is commonly used in the network interfaces in hosts to prevent transmissions from exceeding the bandwidth or jitter limits and being discarded by traffic management functions in the network. (See scheduling (computing) and network scheduler.)

The leaky bucket algorithm as a meter can also be used in a leaky bucket counter to measure the rate of random (stochastic) processes. A leaky bucket counter can be used to indicate, by its overflowing, when the average or peak rate of events increases above some acceptable background level. [8] For example, such a leaky bucket counter can be used to detect when there is a sudden burst of correctable memory errors or when there has been a gradual, but significant, increase in the average rate, which may indicate an impending correction failure. [9]

The use of the leaky bucket algorithm in a leaky bucket counter is similar to that in traffic management, in that it is used as a meter. Essentially, the events replace the packets in the description, with each event causing a quantity of water to be added to the bucket. If the bucket would overflow, as a result of the event, then the event should trigger the action associated with an out-of-limits event. Some implementations [8] seem to parallel Turner's description, [2] in that there is no explicit limit on the maximum value that the counter may take, implying that once the counter has exceeded the threshold, it may not return to its previous state until a period significantly greater than the equivalent of the emission interval has passed, which may be increased by what would otherwise be conforming events. However, other implementations may not increment the counter while it is overflowed, allowing it to correctly determine whether the following events conform or not.
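
A leaky bucket counter of the second kind described above (one that does not count events while it is full, so that later events are still judged correctly) might be sketched as follows; the names and the choice of a periodic integer decrement are illustrative, not taken from the cited sources.

```python
import time

class LeakyBucketCounter:
    """Leaky bucket counter for monitoring an event rate, e.g. correctable memory
    errors: each event adds one to the counter, the counter is decremented at a
    fixed rate, and an event that would overflow the bucket raises an alarm."""

    def __init__(self, threshold, decrement_interval):
        self.threshold = threshold                    # bucket depth
        self.decrement_interval = decrement_interval  # seconds per unit leaked
        self.count = 0
        self.last_decrement = time.monotonic()

    def record_event(self):
        """Return True if this event is out of limits (the bucket would overflow)."""
        now = time.monotonic()
        # Leak: decrement once per elapsed interval, never below zero.
        leaked = int((now - self.last_decrement) / self.decrement_interval)
        if leaked:
            self.count = max(0, self.count - leaked)
            self.last_decrement += leaked * self.decrement_interval
        if self.count + 1 > self.threshold:
            return True        # out-of-limits: the counter is not incremented
        self.count += 1
        return False
```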

Parameters

In the case of the leaky bucket algorithm as a meter, the limits on the traffic can be a bandwidth and a burstiness of the output. [4] [5] [note 3] The bandwidth limit and burstiness limit for the connection may be specified in a traffic contract. A bandwidth limit may be specified as a packet rate, a bit rate, or as an emission interval between the packets. A limit on burstiness may be specified as a delay variation tolerance, or as a maximum burst size (MBS).

Multiple sets of contract parameters can be applied concurrently to a connection using multiple instances of the leaky bucket algorithm, each of which may take a bandwidth and a burstiness limit: see Generic cell rate algorithm § Dual Leaky Bucket Controller.

Emission interval

The rate at which the bucket leaks will determine the bandwidth limit, which is referred to as the average rate by Turner [2] and the inverse of which is referred to as the emission interval by the ITU-T. It is easiest to explain what this interval is where packets have a fixed length. Hence, the first part of this description assumes this, and the implications of variable packet lengths are considered separately.

Consider a bucket that is exactly filled to the top by preceding traffic, i.e. when the maximum permitted burstiness has already occurred, i.e. the maximum number of packets or cells have just arrived in the minimum amount of time for them to still conform to the bandwidth and jitter limits. The minimum interval before the next packet can conform is then the time it takes for the bucket to leak exactly the amount of water delivered by a packet, and if a packet is tested and conforms at that time, this will exactly fill the bucket once more. Thus, once the bucket is filled, the maximum rate that packets can conform is with this interval between each packet.

Turner [2] refers to this rate as the average, implying that its inverse is the average interval. There is, however, some ambiguity in what the average rate and interval are. Since packets can arrive at any lower rate, this is an upper bound, rather than a fixed value, so it could at best be called the maximum for the average rate. Also, during the time the maximum burstiness occurs, packets can arrive at smaller intervals and thus at a higher rate than this. So, for any period less than infinity, the actual average rate can be (but is not necessarily) greater than this and the average interval can be (but is not necessarily) less than the emission interval. Hence, because of this ambiguity, the term emission interval is used hereafter. However, it is still true that the minimum value that the long-term average interval can take tends to be the emission interval.

For variable-length packets, where the amount added to the bucket is proportional to the packet length, the maximum rate at which they can conform varies according to their length: the amount that the bucket must have leaked from full for a packet to conform is the amount the packet will add, and if this is proportional to the packet length, so is the interval between it and the preceding packet that filled the bucket. Hence, it is not possible to specify a specific emission interval for variable-length packets, and the bandwidth limit has to be specified explicitly, in bits or bytes per second.

Delay variation tolerance

It is easiest to explain delay variation tolerance for the case where packets have a fixed length. Hence, the first part of this description assumes this, and the implications of variable packet lengths are considered separately.

The ITU-T defines a limit value, τ, that is less than the capacity of the bucket by T (the amount by which the bucket contents is incremented for each conforming cell), so that the capacity of the bucket is T + τ. This limit value specifies how much earlier a packet can arrive than it would normally be expected if the packets were arriving with exactly the emission interval between them.

Imagine the following situation: A bucket leaks at 1 unit of water per second, so the limit value, τ, and the amount of water added by a packet, T, are effectively in units of seconds. This bucket starts off empty, so when a packet arrives at the bucket, it does not quite fill the bucket by adding its water T, and the bucket is now τ below its capacity. So when the next packet arrives the bucket only has to have drained by T – τ for this to conform. So the interval between these two packets can be as much as τ less than T.

This extends to multiple packets in a sequence: Imagine the following: The bucket again starts off empty, so the first packet to arrive clearly conforms. The bucket then becomes exactly full after a number of conforming packets, N, have arrived in the minimum possible time for them to conform. For the last (the Nth) packet to conform, the bucket must have leaked enough of the water from the preceding N – 1 packets ((N – 1) × T seconds' worth) for it to be exactly at the limit value τ at this time. Hence, the water leaked is (N – 1) × T – τ, which, because the leak is one unit per second, took exactly (N – 1) × T – τ seconds to leak. Thus the shortest time in which all N packets can arrive and conform is (N – 1) × T – τ seconds, which is exactly τ less than the time it would have taken if the packets had been arriving at exactly the emission interval.
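
A quick numeric check of this argument, with the leak again at 1 unit of water per second and illustrative values for T, τ and N:

```python
# Increment per packet, limit value and number of packets in the burst (example values).
T, tau, N = 10.0, 3.0, 5
water_added = N * T                # water delivered by N conforming packets
bucket_full = T + tau              # level when the bucket is exactly full
water_leaked = water_added - bucket_full
assert water_leaked == (N - 1) * T - tau   # 37 units, i.e. 37 seconds of leaking
# So the N packets can arrive and conform in (N - 1)*T - tau = 37 s rather than the
# (N - 1)*T = 40 s they would take at exactly the emission interval: tau = 3 s earlier.
```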

However, packets can only arrive with intervals less than T when the bucket is not filled by the previous packet. If it is filled, then the bucket must have drained by the full amount T before the next packet conforms. So once this bucket space has been used up by packets arriving at less than T, subsequent packets must arrive at intervals no less than T. They may, however, arrive at greater intervals, when the bucket will not be filled by them. Since the bucket stops leaking when it is empty, there is always a limit (τ) to how much tolerance can be accrued by these intervals greater than T.

Since the limit value τ defines how much earlier a packet can arrive than would be expected, it is the limit on the difference between the maximum and minimum delays from the source to the point where the conformance test is being made (assuming the packets are generated with no jitter). Hence, the use of the term cell delay variation tolerance (CDVt) for this parameter in ATM.

As an example, a possible source of delay variation is where a number of streams of packets are multiplexed together at the output of a switch. Assuming that the sum of the bandwidths of these connections is less than the capacity of the output, all of the packets that arrive can be transmitted eventually. However, if their arrivals are independent, e.g. because they arrive at different inputs of the switch, then several may arrive at or nearly at the same time. Since the output can only transmit one packet at a time, the others must be queued in a buffer until it is their turn to be transmitted. This buffer then introduces an additional delay between a packet arriving at an input and being transmitted by the output, and this delay varies, depending on how many other packets are already queued in the buffer. A similar situation can occur at the output of a host (in the network interface controller) when multiple packets have the same or similar release times, and this delay can usually be modelled as a delay in a virtual output buffer.

For variable length packets, where the amount of water added by a given packet is proportional to its length, τ can not be seen as a limit on how full the bucket can be when a packet arrives, as this varies depending on the packet size. However, the time it takes to drain from this level to empty is still how much earlier a packet can arrive than is expected when packets are transmitted at the bandwidth limit. Thus, it is still the maximum variation in transfer delay to the point where the conformance test is being applied that can be tolerated, and thus the tolerance on maximum delay variation.

Maximum burst size

The limit value or delay variation tolerance also controls how many packets can arrive in a burst, determined by the excess depth of the bucket over the capacity required for a single packet. Hence MBS is also a measure of burstiness or jitter, and it is possible to specify the burstiness as an MBS and derive the limit value τ from this or to specify it as a jitter or delay variation tolerance or limit value, and derive the MBS from this.

A burst or clump of packets can arrive at a higher rate than determined by the emission interval T. This may be the line rate of the physical layer connection when the packets in the burst will arrive back-to-back. However, as in ATM, the tolerance may be applied to a lower rate, in that case, the Sustainable Cell Rate (SCR), and the burst of packets (cells) can arrive at a higher rate, but less than the line rate of the physical layer, in that case, the Peak Cell Rate (PCR). The MBS may then be the number of cells needed to transport a higher layer packet (see Segmentation and reassembly), where the packets are transmitted with a maximum bandwidth determined by the SCR and cells within the packets are transmitted at the PCR; thus allowing the last cell of the packet, and the packet itself, to arrive significantly earlier than it would if the cells were sent at the SCR: transmission duration = (MBS-1)/PCR rather than (MBS-1)/SCR. This bursting at the PCR puts a significantly higher load on shared resources, e.g. switch output buffers, than does transmission at the SCR, and is thus more likely to result in buffer overflows and network congestion. However, it puts a lesser load on these resources than would transmitting at the SCR with a limit value, τ_SCR, that allows MBS cells to be transmitted and arrive back-to-back at the line rate.

If the limit value is large enough, then several packets can arrive in a burst and still conform: if the bucket starts from empty, the first packet to arrive will add T, but if, by the time the next packet arrives, the contents is below τ, this will also conform. Assuming that each packet takes δ to arrive, then if τ (expressed as the time it takes the bucket to empty from the limit value) is equal to or greater than the emission interval less the minimum interarrival time, T – δ, the second packet will conform even if it arrives as a burst with the first. Similarly, if τ is equal to or greater than (T – δ) × 2, then 3 packets can arrive in a burst, etc.

The maximum size of this burst, M, can be calculated from the emission interval, T; the maximum jitter tolerance, τ; and the time taken to transmit/receive a packet, δ, as follows: [4]

M = ⌊1 + τ/(T – δ)⌋

Equally, the minimum value of jitter tolerance τ that gives a specific MBS can be calculated from the MBS as follows: [4]

τ = (MBS – 1) × (T – δ)
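
These two relationships can be checked numerically with a few lines of Python (illustrative function names; all quantities are assumed to be in the same time unit, with δ < T):

```python
import math

def max_burst_size(T, tau, delta):
    """Maximum burst size M for emission interval T, limit value tau and
    per-packet arrival time delta (assumes delta < T)."""
    return math.floor(1 + tau / (T - delta))

def min_limit_value(mbs, T, delta):
    """Smallest limit value tau that allows a burst of mbs back-to-back packets."""
    return (mbs - 1) * (T - delta)

# Example: T = 10, delta = 1. A tolerance of tau = 27 permits a burst of
# floor(1 + 27/9) = 4 packets, and 27 is also the smallest tau that permits it.
print(max_burst_size(10, 27, 1))   # -> 4
print(min_limit_value(4, 10, 1))   # -> 27
```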

In the case of ATM, where technically MBS only relates to the SCR tolerance, in the above equation the time it takes each packet to arrive, δ, is the emission interval for cells at the PCR, T_PCR, and the emission interval, T, is the emission interval for the SCR, T_SCR. Where MBS is to be the number of cells required to transport a segmented packet, the limit value in the above, τ, should be that for the SCR, τ_SCR. However, at the UNI or an NNI, where cells at the PCR will have been subjected to delay variation, it should be the limit value for the SCR plus that for the PCR, τ_SCR + τ_PCR.

For variable length packets, the maximum burst size will depend on the lengths of the packets in the burst and there is no single value for the maximum burst size. However, it is possible to specify the total burst length in bytes, from the byte rate of the input stream, the equivalent byte rate of the leak, and the bucket depth.

Comparison with the token bucket algorithm

The leaky bucket algorithm is sometimes contrasted with the token bucket algorithm. However, the above concept of operation for the leaky bucket as a meter may be directly compared with the token bucket algorithm, the description of which is given in that article as the following:

  • A token is added to the bucket every 1/r seconds.
  • The bucket can hold at the most b tokens. If a token arrives when the bucket is full, it is discarded.
  • When a packet (network layer PDU) [sic] [note 1] of n bytes arrives, n tokens are removed from the bucket, and the packet is sent to the network.
  • If fewer than n tokens are available, no tokens are removed from the bucket, and the packet is considered to be non-conformant.

This can be compared with the concept of operation, repeated from above:

  • A fixed capacity bucket, associated with each virtual connection or user, leaks at a fixed rate.
  • If the bucket is empty, it stops leaking.
  • For a packet to conform, it has to be possible to add a specific amount of water to the bucket: The specific amount added by a conforming packet can be the same for all packets, or can be proportional to the length of the packet.
  • If this amount of water would cause the bucket to exceed its capacity then the packet does not conform and the water in the bucket is left unchanged.

As can be seen, these two descriptions are essentially mirror images of one another: one adds something to the bucket on a regular basis and takes something away for conforming packets down to a limit of zero; the other takes away regularly and adds for conforming packets up to a limit of the bucket's capacity. So, is an implementation that adds tokens for a conforming packet and removes them at a fixed rate an implementation of the leaky bucket or of the token bucket? Similarly, which algorithm is used in an implementation that removes water for a conforming packet and adds water at a fixed rate? In fact both are effectively the same, i.e. implementations of both the leaky bucket and token bucket, as these are the same basic algorithm described differently. This explains why, given equivalent parameters, the two algorithms will see exactly the same packets as conforming or nonconforming. The differences in the properties and performance of implementations of the leaky and token bucket algorithms thus result entirely from the differences in the implementations, i.e. they do not stem from differences in the underlying algorithms.
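
To make the mirror image concrete, the following token bucket meter sketch (invented names, in the same style as the leaky bucket sketches above) starts full, refills at a fixed rate and removes tokens for conforming packets; with fill_rate equal to the leak rate and the same capacity, it classifies exactly the same packets as conforming.

```python
class TokenBucketMeter:
    """Token bucket as a meter: the mirror image of the leaky bucket meter."""

    def __init__(self, fill_rate, capacity):
        self.fill_rate = fill_rate    # tokens added per second
        self.capacity = capacity      # maximum number of tokens held
        self.tokens = capacity        # starts full (mirror of an empty leaky bucket)
        self.last_update = 0.0

    def conforms(self, t, n=1.0):
        # Add tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last_update) * self.fill_rate)
        self.last_update = t
        if self.tokens >= n:
            self.tokens -= n          # a conforming packet removes its n tokens
            return True
        return False                  # non-conforming: the token count is unchanged
```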

The points to note are that the leaky bucket algorithm, when used as a meter, can allow a conforming output packet stream with jitter or burstiness, can be used in traffic policing as well as shaping, and can be implemented for variable-length packets.

As a queue

The leaky bucket as a queue is essentially a way of describing a simple FIFO buffer or queue that is serviced at a fixed rate to remove burstiness or jitter. A description of it is given by Andrew S. Tanenbaum, in (an older version of) his book Computer Networks as "The leaky bucket consists of a finite queue. When a packet arrives, if there is room on the queue it is appended to the queue; otherwise it is discarded. At every clock tick one packet is transmitted (unless the queue is empty)". [3] An implementation of the leaky bucket as a queue is therefore always a form of traffic shaping function.

The leaky bucket as a queue
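
Tanenbaum's description corresponds to a finite FIFO serviced once per clock tick, as in the following sketch (illustrative names; in a real implementation the tick would be driven by a timer):

```python
from collections import deque

class LeakyBucketQueue:
    """Leaky bucket as a queue: arriving packets are appended to a finite FIFO if
    there is room, otherwise discarded; one packet is transmitted per clock tick."""

    def __init__(self, max_queue_len):
        self.queue = deque()
        self.max_queue_len = max_queue_len

    def on_arrival(self, packet):
        if len(self.queue) < self.max_queue_len:
            self.queue.append(packet)
            return True
        return False                 # queue full: the packet is discarded

    def on_tick(self):
        """Called once per clock tick; transmits (returns) at most one packet."""
        return self.queue.popleft() if self.queue else None
```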

As can be seen this implementation is restricted in that the packets are only ever transmitted at a fixed rate. To underline this, Tanenbaum also states that "The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the [input] traffic is". [10] However, this assertion is only strictly true as long as the queue does not become empty: if the average arrival rate is less than the rate of clock ticks, or if the input is sufficiently bursty that the losses bring the rate of the remainder below the clock tick rate (i.e. gaps in the input stream are long enough and the queue is small enough that it can become empty), there will be gaps in the output stream.

A further restriction is that the leaky bucket as a queue traffic shaping function only transmits packets on the ticks; hence, if it is used within a network, equivalent to UPC and NPC, it also imposes a fixed phase on the onward transmission of packets. Whereas, when using a leaky bucket meter to control onward transmission, a packet is transmitted as soon as it conforms, i.e. relative to the previous one or, if it already conforms, its arrival time; not according to some arbitrary local clock. Perversely, in the context of the transfer delay, this imposition of a fixed phase that may, over time, differ from that of an otherwise conforming input packet stream, constitutes a delay variation and hence a jitter. Jitter caused in this particular way could only be observed where the delay is measured as the transit time between two separate measurement points, one either side of the leaky bucket as a queue shaping function. However, in the context of real-time data transmissions, it is this end-to-end delay that determines the latency of received data. Hence, the leaky bucket as a queue is unsuitable for traffic shaping real-time transmissions.

Limiting variable length packets using the leaky bucket algorithm as a queue is significantly more complicated than it is for fixed-length packets. Tanenbaum gives a description of a "byte-counting" leaky bucket for variable length packets as follows: "At each tick, a counter is initialized to n. If the first packet on the queue has fewer bytes than the current value of the counter, it is transmitted, and the counter is decremented by that number of bytes. Additional packets may also be sent, as long as the counter is high enough. When the counter drops below the length of the next packet on the queue, transmission stops until the next tick, at which time the residual byte count is reset [to n] and the flow can continue". [3] As with the version for fixed length packets, this implementation has a strong effect on the phase of transmissions, resulting in variable end-to-end delays, and unsuitability for real-time traffic shaping.
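
The byte-counting variant can be sketched in the same style; again the names are illustrative, and here the queue is bounded in bytes rather than packets.

```python
from collections import deque

class ByteCountingLeakyBucketQueue:
    """Sketch of the byte-counting leaky bucket for variable-length packets: at
    each tick the byte counter is reset to n, and packets are sent from the head
    of the queue while their lengths fit within the remaining count."""

    def __init__(self, max_queue_bytes, n):
        self.queue = deque()              # holds (packet, length_in_bytes) pairs
        self.max_queue_bytes = max_queue_bytes
        self.queued_bytes = 0
        self.n = n                        # bytes allowed per tick

    def on_arrival(self, packet, length):
        if self.queued_bytes + length <= self.max_queue_bytes:
            self.queue.append((packet, length))
            self.queued_bytes += length
            return True
        return False                      # no room: the packet is discarded

    def on_tick(self):
        counter = self.n                  # the residual byte count is reset each tick
        sent = []
        while self.queue and self.queue[0][1] <= counter:
            packet, length = self.queue.popleft()
            counter -= length
            self.queued_bytes -= length
            sent.append(packet)
        return sent
```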

Uses

The leaky bucket as a queue can only be used in shaping traffic to a specified bandwidth with no jitter in the output. [10] It may be used within the network, e.g. as part of bandwidth management, but is more appropriate to traffic shaping in the network interfaces of hosts. The leaky bucket algorithm is used in Nginx's ngx_http_limit_req_module module for limiting the rate of requests from a single IP address. [11]

Parameters

In the case of the leaky bucket algorithm as a queue, the only defined limit for this algorithm is the bandwidth of its output. [10] [note 3]

The bandwidth limit for the connection may be specified in a traffic contract. A bandwidth limit may be specified as a packet or frame rate, a byte or bit rate, or as an emission interval between the packets.

Inefficiency

The implementation of the leaky bucket as a queue does not use available network resources efficiently. Because it transmits packets only at fixed intervals, there will be many instances when the traffic volume is very low and large portions of network resources (bandwidth in particular) are not being used. Therefore no mechanism exists in the leaky-bucket implementation as a queue to allow individual flows to burst up to port speed, effectively consuming network resources at times when there would not be resource contention in the network. Implementations of the token bucket and leaky bucket as a meter do, however, allow output traffic flows to have bursty characteristics.

Comparison between the two versions

Analysis of the two versions of the leaky bucket algorithm shows that the version as a queue is a special case of the version as a meter.

Imagine a traffic shaping function for fixed-length packets that is implemented using a fixed-length queue, forming a delay element, which is serviced using a leaky bucket as a meter. Imagine also that the bucket in this meter has a depth equal to the amount added by a packet, i.e. has a limit value, τ, of zero. However, the conformance test is only performed at intervals of the emission interval, when the packet at the head of the queue is transmitted and its water is added. This water then leaks away during the next emission interval (or is removed just prior to performing the next conformance test), allowing the next packet to conform then or at some subsequent emission interval. The service function can also be viewed in terms of a token bucket with the same depth, where enough tokens for one packet are added (if the bucket is not full) at the emission intervals. This implementation will then receive packets with a bursty arrival pattern (limited by the queue depth) and transmit them on at intervals that are always exact (integral) multiples of the emission interval.

However, the implementation of the leaky bucket as a meter (or token bucket) in a traffic shaping function described above is an exact equivalent to the description of the leaky bucket as a queue: [3] the delay element of the meter version is the bucket of the queue version; the bucket of the meter version is the process that services the queue, and the leak is such that the emission interval is the same as the tick interval. Therefore, for fixed-length packets, the implementation of the leaky bucket as a queue is a special case of a traffic shaping function using a leaky bucket (or token bucket) as a meter in which the limit value, τ, is zero and the process of testing conformance is performed at the lowest possible rate.

The leaky bucket as a queue for variable packet lengths can also be described as equivalent to a special case of the leaky bucket as a meter. The suggested implementation [3] can, like the fixed-length implementation, be seen as a traffic shaping function in which the queue is a delay element, rather than the bucket, and the function that services the queue is, in this case, explicitly given as a token bucket: it is decremented for conforming packets and incremented at a fixed rate. Hence, as the leaky bucket as a meter and token bucket are equivalent, the leaky bucket as a queue for variable packet lengths is also a special case of a traffic shaping function using a leaky bucket (or token bucket) as a meter.

There is an interesting consequence of seeing the leaky bucket as a queue for variable packet lengths as a specific implementation of the token bucket or leaky bucket as a meter in traffic shaping. This is that the bucket of the meter has a depth, n, and, as is always the case with the token bucket, this depth determines the burstiness of the output traffic (perhaps in relation to the average or minimum number of tokens required by the packets). Hence, it is possible to quantify the burstiness of the output of this "byte counting" leaky bucket as a meter, unless all packets are of the maximum length, in which case the measure becomes meaningless. However, this ability to define a burstiness for the output is in direct contradiction to the statement that the leaky bucket (as a queue) necessarily gives an output with a rigid rate, no matter how bursty the input.

Notes

  1. In traffic management, the leaky bucket is normally applied to the equivalent of OSI layer 2 PDUs, e.g. ATM cells and Ethernet frames, which are called frames. It may be argued then that the description of this algorithm should be given in terms of frames, not packets, which are, in the ISO-OSI 7-layer model, layer 3 Network Layer PDUs. However, the term packet is commonly used generically in the descriptions of this algorithm in the literature, and this convention is also applied here. It is not, however, intended to imply that the leaky bucket algorithm is applied exclusively to Network Layer PDUs.
  2. To make Turner's description clearly aligned with ITU-T, the statement "If the counter exceeds a threshold upon being incremented, the network discards the packet" would have to be changed to something like "If the counter would exceed a threshold [equivalent to the bucket depth, T + τ, in the ITU-T description] upon being incremented, the network discards the packet and the counter is not incremented", i.e. it is only incremented when it is less than or equal to the limit value, τ, or at least T less than the bucket depth in the ITU-T description.
  3. Traffic shaping functions include a queue that is necessarily of finite size. Hence, if the input stream exceeds some level of burstiness dependent on the length of the queue or consistently exceeds the bandwidth limit being imposed on the output stream, the queue will overflow and packets will (normally) be discarded: see Traffic shaping#Overflow condition. Therefore, traffic shaping functions can be seen as applying traffic policing to the input connection and traffic shaping to the output. They should, therefore, take a parameter for the burstiness limit on the input additional to that or those for the leaky bucket. However, this input burstiness limit may be defaulted to a value that is not expected to impact on normal traffic (the queue is assumed to be deep enough for all normal circumstances), and is not always specified explicitly.


References

  1. ITU-T, Traffic control and congestion control in B-ISDN, Recommendation I.371, International Telecommunication Union, 2004, page 17.
  2. Turner, J., New directions in communications (or which way to the information age?), IEEE Communications Magazine 24 (10): 8–15, ISSN 0163-6804, 1986.
  3. Andrew S. Tanenbaum, Computer Networks, Fourth Edition, ISBN 0-13-166836-6, Prentice Hall PTR, 2003, page 401.
  4. ATM Forum, The User Network Interface (UNI), v. 3.1, ISBN 0-13-393828-X, Prentice Hall PTR, 1995.
  5. ITU-T, Traffic control and congestion control in B-ISDN, Recommendation I.371, International Telecommunication Union, 2004, Annex A, page 87.
  6. McDysan, David E. and Spohn, Darrel L., ATM: Theory and Application, ISBN 0-07-060362-6, McGraw-Hill series on computer communications, 1995, pages 358–9.
  7. Andrew S. Tanenbaum, Computer Networks, Fourth Edition, ISBN 0-13-166836-6, Prentice Hall PTR, 2003, page 400.
  8. "Leaky bucket counter". The Free Dictionary.
  9. Intel, Intel Server Board S5400SF: Technical Product Specification, September 2007, http://download.intel.com/support/motherboards/server/s5400sf/sb/s5400sf_tps_rev2_01.pdf.
  10. Andrew S. Tanenbaum, Computer Networks, Fourth Edition, ISBN 0-13-166836-6, Prentice Hall PTR, 2003, page 402.
  11. "Module ngx_http_limit_conn_module".