Long-tail traffic

A long-tailed or heavy-tailed distribution is one that assigns relatively high probabilities to regions far from the mean or median. A more formal mathematical definition is given below. In the context of teletraffic engineering, a number of quantities of interest have been shown to have a long-tailed distribution. For example, if we consider the sizes of files transferred from a web server, then, to a good degree of accuracy, the distribution is heavy-tailed: there are a large number of small files transferred but, crucially, the number of very large files transferred remains a major component of the volume downloaded.

Many processes are technically long-range dependent but not self-similar, and the differences between the two phenomena are subtle. "Heavy-tailed" refers to a probability distribution, while "long-range dependent" refers to a property of a time series, so the terms should be used with care and a distinction made between them. The terms remain distinct even though superpositions of samples from heavy-tailed distributions aggregate to form long-range dependent time series.

Additionally, there is Brownian motion which is self-similar but not long-range dependent.

Overview

The design of robust and reliable networks and network services has become an increasingly challenging task in today's Internet world. To achieve this goal, understanding the characteristics of Internet traffic plays an increasingly critical role. Empirical studies of measured traffic traces have led to the wide recognition of self-similarity in network traffic. [1]

Self-similar Ethernet traffic exhibits dependencies over a long range of time scales. This is to be contrasted with telephone traffic which is Poisson in its arrival and departure process. [2]

With many time series, the data looks smoother as it is averaged over longer periods. With self-similar data, however, one is confronted with traces that are spiky and bursty even at large scales. Such behaviour is caused by strong dependence in the data: large values tend to come in clusters, and clusters of clusters, and so on. This can have far-reaching consequences for network performance. [3]

Heavy-tail distributions have been observed in many natural phenomena, both physical and sociological. Mandelbrot established the use of heavy-tail distributions to model real-world fractal phenomena, e.g. stock markets, earthquakes, and the weather. [2] Ethernet, WWW, SS7, TCP, FTP, TELNET and VBR video (digitised video of the type that is transmitted over ATM networks) traffic is self-similar. [4]

Self-similarity in packetised data networks can be caused by the distribution of file sizes, human interactions and/or Ethernet dynamics. [5] Self-similar and long-range dependent characteristics in computer networks present a fundamentally different set of problems for network analysis and design, and many of the previous assumptions upon which systems have been built are no longer valid in the presence of self-similarity. [6]

Short-range dependence vs. long-range dependence

Long-range and short-range dependent processes are characterised by their autocovariance functions.

In short-range dependent processes, the coupling between values at different times decreases rapidly as the time difference increases.

In long-range dependent processes, the correlations at longer time scales remain significant, decaying only as a power law:

ρ(k) ~ k^(−α)

where ρ(k) is the autocorrelation function at a lag k, α is a parameter in the interval (0,1) and the ~ means asymptotically proportional to as k approaches infinity.
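The distinction can be made concrete numerically: under short-range dependence the autocorrelations are summable, while a power-law tail with α in (0,1) is not. A minimal sketch (the decay rates chosen here are illustrative, not measured):

```python
import math

# Partial sums of the autocorrelation tail illustrate the distinction:
# short-range dependence -> sum of rho(k) converges (here, geometric decay),
# long-range dependence  -> rho(k) ~ k^(-alpha) with 0 < alpha < 1 diverges.
def tail_sum(rho, n):
    """Sum rho(k) for lags k = 1..n."""
    return sum(rho(k) for k in range(1, n + 1))

srd = lambda k: 0.9 ** k      # exponential (short-range) decay
lrd = lambda k: k ** -0.5     # power-law (long-range) decay, alpha = 0.5

s_1k, s_10k = tail_sum(srd, 1_000), tail_sum(srd, 10_000)
l_1k, l_10k = tail_sum(lrd, 1_000), tail_sum(lrd, 10_000)
# The geometric sum has already converged to 0.9 / (1 - 0.9) = 9,
# while the power-law sum keeps growing without bound.
```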

Long-range dependence as a consequence of mathematical convergence

Such power-law scaling of the autocorrelation function can be shown to be biconditionally related to a power-law relationship between the variance and the mean, when evaluated from sequences by the method of expanding bins. This variance-to-mean power law is an inherent feature of a family of statistical distributions called the Tweedie exponential dispersion models. Much as the central limit theorem explains how certain types of random data converge towards the form of a normal distribution, there exists a related theorem, the Tweedie convergence theorem, that explains how other types of random data will converge towards the form of these Tweedie distributions, and consequently express both the variance-to-mean power law and a power-law decay in their autocorrelation functions.
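The method of expanding bins mentioned above can be sketched directly: aggregate a count series into progressively wider bins and regress log variance against log mean. For Poisson counts the fitted power should be close to 1, consistent with the Tweedie variance-to-mean power law (all rates and bin widths below are illustrative):

```python
import math, random

random.seed(7)

# Generate Poisson arrivals (rate 5 per unit time) as exponential inter-arrival
# times, then count arrivals falling into 4096 unit-time bins.
rate, n_bins = 5.0, 4096
counts = [0] * n_bins
t = random.expovariate(rate)
while t < n_bins:
    counts[int(t)] += 1
    t += random.expovariate(rate)

def var_mean_slope(counts, widths=(1, 2, 4, 8, 16, 32)):
    """Method of expanding bins: re-aggregate counts into wider bins and
    regress log(variance) on log(mean) across the bin widths."""
    pts = []
    for w in widths:
        agg = [sum(counts[i:i + w]) for i in range(0, len(counts), w)]
        m = sum(agg) / len(agg)
        v = sum((x - m) ** 2 for x in agg) / (len(agg) - 1)
        pts.append((math.log(m), math.log(v)))
    xs, ys = zip(*pts)
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - xbar) * (y - ybar) for x, y in pts) / sum((x - xbar) ** 2 for x in xs)

slope = var_mean_slope(counts)   # ~1 for Poisson data (variance tracks the mean)
```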

The Poisson distribution and traffic

Before the heavy-tail distribution is introduced mathematically, the memoryless Poisson distribution, used to model traditional telephony networks, is briefly reviewed below. For more details, see the article on the Poisson distribution.

Assuming pure-chance arrivals and pure-chance terminations leads to the following. The number of call arrivals in time T is Poisson-distributed:

P(a) = (μ^a / a!) e^(−μ)

where a is the number of call arrivals and μ is the mean number of call arrivals in time T. For this reason, pure-chance traffic is also known as Poisson traffic.

The number of call departures is distributed in the same way:

P(d) = (λ^d / d!) e^(−λ)

where d is the number of call departures and λ is the mean number of call departures in time T.

Call holding times are negative-exponentially distributed:

P(T_h > t) = e^(−t/h)

where h is the Mean Holding Time (MHT). [4]
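A short numerical check of the Poisson and exponential-holding-time formulas above (the rate and holding-time values are illustrative):

```python
import math

def poisson_pmf(a, mu):
    """P(a) = mu^a * e^(-mu) / a! : probability of a call arrivals in time T,
    where mu is the mean number of call arrivals in T."""
    return mu ** a * math.exp(-mu) / math.factorial(a)

mu = 4.0
probs = [poisson_pmf(a, mu) for a in range(100)]
total = sum(probs)                               # probabilities sum to 1
mean = sum(a * p for a, p in enumerate(probs))   # mean number of arrivals equals mu

# Memoryless holding times: P(holding time > t) = exp(-t / h)
h = 3.0   # mean holding time (illustrative)
t = 6.0
p_exceed = math.exp(-t / h)   # chance a call lasts beyond two mean holding times
```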

Information on the fundamentals of statistics and probability theory can be found in the external links section.

The heavy-tail distribution

Heavy-tail distributions have properties that are qualitatively different from commonly used (memoryless) distributions such as the exponential distribution.

The Hurst parameter H is a measure of the level of self-similarity of a time series that exhibits long-range dependence, to which the heavy-tail distribution can be applied. H takes on values from 0.5 to 1. A value of 0.5 indicates the data is uncorrelated or has only short-range correlations. The closer H is to 1, the greater the degree of persistence or long-range dependence. [4]
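One common way to estimate H is the aggregated-variance method: average the series over non-overlapping blocks of size m and fit Var(X^(m)) ~ m^(2H−2). A sketch on uncorrelated data, where the estimate should return roughly 0.5 (series length and block sizes are illustrative):

```python
import math, random

random.seed(1)
x = [random.gauss(0, 1) for _ in range(2 ** 14)]  # uncorrelated series: true H = 0.5

def hurst_aggvar(x, levels=(1, 2, 4, 8, 16, 32, 64)):
    """Aggregated-variance estimator: Var(X^(m)) ~ m^(2H-2), where X^(m)
    averages the series over non-overlapping blocks of size m."""
    pts = []
    for m in levels:
        blocks = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mu = sum(blocks) / len(blocks)
        v = sum((b - mu) ** 2 for b in blocks) / (len(blocks) - 1)
        pts.append((math.log(m), math.log(v)))
    xs, ys = zip(*pts)
    xb, yb = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((a - xb) * (b - yb) for a, b in pts) / sum((a - xb) ** 2 for a in xs)
    return 1 + slope / 2   # fitted slope equals 2H - 2

H = hurst_aggvar(x)   # close to 0.5 for data with no long-range dependence
```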

Typical values of the Hurst parameter, H:

A distribution is said to be heavy-tailed if:

P[X > x] ~ x^(−α), as x → ∞, 0 < α < 2

This means that regardless of the distribution for small values of the random variable, if the asymptotic shape of the distribution is hyperbolic, it is heavy-tailed. The simplest heavy-tail distribution is the Pareto distribution, which is hyperbolic over its entire range. Complementary distribution functions for the exponential and Pareto distributions are shown below. Shown on the left is a graph of the two distributions on linear axes, spanning a large domain. [8] To its right is a graph of the complementary distribution functions over a smaller domain, and with a logarithmic range. [5]

If the logarithm of the range of an exponential distribution is taken, the resulting plot is linear. In contrast, that of the heavy-tail distribution is still curvilinear. These characteristics can be clearly seen on the graph above to the right. A characteristic of long-tail distributions is that if the logarithm of both the range and the domain is taken, the tail of the long-tail distribution is approximately linear over many orders of magnitude. [9] In the graph above left, the condition for the existence of a heavy-tail distribution, as previously presented, is not met by the curve labelled "Gamma-Exponential Tail".
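These tail behaviours can be checked numerically: the exponential complementary distribution function decays so fast that the Pareto tail dominates it by many orders of magnitude, and the Pareto tail has a constant slope on log-log axes (the parameter values below are illustrative):

```python
import math

def exp_ccdf(x, lam=1.0):
    """Exponential complementary CDF: P[X > x] = exp(-lam * x)."""
    return math.exp(-lam * x)

def pareto_ccdf(x, k=1.0, alpha=1.5):
    """Pareto complementary CDF: P[X > x] = (k / x)^alpha for x >= k."""
    return (k / x) ** alpha if x >= k else 1.0

# Far from the origin the hyperbolic tail dominates the exponential one:
ratio_at_50 = pareto_ccdf(50) / exp_ccdf(50)

# On log-log axes the Pareto tail is linear: the slope is -alpha in every decade.
slope_decade_1 = (math.log(pareto_ccdf(100)) - math.log(pareto_ccdf(10))) / (math.log(100) - math.log(10))
slope_decade_2 = (math.log(pareto_ccdf(1000)) - math.log(pareto_ccdf(100))) / (math.log(1000) - math.log(100))
```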

The probability density function of a heavy-tail (Pareto) distribution is given by:

p(x) = α k^α x^(−α−1), x ≥ k

and its cumulative distribution function is given by:

F(x) = P[X ≤ x] = 1 − (k/x)^α, x ≥ k

where k represents the smallest value the random variable can take.
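Inverting the cumulative distribution function above gives a standard way to sample from the Pareto distribution; a sketch (sample size and parameters are illustrative):

```python
import random

random.seed(42)

def pareto_sample(k=1.0, alpha=1.2):
    """Inverse-transform sampling from F(x) = 1 - (k/x)^alpha:
    solving u = F(x) for x gives x = k / (1 - u)^(1/alpha)."""
    u = random.random()
    return k / (1.0 - u) ** (1.0 / alpha)

n = 200_000
samples = [pareto_sample() for _ in range(n)]
frac_above_10 = sum(s > 10 for s in samples) / n   # theory: 10^(-1.2), about 0.063
largest = max(samples)   # with alpha < 2 the variance is infinite: expect huge outliers
```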

Readers interested in a more rigorous mathematical treatment of the subject are referred to the external links section.

What causes long-tail traffic?

In general, there are three main theories for the causes of long-tail traffic (see a review of all three causes [10] ). First, is a cause based in the application layer which theorizes that user session durations vary with a long-tail distribution due to the file size distribution. If the distribution of file sizes is heavy-tailed then the superposition of many file transfers in a client/server network environment will be long-range dependent. Additionally, this causal mechanism is robust with respect to changes in network resources (bandwidth and buffer capacity) and network topology. [11] This is currently the most popular explanation in the engineering literature and the one with the most empirical evidence through observed file size distributions.
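The role of the file size distribution can be illustrated by comparing two hypothetical workloads with the same mean file size, one Pareto and one exponential: the largest heavy-tailed transfer is orders of magnitude larger, which is why very large files remain a major component of the downloaded volume:

```python
import random

random.seed(11)

n = 100_000
mean_size = 6.0   # both workloads share this mean file size (illustrative units)

# Heavy-tailed sizes: Pareto(k = 1, alpha = 1.2), mean = alpha*k/(alpha - 1) = 6
pareto_sizes = [1.0 / (1.0 - random.random()) ** (1.0 / 1.2) for _ in range(n)]
# Light-tailed comparison: exponential sizes with the same mean
expo_sizes = [random.expovariate(1.0 / mean_size) for _ in range(n)]

max_pareto = max(pareto_sizes)   # a few enormous transfers...
max_expo = max(expo_sizes)       # ...versus a tame maximum
```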

Second, is a transport layer cause which theorizes that the feedback between multiple TCP streams due to TCP's congestion avoidance algorithm in moderate to high packet loss situations causes self-similar traffic or at least allows it to propagate. However, this is believed only to be a significant factor at relatively short timescales and not the long-term cause of self-similar traffic.

Finally, is a theorized link layer cause which is predicated based on physics simulations of packet switching networks on simulated topologies. At a critical packet creation rate, the flow in a network becomes congested and exhibits 1/f noise and long-tail traffic characteristics. There have been criticisms on these sorts of models though as being unrealistic in that network traffic is long-tailed even in non-congested regions [12] and at all levels of traffic.

Simulation showed that long-range dependence could arise in the queue length dynamics at a given node (an entity that transfers traffic) within a communications network even when the traffic sources are free of long-range dependence. The mechanism for this is believed to relate to feedback from routing effects in the simulation. [13]

Modelling long-tail traffic

Modelling of long-tail traffic is necessary so that networks can be provisioned based on accurate assumptions of the traffic that they carry. The dimensioning and provisioning of networks that carry long-tail traffic is discussed in the next section.

Since (unlike traditional telephony traffic) packetised traffic exhibits self-similar or fractal characteristics, conventional traffic models do not apply to networks that carry long-tail traffic. [4] Previous analytic work done in Internet studies adopted assumptions such as exponentially-distributed packet inter-arrivals, and conclusions reached under such assumptions may be misleading or incorrect in the presence of heavy-tailed distributions. [2]

It has long been realised that efficient and accurate modelling of various real-world phenomena needs to incorporate the fact that observations made on different scales each carry essential information. In the simplest terms, representing data on large scales by its mean is often useful (such as an average income or an average number of clients per day) but can be inappropriate (e.g. in the context of buffering or waiting queues). [3]

With the convergence of voice and data, the future multi-service network will be based on packetised traffic, and models which accurately reflect the nature of long-tail traffic will be required to develop, design and dimension future multi-service networks. [4] We seek an equivalent to the Erlang model for circuit switched networks. [5]

There is not an abundance of heavy-tailed models with rich sets of accompanying data-fitting techniques. [14] A clear model for fractal traffic has not yet emerged, nor is there any definite direction towards a clear model. [4] Deriving mathematical models which accurately represent long-tail traffic is a fertile area of research.

Gaussian models, even long-range dependent Gaussian models, are unable to accurately model current Internet traffic. [15] Classical models of time series such as Poisson and finite Markov processes rely heavily on the assumption of independence, or at least weak dependence. [3] Poisson and Markov related processes have, however, been used with some success. Nonlinear methods are used for producing packet traffic models which can replicate both short-range and long-range dependent streams. [13]

A number of models have been proposed for the task of modelling long-tail traffic. These include the following:

No unanimity exists about which of the competing models is appropriate, [4] but the Poisson Pareto Burst Process (PPBP), which is an M/G/∞ process, is perhaps the most successful model to date. It is demonstrated to satisfy the basic requirements of a simple, but accurate, model of long-tail traffic. [15]
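A minimal PPBP sketch, following the usual description of the model: bursts arrive as a Poisson process and each contributes a constant rate for a Pareto-distributed duration, so the instantaneous load is the number of overlapping bursts (an M/G/∞ count). All parameter values here are illustrative:

```python
import random

random.seed(3)

def ppbp_counts(burst_rate=2.0, alpha=1.5, k=1.0, horizon=2000, dt=1.0):
    """Poisson Pareto Burst Process sketch: bursts arrive at Poisson rate
    `burst_rate`, each lasting a Pareto(k, alpha) duration; the load in each
    time slot is the number of bursts active during it."""
    n_slots = int(horizon / dt)
    active = [0] * n_slots
    t = random.expovariate(burst_rate)
    while t < horizon:
        dur = k / (1.0 - random.random()) ** (1.0 / alpha)   # Pareto duration
        lo, hi = int(t / dt), min(int((t + dur) / dt) + 1, n_slots)
        for i in range(lo, hi):
            active[i] += 1
        t += random.expovariate(burst_rate)
    return active

load = ppbp_counts()
# Steady-state mean load ~ burst_rate * mean duration = 2 * (1.5/0.5) = 6
mean_load = sum(load) / len(load)
```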

Finally, results from simulations [4] using α-stable stochastic processes for modelling traffic in broadband networks are presented. The simulations are compared to a variety of empirical data (Ethernet, WWW, VBR video).

Network performance

In some cases, an increase in the Hurst parameter can lead to a reduction in network performance. The extent to which heavy-tailedness degrades network performance is determined by how well congestion control is able to shape source traffic into an on-average constant output stream while conserving information. [17] Congestion control of heavy-tailed traffic is discussed in the following section.

Traffic self-similarity negatively affects primary performance measures such as queue size and packet-loss rate. The queue length distribution of long-tail traffic decays more slowly than with Poisson sources. However, long-range dependence implies nothing about short-term correlations, which affect performance in small buffers. [16] For heavy-tailed traffic, extremely large bursts occur more frequently than with light-tailed traffic. [18] Additionally, aggregating streams of long-tail traffic typically intensifies the self-similarity ("burstiness") rather than smoothing it, compounding the problem. [1]

The graph above right, taken from [4], presents a queueing performance comparison between traffic streams of varying degrees of self-similarity. Note how the queue size increases with increasing self-similarity of the data, for any given channel utilisation, thus degrading network performance.

In the modern network environment, with multimedia and other QoS-sensitive traffic streams comprising a growing fraction of network traffic, second-order performance measures in the form of "jitter", such as delay variation and packet-loss variation, are important for provisioning user-specified QoS. Self-similar burstiness is expected to exert a negative influence on second-order performance measures. [19]

Packet-switching-based services, such as the Internet (and other networks that employ IP) are best-effort services, so degraded performance, although undesirable, can be tolerated. However, since the connection is contracted, ATM networks need to keep delays and jitter within negotiated limits. [20]

Self-similar traffic exhibits the persistence of clustering which has a negative impact on network performance.

Many aspects of network quality of service depend on coping with traffic peaks that might cause network failures, such as

Poisson processes are well-behaved because they are stateless, and peak loading is not sustained, so queues do not fill. With long-range dependence, peaks last longer and have greater impact: the equilibrium shifts for a while. [7]

Due to the increased demands that long-tail traffic places on networks resources, networks need to be carefully provisioned to ensure that quality of service and service level agreements are met. The following subsection deals with the provisioning of standard network resources, and the subsection after that looks at provisioning web servers that carry a significant amount of long-tail traffic.

Network provisioning for long-tail traffic

For network queues with long-range dependent inputs, the sharp increase in queuing delays at fairly low levels of utilisation and slow decay of queue lengths implies that an incremental improvement in loss performance requires a significant increase in buffer size. [21]

While throughput declines gradually as self-similarity increases, queueing delay increases more drastically. When traffic is self-similar, queueing delay grows proportionally to the buffer capacity present in the system. Taken together, these two observations have potentially dire implications for QoS provision in networks. To achieve a constant level of throughput or packet loss as self-similarity is increased, extremely large buffer capacity is needed. However, increased buffering leads to large queueing delays, and thus self-similarity significantly steepens the trade-off curve between throughput/packet loss and delay. [17]

ATM can be employed in telecommunications networks to overcome second-order performance measure problems. The short fixed-length cell used in ATM reduces the delay and most significantly the jitter for delay-sensitive services such as voice and video. [22]

Web site provisioning for long-tail traffic

Workload pattern complexities (for example, bursty arrival patterns) can significantly affect resource demands, throughput, and the latency encountered by user requests, in terms of higher average response times and higher response time variance. Without adaptive, optimal management and control of resources, SLAs based on response time are impossible. The capacity requirements on the site are increased while its ability to provide acceptable levels of performance and availability diminishes. [18] Techniques to control and manage long-tail traffic are discussed in the following section.

The ability to accurately forecast request patterns is an important requirement of capacity planning. A practical consequence of burstiness and heavy-tailed and correlated arrivals is difficulty in capacity planning. [18]

With respect to SLAs, the same level of service for heavy-tailed distributions requires a more powerful set of servers, compared with the case of independent light-tailed request traffic. To guarantee good performance, focus needs to be given to peak traffic duration because it is the huge bursts of requests that most degrade performance. That is why some busy sites require more headroom (spare capacity) to handle the volumes; for example, a high-volume online trading site reserves spare capacity with a ratio of three to one. [18]
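A toy sizing calculation along these lines, applying a peak-to-average headroom ratio to an assumed average rate (every number here is hypothetical, including the per-server capacity):

```python
import math

# Hypothetical capacity sizing: provision against peak = headroom_ratio * average,
# in the spirit of the three-to-one spare-capacity ratio quoted above.
avg_rps = 1200          # assumed average request rate (requests per second)
headroom_ratio = 3      # peak-to-average provisioning ratio
per_server_rps = 400    # assumed sustainable rate of a single server

peak_rps = avg_rps * headroom_ratio            # size against the peak, not the mean
servers_needed = math.ceil(peak_rps / per_server_rps)
```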

Reference to additional information on the effect of long-range dependency on network performance can be found in the external links section.

Controlling long-tail traffic

Given the ubiquity of scale-invariant burstiness observed across diverse networking contexts, finding an effective traffic control algorithm capable of detecting and managing self-similar traffic has become an important problem. The problem of controlling self-similar network traffic is still in its infancy. [23]

Traffic control for self-similar traffic has been explored on two fronts: Firstly, as an extension of performance analysis in the resource provisioning context, and secondly, from the multiple time scale traffic control perspective where the correlation structure at large time scales is actively exploited to improve network performance. [24]

The resource provisioning approach seeks to identify the relative utility of the two principal network resource types – bandwidth and buffer capacity – with respect to their curtailing effects on self-similarity, and advocates a small-buffer/large-bandwidth resource dimensioning policy. Whereas resource provisioning is open-loop in nature, multiple time scale traffic control exploits the long-range correlation structure present in self-similar traffic. [24] Congestion control can be exercised concurrently at multiple time scales, and by cooperatively engaging information extracted at different time scales, achieve significant performance gains. [23]

Another approach adopted in controlling long-tail traffic makes traffic controls cognizant of workload properties. For example, when TCP is invoked by HTTP in web client/server interactions, the size of the file being transported (which is known at the server) can be conveyed or made accessible to protocols in the transport layer, enabling, for example, the selection of alternative protocols for more effective data transport. For short files, which constitute the bulk of connection requests in the heavy-tailed file size distributions of web servers, elaborate feedback control may be bypassed in favour of lightweight mechanisms in the spirit of optimistic control, which can result in improved bandwidth utilisation. [19]

It was found that the simplest way to control packet traffic is to limit the length of queues. Long queues in the network invariably occur at hosts (entities that can transmit and receive packets). Congestion control can therefore be achieved by reducing the rate of packet production at hosts with long queues. [13]
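That host-level rule can be sketched as a toy control loop: halve the packet production rate while the local queue exceeds a threshold, otherwise ramp it back up. The thresholds and rates below are illustrative, not taken from the cited work:

```python
# Toy host-level congestion control: back off multiplicatively while the local
# queue is long, ramp up additively otherwise. All constants are illustrative.
def step(queue_len, rate, service=10, threshold=50, max_rate=20):
    """One time step: enqueue `rate` packets, serve `service`, adapt the rate."""
    queue_len = max(0, queue_len + rate - service)
    if queue_len > threshold:
        rate = max(1, rate // 2)        # long queue: halve production
    else:
        rate = min(max_rate, rate + 1)  # otherwise: ramp back up by one
    return queue_len, rate

q, r = 0, 20
history = []
for _ in range(200):
    q, r = step(q, r)
    history.append(q)
# The queue overshoots the threshold briefly, then oscillates in a bounded band
# instead of growing without limit.
```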

Long-range dependence and its exploitation for traffic control is best suited for flows or connections whose lifetime or connection duration is long lasting. [19]


References

  1. Zhu X., Yu J., Doyle J., California Institute of Technology, Heavy-tailed distributions, generalised source coding and optimal web layout design.
  2. Medina A., Computer Science Department, Boston University, Appendix: Heavy-tailed distributions.
  3. Department of Electrical and Computer Engineering, Rice University, Internet Control and Inference Tools at the Edge: Network Traffic Modelling.
  4. Kennedy I., Lecture Notes, ELEN5007 – Teletraffic Engineering, School of Electrical and Information Engineering, University of the Witwatersrand, 2005.
  5. Neame T., ARC Centre for Ultra Broadband Information Networks, EEE Dept., The University of Melbourne, Performance Evaluation of a Queue Fed by a Poisson Pareto Burst Process. Archived 2011-05-26 at the Wayback Machine.
  6. Barford P., Floyd S., Computer Science Department, Boston University, The Self-similarity and Long Range Dependence in Networks Web site.
  7. Linington P.F., University of Kent, Everything you always wanted to know about self-similar network traffic and long-range dependency, but were ashamed to ask.
  8. School of Information Technology and Engineering, George Mason University, Development of Procedures to Analyze Queuing Models with Heavy-Tailed Interarrival and Service Times. Archived 2005-03-15 at the Wayback Machine.
  9. Air Force Research Laboratory, Information Directorate, Heavy-tailed distributions and implications. Archived 2005-12-15 at the Wayback Machine.
  10. Smith R. (2011). "The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena". Advances in Complex Systems. 14 (6): 905–949. arXiv:0807.3374. doi:10.1142/S0219525911003451.
  11. Park K.; Kim G.; Crovella M. (1996). "On the relationship between file sizes, transport protocols, and self-similar network traffic". Proceedings of 1996 International Conference on Network Protocols (ICNP-96). pp. 171–180. doi:10.1109/ICNP.1996.564935. ISBN 978-0-8186-7453-2.
  12. Willinger W.; Govindan R.; Jamin S.; Paxson V.; Shenker S. (2002). "Scaling phenomena in the Internet: Critically examining criticality". Proceedings of the National Academy of Sciences. 99 (3): 2573–2580. doi:10.1073/pnas.012583099. PMID 11875212.
  13. Arrowsmith D.K., Woolf M., Internet Packet Traffic Congestion in Networks, Mathematics Research Centre, Queen Mary, University of London.
  14. Resnick S.I., Heavy Tail Modeling and Teletraffic Data, Cornell University.
  15. Neame T., Characterisation and Modelling of Internet Traffic Streams, Department of Electrical and Electronic Engineering, University of Melbourne, 2003.
  16. Zukerman M., ARC Centre for Ultra Broadband Information Networks, EEE Dept., The University of Melbourne, Traffic Modelling and Related Queueing Problems.
  17. Park K., Kim G., Crovella M., On the Effect of Traffic Self-similarity on Network Performance.
  18. Chiu W., IBM DeveloperWorks, Planning for growth: A proven methodology for capacity planning. Archived 2012-10-23 at the Wayback Machine.
  19. Park K., Future Directions and Open Problems in Performance Evaluation and Control of Self-Similar Network Traffic, Department of Computer Sciences, Purdue University.
  20. Jitter analysis of ATM self-similar traffic. utdallas.edu. Archived 2005-02-16 at the Wayback Machine.
  21. Grossglauser M.; Bolot J.C. (1999). "On the relevance of long-range dependence in network traffic". IEEE/ACM Transactions on Networking. 7 (5): 629–640. doi:10.1109/90.803379.
  22. Biran G., Introduction to ATM switching, RAD Data Communications. Archived 2004-12-04 at the Wayback Machine.
  23. Tuan T., Park K., Multiple Time Scale Congestion Control for Self-Similar Network Traffic, Department of Computer Sciences, Purdue University.
  24. Park K., Self-Similar Network Traffic and its Control, Department of Computer Sciences, Purdue University.