Fairness measure

Fairness measures or metrics are used in network engineering to determine whether users or applications are receiving a fair share of system resources. There are several mathematical and conceptual definitions of fairness.

Transmission Control Protocol fairness

Congestion control mechanisms for new network transmission protocols or peer-to-peer applications must interact well with the Transmission Control Protocol (TCP). TCP fairness requires that a new protocol receive no larger share of the network than a comparable TCP flow. This is important because TCP is the dominant transport protocol on the Internet, and new protocols that acquire an unfair share of capacity tend to cause problems such as congestion collapse. This was the case with the first versions of RealMedia's streaming protocol: it was based on UDP and was widely blocked at organizational firewalls until a TCP-based version was developed. TCP throughput unfairness over Wi-Fi is a critical problem that needs further investigation. [1]

Jain's fairness index

Raj Jain's equation,

$$\mathcal{J}(x_1, x_2, \dots, x_n) = \frac{\left( \sum_{i=1}^{n} x_i \right)^2}{n \cdot \sum_{i=1}^{n} x_i^2} = \frac{1}{1 + \hat{c}_v^2},$$

rates the fairness of a set of values, where n is the number of users, $x_i$ is the throughput for the i-th connection, and $\hat{c}_v$ is the sample coefficient of variation. The result ranges from 1/n (worst case) to 1 (best case), and it is maximum when all users receive the same allocation. This index is k/n when k users equally share the resource and the other n − k users receive zero allocation.

This metric identifies underutilized channels and is not unduly sensitive to atypical network flow patterns. [2]
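
As a quick check of the definition, the index can be computed directly from a list of per-user throughputs. The sketch below is illustrative (the function name and example values are not from the sources above) and reproduces the k/n property:

```python
def jains_index(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2).

    Returns 1.0 when every user receives the same allocation and
    approaches 1/n as a single user captures everything.
    """
    n = len(throughputs)
    total = sum(throughputs)
    sum_of_squares = sum(x * x for x in throughputs)
    return total * total / (n * sum_of_squares)

print(jains_index([1, 1, 1, 1]))  # 1.0: perfectly fair
print(jains_index([1, 1, 0, 0]))  # 0.5: k = 2 of n = 4 users share equally
```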

To achieve a given fairness level F, one approximate method is to let $x_i = A \cdot \beta^i$, where

$$\beta = \frac{nF - 1}{nF + 1}$$

and A is an arbitrary factor, typically used for normalization. This gives an allocation with a fairness close to F, and the allocation can then be refined to get even closer. Note that this also allows for a prioritization of the allocation, as the $x_i$ will be sorted.

An exact method is to let $x_i = A \cdot \beta^i$, where $\beta$ solves

$$F = \frac{(1 + \beta)\left(1 - \beta^n\right)}{n \left(1 - \beta\right)\left(1 + \beta^n\right)}.$$

A simple way to calculate $\beta$ is to use Newton's method on $g(\beta) = \frac{(1+\beta)(1-\beta^n)}{n(1-\beta)(1+\beta^n)} - F$, which converges consistently and fairly quickly.

Both of these methods generally give non-integer allocations, and sometimes integer allocations are required. This can be done by using one of the above allocation methods, rounding each allocation down to the nearest integer ($\lfloor x_k \rfloor$), and then iteratively allocating one remaining unit at a time, with the probability that user k receives it proportional to the fractional remainder $x_k - \lfloor x_k \rfloor$.
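
A sketch tying the three steps together, under the reconstructed formulas above: solve for beta with Newton's method (seeded by the approximate closed form), build the geometric allocation, then round to integers with the randomized scheme. Function names and interfaces are illustrative assumptions, not a reference implementation.

```python
import math
import random

def solve_beta(n, F, tol=1e-12, max_iter=100):
    """Newton's method on g(b) = (1+b)(1-b^n) / (n(1-b)(1+b^n)) - F.

    Assumes 1/n < F < 1, so a root exists in (0, 1); seeds with the
    approximate closed form b = (nF - 1) / (nF + 1).
    """
    def g(b):
        return (1 + b) * (1 - b**n) / (n * (1 - b) * (1 + b**n)) - F

    b = (n * F - 1) / (n * F + 1)
    for _ in range(max_iter):
        h = 1e-8
        derivative = (g(b + h) - g(b - h)) / (2 * h)  # numerical derivative
        step = g(b) / derivative
        b -= step
        if abs(step) < tol:
            break
    return b

def geometric_allocation(n, F, total=100.0):
    """Allocation x_i = A * beta^i, with A chosen so the x_i sum to `total`."""
    beta = solve_beta(n, F)
    raw = [beta**i for i in range(1, n + 1)]
    A = total / sum(raw)
    return [A * r for r in raw]

def randomized_integer_rounding(xs):
    """Round each x_k down, then hand out the leftover units one at a
    time, choosing user k with probability proportional to the
    fractional remainder x_k - floor(x_k)."""
    floors = [math.floor(x) for x in xs]
    remainders = [x - f for x, f in zip(xs, floors)]
    leftover = round(sum(xs)) - sum(floors)
    for _ in range(leftover):
        k = random.choices(range(len(xs)), weights=remainders)[0]
        floors[k] += 1
        remainders[k] = 0.0  # at most one extra unit per user
    return floors

print(randomized_integer_rounding(geometric_allocation(n=5, F=0.8)))
```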

Max-min fairness

Max-min fairness is said to be achieved by an allocation if and only if the allocation is feasible and an attempt to increase the allocation of any flow necessarily results in the decrease of the allocation of some other flow with an equal or smaller allocation. A max-min fair allocation is achieved by allocating bandwidth equally and in infinitesimal increments to all flows until one flow is satisfied, then continuing equally amongst the remaining flows, and so on, until all flows are satisfied or the bandwidth is exhausted.
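
The standard way to compute such an allocation over a single shared link is progressive filling. The following minimal sketch assumes one bottleneck of known capacity and known per-flow demands (the function and its interface are illustrative):

```python
def max_min_allocation(demands, capacity):
    """Progressive filling on one shared resource.

    Repeatedly split the remaining capacity equally among unsatisfied
    flows; a flow whose demand is below its equal share is capped at
    its demand ("satisfied"), and the leftover is redistributed.
    """
    allocation = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    capacity_left = float(capacity)
    while remaining:
        share = capacity_left / len(remaining)
        i = remaining[0]  # smallest unsatisfied demand
        if demands[i] <= share:
            allocation[i] = float(demands[i])  # satisfied below the fair share
            capacity_left -= demands[i]
            remaining.pop(0)
        else:
            for j in remaining:                # everyone left gets the equal share
                allocation[j] = share
            break
    return allocation

print(max_min_allocation([1, 4, 10], capacity=9))  # [1.0, 4.0, 4.0]
```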

Fairly shared spectrum efficiency

In packet radio wireless networks, the fairly shared spectrum efficiency (FSSE) can be used as a combined measure of fairness and system spectral efficiency. The system spectral efficiency is the aggregate throughput in the network divided by the utilized radio bandwidth in hertz. The FSSE is the portion of the system spectral efficiency that is shared equally among all active users (those with at least one backlogged data packet in queue or under transmission). In case of scheduling starvation, the FSSE is zero during certain time intervals. If the resources are equally shared, the FSSE is equal to the system spectral efficiency. To achieve max-min fairness, the FSSE should be maximized.
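
The definition leaves the "equally shared portion" to be computed from measured throughputs; one natural reading (an interpretation, not a formula from the sources above) is that every active user could be guaranteed the minimum per-user throughput, so only n times that minimum counts as fairly shared:

```python
def fsse(user_throughputs_bps, bandwidth_hz):
    """Fairly shared spectrum efficiency in bit/s/Hz.

    Treats n * min_i(R_i) as the portion of the aggregate throughput
    that is shared equally among the n active users; a single starved
    user therefore drives the measure to zero.
    """
    n = len(user_throughputs_bps)
    equally_shared = n * min(user_throughputs_bps)
    return equally_shared / bandwidth_hz

print(fsse([2e6, 3e6, 1e6], bandwidth_hz=5e6))  # 0.6 bit/s/Hz
print(fsse([2e6, 3e6, 0.0], bandwidth_hz=5e6))  # 0.0: scheduling starvation
```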

FSSE is especially useful when analyzing advanced radio resource management (RRM) schemes, for example channel-adaptive scheduling, for cellular networks with best-effort packet data service. In such a system it may be tempting to optimize the spectral efficiency (i.e. the throughput). However, that might result in scheduling starvation of "expensive" users far from the access point whenever another active user is closer to the same or an adjacent access point. The users would thus experience unstable service, perhaps resulting in a reduced number of happy customers. Optimizing the FSSE results in a compromise between fairness (especially avoiding scheduling starvation) and achieving high spectral efficiency.

If the cost of each user is known, in terms of consumed resources per transferred information bit, the FSSE measure may be redefined to reflect proportional fairness. In a proportional fair system, this "proportionally fair shared spectrum efficiency" (or "fairly shared radio resource cost") is maximized. This policy is less fair, since "expensive" users are given lower throughput than others, but scheduling starvation is still avoided.

QoE fairness

The idea of QoE fairness is to quantify fairness among users by considering the Quality of Experience (QoE) as perceived by the end user. This is especially important in network management, where operators want to keep their users sufficiently satisfied (i.e. high QoE) in a fair manner; see QoE management. Several approaches have been proposed to ensure network-wide QoE fairness, especially for adaptive video streaming. [3] [4]

In contrast to network-related measures like throughput, QoE is typically not measured on a ratio scale. Hence, fairness measures like Jain's fairness index cannot be applied, as they require a ratio scale with a clearly defined zero point (see the examples of misuse for coefficients of variation). QoE may instead be measured on interval scales. A typical example is a 5-point mean opinion score (MOS) scale, with 1 indicating lowest quality and 5 indicating highest quality. While the coefficient of variation is meaningless on such a scale, the standard deviation still provides a measure of the dispersion of QoE among users.

Hossfeld et al. have proposed a QoE fairness index that considers the lower bound L and the upper bound H of the rating scale:

$$F = 1 - \frac{2\sigma}{H - L},$$

where $\sigma$ is the standard deviation of the users' QoE values. [5]

The QoE fairness index has some desired properties, such as scale and metric independence: the unit of measurement does not matter, and any linear transformation of the QoE values leaves the value of the fairness index unchanged. The fairness index is bounded in the interval [0, 1], with 1 indicating perfect QoE fairness, i.e. all users experience the same quality, and 0 indicating total unfairness, e.g. 50% of users experiencing the highest QoE H and 50% experiencing the lowest QoE L.
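
A minimal sketch of the index on a 5-point MOS scale, assuming the F = 1 − 2σ/(H − L) form given above and the population standard deviation:

```python
import statistics

def qoe_fairness(mos_scores, low=1.0, high=5.0):
    """QoE fairness index F = 1 - 2*sigma / (H - L).

    Any linear transformation of the scores (e.g. rescaling the rating
    scale) leaves F unchanged; F = 1 means all users see identical QoE.
    """
    sigma = statistics.pstdev(mos_scores)
    return 1.0 - 2.0 * sigma / (high - low)

print(qoe_fairness([3.0, 3.0, 3.0]))  # 1.0: perfect fairness
print(qoe_fairness([1.0, 5.0]))       # 0.0: total unfairness
```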

Product-based fairness indices

Product-based fairness indices are based on the general fairness formulation

$$F = \prod_{i=1}^{n} f\!\left(\frac{x_i}{x_{\max}}\right),$$

where $x_{\max} = \max_i x_i$ and f is an arbitrary transformation function. For f to be a valid transformation function, $0 \le f(y) \le 1$ for $0 \le y \le 1$, with $f(1) = 1$ so that an equal allocation attains fairness 1. The resulting index thus has a value between 0 and 1. As Jain's fairness index is said to be unduly sensitive under atypical conditions, the product-based fairness can be defined arbitrarily to obtain a desired sensitivity.
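
In code, the whole family reduces to a product over transformed fractions; the sketch below takes the transformation f as a parameter (names are illustrative):

```python
def product_fairness(xs, f):
    """General product-based fairness: prod_i f(x_i / x_max).

    f must map [0, 1] into [0, 1] with f(1) = 1, so the resulting
    index also lies between 0 and 1.
    """
    x_max = max(xs)
    result = 1.0
    for x in xs:
        result *= f(x / x_max)
    return result

linear = lambda y: y  # the linear product-based index discussed below
print(product_fairness([1.0, 1.0, 0.5], linear))  # 0.5
```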

An allocation that has fairness F according to the formulation above can be given by

$$x_i = x_{\max} \, f^{-1}\!\left(F^{\,g(i)}\right),$$

where g is any non-decreasing function with $g(1) = 0$ and $\sum_{i=1}^{n} g(i) = 1$; since $\prod_i f(x_i / x_{\max}) = F^{\sum_i g(i)} = F$, the fairness is attained exactly. It is often convenient to take g to be something like $g(i) = \frac{2(i-1)}{n(n-1)}$. Assuming f is increasing, with $f(y) = y$ and this choice of g, the allocation has a minimum to maximum ratio of about

$$F^{2/n}.$$

The linear product-based fairness index has $f(y) = y$ and looks as follows:

$$F_{\text{linear}} = \prod_{i=1}^{n} \frac{x_i}{x_{\max}}.$$

It is observed that $F_{\text{linear}}$ is very sensitive to small values of $x_i / x_{\max}$. For example, an allocation in which a single user receives one tenth of the maximum, $x = (0.1, 1, \dots, 1)$, yields $F_{\text{linear}} = 0.1$ regardless of the number of users.

G's fairness index

The G's fairness index is primarily used by telecom operators in the context of bandwidth allocation[citation needed]. G's j-th-order fairness index scales the fractions of the product-based fairness index by a powered sine transformation $f(y) = \sin^j\!\left(\frac{\pi y}{2}\right)$:

$$F_G^{(j)} = \prod_{i=1}^{n} \sin^j\!\left(\frac{\pi x_i}{2\, x_{\max}}\right),$$

where $j \ge 1$. The first quadrant of the sine wave is used as a mapping function to inflate fractions $x_i / x_{\max}$. As such, the sensitivity of the product-based fairness is decreased for values close to $x_{\max}$, while the index still outputs a value between 0 and 1.

Compared to Jain's fairness index, G's fairness index yields smaller values, is more sensitive to potential unfair bandwidth distribution, and can go to zero. In the context of networks, the latter is an advantage over Jain's fairness index when a few values in a set drop to low levels. Moreover, Jain's fairness index is deemed an average user perception of fairness [6], whereas G's fairness index is focused more on equality within a group. For example, for $x = (0.5, 1)$ we get $\mathcal{J}(x) = 0.9$ and $F_G^{(1)}(x) = \sin(\pi/4) \approx 0.71$.
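
A short sketch of the index under the sine form above (the function name is illustrative), reproducing the two-user comparison:

```python
import math

def g_fairness(xs, j=1):
    """G's j-th-order index: prod_i sin^j(pi * x_i / (2 * x_max))."""
    x_max = max(xs)
    result = 1.0
    for x in xs:
        result *= math.sin(math.pi * x / (2 * x_max)) ** j
    return result

print(g_fairness([0.5, 1.0]))       # ~0.707, versus 0.9 for Jain's index
print(g_fairness([0.0, 1.0, 1.0]))  # 0.0: one starved user zeroes the index
```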

Bossaer's fairness index

Whereas G's fairness index inflates the fractions closer to $x_{\max}$, Bossaer's fairness index inflates the fractions closer to 0. Bossaer's j-th-order transformation function $f(y) = y^{1/j}$ yields the fairness index:

$$F_B^{(j)} = \prod_{i=1}^{n} \left(\frac{x_i}{x_{\max}}\right)^{1/j}.$$

The linear product-based fairness index is a special case of Bossaer's where $j = 1$.
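
Under the $y^{1/j}$ reading above, Bossaer's index is a one-line variant of the product form; a hypothetical sketch:

```python
def bossaer_fairness(xs, j=1):
    """Bossaer's j-th-order index: prod_i (x_i / x_max)^(1/j).

    j = 1 recovers the linear product-based index; larger j inflates
    small fractions, softening the sensitivity to users far below the
    maximum.
    """
    x_max = max(xs)
    result = 1.0
    for x in xs:
        result *= (x / x_max) ** (1.0 / j)
    return result

print(bossaer_fairness([0.1, 1.0], j=1))  # 0.1: the linear index
print(bossaer_fairness([0.1, 1.0], j=2))  # ~0.316: less sensitive
```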

Causal fairness

Causal fairness measures the frequency with which two nearly identical users or applications, who differ only in a set of characteristics with respect to which resource allocation must be fair, receive identical treatment. [7]
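
As a toy illustration of the definition (everything here is hypothetical: the allocator under test, the protected attribute, and the input generator), causal fairness can be estimated by flipping only the protected characteristic and counting identical outcomes:

```python
import random

def causal_fairness(allocate, make_user, flip_protected, trials=10_000):
    """Fraction of otherwise-identical user pairs receiving identical
    treatment when only the protected characteristic differs."""
    identical = 0
    for _ in range(trials):
        user = make_user()
        twin = flip_protected(user)
        if allocate(user) == allocate(twin):
            identical += 1
    return identical / trials

# Toy allocator: bandwidth tier depends only on the subscription plan,
# so flipping the protected "region" attribute never changes treatment.
allocate = lambda u: "fast" if u["plan"] == "premium" else "slow"
make_user = lambda: {"plan": random.choice(["basic", "premium"]),
                     "region": random.choice(["A", "B"])}
flip_protected = lambda u: {**u, "region": "B" if u["region"] == "A" else "A"}

print(causal_fairness(allocate, make_user, flip_protected))  # 1.0: fair
```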

Other metrics

Several other metrics have been defined, such as Worst Case Fairness. [8]

Notes

  1. Pokhrel, Shiva Raj; Panda, Manoj; Vu, Hai L.; Mandjes, Michel (2016). "TCP Performance over Wi-Fi: Joint Impact of Buffer and Channel Losses". IEEE Transactions on Mobile Computing. 15 (5): 1279–1291. doi:10.1109/TMC.2015.2456883. S2CID 10323290.
  2. Jain, R.; Chiu, D. M.; Hawe, W. (1984). "A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer Systems" (PDF). DEC Research Report TR-301.
  3. Georgopoulos, Panagiotis; Elkhatib, Yehia; Broadbent, Matthew; Mu, Mu; Race, Nicholas (2013). "Towards network-wide QoE fairness using openflow-assisted adaptive video streaming". Proceedings of the 2013 ACM SIGCOMM Workshop on Future Human-Centric Multimedia Networking. pp. 15–20. doi:10.1145/2491172.2491181. ISBN 9781450321839. S2CID 2946134.
  4. Petrangeli, Stefano; Claeys, Maxim; Latre, Steven; Famaey, Jeroen; De Turck, Filip (2014). "A multi-agent Q-Learning-based framework for achieving fairness in HTTP Adaptive Streaming". 2014 IEEE Network Operations and Management Symposium (NOMS). pp. 1–9. doi:10.1109/NOMS.2014.6838245. ISBN 978-1-4799-0913-1. S2CID 16573649.
  5. Hossfeld, Tobias; Skorin-Kapov, Lea; Heegaard, Poul E.; Varela, Martin (11 Oct 2016). "Definition of QoE fairness in shared systems". IEEE Communications Letters. 21 (1): 184–187. doi:10.1109/LCOMM.2016.2616342. hdl:11250/2433049. S2CID 23790117. Preprint: Hossfeld, Tobias; Skorin-Kapov, Lea; Heegaard, Poul E.; Varela, Martin (19 Sep 2017). "Definition of QoE fairness in shared systems". Zenodo. doi:10.5281/zenodo.893343.
  6. "Throughput Fairness Index: An Explanation".
  7. Galhotra, Sainyam; Brun, Yuriy; Meliou, Alexandra (2017). "Fairness testing: Testing software for discrimination". Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. pp. 498–510. arXiv:1709.03221. doi:10.1145/3106237.3106277. ISBN 9781450351058. S2CID 6324652.
  8. Bennett, J. C. R.; Zhang, Hui (1996). "WF²Q: Worst-case fair weighted fair queueing". Proceedings of IEEE INFOCOM '96. Conference on Computer Communications. Vol. 1. p. 120. doi:10.1109/INFCOM.1996.497885. ISBN 978-0-8186-7293-4. S2CID 17558577.
