The Atkinson index (also known as the Atkinson measure or Atkinson inequality measure) is a measure of income inequality developed by British economist Anthony Barnes Atkinson. The measure is useful in determining which end of the distribution contributed most to the observed inequality. [1]
The Atkinson index is defined as:

$$A_\varepsilon(y_1,\ldots,y_N) = \begin{cases} 1 - \dfrac{1}{\mu}\left(\dfrac{1}{N}\sum_{i=1}^{N} y_i^{1-\varepsilon}\right)^{\frac{1}{1-\varepsilon}} & \text{for } 0 \leq \varepsilon \neq 1 \\[2ex] 1 - \dfrac{1}{\mu}\left(\prod_{i=1}^{N} y_i\right)^{1/N} & \text{for } \varepsilon = 1 \end{cases}$$

where $y_i$ is individual income ($i = 1, 2, \ldots, N$) and $\mu$ is the mean income.
In other words, the Atkinson index is the complement to 1 of the ratio of the Hölder generalized mean of exponent 1−ε to the arithmetic mean of the incomes (where as usual the generalized mean of exponent 0 is interpreted as the geometric mean).
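As a concrete illustration, here is a minimal Python sketch of this definition (the function name and example incomes are ours, not part of any standard library):

```python
import numpy as np

def atkinson(incomes, epsilon):
    """Atkinson inequality index for an array of positive incomes.

    epsilon >= 0 is the inequality-aversion parameter; epsilon = 1 uses
    the geometric-mean form of the generalized mean.
    """
    y = np.asarray(incomes, dtype=float)
    mu = y.mean()
    if epsilon == 1:
        # Generalized mean of exponent 0 = geometric mean.
        return 1 - np.exp(np.mean(np.log(y))) / mu
    # Hölder generalized mean of exponent 1 - epsilon.
    ge_mean = np.mean(y ** (1 - epsilon)) ** (1 / (1 - epsilon))
    return 1 - ge_mean / mu
```

For a perfectly equal distribution the index is zero for any $\varepsilon$, while for an unequal one it grows with $\varepsilon$:

```python
y = [1, 1, 4, 10]
for eps in (0.5, 1, 2):
    print(eps, round(atkinson(y, eps), 3))  # ~0.198, ~0.371, ~0.574
```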
The index can be turned into a normative measure by imposing a coefficient to weight incomes. Greater weight can be placed on changes in a given portion of the income distribution by choosing $\varepsilon$, the level of "inequality aversion", appropriately. The Atkinson index becomes more sensitive to changes at the lower end of the income distribution as $\varepsilon$ increases. Conversely, as the level of inequality aversion falls (that is, as $\varepsilon$ approaches 0) the Atkinson index becomes less sensitive to changes in the lower end of the distribution. The Atkinson index is not highly sensitive to top incomes for any value of $\varepsilon$, because of the common restriction that $\varepsilon$ is nonnegative. [2]
The Atkinson parameter $\varepsilon$ is often called the "inequality aversion parameter", since it regulates the sensitivity of the implied social welfare losses from inequality to income inequality as measured by some corresponding generalised entropy index. The Atkinson index is defined in reference to a corresponding social welfare function, where mean income multiplied by one minus the Atkinson index gives the welfare-equivalent equally distributed income. Thus the Atkinson index gives the share of current income which could be sacrificed, without reducing social welfare, if perfect equality were instated. For $\varepsilon = 0$ (no aversion to inequality), the marginal social welfare from income is invariant to income, i.e. marginal increases in income produce as much social welfare whether they go to a poor or rich individual. In this case, the welfare-equivalent equally distributed income is equal to mean income, and the Atkinson index is zero.
For $\varepsilon = \infty$ (infinite aversion to inequality), the marginal social welfare of income of the poorest individual is infinitely larger than that of any even slightly richer individual, and the Atkinson social welfare function is equal to the smallest income in the sample. In this case, the Atkinson index is equal to mean income minus the smallest income, divided by mean income. Since incomes of zero or near zero are common in typical large income distributions, the Atkinson index will tend to be one or very close to one for very large $\varepsilon$.
The Atkinson index then varies between 0 and 1 and is a measure of the amount of social utility to be gained by complete redistribution of a given income distribution, for a given parameter $\varepsilon$. Under the utilitarian ethical standard and some restrictive assumptions (a homogeneous population and constant elasticity of substitution utility), $\varepsilon$ is equal to the income elasticity of marginal utility of income.
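Continuing the sketch above (the variable names are ours), mean income times one minus the index gives the welfare-equivalent equally distributed income:

```python
y = [1, 1, 4, 10]                 # mean income mu = 4
A = atkinson(y, epsilon=1)        # ~0.371
ede = np.mean(y) * (1 - A)        # equally distributed equivalent income, ~2.51
# Society is indifferent between the actual distribution y and everyone
# receiving ede; the share A of total income is lost to inequality.
```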
The Atkinson index with inequality aversion $\varepsilon$ is equivalent (under a monotonic rescaling) to a generalized entropy index with parameter $\alpha = 1 - \varepsilon$.
The formula for deriving an Atkinson index with inequality aversion parameter $\varepsilon$ from the corresponding GE index under the restriction $\varepsilon = 1 - \alpha$ is given by:

$$A = 1 - \left[\varepsilon(\varepsilon - 1)\,GE + 1\right]^{1/(1-\varepsilon)} \qquad \varepsilon \neq 1$$
$$A = 1 - e^{-GE} \qquad \varepsilon = 1$$
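This rescaling can be transcribed directly (a sketch; `atkinson_from_ge` is our own name, not a standard library function):

```python
import math

def atkinson_from_ge(ge, epsilon):
    """Convert the generalized entropy index GE(alpha), alpha = 1 - epsilon,
    to the Atkinson index with inequality aversion epsilon >= 0."""
    if epsilon == 1:
        return 1 - math.exp(-ge)
    return 1 - (epsilon * (epsilon - 1) * ge + 1) ** (1 / (1 - epsilon))
```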
The Atkinson index satisfies the following properties:

1. The index is symmetric in its arguments: $A_\varepsilon(y_1,\ldots,y_N) = A_\varepsilon(y_{\sigma(1)},\ldots,y_{\sigma(N)})$ for any permutation $\sigma$.
2. The index is non-negative, and is equal to zero only if all incomes are the same.
3. The index satisfies the principle of transfers: a transfer $\Delta > 0$ from a richer individual to a poorer one that does not reverse their ranking cannot increase the index.
4. The index satisfies the population replication axiom: replicating the population leaves the index unchanged.
5. The index satisfies mean independence, or income homogeneity: multiplying all incomes by a positive constant leaves the index unchanged.
In economics, the Gini coefficient, also known as the Gini index or Gini ratio, is a measure of statistical dispersion intended to represent the income inequality, the wealth inequality, or the consumption inequality within a nation or a social group. It was developed by Italian statistician and sociologist Corrado Gini.
In economics, the Lorenz curve is a graphical representation of the distribution of income or of wealth. It was developed by Max O. Lorenz in 1905 for representing inequality of the wealth distribution.
The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution that is used in description of social, quality control, scientific, geophysical, actuarial, and many other types of observable phenomena; the principle originally applied to describing the distribution of wealth in a society, fitting the trend that a large portion of wealth is held by a small fraction of the population. The Pareto principle or "80-20 rule" stating that 80% of outcomes are due to 20% of causes was named in honour of Pareto, but the concepts are distinct, and only Pareto distributions with shape value of $\log_4 5 \approx 1.16$ precisely reflect it. Empirical observation has shown that this 80-20 distribution fits a wide range of cases, including natural phenomena and human activities.
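To see where this value comes from (a standard derivation, sketched here for convenience): for a Pareto distribution with shape $\alpha > 1$, the richest fraction $p$ of the population holds a share $p^{1-1/\alpha}$ of total wealth, so the 80-20 rule requires

$$0.2^{\,1-1/\alpha} = 0.8 \;\Longrightarrow\; 1 - \frac{1}{\alpha} = \frac{\ln 0.8}{\ln 0.2} = 1 - \frac{\ln 4}{\ln 5} \;\Longrightarrow\; \alpha = \log_4 5 \approx 1.16.$$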
Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. This is the first of two theorems published by mathematician Emmy Noether in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries of physical space.
In probability theory, Chebyshev's inequality provides an upper bound on the probability of deviation of a random variable from its mean. More specifically, the probability that a random variable deviates from its mean by more than $k\sigma$ is at most $1/k^2$, where $k$ is any positive constant and $\sigma$ is the standard deviation.
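A quick numerical sanity check of the bound (our own illustrative script, using an exponential distribution with mean and standard deviation both equal to 1):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)
for k in (1.5, 2, 3):
    empirical = np.mean(np.abs(x - x.mean()) > k * x.std())
    print(f"k={k}: empirical {empirical:.4f} <= bound {1/k**2:.4f}")
```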
Fermi–Dirac statistics is a type of quantum statistics that applies to the physics of a system consisting of many non-interacting, identical particles that obey the Pauli exclusion principle. A result is the Fermi–Dirac distribution of particles over energy states. It is named after Enrico Fermi and Paul Dirac, each of whom derived the distribution independently in 1926. Fermi–Dirac statistics is a part of the field of statistical mechanics and uses the principles of quantum mechanics.
In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible.
In mathematics, particularly in linear algebra, tensor analysis, and differential geometry, the Levi-Civita symbol or Levi-Civita epsilon represents a collection of numbers defined from the sign of a permutation of the natural numbers 1, 2, ..., n, for some positive integer n. It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol, which refer to its antisymmetric property and definition in terms of permutations.
In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression.
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.
Income inequality metrics or income distribution metrics are used by social scientists to measure the distribution of income and economic inequality among the participants in a particular economy, such as that of a specific country or of the world in general. While different theories may try to explain how income inequality comes about, income inequality metrics simply provide a system of measurement used to determine the dispersion of incomes. The concept of inequality is distinct from poverty and fairness.
In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another, except that it is additionally multiplied or weighted by a power W of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W are called tensor capacity. A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle.
The Theil index is a statistic primarily used to measure economic inequality and other economic phenomena, though it has also been used to measure racial segregation. The Theil index $T_T$ is the same as redundancy in information theory, which is the maximum possible entropy of the data minus the observed entropy. It is a special case of the generalized entropy index. It can be viewed as a measure of redundancy, lack of diversity, isolation, segregation, inequality, non-randomness, and compressibility. It was proposed by Dutch econometrician Henri Theil (1924–2000) at the Erasmus University Rotterdam.
In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy, states that

$$\left(\sum_{i=1}^{n} a_i c_i\right)\left(\sum_{j=1}^{n} b_j d_j\right) = \left(\sum_{i=1}^{n} a_i d_i\right)\left(\sum_{j=1}^{n} b_j c_j\right) + \sum_{1 \le i < j \le n} (a_i b_j - a_j b_i)(c_i d_j - c_j d_i)$$

for every choice of real or complex numbers $a_i, b_i, c_i, d_i$. Setting $a_i = c_i$ and $b_j = d_j$, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the Euclidean space $\mathbb{R}^n$. The Binet–Cauchy identity is a special case of the Cauchy–Binet formula for matrix determinants.
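A tiny numerical check of the identity (random vectors; the script and names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 5))

lhs = (a @ c) * (b @ d)
cross = sum((a[i]*b[j] - a[j]*b[i]) * (c[i]*d[j] - c[j]*d[i])
            for i in range(5) for j in range(i + 1, 5))
rhs = (a @ d) * (b @ c) + cross
assert np.isclose(lhs, rhs)
```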
The Foster–Greer–Thorbecke indices are a family of poverty metrics. The most commonly used index from the family, $FGT_2$, puts higher weight on the poverty of the poorest individuals, making it a combined measure of poverty and income inequality and a popular choice within development economics. The indices were introduced in a 1984 paper by economists Erik Thorbecke, Joel Greer, and James Foster.
The generalized entropy index has been proposed as a measure of income inequality in a population. It is derived from information theory as a measure of redundancy in data. In information theory a measure of redundancy can be interpreted as non-randomness or data compression; thus this interpretation also applies to this index. In addition, interpretation of biodiversity as entropy has also been proposed leading to uses of generalized entropy to quantify biodiversity.
Coherent states have been introduced in a physical context, first as quasi-classical states in quantum mechanics, then as the backbone of quantum optics, and they are described in that spirit in the article Coherent states. However, they have generated a huge variety of generalizations, which have led to a tremendous amount of literature in mathematical physics. In this article, we sketch the main directions of research along these lines. For further details, we refer to several existing surveys.
In mathematics, a smooth maximum of an indexed family $x_1, \ldots, x_n$ of numbers is a smooth approximation to the maximum function $\max(x_1,\ldots,x_n)$, meaning a parametric family of functions $m_\alpha(x_1,\ldots,x_n)$ such that for every $\alpha$ the function $m_\alpha$ is smooth, and the family converges to the maximum function as $\alpha \to \infty$. The concept of smooth minimum is similarly defined. In many cases, a single family approximates both: maximum as the parameter goes to positive infinity, minimum as the parameter goes to negative infinity; in symbols, $m_\alpha \to \max$ as $\alpha \to \infty$ and $m_\alpha \to \min$ as $\alpha \to -\infty$. The term can also be used loosely for a specific smooth function that behaves similarly to a maximum, without necessarily being part of a parametrized family.
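One family with this two-sided behavior is the softmax-weighted average (the Boltzmann operator); here is an illustrative Python sketch:

```python
import numpy as np

def smooth_max(x, alpha):
    """Softmax-weighted average of x: tends to max(x) as alpha -> +inf
    and to min(x) as alpha -> -inf."""
    x = np.asarray(x, dtype=float)
    z = alpha * x
    w = np.exp(z - z.max())  # shift exponents for numerical stability
    return np.sum(x * w) / np.sum(w)

x = [1.0, 2.0, 3.0]
print(smooth_max(x, 10.0))   # ~3.0, close to the maximum
print(smooth_max(x, -10.0))  # ~1.0, close to the minimum
```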
Functional regression is a version of regression analysis when responses or covariates include functional data. Functional regression models can be classified into four types depending on whether the responses or covariates are functional or scalar: (i) scalar responses with functional covariates, (ii) functional responses with scalar covariates, (iii) functional responses with functional covariates, and (iv) scalar or functional responses with functional and scalar covariates. In addition, functional regression models can be linear, partially linear, or nonlinear. In particular, functional polynomial models, functional single and multiple index models and functional additive models are three special cases of functional nonlinear models.
Buchholz's psi-functions are a hierarchy of single-argument ordinal functions introduced by German mathematician Wilfried Buchholz in 1986. These functions are a simplified version of the $\theta$-functions, but nevertheless have the same strength as those. Later on this approach was extended by Jäger and Schütte.