Atkinson index

The Atkinson index (also known as the Atkinson measure or Atkinson inequality measure) is a measure of income inequality developed by British economist Anthony Barnes Atkinson. The measure is useful in determining which end of the distribution contributed most to the observed inequality. [1]

Definition

The index can be turned into a normative measure by imposing a coefficient $\varepsilon$ to weight incomes. Greater weight can be placed on changes in a given portion of the income distribution by choosing $\varepsilon$, the level of "inequality aversion", appropriately. The Atkinson index becomes more sensitive to changes at the lower end of the income distribution as $\varepsilon$ increases. Conversely, as the level of inequality aversion falls (that is, as $\varepsilon$ approaches 0) the Atkinson index becomes less sensitive to changes at the lower end of the distribution. The Atkinson index is not highly sensitive to top incomes for any value of $\varepsilon$, because of the common restriction that $\varepsilon$ is nonnegative. [2]

The Atkinson parameter $\varepsilon$ is often called the "inequality aversion parameter", since it regulates the sensitivity of the implied social welfare losses from inequality to income inequality as measured by some corresponding generalised entropy index. The Atkinson index is defined in reference to a corresponding social welfare function, where mean income multiplied by one minus the Atkinson index gives the welfare-equivalent equally distributed income. Thus the Atkinson index gives the share of current income that could be sacrificed, without reducing social welfare, if perfect equality were instated. For $\varepsilon = 0$ (no aversion to inequality), the marginal social welfare from income is invariant to income, i.e. marginal increases in income produce as much social welfare whether they go to a poor or a rich individual. In this case, the welfare-equivalent equally distributed income is equal to mean income, and the Atkinson index is zero.

For $\varepsilon \to \infty$ (infinite aversion to inequality) the marginal social welfare of the poorest individual's income is infinitely larger than that of any even slightly richer individual, and the Atkinson social welfare function is equal to the smallest income in the sample. In this case, the Atkinson index is equal to mean income minus the smallest income, divided by mean income. Since incomes of zero or near zero are common in typical large income distributions, the Atkinson index tends to be one or very close to one for very large $\varepsilon$.

The Atkinson index thus varies between 0 and 1 and is a measure of the amount of social utility to be gained by complete redistribution of a given income distribution, for a given parameter $\varepsilon$. Under the utilitarian ethical standard and some restrictive assumptions (a homogeneous population and constant elasticity of substitution utility), $\varepsilon$ is equal to the income elasticity of marginal utility of income.
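For reference, a standard presentation (not spelled out in the text above) writes the underlying social welfare function in constant-elasticity form; the equally distributed equivalent income $y_{\mathrm{EDE}}$ is the uniform income that yields the same welfare as the observed distribution, and the index is one minus its ratio to mean income:

$$
W_\varepsilon=
\begin{cases}
\dfrac{1}{N}\sum_{i=1}^{N}\dfrac{y_i^{\,1-\varepsilon}}{1-\varepsilon}, & \varepsilon\neq 1,\\[2ex]
\dfrac{1}{N}\sum_{i=1}^{N}\ln y_i, & \varepsilon=1,
\end{cases}
\qquad
A_\varepsilon \;=\; 1-\frac{y_{\mathrm{EDE}}}{\mu},
$$

where $\mu$ is mean income and $y_{\mathrm{EDE}}$ satisfies $W_\varepsilon(y_{\mathrm{EDE}},\ldots,y_{\mathrm{EDE}})=W_\varepsilon(y_1,\ldots,y_N)$.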


The Atkinson index is defined as:

$$
A_\varepsilon(y_1,\ldots,y_N)=
\begin{cases}
1-\dfrac{1}{\mu}\left(\dfrac{1}{N}\sum_{i=1}^{N}y_i^{\,1-\varepsilon}\right)^{\frac{1}{1-\varepsilon}}, & \text{for } 0\leq\varepsilon\neq 1,\\[2ex]
1-\dfrac{1}{\mu}\left(\prod_{i=1}^{N}y_i\right)^{\frac{1}{N}}, & \text{for } \varepsilon=1,
\end{cases}
$$

where $y_i$ is individual income ($i = 1, 2, \ldots, N$) and $\mu$ is the mean income.

In other words, the Atkinson index is the complement to 1 of the ratio of the Hölder generalized mean of exponent 1−ε to the arithmetic mean of the incomes (where as usual the generalized mean of exponent 0 is interpreted as the geometric mean).
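The following is a minimal Python sketch of this definition (not part of the original article); the atkinson function name and the sample incomes are illustrative assumptions:

```python
import numpy as np

def atkinson(incomes, epsilon):
    """Atkinson index of a sample of strictly positive incomes.

    epsilon >= 0 is the inequality-aversion parameter; epsilon = 1 uses the
    geometric-mean limit of the generalized mean of exponent 1 - epsilon.
    """
    y = np.asarray(incomes, dtype=float)
    if np.any(y <= 0):
        raise ValueError("incomes must be strictly positive")
    mu = y.mean()
    if epsilon == 1:
        # generalized mean of exponent 0 = geometric mean
        ede = np.exp(np.mean(np.log(y)))
    else:
        # Hölder generalized mean of exponent 1 - epsilon
        ede = np.mean(y ** (1.0 - epsilon)) ** (1.0 / (1.0 - epsilon))
    return 1.0 - ede / mu

# Higher inequality aversion weights the lower tail more heavily,
# so the index rises with epsilon for the same (hypothetical) sample.
sample = [10_000, 20_000, 30_000, 40_000, 100_000]
print(atkinson(sample, 0.5))  # ≈ 0.13
print(atkinson(sample, 2.0))  # ≈ 0.43
```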

The Atkinson index satisfies the following properties; a short numerical check of several of them appears after the list:

  1. The index is symmetric in its arguments: $A_\varepsilon(y_1,\ldots,y_N)=A_\varepsilon(y_{\sigma(1)},\ldots,y_{\sigma(N)})$ for any permutation $\sigma$.
  2. The index is non-negative, and is equal to zero only if all incomes are the same: $A_\varepsilon(y_1,\ldots,y_N)=0$ iff $y_i=\mu$ for all $i$.
  3. The index satisfies the principle of transfers: if a transfer $\Delta>0$ is made from an individual with income $y_i$ to another one with income $y_j$ such that $y_i-\Delta>y_j+\Delta$, then the inequality index cannot increase.
  4. The index satisfies the population replication axiom: if a new population is formed by replicating the existing population an arbitrary number of times, the inequality remains the same: $A_\varepsilon(\{y_1,\ldots,y_N\},\ldots,\{y_1,\ldots,y_N\})=A_\varepsilon(y_1,\ldots,y_N)$.
  5. The index satisfies the mean independence, or income homogeneity, axiom: if all incomes are multiplied by a positive constant, the inequality remains the same: $A_\varepsilon(ky_1,\ldots,ky_N)=A_\varepsilon(y_1,\ldots,y_N)$ for any $k>0$.
  6. The index is subgroup decomposable. [3] This means that overall inequality in the population can be computed as the sum of the corresponding Atkinson indices within each group, and the Atkinson index of the group mean incomes:
$$A_\varepsilon(y_{gi}\colon g=1,\ldots,G,\; i=1,\ldots,N_g)=\sum_{g=1}^{G} w_g\, A_\varepsilon(y_{g1},\ldots,y_{gN_g})+A_\varepsilon(\mu_1,\ldots,\mu_G),$$
where $g$ indexes groups, $i$ individuals within groups, $\mu_g$ is the mean income in group $g$, and the weights $w_g$ depend on $\mu_g$ and $N_g$. The class of subgroup-decomposable inequality indices is very restrictive. Many popular indices, including the Gini index, do not satisfy this property.
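As a quick numerical check of properties 1, 4 and 5 (assuming the hypothetical atkinson function sketched above is in scope; the sample and aversion parameter are arbitrary illustrative values):

```python
import numpy as np

# Reuses the atkinson() sketch above.
y = [12_000, 25_000, 25_000, 60_000]
eps = 1.5

base = atkinson(y, eps)
assert np.isclose(atkinson(y[::-1], eps), base)               # 1. symmetry
assert np.isclose(atkinson(y * 3, eps), base)                 # 4. population replication
assert np.isclose(atkinson([2.5 * v for v in y], eps), base)  # 5. mean independence
```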

See also

  * Gini coefficient
  * Lorenz curve
  * Generalized entropy index
  * Theil index
  * Hoover index
  * Mean log deviation
  * Income inequality metrics

Footnotes

  1. inter alia "Income, Poverty, and Health Insurance Coverage in the United States: 2010", U.S. Census Bureau, 2011, p.10
  2. The Atkinson index is related to the generalized entropy (GE) class of inequality indices by $\varepsilon=1-\alpha$, i.e. an Atkinson index with high inequality aversion is derived from a GE index with small $\alpha$. GE indices with large $\alpha$ are sensitive to the existence of large top incomes, but the corresponding Atkinson index would have negative $\varepsilon$. For a hypothetical Atkinson index with negative $\varepsilon$, the implied social utility function would be convex in income, and the Atkinson index would be nonpositive.
  3. Shorrocks, A. F. (1980). "The Class of Additively Decomposable Inequality Measures". Econometrica, 48(3), 613–625. doi:10.2307/1913126
