Pareto efficiency or Pareto optimality is a situation that cannot be modified so as to make any one individual or preference criterion better off without making at least one individual or preference criterion worse off. The concept is named after Vilfredo Pareto (1848–1923), Italian engineer and economist, who used the concept in his studies of economic efficiency and income distribution. The following three concepts are closely related:
The Pareto frontier is the set of all Pareto efficient allocations, conventionally shown graphically. It also is variously known as the Pareto front or Pareto set.
"Pareto efficiency" is considered as a minimal notion of efficiency that does not necessarily result in a socially desirable distribution of resources: it makes no statement about equality, or the overall well-being of a society. 46–49 It is a necessary, but not sufficient, condition of efficiency.:
In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.
Besides economics, the notion of Pareto efficiency has been applied to the selection of alternatives in engineering and biology. Each option is first assessed under multiple criteria, and a subset of options is then identified with the property that no other option can categorically outperform any of them. This is a statement of the impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization).
"Pareto optimality" is a formally defined concept used to describe when an allocation is optimal. An allocation is not Pareto optimal if there is an alternative allocation where improvements can be made to at least one participant's well-being without reducing any other participant's well-being. If there is a transfer that satisfies this condition, the reallocation is called a "Pareto improvement". When no further Pareto improvements are possible, the allocation is a "Pareto optimum".
The formal presentation of the concept in an economy is as follows. Consider an economy with $n$ agents and $k$ goods. Then an allocation $\{x_1, \ldots, x_n\}$, where $x_i \in \mathbb{R}^k$ for all $i$, is Pareto optimal if there is no other feasible allocation $\{x_1', \ldots, x_n'\}$ such that, for utility function $u_i$ for each agent $i$, $u_i(x_i') \ge u_i(x_i)$ for all $i$ with $u_i(x_i') > u_i(x_i)$ for some $i$. Here, in this simple economy, "feasibility" refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. In a more complex economy with production, an allocation would consist both of consumption vectors and production vectors, and feasibility would require that the total amount of each consumed good is no greater than the initial endowment plus the amount produced.
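For a finite economy, the definition can be checked by enumeration. The following sketch uses hypothetical valuations for two agents and two indivisible goods:

```python
from itertools import product

def is_pareto_optimal(profile, feasible_profiles):
    """A profile is Pareto optimal if no feasible profile makes some agent
    strictly better off while making no agent worse off."""
    return not any(
        all(o >= p for o, p in zip(other, profile))
        and any(o > p for o, p in zip(other, profile))
        for other in feasible_profiles
    )

# Hypothetical two-agent economy: Alice values the goods at (3, 2),
# George at (1, 4); each good goes wholly to one agent.
alice, george = (3, 2), (1, 4)
profiles = []
for assignment in product([0, 1], repeat=2):  # 1 = good goes to Alice
    u_alice = sum(v for v, a in zip(alice, assignment) if a == 1)
    u_george = sum(v for v, a in zip(george, assignment) if a == 0)
    profiles.append((u_alice, u_george))

optimal = [p for p in profiles if is_pareto_optimal(p, profiles)]
```

Here the profile (2, 1), where Alice gets only the second good, is dominated by (3, 4), so it is the only allocation that fails the test.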
In principle, a change from an inefficient economic allocation to an efficient one is not necessarily a Pareto improvement. Even when there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the change is not a Pareto improvement. For instance, if a change in economic policy eliminates a monopoly and that market subsequently becomes competitive, the gain to others may be large. However, since the monopolist is disadvantaged, this is not a Pareto improvement. In theory, if the gains to the economy are larger than the loss to the monopolist, the monopolist could be compensated for its loss while still leaving a net gain for others in the economy, allowing for a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, compensation of one or more parties may be required. It is acknowledged that, in the real world, such compensations may have unintended consequences leading to incentive distortions over time, as agents anticipate such compensations and change their actions accordingly.
Under the idealized conditions of the first welfare theorem, a system of free markets, also called a "competitive equilibrium", leads to a Pareto-efficient outcome. It was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu.
However, the result only holds under the restrictive assumptions necessary for the proof: markets exist for all possible goods, so there are no externalities; all markets are in full equilibrium; markets are perfectly competitive; transaction costs are negligible; and market participants have perfect information.
In the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient, per the Greenwald-Stiglitz theorem.
The second welfare theorem is essentially the reverse of the first welfare theorem. It states that, under similarly ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free-market system, although it may also require a lump-sum transfer of wealth.
Weak Pareto optimality is a situation that cannot be strictly improved for every individual.
Formally, we define a strong Pareto improvement as a situation in which all agents are strictly better off (in contrast to a plain "Pareto improvement", which requires only that at least one agent is strictly better off and the other agents are at least as well off). A situation is weakly Pareto optimal if it admits no strong Pareto improvement.
Any strong Pareto improvement is also a weak Pareto improvement. The opposite is not true; for example, consider a resource allocation problem with two resources, which Alice values at 10, 0 and George values at 5, 5. Consider the allocation giving all resources to Alice, where the utility profile is (10, 0). It is weakly Pareto optimal: since Alice's utility is already at its maximum, no reallocation can make both agents strictly better off. It is not Pareto optimal, however, because giving the second resource (which Alice values at 0) to George yields the profile (10, 5), a Pareto improvement.
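The two dominance notions can be made concrete in a few lines, reusing the Alice and George example above (a sketch; the checks follow the definitions directly):

```python
def pareto_improves(new, old):
    """Plain Pareto improvement: no agent worse off, some agent strictly better."""
    return all(n >= o for n, o in zip(new, old)) and any(n > o for n, o in zip(new, old))

def strongly_improves(new, old):
    """Strong Pareto improvement: every agent strictly better off."""
    return all(n > o for n, o in zip(new, old))

# Two resources; Alice values them (10, 0), George values them (5, 5).
alice_vals, george_vals = (10, 0), (5, 5)
profiles = []
for a1 in (0, 1):
    for a2 in (0, 1):  # 1 = resource goes to Alice
        u_a = a1 * alice_vals[0] + a2 * alice_vals[1]
        u_g = (1 - a1) * george_vals[0] + (1 - a2) * george_vals[1]
        profiles.append((u_a, u_g))

all_to_alice = (10, 0)
# (10, 5) is a Pareto improvement over (10, 0), so (10, 0) is not Pareto optimal...
has_improvement = any(pareto_improves(p, all_to_alice) for p in profiles)
# ...but no profile makes *both* agents strictly better off (Alice already has
# her maximum), so (10, 0) is weakly Pareto optimal.
has_strong_improvement = any(strongly_improves(p, all_to_alice) for p in profiles)
```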
A market does not require local nonsatiation to arrive at a weak Pareto optimum.
Constrained Pareto optimality is a weakening of Pareto-optimality, accounting for the fact that a potential planner (e.g., the government) may not be able to improve upon a decentralized market outcome, even if that outcome is inefficient. This will occur if it is limited by the same informational or institutional constraints as are individual agents.
An example is of a setting where individuals have private information (for example, a labor market where the worker's own productivity is known to the worker but not to a potential employer, or a used-car market where the quality of a car is known to the seller but not to the buyer) which results in moral hazard or an adverse selection and a sub-optimal outcome. In such a case, a planner who wishes to improve the situation is unlikely to have access to any information that the participants in the markets do not have. Hence, the planner cannot implement allocation rules which are based on the idiosyncratic characteristics of individuals; for example, "if a person is of type A, they pay price p1, but if of type B, they pay price p2" (see Lindahl prices). Essentially, only anonymous rules are allowed (of the sort "Everyone pays price p") or rules based on observable behavior; "if any person chooses x at price px, then they get a subsidy of ten dollars, and nothing otherwise". If there exists no allowed rule that can successfully improve upon the market outcome, then that outcome is said to be "constrained Pareto-optimal".
The concept of constrained Pareto optimality assumes benevolence on the part of the planner and hence is distinct from the concept of government failure, which occurs when the policy making politicians fail to achieve an optimal outcome simply because they are not necessarily acting in the public's best interest.
Fractional Pareto optimality is a strengthening of Pareto-optimality in the context of fair item allocation. An allocation of indivisible items is fractionally Pareto-optimal (fPO) if it is not Pareto-dominated even by an allocation in which some items are split between agents. This is in contrast to standard Pareto-optimality, which only considers domination by feasible (discrete) allocations.
As an example, consider an item allocation problem with two items, which Alice values at 3, 2 and George values at 4, 1. Consider the allocation giving the first item to Alice and the second to George, where the utility profile is (3, 1). This allocation is Pareto optimal with respect to discrete allocations, since no reassignment of whole items dominates it. It is not fractionally Pareto optimal, however: splitting the first item equally between the agents and giving the second item wholly to Alice yields the profile (3.5, 2), which dominates (3, 1).
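A brute-force sketch over a hypothetical grid of fractional shares can search for a fractional allocation dominating (3, 1):

```python
# Alice values the items at (3, 2), George at (4, 1).
# s1, s2 are the fractions of item 1 and item 2 given to Alice.
def profile(s1, s2):
    return (3 * s1 + 2 * s2, 4 * (1 - s1) + 1 * (1 - s2))

def dominates(p, q):
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

target = profile(1.0, 0.0)  # the discrete allocation's profile, (3.0, 1.0)
steps = [i / 100 for i in range(101)]
dominating = [(s1, s2) for s1 in steps for s2 in steps
              if dominates(profile(s1, s2), target)]
```

The split described above, `(0.5, 1.0)`, appears among the dominating fractional allocations, confirming that (3, 1) is not fPO.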
Suppose each agent $i$ is assigned a positive weight $a_i$. For every allocation $x$, define the welfare of $x$ as the weighted sum of utilities of all agents in $x$:

$$W_a(x) := \sum_{i=1}^n a_i u_i(x_i).$$

Let $x^a$ be an allocation that maximizes the welfare over all allocations:

$$x^a \in \arg\max_x W_a(x).$$
It is easy to show that the allocation $x^a$ is Pareto efficient: since all weights are positive, any Pareto improvement would increase the weighted sum of utilities, contradicting the definition of $x^a$.
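This argument can be illustrated on a toy problem. The sketch below uses hypothetical item values and weights, enumerates all discrete allocations, and verifies that the weighted-welfare maximizer is not Pareto-dominated:

```python
from itertools import product

def dominates(p, q):
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

# Hypothetical toy economy: three indivisible items, two agents.
values = {"alice": (6, 1, 3), "george": (2, 5, 4)}
weights = {"alice": 1.0, "george": 2.0}  # arbitrary positive weights a_i

profiles = []
for assignment in product(["alice", "george"], repeat=3):
    u = {name: sum(v for v, owner in zip(vals, assignment) if owner == name)
         for name, vals in values.items()}
    profiles.append((u["alice"], u["george"]))

def welfare(p):
    return weights["alice"] * p[0] + weights["george"] * p[1]

best = max(profiles, key=welfare)
# With positive weights, any Pareto improvement would raise the weighted sum,
# so the welfare maximizer cannot be Pareto-dominated.
is_pareto_efficient = not any(dominates(p, best) for p in profiles)
```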
Japanese neo-Walrasian economist Takashi Negishi proved that, under certain assumptions, the opposite is also true: for every Pareto-efficient allocation $x$, there exists a positive vector $a$ such that $x$ maximizes $W_a$. A shorter proof was provided by Hal Varian.
The notion of Pareto efficiency has been used in engineering. Given a set of choices and a way of valuing them, the Pareto frontier (also called the Pareto set or Pareto front) is the set of choices that are Pareto efficient. By restricting attention to the set of choices that are Pareto efficient, a designer can make tradeoffs within this set, rather than considering the full range of every parameter.
For a given system, the Pareto frontier or Pareto set is the set of parameterizations (allocations) that are all Pareto efficient. Finding Pareto frontiers is particularly useful in engineering. By yielding all of the potentially optimal solutions, a designer can make focused tradeoffs within this constrained set of parameters, rather than needing to consider the full ranges of parameters.
The Pareto frontier, $P(Y)$, may be more formally described as follows. Consider a system with function $f : X \to \mathbb{R}^m$, where $X$ is a compact set of feasible decisions in the metric space $\mathbb{R}^n$, and $Y$ is the feasible set of criterion vectors in $\mathbb{R}^m$, such that $Y = \{ y \in \mathbb{R}^m : y = f(x),\ x \in X \}$.
We assume that the preferred directions of criteria values are known. A point $y'' \in \mathbb{R}^m$ is preferred to (strictly dominates) another point $y' \in \mathbb{R}^m$, written as $y'' \succ y'$. The Pareto frontier is thus written as:

$$P(Y) = \{ y' \in Y : \{ y'' \in Y : y'' \succ y',\ y'' \neq y' \} = \emptyset \}.$$
A significant aspect of the Pareto frontier in economics is that, at a Pareto-efficient allocation, the marginal rate of substitution is the same for all consumers. A formal statement can be derived by considering a system with $m$ consumers and $n$ goods, and a utility function of each consumer as $z_i = f^i(x^i)$, where $x^i = (x_1^i, x_2^i, \ldots, x_n^i)$ is the vector of goods, both for all $i$. The feasibility constraint is $\sum_{i=1}^m x_j^i = b_j$ for $j = 1, \ldots, n$. To find the Pareto optimal allocation, we maximize the Lagrangian:

$$L_i\big((x_j^k)_{k,j}, (\lambda_k)_k, (\mu_j)_j\big) = f^i(x^i) + \sum_{k \neq i} \lambda_k \big(f^k(x^k) - z_k\big) + \sum_{j=1}^n \mu_j \Big(b_j - \sum_{k=1}^m x_j^k\Big)$$

where $(\lambda_k)_k$ and $(\mu_j)_j$ are the vectors of multipliers. Taking the partial derivative of the Lagrangian with respect to each good $x_j^k$ for $j = 1, \ldots, n$ and $k = 1, \ldots, m$ gives the following system of first-order conditions:

$$\frac{\partial L_i}{\partial x_j^i} = f_{x_j^i}^i - \mu_j = 0 \quad \text{for } j = 1, \ldots, n,$$
$$\frac{\partial L_i}{\partial x_j^k} = \lambda_k f_{x_j^k}^k - \mu_j = 0 \quad \text{for } k \neq i \text{ and } j = 1, \ldots, n,$$

where $f_{x_j^k}^k$ denotes the partial derivative of $f^k$ with respect to $x_j^k$. Now, fix any $j, s \in \{1, \ldots, n\}$. The above first-order conditions imply that

$$\frac{f_{x_j^i}^i}{f_{x_s^i}^i} = \frac{\mu_j}{\mu_s} = \frac{f_{x_j^k}^k}{f_{x_s^k}^k} \quad \text{for all } k.$$

Thus, in a Pareto-optimal allocation, the marginal rate of substitution must be the same for all consumers.
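The equal-MRS condition can be sanity-checked numerically. The sketch below assumes a hypothetical two-consumer, two-good economy with utilities u(x, y) = sqrt(x) + sqrt(y) and total endowment (10, 10); it locates a Pareto optimum by maximizing a weighted welfare sum over a grid and compares the consumers' marginal rates of substitution:

```python
import math

X = Y = 10.0        # total endowment of each good
w1, w2 = 2.0, 1.0   # arbitrary positive welfare weights

def u(x, y):
    return math.sqrt(x) + math.sqrt(y)

# Grid search over interior allocations (x1, y1) to consumer 1;
# consumer 2 gets the remainder (X - x1, Y - y1).
best, best_w = None, -1.0
grid = [i / 10 for i in range(1, 100)]  # 0.1 .. 9.9
for x1 in grid:
    for y1 in grid:
        w = w1 * u(x1, y1) + w2 * u(X - x1, Y - y1)
        if w > best_w:
            best, best_w = (x1, y1), w

x1, y1 = best
# For u = sqrt(x) + sqrt(y), the MRS is (du/dx)/(du/dy) = sqrt(y/x).
mrs1 = math.sqrt(y1 / x1)
mrs2 = math.sqrt((Y - y1) / (X - x1))
```

At the welfare maximizer, the two marginal rates of substitution coincide, as the derivation predicts.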
Algorithms for computing the Pareto frontier of a finite set of alternatives have been studied in computer science and power engineering. They include:
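The simplest such algorithm, sometimes called "simple cull", compares every candidate against every other in quadratic time. A minimal sketch, assuming larger is better in every coordinate:

```python
def pareto_frontier(points):
    """Simple cull: keep the points not dominated by any other point
    (maximization in every coordinate). Runs in O(n^2) comparisons."""
    def dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 4), (3, 3), (2, 2), (5, 1), (4, 4)]
front = pareto_frontier(pts)
```

For large point sets, sorting-based and divide-and-conquer methods (studied in the skyline-query literature) improve on this quadratic baseline.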
Pareto optimisation has also been studied in biological processes. In bacteria, genes were shown to be either inexpensive to make (resource efficient) or easier to read (translation efficient). Natural selection acts to push highly expressed genes towards the Pareto frontier for resource use and translational efficiency. Genes near the Pareto frontier were also shown to evolve more slowly (indicating that they provide a selective advantage).
It would be incorrect to treat Pareto efficiency as equivalent to societal optimization, as the latter is a normative concept that is a matter of interpretation and that typically accounts for the consequences of degrees of inequality of distribution. An example would be the interpretation of one school district with low property tax revenue versus another with much higher revenue as a sign that more equal distribution occurs with the help of government redistribution.
Pareto efficiency does not require a totally equitable distribution of wealth. An economy in which a wealthy few hold the vast majority of resources can be Pareto efficient. This possibility is inherent in the definition of Pareto efficiency; often the status quo is Pareto efficient regardless of the degree to which wealth is equitably distributed. A simple example is the distribution of a pie among three people. The most equitable distribution would assign one third to each person. However, the assignment of, say, a half section to each of two individuals and none to the third is also Pareto optimal despite not being equitable, because none of the recipients could be made better off without decreasing someone else's share; and there are many other such distributions. An example of a Pareto-inefficient distribution of the pie would be the allocation of a quarter of the pie to each of the three, with the remainder discarded. The origin (and utility value) of the pie is conceived as immaterial in these examples. In such cases, in which a "windfall" is gained that none of the potential distributees actually produced (e.g., land, inherited wealth, a portion of the broadcast spectrum, or some other resource), the criterion of Pareto efficiency does not determine a unique optimal allocation. Wealth consolidation may exclude others from wealth accumulation because of bars to market entry, among other obstacles.
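The pie example can be checked directly; a minimal sketch where a profile lists each person's share of the pie:

```python
def dominates(p, q):
    """p Pareto-dominates q: no one gets less, someone gets strictly more."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

equal = (1/3, 1/3, 1/3)
unequal = (1/2, 1/2, 0)       # inequitable, but nothing is wasted
wasteful = (1/4, 1/4, 1/4)    # a quarter of the pie is discarded

# Any profile summing to 1 cannot be Pareto-improved: giving anyone more
# requires taking from someone else. The wasteful profile, by contrast,
# is dominated, e.g. by the equal split.
improved = dominates(equal, wasteful)
```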
The liberal paradox elaborated by Amartya Sen shows that when people have preferences about what other people do, the goal of Pareto efficiency can come into conflict with the goal of individual liberty.
There are two fundamental theorems of welfare economics. The first theorem states that a market will tend toward a competitive equilibrium that is weakly Pareto optimal when the market maintains the following two attributes: complete markets (no transaction costs, so that each actor has perfect information) and price-taking behaviour (no monopolists, and easy entry and exit from the market).
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
In game theory, the core is the set of feasible allocations that cannot be improved upon by a subset of the economy's agents. A coalition is said to improve upon or block a feasible allocation if the members of that coalition are better off under another feasible allocation that is identical to the first except that every member of the coalition has a different consumption bundle that is part of an aggregate consumption bundle that can be constructed from publicly available technology and the initial endowments of each consumer in the coalition.
In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem. However in general the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.
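Weak duality, and the zero duality gap for convex problems, can be seen on a tiny linear program. The following sketch uses hypothetical data and hand-picked feasible points rather than a solver; equal primal and dual objective values certify that both points are optimal:

```python
# Primal: minimize c^T x  subject to A x >= b, x >= 0
# Dual:   maximize b^T y  subject to A^T y <= c, y >= 0
c = [3.0, 2.0]
A = [[1.0, 1.0],
     [2.0, 1.0]]
b = [4.0, 5.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mat_vec(M, v):
    return [dot(row, v) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

x = [1.0, 3.0]  # primal-feasible: A x = [4, 5] >= b
y = [1.0, 1.0]  # dual-feasible: A^T y = [3, 2] <= c

primal_value = dot(c, x)
dual_value = dot(b, y)
# Weak duality guarantees dual_value <= primal_value for any feasible pair;
# here the values coincide (both 9.0), so the duality gap is zero and both
# points are optimal, as expected for a linear (hence convex) program.
```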
Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form $(-\infty, a)$ is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
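A quick sampled check (a sketch; the function and grid are hypothetical) illustrates the definition: f(x) = sqrt(|x|) has interval sublevel sets, so it is quasiconvex, yet it fails the midpoint test for convexity:

```python
import math

def f(x):
    return math.sqrt(abs(x))

xs = [i / 10 for i in range(-50, 51)]  # sample grid on [-5, 5]

def sublevel_is_interval(a):
    """Check that {x : f(x) <= a} is contiguous on the sampled grid."""
    flags = [f(x) <= a for x in xs]
    if True not in flags:
        return True
    first = flags.index(True)
    last = len(flags) - 1 - flags[::-1].index(True)
    return all(flags[first:last + 1])

# Every sampled sublevel set is an interval (consistent with quasiconvexity)...
quasiconvex_ok = all(sublevel_is_interval(a) for a in [0.0, 0.5, 1.0, 2.0])
# ...yet the midpoint test for convexity fails between 0 and 1,
# since f(0.5) > (f(0) + f(1)) / 2.
convexity_violated = f(0.5) > 0.5 * (f(0.0) + f(1.0))
```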
Competitive equilibrium is the traditional concept of economic equilibrium, appropriate for the analysis of commodity markets with flexible prices and many traders, and serving as the benchmark of efficiency in economic analysis. It relies crucially on the assumption of a competitive environment where each trader decides upon a quantity that is so small compared to the total quantity traded in the market that their individual transactions have no influence on the prices. Competitive markets are an ideal standard by which other market structures are evaluated.
Multi-objective optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lower-level optimization task. These problems involve two kinds of variables, referred to as the upper-level variables and the lower-level variables.
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network. A node injecting garbage can quickly affect many receivers. The pollution of network packets spreads quickly, since the output of an honest node is corrupted if at least one of the incoming packets is corrupted. An attacker can easily corrupt a packet even if it is encrypted, by either forging the signature or producing a collision under the hash function. This gives the attacker access to the packets and the ability to corrupt them. Denis Charles, Kamal Jain and Kristin Lauter designed a new homomorphic signature scheme for use with network coding to prevent pollution attacks. The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. In this scheme it is computationally infeasible for a node to sign a linear combination of the packets without disclosing what linear combination was used in the generation of the packet. Furthermore, the signature scheme can be proven secure under well-known cryptographic assumptions: the hardness of the discrete logarithm problem and the computational elliptic-curve Diffie–Hellman problem.
Optimal computing budget allocation (OCBA) is an approach to maximize the overall simulation efficiency for finding an optimal decision. The concept was introduced in the mid-1990s by Dr. Chun-Hung Chen. Simply put, OCBA is an approach to simulation that will help determine the number of replications or the simulation time that is needed in order to receive acceptable or best results within a set of given parameters. This is accomplished by using an asymptotic framework to analyze the structure of the optimal allocation. OCBA has also been shown effective in enhancing partition-based random search algorithms for solving deterministic global optimization problems.
Efficient cake-cutting is a problem in economics and computer science. It involves a heterogeneous resource, such as a cake with different toppings or a land with different coverings, that is assumed to be divisible - it is possible to cut arbitrarily small pieces of it without destroying their value. The resource has to be divided among several partners who have different preferences over different parts of the cake, i.e., some people prefer the chocolate toppings, some prefer the cherries, some just want as large a piece as possible, etc. The allocation should be economically efficient. Several notions of efficiency have been studied:
Fair item allocation is a kind of a fair division problem in which the items to divide are discrete rather than continuous. The items have to be divided among several partners who value them differently, and each item has to be given as a whole to a single person. This situation arises in various real-life scenarios:
Efficiency and fairness are two major goals of welfare economics. Given a set of resources and a set of agents, the goal is to divide the resources among the agents in a way that is both Pareto efficient (PE) and envy-free (EF). The goal was first defined by David Schmeidler and Menahem Yaari. Later, the existence of such allocations was proved under various conditions.
Utilitarian cake-cutting is a rule for dividing a heterogeneous resource, such as a cake or a land-estate, among several partners with different cardinal utility functions, such that the sum of the utilities of the partners is as large as possible. It is inspired by the utilitarian philosophy. Utilitarian cake-cutting is often not "fair"; hence, utilitarianism is in conflict with fair cake-cutting.
Envy-free item allocation is a fair item allocation problem, in which the fairness criterion is envy-freeness - each agent should receive a bundle that he believes to be at least as good as the bundle of any other agent.
Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. The power of quantum computing may allow solving problems which are not practically feasible on classical computers, or suggest a considerable speed up with respect to the best known classical algorithm. Among other quantum algorithms, there are quantum optimization algorithms which might suggest improvement in solving optimization problems.
Fair river sharing is a kind of fair division problem in which the waters of a river have to be divided among the countries located along it. It differs from other fair division problems in that the resource to be divided, the water, flows in one direction, from upstream countries to downstream countries. To attain any desired division, it may be necessary to limit the consumption of upstream countries, which may in turn require giving these countries some monetary compensation.