Budget-proposal aggregation (BPA) is a problem in social choice theory. [1] [2] [3] A group has to decide how to distribute its budget among several issues. Each group member has a different idea about what the ideal budget-distribution should be. The problem is how to aggregate the different opinions into a single budget-distribution.
BPA is a special case of participatory budgeting, in which the projects are divisible: the budget can be split among the issues in any proportion, rather than allocated to discrete projects with fixed costs.
It is also a special case of fractional social choice (portioning), in which agents express their preferences by stating their ideal distribution, rather than by ranking the issues. [4] [5]
Another sense in which aggregation in budgeting has been studied is the following. Suppose a manager asks a worker to submit a budget-proposal for a project. The worker can over-report the project's cost in order to keep the slack for himself. Knowing this, the manager might reject the worker's proposal when it is too high, even though the high cost might be real. To mitigate this effect, it is possible to ask the worker for aggregate budget-proposals (for several projects at once). An experiment shows that this approach can indeed improve the efficiency of the process. [6]
The same problem has been studied in the context of aggregating probability distributions. [7] Suppose each citizen in society has a certain probability-distribution over candidates, representing the probability that the citizen prefers each candidate. The goal is to aggregate all distributions to a single probability-distribution, representing the probability that society should choose each candidate.
The one-dimensional case is the special case in which there are only two issues, e.g. defense and education. In this case, distributions can be represented by a single parameter: the allocation to issue #1 (the allocation to issue #2 is simply the total budget minus the allocation to issue #1). It is natural to assume that the agents have single-peaked preferences, that is: between two options that are both larger, or both smaller, than their ideal allocation, they prefer the option that is closer to their ideal allocation.
This setting is similar to a one-dimensional facility location problem: a certain facility (e.g. a public school) has to be built on a line; each voter has an ideal spot on the line in which the facility should be built (nearest to his own house); and the problem is to aggregate the voters' preferences and decide where on the line the facility should be built.
The average voting rule is an aggregation rule that simply returns the arithmetic mean of all individual distributions. It is the unique rule that satisfies three natural axioms. [7]
But the average rule is not incentive-compatible, and is very easy to manipulate. For example, suppose there are two issues, the ideal distribution of Alice is (80%, 20%), and the average of the ideal distributions of the other voters is (60%, 40%). Then Alice would be better-off by reporting that her ideal distribution is (100%, 0%), since this will pull the average distribution closer to her ideal distribution.
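The average rule and the manipulation described above can be sketched in a few lines of Python (a toy illustration; the function name and the number of voters are our own assumptions):

```python
# Average rule: coordinate-wise arithmetic mean of the reported distributions.
def average_rule(profiles):
    n = len(profiles)
    m = len(profiles[0])
    return [sum(p[j] for p in profiles) / n for j in range(m)]

# Four voters whose reports average to (60, 40), plus Alice with peak (80, 20).
others = [[60, 40]] * 4

truthful = average_rule(others + [[80, 20]])     # [64.0, 36.0]
manipulated = average_rule(others + [[100, 0]])  # [68.0, 32.0]

# The manipulated outcome is closer to Alice's true peak (80, 20):
# the L1-distance drops from |80-64|+|20-36| = 32 to |80-68|+|20-32| = 24.
```

Reporting an exaggerated peak thus pulls the mean toward Alice's true ideal, which is exactly why the rule fails strategyproofness.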
The median voting rule for the one-dimensional case is an aggregation rule that returns the median of the ideal budgets of all citizens. It has several advantages; in particular, it is strategyproof.
But the median rule may be considered unfair, as it ignores the minority opinion. For example, suppose the two issues are "investment in the north" vs. "investment in the south". 49% of the population live in the north, so their ideal distribution is (100%,0%), while 51% live in the south, so their ideal distribution is (0%,100%). The median rule selects the distribution (0%,100%), which is unfair towards the citizens living in the north.
This fairness notion is captured by proportionality (PROP), [1] which means that, if all agents are single-minded (want either 0% or 100%), then the allocation equals the fraction of agents who want 100%. The average rule is PROP but not strategyproof; the median rule is strategyproof but not PROP.
The median rule can be generalized by adding fixed votes that do not depend on the citizens' votes. These fixed votes are called "phantoms". For every set of phantoms, the rule that chooses the median of the set of real votes plus phantoms is strategyproof; see median voting rule for examples and a characterization. [9]
The Uniform Phantom Median rule (UPM) is a special case of the phantom-median rule, with n-1 phantoms at 1/n, ..., (n-1)/n. This rule is strategyproof (like all phantom-median rules), but in addition, it is also proportional. It admits several axiomatic characterizations.
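A minimal sketch of UPM in Python, assuming the total budget is normalized to 1 and each vote is the fraction the voter wants on issue #1 (the function name is ours):

```python
from statistics import median

def upm(votes):
    """Median of the n real votes plus the n-1 uniform phantoms at k/n."""
    n = len(votes)
    phantoms = [k / n for k in range(1, n)]
    return median(votes + phantoms)

# Single-minded profile: 2 of 5 voters want everything on issue #1,
# so UPM allocates exactly 2/5 of the budget to it (proportionality).
print(upm([1, 1, 0, 0, 0]))  # -> 0.4

# The north/south example above: 49 voters at 100%, 51 voters at 0%.
print(upm([1] * 49 + [0] * 51))  # -> 0.49
```

Note that the combined list always has 2n-1 entries, so the median is a single value; on single-minded profiles it is always one of the phantoms, which is what makes the rule proportional.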
Aziz, Lam, Lee and Walsh [11] study the special case in which the preferences are single-peaked and symmetric, that is: each agent compares alternatives only by their distance from his ideal point, regardless of direction. In particular, they assume that each agent's utility is 1 minus the distance between his ideal point and the chosen allocation. They consider several fairness axioms.
They analyze which of these axioms are satisfied by existing rules, and prove several axiomatic characterizations.
Border and Jordan [12] (Corollary 1) prove that the only rule satisfying continuity, anonymity, proportionality and strategyproofness is UPM.
Rosar compares the average rule to the median rule when the voters have diverse private information and interdependent preferences. For uniformly distributed information, the average report dominates the median report from a utilitarian perspective, when the set of admissible reports is designed optimally. For general distributions, the results still hold when there are many agents. [13]
When there are more than two issues, the space of possible budget-allocations is multi-dimensional. Extending the median rule to the multi-dimensional case is challenging, since the sum of the medians might be different than the median of the sum. In other words, if we pick the median on each issue separately, we might not get a feasible distribution.
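The infeasibility of coordinate-wise medians is easy to see with a small Python example (the numbers are our own illustration; the budget is normalized to 100):

```python
from statistics import median

# Three voters, three issues; each proposal sums to the full budget of 100.
peaks = [
    [60, 20, 20],
    [20, 60, 20],
    [20, 20, 60],
]

# Taking the median on each issue separately...
per_issue = [median(column) for column in zip(*peaks)]
print(per_issue, sum(per_issue))  # [20, 20, 20] 60 -- only 60% of the budget
```

Every individual proposal is feasible, yet the vector of per-issue medians allocates only 60 out of 100, so it is not a valid budget-distribution.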
In the multi-dimensional case, aggregation rules depend on assumptions on the utility functions of the voters.
A common assumption is that the utility of voter i, whose ideal budget (peak) is pi, from a given budget allocation x, is minus the L1-distance between pi and x, that is, ui(x) = −‖pi − x‖1. Under this assumption, several aggregation rules have been studied.
Lindner, Nehring and Puppe [14] consider BPA with discrete amounts (e.g. whole dollars). They define the midpoint rule: it chooses a budget-allocation that minimizes the sum of L1-distances to the voters' peaks. In other words, it maximizes the sum of utilities – it is a utilitarian rule. They prove that the set of midpoints is convex, and that it is locally determined (one can check if a point is a midpoint only by looking at its neighbors in the simplex of allocations). Moreover, they prove that the possibility of strategic manipulation is limited: a manipulating agent cannot make the closest midpoint closer to his peak, nor make the farthest midpoint closer to his peak. As a consequence, the midpoint rule is strategyproof if all agents have symmetric single-peaked preferences.
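For small instances, the midpoint rule can be sketched by brute-force enumeration over whole-dollar allocations (a naive illustration, not the authors' algorithm; the function name is ours):

```python
from itertools import product

def midpoints(peaks, budget):
    """All whole-dollar allocations minimizing the sum of L1-distances to the peaks."""
    m = len(peaks[0])
    best, best_cost = [], float("inf")
    for x in product(range(budget + 1), repeat=m):
        if sum(x) != budget:
            continue  # keep only allocations of the entire budget
        cost = sum(sum(abs(xj - pj) for xj, pj in zip(x, p)) for p in peaks)
        if cost < best_cost:
            best, best_cost = [x], cost
        elif cost == best_cost:
            best.append(x)
    return best

# Three voters, two issues, a budget of 4 dollars:
print(midpoints([[4, 0], [3, 1], [0, 4]], 4))  # -> [(3, 1)]
```

Here (3, 1) has total L1-distance 2 + 0 + 6 = 8 to the three peaks, which no other feasible allocation beats.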
Goel, Krishnaswamy, Sakshuwong and Aitamurto [15] consider BPA in the context of participatory budgeting with divisible projects: they propose to replace the common voting format of approving k projects with "knapsack voting". With discrete projects, this means that each voter selects a set of projects whose total cost is at most the available budget; with divisible projects, it means that each voter reports his ideal budget-allocation. Each project is then partitioned into individual "dollars": the number of votes for dollar j of project i is the number of agents whose ideal budget gives at least j to project i. Given the votes, the knapsack-voting rule selects the dollars with the most votes (as in utilitarian approval voting). They prove that, with L1-utilities, knapsack voting is strategyproof and utilitarian (and hence efficient).
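The dollar-by-dollar vote counting can be sketched as follows (assuming an integer budget and integer ideal allocations; the function name and the tie-breaking are our own choices):

```python
def knapsack_voting(peaks, budget):
    """Fund the `budget` individual dollars with the most votes.

    Dollar j of project i is supported by every voter whose ideal
    allocation gives at least j dollars to project i.
    """
    m = len(peaks[0])
    dollars = [(sum(1 for p in peaks if p[i] >= j), i)
               for i in range(m) for j in range(1, budget + 1)]
    dollars.sort(key=lambda d: -d[0])  # most-supported dollars first
    allocation = [0] * m
    for votes, i in dollars[:budget]:  # ties at the cutoff broken arbitrarily
        allocation[i] += 1
    return allocation

# Three voters, two projects, a budget of 4 dollars:
print(knapsack_voting([[4, 0], [3, 1], [0, 4]], 4))  # -> [3, 1]
```

On this instance the result coincides with the L1-utilitarian (midpoint) outcome, consistent with the utilitarianism result cited above.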
Both utilitarian rules are not "fair" in the sense that they may ignore minorities. For example, if 60% of the voters vote for the distribution (100%,0%) whereas 40% vote for (0%,100%), then the utilitarian rules would choose (100%,0%) and give nothing to the issue important for the minority.
Freeman, Pennock, Peters and Vaughan [1] suggest a class of rules called moving-phantoms rules, in which there are n+1 phantoms that increase continuously until the total allocation equals the total budget. They prove that all these rules are strategyproof. The proof proceeds in two steps. (1) If an agent changes his reported peak while all the phantoms stay fixed, then each issue is decided by a median voting rule, so the outcome on each issue either stays the same or moves farther from the agent's real peak. (2) As the phantoms move, the outcome on some issues may move nearer to the agent's real peak, but the agent's gain from this is at most his loss in step (1).
Note that the proof of (2) crucially relies on the assumption of L1 utilities, and does not work with other distance metrics. For example, suppose there are 3 issues and two agents with peaks at (20,60,20) and (0,50,50). One moving-phantoms rule (the "independent markets" rule below) returns (20,50,30), so agent 1's L1 disutility is 10+10+0=20 and L2 disutility is sqrt(100+100+0)=sqrt(200). If agent 1 changes his peak to (30,50,20), then the rule returns (25,50,25). Agent 1's L1 disutility is 5+10+5=20 and L2 disutility is sqrt(25+100+25)=sqrt(150). Agent 1 does not gain in L1 disutility, but does gain in L2 disutility.
A particular moving-phantoms rule is the Independent Markets rule. In addition to strategyproofness, it satisfies a property that the authors call proportionality: if all agents are single-minded (each agent's ideal budget allocates 100% of the budget to a single issue), then the rule divides the budget among the issues in proportion to the number of their supporters. However, this rule is not efficient (in fact, the only efficient moving-phantoms rule is the utilitarian rule). A demo of the Independent Markets rule, and several other moving-phantoms rules, is available online.
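The Independent Markets rule can be sketched as follows, assuming the budget is normalized to 1 and the n+1 phantoms move as f_k(t) = min(1, k·t) for k = 0, ..., n (our reading of the rule; solving for t by bisection is our own implementation choice):

```python
from statistics import median

def independent_markets(peaks, iters=60):
    """Moving-phantoms rule with phantom k at min(1, k*t)."""
    n, m = len(peaks), len(peaks[0])

    def outcome(t):
        phantoms = [min(1.0, k * t) for k in range(n + 1)]
        return [median([p[j] for p in peaks] + phantoms) for j in range(m)]

    # The per-issue medians are non-decreasing in t; grow t by bisection
    # until the medians exhaust the whole budget.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        t = (lo + hi) / 2
        if sum(outcome(t)) < 1:
            lo = t
        else:
            hi = t
    return outcome((lo + hi) / 2)

# The manipulation example above: peaks (20,60,20) and (0,50,50),
# written as fractions of 100.
print(independent_markets([[0.2, 0.6, 0.2], [0.0, 0.5, 0.5]]))
# -> approximately [0.2, 0.5, 0.3]
```

This sketch reproduces both outcomes, (20, 50, 30) and (25, 50, 25), quoted in the L1-versus-L2 example above.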
Caragiannis, Christodoulou and Protopapas [2] extended the definition of proportionality from single-minded preference profiles to arbitrary profiles. They define the proportional outcome as the arithmetic mean of the peaks. The only mechanism that is always proportional in this sense is the average rule, which is not strategyproof; hence, they measure the degree of disproportionality of a rule by the L1 distance between its outcome and the average. The disproportionality of any budget-allocation is between 0 and 2, and they evaluate BPA mechanisms by their worst-case disproportionality. In BPA with two issues, they show that UPM has worst-case disproportionality 1/2. With 3 issues, the independent-markets mechanism may have disproportionality 0.6862; they propose another moving-phantoms rule, the Piecewise-Uniform rule, which is still proportional and has worst-case disproportionality of about 2/3. They prove that the worst-case disproportionality of any moving-phantoms mechanism on m issues is at least 1-1/m, and that of any truthful mechanism is at least 1/2; this implies that their mechanisms attain the optimal disproportionality.
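The disproportionality measure is simple to compute directly (a sketch; the helper name is ours, with peaks given as fractions of the budget):

```python
def disproportionality(peaks, outcome):
    """L1-distance between the outcome and the average of the peaks."""
    n, m = len(peaks), len(peaks[0])
    mean = [sum(p[j] for p in peaks) / n for j in range(m)]
    return sum(abs(outcome[j] - mean[j]) for j in range(m))

# 3 of 5 voters at (1,0) and 2 at (0,1): the proportional target is the
# average (0.6, 0.4), so the utilitarian outcome (1,0) has disproportionality
# |1-0.6| + |0-0.4| = 0.8, while the average rule itself scores 0.
peaks = [[1, 0]] * 3 + [[0, 1]] * 2
print(disproportionality(peaks, [1, 0]))  # -> 0.8
```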
Freeman and Schmidt-Kraepelin [3] study a different measure of dis-proportionality: the L-infinity distance between the outcome and the average (i.e., the maximum difference per project, rather than the sum of differences). They define a new moving-phantoms rule called the Ladder rule, which underfunds a project by at most 1/2-1/(2m), and overfunds a project by at most 1/4; both bounds are tight for moving-phantoms rules.
It is an open question whether every anonymous, neutral, continuous and strategyproof rule is a moving-phantoms rule. [1] [2]
Elkind, Suksompong and Teh [16] define various axioms for BPA with L1-disutilities, analyze the implications between axioms, and determine which axioms are satisfied by common aggregation rules. They study two classes of rules: those based on coordinate-wise aggregation (average, maximum, minimum, median, product), and those based on global optimization (utilitarian, egalitarian).
Nehring and Puppe [17] [18] aim to derive decision rules with as few assumptions as possible on agents' preferences; they call this the frugal model. They assume that the social planner knows the agents' peaks, but does not know their exact preferences; this leads to uncertainty regarding how many people prefer an alternative x to an alternative y.
Given two alternatives x and y, x is a necessary majority winner over y if it wins over y according to all preferences in the domain that are consistent with the agents' peaks; x is majority admissible if no other alternative is a necessary majority winner over x. Given two alternatives x and y, x is an ex-ante majority winner over y if its smallest possible number of supporters is at least as high as the smallest possible number of supporters of y; this holds if and only if its largest possible number of supporters is at least as high as the largest possible number of supporters of y. x is an ex-ante Condorcet (EAC) winner if it is an ex-ante majority winner over every other alternative.
They assume that agents' preferences are convex, which in one dimension is equivalent to single-peakedness. But convexity alone is not enough to attain meaningful results in two or more dimensions (if the peaks are in general position, then all peaks are EAC winners). So they consider two subsets of convex preferences: homogeneous quadratic preferences and separable convex preferences.
They study BPA allowing lower and upper bounds on the spending on each issue.
Fain, Goel and Munagala [21] assume that agents have additive concave utility functions, which represent convex preferences over bundles. In particular, for each agent i and issue j there is a coefficient ai,j, and for each issue j there is an increasing and strictly concave function gj; the total utility of agent i from budget allocation x is ui(x) = Σj ai,j·gj(xj). They study the Lindahl equilibrium of this problem, prove that it is in the core (a strong fairness property), and show that it can be computed in polynomial time.
Wagner and Meir [22] study a generalization of BPA in which each agent may propose, in addition to a budget-allocation, also an amount t of tax (positive or negative) that will be taken from all agents and added to the budget. For each agent i there is a coefficient ai,f that represents the utility of monetary gains and losses, and there is a function f that is strictly convex for negative values and strictly concave for positive values; the agent's utility from a monetary gain d (which can be negative) is ai,f·f(d). For this utility model, they present a variant of the Vickrey–Clarke–Groves mechanism that is strategyproof, but requires side-payments (in addition to the tax).
Puppe and Rollmann present a lab experiment comparing the average voting rule with a normalized median voting rule in a multidimensional budget-aggregation setting. [23] Under the average rule, people play equilibrium strategies when these are easy to identify. Under the normalized median rule, many people play best responses, but these best responses are usually not exactly truthful. Still, the median rule attains much higher social welfare than the average rule.