Belief aggregation,[1] also called risk aggregation,[2] opinion aggregation[3] or probabilistic opinion pooling,[4] is a process in which different probability distributions, produced by different experts, are combined to yield a single probability distribution.
Expert opinions are often uncertain. Rather than saying e.g. "it will rain tomorrow", a weather expert may say "it will rain with probability 70% and be sunny with probability 30%". Such a statement is called a belief. Different experts may have different beliefs; for example, a different weather expert may say "it will rain with probability 60% and be sunny with probability 40%". In other words, each expert has a subjective probability distribution over a given set of outcomes.
A belief aggregation rule is a function that takes as input two or more probability distributions over the same set of outcomes, and returns a single probability distribution over the same space.
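For example, here is a minimal sketch of one simple such rule, the unweighted linear pool (the arithmetic mean of the experts' distributions), applied to the two weather beliefs above; representing beliefs as Python dictionaries is an illustrative choice, not a convention from the literature:

```python
def linear_pool(beliefs):
    """Aggregate probability distributions by unweighted averaging.
    Each belief is a dict mapping every outcome to its probability;
    all beliefs must be over the same set of outcomes."""
    n = len(beliefs)
    return {o: sum(b[o] for b in beliefs) / n for o in beliefs[0]}

# The two weather experts from the example above:
expert1 = {"rain": 0.7, "sunny": 0.3}
expert2 = {"rain": 0.6, "sunny": 0.4}
print(linear_pool([expert1, expert2]))
# {'rain': 0.65, 'sunny': 0.35} (up to floating-point rounding)
```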
Documented applications of belief aggregation include:
During COVID-19, the European Academy of Neurology developed an ad hoc three-round voting method to aggregate expert opinions and reach a consensus.[9]
Common belief aggregation rules include linear aggregation (a weighted arithmetic mean of the experts' probabilities), geometric aggregation (a renormalized weighted geometric mean), and multiplicative aggregation (a renormalized product of the probabilities).
Dietrich and List[4] present axiomatic characterizations of each class of rules. They argue that linear aggregation can be justified “procedurally” but not “epistemically”, while the other two rules can be justified epistemically. Geometric aggregation is justified when the experts' beliefs are based on the same information, and multiplicative aggregation is justified when the experts' beliefs are based on private information.
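As a sketch of how the three classes differ computationally (equal weights are assumed here; weighting and normalization details vary across the literature):

```python
import math

def geometric_pool(beliefs):
    """Geometric pooling with equal weights 1/n: the renormalized
    geometric mean of the experts' probabilities."""
    n = len(beliefs)
    raw = {o: math.prod(b[o] for b in beliefs) ** (1 / n) for o in beliefs[0]}
    total = sum(raw.values())
    return {o: v / total for o, v in raw.items()}

def multiplicative_pool(beliefs):
    """Multiplicative pooling: the renormalized product of the
    experts' probabilities."""
    raw = {o: math.prod(b[o] for b in beliefs) for o in beliefs[0]}
    total = sum(raw.values())
    return {o: v / total for o, v in raw.items()}

expert1 = {"rain": 0.7, "sunny": 0.3}
expert2 = {"rain": 0.6, "sunny": 0.4}
print(geometric_pool([expert1, expert2]))       # rain ~ 0.652
print(multiplicative_pool([expert1, expert2]))  # rain ~ 0.778
```

Note how multiplicative pooling pushes the aggregate beyond both individual reports (0.778 > 0.7): when the experts' information is private and independent, two separate pieces of evidence for rain jointly support a stronger conclusion than either report alone.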
A belief aggregation rule should arguably satisfy some desirable properties, or axioms:
Most literature on belief aggregation assumes that the experts report their beliefs honestly, as their main goal is to help the decision-maker get to the truth. In practice, experts may have strategic incentives. For example, the FDA uses advisory committees, and there have been controversies due to conflicts of interest within these committees.[11] Therefore, a truthful mechanism for belief aggregation could be useful.
In some settings, it is possible to pay the experts a certain sum of money, depending both on their expressed belief and on the realized outcome. Careful design of the payment function (often called a "scoring rule") can lead to a truthful mechanism. Various truthful scoring rules exist.[12][13][14][15]
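For instance, here is a small sketch of one classical truthful scoring rule, the quadratic (Brier) rule, for a binary event; the numbers are illustrative. An expert whose true belief is 70% rain maximizes her expected payment by reporting exactly 0.7:

```python
def expected_brier_reward(report, belief):
    """Expected payment under the quadratic (Brier) scoring rule
    for a binary event. The expert is paid 1 - (report - 1)**2 if the
    event occurs and 1 - report**2 otherwise; the expectation is
    taken with respect to the expert's true belief."""
    return (belief * (1 - (report - 1) ** 2)
            + (1 - belief) * (1 - report ** 2))

belief = 0.7  # the expert's true probability of rain
for r in [0.5, 0.6, 0.7, 0.8, 0.9]:
    print(r, round(expected_brier_reward(r, belief), 3))
# 0.5->0.75, 0.6->0.78, 0.7->0.79, 0.8->0.78, 0.9->0.75:
# the expected payment peaks exactly at report = belief = 0.7.
```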
In some settings, monetary transfers are not possible. For example, the realized outcome may happen in the far future, or a wrong decision may be catastrophic.
To develop truthful mechanisms, one must make assumptions about the experts' preferences over the set of accepted probability-distributions. If the space of possible preferences is too rich, then strong impossibility results imply that the only truthful mechanism is the dictatorship mechanism (see Gibbard–Satterthwaite theorem).
A useful domain restriction is that the experts have single-peaked preferences. An aggregation rule is called one-dimensional strategyproof (1D-SP) if, whenever all experts have single-peaked preferences and submit their peaks to the aggregation rule, no expert can obtain a strictly better aggregated distribution by reporting a false peak. An equivalent property is called uncompromisingness:[16] it says that, if the belief of expert i is smaller than the aggregate distribution, and i changes his report, then the aggregate distribution will be weakly larger; and vice-versa.
Moulin[17] proved a characterization of all 1D-SP rules, as well as the following two characterizations: a rule is 1D-SP and anonymous if and only if it returns the median of the n reported peaks together with n+1 fixed "phantom" peaks; and a rule is 1D-SP, anonymous and Pareto-efficient if and only if it returns the median of the n reported peaks together with n-1 fixed phantoms.
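A minimal sketch of a generalized median rule of this kind; the peak and phantom values below are arbitrary illustrations:

```python
import statistics

def median_with_phantoms(peaks, phantoms):
    """Generalized median rule: the median of the experts' reported
    peaks together with a fixed list of 'phantom' peaks.
    With an empty phantom list this is the plain median rule."""
    return statistics.median(peaks + phantoms)

peaks = [0.2, 0.5, 0.9]                         # ideal points of 3 experts
print(median_with_phantoms(peaks, []))          # 0.5 (plain median)
print(median_with_phantoms(peaks, [1.0, 1.0]))  # 0.9 (phantoms pull up)

# 1D-SP in action: the expert with peak 0.9 dislikes the outcome 0.5,
# but no misreport helps -- any report >= 0.5 keeps the median at 0.5,
# and any report < 0.5 moves the median weakly further from 0.9.
print(median_with_phantoms([0.2, 0.5, 5.0], []))  # still 0.5
```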
Jennings, Laraki, Puppe and Varloot[18] present new characterizations of strategyproof mechanisms with single-peaked preferences.
A further restriction of the single-peaked domain is that agents have single-peaked preferences with the L1 metric on the probability density function. That is: for each agent i, there is an "ideal" probability distribution p_i, and his utility from a selected probability distribution p* is minus the L1 distance between p_i and p*. An aggregation rule is called L1-metric-strategyproof (L1-metric-SP) if, whenever all experts have single-peaked preferences with the L1 metric and submit their peaks to the aggregation rule, no expert can obtain a strictly better aggregated distribution by reporting a false peak. Several L1-metric-SP aggregation rules were suggested in the context of budget-proposal aggregation:
However, such preferences may not be a good fit for belief aggregation, as they are neutral - they do not distinguish between different outcomes. For example, suppose there are three outcomes, and the expert's belief p_i assigns 100% to outcome 1. Then, the L1 metric between p_i and "100% outcome 2" is 2, and the L1 metric between p_i and "100% outcome 3" is 2 too. The same is true for any neutral metric. This makes sense when 1, 2, 3 are budget items. However, if these outcomes describe the potential strength of an earthquake on the Richter scale, then the distance from p_i to "100% outcome 2" should be much smaller than the distance to "100% outcome 3".
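The arithmetic in this example, spelled out as a short sketch (the encoding of outcomes is illustrative):

```python
def l1_distance(p, q):
    """L1 distance between two distributions over the same outcomes."""
    return sum(abs(p[o] - q[o]) for o in p)

p_i = {1: 1.0, 2: 0.0, 3: 0.0}  # the expert is sure of outcome 1
only2 = {1: 0.0, 2: 1.0, 3: 0.0}
only3 = {1: 0.0, 2: 0.0, 3: 1.0}
print(l1_distance(p_i, only2))  # |1-0| + |0-1| + |0-0| = 2.0
print(l1_distance(p_i, only3))  # |1-0| + |0-0| + |0-1| = 2.0
# The metric is blind to how "far" outcome 3 is from outcome 1.
```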
Varloot and Laraki[1] study a different preference domain, in which the outcomes are linearly ordered, and the preferences are single-peaked in the space of cumulative distribution functions (cdfs). That is: each agent i has an ideal cumulative distribution function c_i, and his utility depends negatively on the distance between c_i and the accepted distribution c*. They define a new concept called level-strategyproofness (Level-SP), which is relevant when society's decision is based on the question of whether the probability of some event is above or below a given threshold. Level-SP provably implies strategyproofness for a rich class of cdf-single-peaked preferences. They characterize two new aggregation rules:
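As an illustration of aggregation in cdf space (a sketch only, not necessarily one of the two rules characterized in [1]), one natural rule returns the pointwise median of the experts' cdfs; the pointwise median of nondecreasing functions ending at 1 is again nondecreasing and ends at 1, hence a valid cdf:

```python
import statistics

def cdf(dist):
    """Cumulative distribution function of a probability distribution
    given as a list over linearly ordered outcomes."""
    out, total = [], 0.0
    for p in dist:
        total += p
        out.append(total)
    return out

def median_of_cdfs(dists):
    """Pointwise median of the experts' cdfs, returned as a cdf."""
    cdfs = [cdf(d) for d in dists]
    return [statistics.median(col) for col in zip(*cdfs)]

# Three experts' beliefs over four linearly ordered outcomes:
experts = [[0.7, 0.2, 0.1, 0.0],
           [0.1, 0.5, 0.3, 0.1],
           [0.0, 0.2, 0.5, 0.3]]
print(median_of_cdfs(experts))  # [0.1, 0.6, 0.9, 1.0]
```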
Other results include:
ANDURIL[21] is a MATLAB toolbox for belief aggregation.
Several books on related topics are available.[22][23][3]
In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events.
The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds on this postulate to model aggregate social behaviour.
In gambling, economics, and the philosophy of probability, a Dutch book or lock is a set of odds and bets that ensures a guaranteed profit. It is generally used as a thought experiment to motivate the von Neumann–Morgenstern axioms or the axioms of probability by showing that they are equivalent to philosophical coherence or Pareto efficiency.
In mechanism design, a strategyproof (SP) mechanism is a game in which each player has a weakly-dominant strategy, so that no player can gain by "spying" over the other players to know what they are going to play. When the players have private information, and the strategy space of each player consists of the possible information values, a truthful mechanism is a game in which revealing the true information is a weakly-dominant strategy for each player. An SP mechanism is also called dominant-strategy-incentive-compatible (DSIC), to distinguish it from other kinds of incentive compatibility.
In social choice theory, a dictatorship mechanism is a rule by which, among all possible alternatives, the results of voting mirror a single pre-determined person's preferences, without consideration of the other voters. Dictatorship by itself is not considered a good mechanism in practice, but it is theoretically important: by Arrow's impossibility theorem, when there are at least three alternatives, dictatorship is the only ranked voting electoral system that satisfies unrestricted domain, Pareto efficiency, and independence of irrelevant alternatives. Similarly, by Gibbard's theorem, when there are at least three alternatives, dictatorship is the only strategyproof rule.
Single-peaked preferences are a class of preference relations. A group of agents is said to have single-peaked preferences over a set of possible outcomes if the outcomes can be ordered along a line such that each agent has a single most-preferred outcome (a "peak"), and, on either side of the peak, outcomes closer to the peak are preferred to outcomes further away.
Majority judgment (MJ) is a single-winner voting system proposed in 2010 by Michel Balinski and Rida Laraki. It is a kind of highest median rule, a cardinal voting system that elects the candidate with the highest median rating.
Maximal lotteries refers to a probabilistic voting system first considered by the French mathematician and social scientist Germain Kreweras in 1965. The method uses preferential ballots and returns so-called maximal lotteries, i.e., probability distributions over the alternatives that are weakly preferred to any other probability distribution. Maximal lotteries satisfy the Condorcet criterion, the Smith criterion, polynomial runtime, and probabilistic versions of reinforcement, participation, and independence of clones.
In economics, dichotomous preferences (DP) are preference relations that divide the set of alternatives into two subsets: "Good" versus "Bad".
Random priority (RP), also called Random serial dictatorship (RSD), is a procedure for fair random assignment - dividing indivisible items fairly among people.
A simultaneous eating algorithm (SE) is an algorithm for allocating divisible objects among agents with ordinal preferences. "Ordinal preferences" means that each agent can rank the items from best to worst, but cannot (or does not want to) specify a numeric value for each item. The SE allocation satisfies SD-efficiency - a weak ordinal variant of Pareto-efficiency (it means that the allocation is Pareto-efficient for at least one vector of additive utility functions consistent with the agents' item rankings).
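A minimal sketch of the eating procedure, assuming every agent eats at the same unit speed (the probabilistic-serial special case) and every item has supply 1; the function name and preference encoding are illustrative:

```python
def simultaneous_eating(prefs, n_items):
    """Simultaneous eating with uniform speeds (probabilistic serial).
    prefs[i] is agent i's ranking of the items, best first.
    Returns share[i][j] = fraction of item j allocated to agent i."""
    n = len(prefs)
    supply = [1.0] * n_items
    share = [[0.0] * n_items for _ in range(n)]
    time_left = n_items / n  # eating time until everything is consumed
    while time_left > 1e-12:
        # each agent eats her best item that still has supply
        target = [next(j for j in prefs[i] if supply[j] > 1e-12)
                  for i in range(n)]
        eaters = {j: target.count(j) for j in set(target)}
        # advance until the first targeted item is exhausted
        dt = min(min(supply[j] / k for j, k in eaters.items()), time_left)
        for i in range(n):
            share[i][target[i]] += dt
            supply[target[i]] -= dt
        time_left -= dt
    return share

# Two agents who both rank item 0 above item 1:
print(simultaneous_eating([[0, 1], [0, 1]], 2))
# [[0.5, 0.5], [0.5, 0.5]] -- they split item 0, then split item 1
```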
Multiwinner approval voting, also called approval-based committee (ABC) voting, is a multi-winner electoral system that uses approval ballots. Each voter may select ("approve") any number of candidates, and multiple candidates are elected. The number of elected candidates is usually fixed in advance. For example, it can be the number of seats in a country's parliament, or the required number of members in a committee.
Fractional social choice is a branch of social choice theory in which the collective decision is not a single alternative, but rather a weighted sum of two or more alternatives. For example, if society has to choose between three candidates, A, B, or C, then in standard social choice exactly one of these candidates is chosen, while in fractional social choice it is possible to choose "2/3 of A and 1/3 of B". A common interpretation of the weighted sum is as a lottery, in which candidate A is chosen with probability 2/3 and candidate B is chosen with probability 1/3. Due to this interpretation, fractional social choice is also called random social choice, probabilistic social choice, or stochastic social choice. But it can also be interpreted as a recipe for sharing, for example:
Fractional approval voting is an electoral system using approval ballots, in which the outcome is fractional: for each alternative j there is a fraction p_j between 0 and 1, such that the sum of all p_j is 1. It can be seen as a generalization of approval voting: in the latter, one candidate wins and the other candidates lose. The fractions p_j can be interpreted in various ways, depending on the setting. Examples are:
A median mechanism is a voting rule that allows people to decide on a value in a one-dimensional domain. Each person votes by writing down his/her ideal value, and the rule selects a single value which is the median of all votes.
Lexicographic dominance is a total order between random variables. It is a form of stochastic ordering. It is defined as follows. Random variable A has lexicographic dominance over random variable B if one of the following holds: A assigns a higher probability than B to the best outcome; or A and B assign the same probability to the best outcome, and A lexicographically dominates B over the remaining outcomes.
Budget-proposal aggregation (BPA) is a problem in social choice theory. A group has to decide on how to distribute its budget among several issues. Each group-member has a different idea about what the ideal budget-distribution should be. The problem is how to aggregate the different opinions into a single budget-distribution program.
Participatory budgeting experiments are experiments done in the laboratory and in computerized simulations, in order to check various ethical and practical aspects of participatory budgeting. These experiments aim to decide on two main issues:
Belief merging, also called belief fusion or propositional belief merging, is a process in which an individual agent aggregates possibly conflicting pieces of information, expressed in logical formulae, into a consistent knowledge-base. Applications include combining conflicting sensor information received by the same agent and combining multiple databases to build an expert system. It also has applications in multi-agent systems.
The average voting rule is a rule for group decision-making when the decision is a distribution, and each of the voters reports his ideal distribution. This is a special case of budget-proposal aggregation. It is a simple aggregation rule that returns the arithmetic mean of all individual ideal distributions. The average rule was first studied formally by Michael Intriligator. This rule and its variants are commonly used in economics and sports.