
In game theory, **Aumann's agreement theorem** states that rational agents with common knowledge of each other's beliefs cannot agree to disagree. It was first formulated in the 1976 paper "Agreeing to Disagree" by Robert Aumann, after whom the theorem is named.

**Game theory** is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science, and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

In mathematics, a **theorem** is a statement that has been proven on the basis of previously established statements, such as other theorems, and generally accepted statements, such as axioms. A theorem is a logical consequence of the axioms. The proof of a mathematical theorem is a logical argument for the theorem statement given in accord with the rules of a deductive system. The proof of a theorem is often interpreted as justification of the truth of the theorem statement. In light of the requirement that theorems be proved, the concept of a theorem is fundamentally *deductive*, in contrast to the notion of a scientific law, which is *experimental*.

In economics, game theory, decision theory, and artificial intelligence, a **rational agent** is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.
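The decision rule described above can be sketched in a few lines: a rational agent evaluates each feasible action's expected payoff and picks the maximizer. The actions, probabilities, and payoffs below are made-up numbers for illustration only.

```python
# Choosing the action with the highest expected value (toy numbers).
# outcomes[action] = list of (probability, payoff) pairs.
outcomes = {
    "invest": [(0.6, 10.0), (0.4, -5.0)],  # EV = 0.6*10 - 0.4*5 = 4.0
    "hold":   [(1.0, 1.0)],                # EV = 1.0
}

def expected_value(lottery):
    """Expected payoff of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

# The rational agent selects the action with the optimal expected outcome.
best = max(outcomes, key=lambda a: expected_value(outcomes[a]))
# → "invest"
```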

Aumann's agreement theorem says that two people acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.^{ [1] } This theorem holds even if the people's individual posteriors are based on different observed information about the world. Simply knowing that another agent observed some information and came to their respective conclusion will force each to revise their beliefs, resulting eventually in total agreement on the correct posterior. Thus, two rational Bayesian agents with the same priors and who know each other's posteriors will have to agree.
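The revision process described above can be simulated directly. The sketch below follows the Geanakoplos–Polemarchakis style dialogue associated with the theorem: each agent's private information is a partition of a finite state space, they share a common prior, and they alternately announce their posterior for an event; each public announcement lets the other agent refine their information, until the posteriors coincide. The state space, prior, event, and partitions are hypothetical choices for illustration.

```python
from fractions import Fraction

# Hypothetical setup: four equally likely states and an event E.
states = {0, 1, 2, 3}
prior = {w: Fraction(1, 4) for w in states}   # common prior (uniform)
E = {0, 1}                                     # event being estimated

# Each agent's private information is a partition of the state space.
P1 = [{0, 1}, {2, 3}]
P2 = [{0, 2}, {1, 3}]

def cell(partition, w):
    """The partition cell containing the true state w."""
    return next(c for c in partition if w in c)

def posterior(info, E):
    """P(E | the true state lies in `info`), under the common prior."""
    total = sum(prior[w] for w in info)
    return sum(prior[w] for w in info if w in E) / total

def refine(partition, event):
    """Split every cell by a publicly announced event."""
    return [part for c in partition
            for part in (c & event, c - event) if part]

def dialogue(P1, P2, w, max_rounds=10):
    """Agents alternately announce posteriors until they agree."""
    for _ in range(max_rounds):
        q1 = posterior(cell(P1, w), E)
        # Announcing q1 publicly reveals the union of cells yielding q1.
        A1 = set().union(*(c for c in P1 if posterior(c, E) == q1))
        P2 = refine(P2, A1)
        q2 = posterior(cell(P2, w), E)
        if q1 == q2:
            return q1          # agreement reached
        A2 = set().union(*(c for c in P2 if posterior(c, E) == q2))
        P1 = refine(P1, A2)
    return q1
```

With the true state `w=0`, agent 1's announcement of posterior 1 reveals that the state lies in {0, 1}, which lets agent 2 pinpoint the state and agree: `dialogue(P1, P2, w=0)` returns 1.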

**Rational choice theory**, also known as **choice theory** or **rational action theory**, is a framework for understanding and often formally modeling social and economic behavior. The basic premise of rational choice theory is that aggregate social behavior results from the behavior of individual actors, each of whom is making their individual decisions. The theory also focuses on the determinants of the individual choices. Rational choice theory then assumes that an individual has preferences among the available choice alternatives that allow them to state which option they prefer. These preferences are assumed to be complete and transitive. The rational agent is assumed to take account of available information, probabilities of events, and potential costs and benefits in determining preferences, and to act consistently in choosing the self-determined best choice of action.
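The completeness and transitivity assumptions above are simple properties that can be checked mechanically. The sketch below encodes a hypothetical weak-preference relation as ordered pairs and tests both properties; the options and rankings are made up for illustration.

```python
from itertools import combinations

# Hypothetical weak-preference relation over three options, as ordered
# pairs meaning "a is at least as good as b".
options = ["tea", "coffee", "water"]
prefers = {("tea", "tea"), ("coffee", "coffee"), ("water", "water"),
           ("tea", "coffee"), ("coffee", "water"), ("tea", "water")}

def is_complete(rel, opts):
    """Every pair of options is ranked one way or the other."""
    return all((a, b) in rel or (b, a) in rel
               for a, b in combinations(opts, 2))

def is_transitive(rel, opts):
    """a >= b and b >= c must imply a >= c."""
    return all((a, c) in rel
               for a in opts for b in opts for c in opts
               if (a, b) in rel and (b, c) in rel)
```

A cyclic relation such as tea ≥ coffee ≥ water ≥ tea would fail the transitivity check, which is exactly the kind of preference the theory rules out.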

**Common knowledge** is a special kind of knowledge for a group of agents. There is *common knowledge* of *p* in a group of agents *G* when all the agents in *G* know *p*, they all know that they know *p*, they all know that they all know that they know *p*, and so on *ad infinitum*.

"**Agree to disagree**" or "agreeing to disagree" is a phrase in English referring to the resolution of a conflict whereby all parties tolerate but do not accept the opposing position(s). It generally occurs when all sides recognise that further conflict would be unnecessary, ineffective or otherwise undesirable. They may also remain on amicable terms while continuing to disagree about the unresolved issues.

A question arises whether such an agreement can be reached in a reasonable time and, from a mathematical perspective, whether this can be done efficiently. Scott Aaronson has shown that this is indeed the case.^{ [2] } Of course, the assumption of common priors is a rather strong one and may not hold in practice. However, Robin Hanson has presented an argument that Bayesians who agree about the processes that gave rise to their priors (e.g., genetic and environmental influences) should, if they adhere to a certain *pre-rationality condition*, have common priors.^{ [3] }

**Scott Joel Aaronson** is an American theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.

**Robin Dale Hanson** is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known as an expert on idea futures and markets, and he was involved in the creation of the Foresight Institute's Foresight Exchange and DARPA’s FutureMAP project. He invented market scoring rules like LMSR used by prediction markets such as Consensus Point, and has conducted research on signalling.

Studying the same issue from a different perspective, a research paper by Ziv Hellman considers what happens if priors are not common. The paper presents a way to measure how distant priors are from being common. If this distance is ε then, under common knowledge, disagreement on events is always bounded from above by ε. When ε goes to zero, Aumann's original agreement theorem is recapitulated.^{ [4] } In a 2013 paper, Joseph Halpern and Willemien Kets argued that "players can agree to disagree in the presence of ambiguity, even if there is a common prior, but that allowing for ambiguity is more restrictive than assuming heterogeneous priors."^{ [5] }

**Robert John Aumann** is an Israeli-American mathematician, and a member of the United States National Academy of Sciences. He is a professor at the Center for the Study of Rationality in the Hebrew University of Jerusalem in Israel. He also holds a visiting position at Stony Brook University, and is one of the founding members of the Stony Brook Center for Game Theory.

**Joseph Yehuda Halpern** is a professor of computer science at Cornell University. Most of his research is on reasoning about knowledge and uncertainty.

**Bayesian probability** is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.

In probability theory and statistics, **Bayes’ theorem** describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if cancer is related to age, then, using Bayes’ theorem, a person's age can be used to more accurately assess the probability that they have cancer than can be done without knowledge of the person’s age.
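The cancer-and-age example above amounts to one application of the formula P(A|B) = P(B|A)·P(A)/P(B). The probabilities below are invented purely to make the arithmetic concrete.

```python
# Hypothetical numbers for illustration only.
p_cancer = 0.01             # prior P(cancer)
p_age_given_cancer = 0.30   # P(in this age group | cancer)
p_age = 0.10                # P(in this age group) overall

# Bayes' theorem: P(cancer | age) = P(age | cancer) * P(cancer) / P(age)
p_cancer_given_age = p_age_given_cancer * p_cancer / p_age
# → 0.03: knowing the age group triples the assessed probability
```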

**Bayesian inference** is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
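Sequential updating on a stream of data, as described above, can be sketched with a toy hypothesis test: is a coin fair, or biased toward heads? The starting beliefs and bias value are assumptions for illustration.

```python
# Update P(fair) vs P(biased) as coin flips arrive (toy numbers).
lik = {"fair":   {"H": 0.5, "T": 0.5},
       "biased": {"H": 0.8, "T": 0.2}}   # hypothetical bias

def update(beliefs, flip):
    """One Bayesian update: multiply by likelihood, then normalize."""
    unnorm = {h: beliefs[h] * lik[h][flip] for h in beliefs}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

beliefs = {"fair": 0.5, "biased": 0.5}   # assumed 50/50 prior
for flip in "HHHH":                      # observe four heads in a row
    beliefs = update(beliefs, flip)
# beliefs now strongly favors "biased"
```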

In statistics, **point estimation** involves the use of sample data to calculate a single value which is to serve as a "best guess" or "best estimate" of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.
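For instance, the sample mean is a standard point estimator of an unknown population mean; applying it to data yields a single point estimate. The data below are made up.

```python
# Sample mean as a point estimator of an unknown population mean.
sample = [4.1, 3.9, 4.3, 4.0, 3.7]          # hypothetical observations
point_estimate = sum(sample) / len(sample)  # → 4.0
```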

**Scientific evidence** is evidence which serves to either support or counter a scientific theory or hypothesis. Such evidence is expected to be empirical evidence and interpretation in accordance with scientific method. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.

A **Bayesian network**, **Bayes network**, **belief network**, **decision network**, **Bayes(ian) model** or **probabilistic directed acyclic graphical model** is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
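The disease-and-symptom query above can be worked through by hand for the smallest possible network, a single Disease → Symptom edge. The conditional probabilities are invented for illustration; real networks with many nodes use the same total-probability and Bayes'-rule steps over the DAG.

```python
# Tiny two-node network: Disease -> Symptom (hypothetical CPTs).
p_disease = 0.02           # P(disease)
p_sym_given_d = 0.90       # P(symptom | disease)
p_sym_given_not_d = 0.05   # P(symptom | no disease)

# Marginalize over the parent node: total probability of the symptom.
p_sym = p_sym_given_d * p_disease + p_sym_given_not_d * (1 - p_disease)

# Invert the edge with Bayes' rule: P(disease | symptom).
p_d_given_sym = p_sym_given_d * p_disease / p_sym
# ≈ 0.269: the symptom raises the disease probability from 2% to ~27%
```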

**Bayesian statistics** is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a *degree of belief* in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.

In Bayesian statistical inference, a **prior probability distribution**, often simply called the **prior**, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.
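The voting example above can be made concrete with a conjugate Beta prior over the politician's vote share: the prior's pseudo-counts are combined with observed poll results to give the posterior. The pseudo-counts and poll data below are made up for illustration.

```python
# Beta(a, b) prior over a vote share, updated with poll data.
a, b = 2, 2                        # hypothetical prior pseudo-counts
votes_for, votes_against = 30, 20  # hypothetical poll results

# Conjugate update: posterior is Beta(a + successes, b + failures).
a_post, b_post = a + votes_for, b + votes_against
posterior_mean = a_post / (a_post + b_post)   # = 32/54 ≈ 0.593
```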

In philosophy and the social sciences, **social software** is an interdisciplinary research program that borrows mathematical tools and techniques from game theory and computer science in order to analyze and design social procedures. The goals of research in this field are modeling social situations, developing theories of correctness, and designing social procedures.

In game theory, a player's **strategy** is any of the options a player chooses in a setting where the outcome depends *not only* on their own actions *but also* on the actions of others. A player's strategy determines the action the player will take at any stage of the game.

In game theory, a **Bayesian game** is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.
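A belief of this kind, and the expected payoff it induces, can be sketched in a few lines. The opponent types, their probabilities, and the payoffs below are hypothetical.

```python
# A player's belief in a Bayesian game: a distribution over the
# opponent's possible payoff types (hypothetical numbers).
belief = {"tough": 0.3, "weak": 0.7}            # P(opponent type)
payoff_if_fight = {"tough": -1.0, "weak": 2.0}  # my payoff vs each type

# Expected payoff of "fight", averaging over the type distribution.
expected_payoff = sum(belief[t] * payoff_if_fight[t] for t in belief)
# = 0.3*(-1) + 0.7*2 = 1.1
```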

In game theory, **folk theorems** are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept, subgame-perfect Nash equilibrium, rather than Nash equilibrium.

**Formal epistemology** uses formal methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.

**Approximate Bayesian computation** (**ABC**) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters.

**Bayesian econometrics** is a branch of econometrics which applies Bayesian principles to economic modelling. Bayesianism is based on a degree-of-belief interpretation of probability, as opposed to a relative-frequency interpretation.

A construction by Jean-François Mertens and Shmuel Zamir implements John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs as well as a probability distribution over other players' types.

In marketing, Bayesian inference allows for decision making and market research evaluation under uncertainty and with limited data.

1. Aumann, Robert J. (1976). "Agreeing to Disagree" (PDF). *The Annals of Statistics*. **4** (6): 1236–1239. doi:10.1214/aos/1176343654. ISSN 0090-5364. JSTOR 2958591.
2. Aaronson, Scott (2005). "The Complexity of Agreement" (PDF). *Proceedings of ACM STOC*. pp. 634–643. doi:10.1145/1060590.1060686. ISBN 978-1-58113-960-0. Retrieved 2010-08-09.
3. Hanson, Robin (2006). "Uncommon Priors Require Origin Disputes". *Theory and Decision*. **61** (4): 319–328. CiteSeerX 10.1.1.63.4669. doi:10.1007/s11238-006-9004-4.
4. Hellman, Ziv (2013). "Almost Common Priors". *International Journal of Game Theory*. **42** (2): 399–410. doi:10.1007/s00182-012-0347-5.
5. Halpern, Joseph; Kets, Willemien (2013-10-28). "Ambiguous Language and Consensus" (PDF). Retrieved 2014-01-13.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
