Common knowledge (logic)

Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.[1] It can be denoted as $C_G p$.

The concept was first introduced in the philosophical literature by David Kellogg Lewis in his study Convention (1969). The sociologist Morris Friedell defined common knowledge in a 1969 paper.[2] It was first given a mathematical formulation in a set-theoretical framework by Robert Aumann (1976). Computer scientists took an interest in the subject of epistemic logic in general, and of common knowledge in particular, starting in the 1980s. There are numerous puzzles based upon the concept which have been extensively investigated by mathematicians such as John Conway.[3]

The philosopher Stephen Schiffer, in his 1972 book Meaning, independently developed a notion he called "mutual knowledge" ($E_G p$), which functions quite similarly to Lewis's and Friedell's 1969 "common knowledge".[4] If a trustworthy announcement is made in public, then it becomes common knowledge; however, if it is transmitted to each agent in private, it becomes mutual knowledge but not common knowledge. Even if the fact that "every agent in the group knows p" ($E_G p$) is transmitted to each agent in private, it is still not common knowledge: $E_G^2 p \not\Rightarrow C_G p$. But if any agent $a$ publicly announces their knowledge of p, then it becomes common knowledge that they know p (viz. $C_G K_a p$). If every agent publicly announces their knowledge of p, p becomes common knowledge ($C_G E_G p \Rightarrow C_G p$).

Example

Puzzle

The idea of common knowledge is often introduced by some variant of induction puzzles (e.g. Muddy children puzzle):

On an island, there are k people who have blue eyes, and the rest of the people have green eyes. At the start of the puzzle, no one on the island knows their own eye color. By rule, if a person on the island ever discovers they have blue eyes, that person must leave the island at dawn; anyone not making such a discovery always sleeps until after dawn. On the island, each person knows every other person's eye color, there are no reflective surfaces, and there is no communication of eye color.

At some point, an outsider comes to the island, calls together all the people on the island, and makes the following public announcement: "At least one of you has blue eyes". The outsider, furthermore, is known by all to be truthful, and all know that all know this, and so on: it is common knowledge that he is truthful, and thus it becomes common knowledge that there is at least one islander who has blue eyes ($C_G B$, where $B$ denotes the proposition "at least one islander has blue eyes"). The problem: finding the eventual outcome, assuming all persons on the island are completely logical (every participant's knowledge obeys the axiom schemata for epistemic logic) and that this, too, is common knowledge.

Solution

The answer is that, on the kth dawn after the announcement, all the blue-eyed people will leave the island.

Proof

The solution can be seen with an inductive argument. If k = 1 (that is, there is exactly one blue-eyed person), the person will recognize that they alone have blue eyes (by seeing only green eyes in the others) and leave at the first dawn. If k = 2, no one will leave at the first dawn, and the inaction (and the implied lack of knowledge for every agent) is observed by everyone, which then becomes common knowledge as well: it is now common knowledge that k > 1. The two blue-eyed people, each seeing only one person with blue eyes and knowing that no one left on the first dawn (and thus that k > 1: the other blue-eyed person saw someone with blue eyes and still did not leave, so there must be a second blue-eyed person, who can only be oneself), will leave on the second dawn. Inductively, it can be reasoned that no one will leave at the first k − 1 dawns if and only if there are at least k blue-eyed people. Those with blue eyes, seeing k − 1 blue-eyed people among the others and knowing there must be at least k, will reason that they must have blue eyes and leave.
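
The induction can also be checked mechanically. The following Python sketch (a minimal possible-worlds model written for this article; the function names are invented for the example, not drawn from the cited sources) represents the islanders' uncertainty as the set of eye-color assignments not yet ruled out, applies the outsider's announcement, and then eliminates, dawn by dawn, every world in which someone would already have left:

```python
from itertools import product

def departure_dawn(actual: tuple) -> int:
    """Return the dawn on which the blue-eyed islanders leave.

    Worlds are tuples of booleans (True = blue eyes); islander i
    cannot distinguish two worlds that differ only at coordinate i.
    """
    n = len(actual)
    # The outsider's public announcement ("at least one of you has
    # blue eyes") removes the all-green world from consideration.
    worlds = {w for w in product([False, True], repeat=n) if any(w)}

    def knows_own_blue(w, i):
        # Islander i knows they are blue-eyed in world w iff they have
        # blue eyes in every still-possible world that agrees with w
        # everywhere except possibly at coordinate i.
        return all(v[i] for v in worlds
                   if all(v[j] == w[j] for j in range(n) if j != i))

    dawn = 1
    while True:
        leavers = {w: {i for i in range(n) if knows_own_blue(w, i)}
                   for w in worlds}
        if leavers[actual]:
            return dawn
        # Publicly observing "no one left" eliminates every world in
        # which someone would have left.
        worlds = {w for w in worlds if not leavers[w]}
        dawn += 1

# With k blue-eyed islanders, departure happens on dawn k:
print(departure_dawn((True, False, False, False)))  # 1
print(departure_dawn((True, True, False, False)))   # 2
print(departure_dawn((True, True, True, False)))    # 3
```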

For k > 1, the outsider is only telling the island citizens what they already know: that there are blue-eyed people among them. However, before this fact is announced, the fact is not common knowledge, but instead mutual knowledge.

For k = 2, it is merely "first-order" knowledge ($E_G B$). Each blue-eyed person knows that there is someone with blue eyes, but each blue-eyed person does not know that the other blue-eyed person has this same knowledge.

For k = 3, it is "second-order" knowledge ($E_G^2 B$). Each blue-eyed person knows that a second blue-eyed person knows that a third person has blue eyes, but no one knows that there is a third blue-eyed person with that knowledge, until the outsider makes their statement.

In general: for k > 1, it is "(k − 1)th-order" knowledge ($E_G^{k-1} B$). Each blue-eyed person knows that a second blue-eyed person knows that a third blue-eyed person knows that... (repeat for a total of k − 1 levels) a kth person has blue eyes, but no one knows that there is a "kth" blue-eyed person with that knowledge, until the outsider makes his statement. The notion of common knowledge therefore has a palpable effect: knowing that everyone knows does make a difference. When the outsider's public announcement (a fact already known to all, unless k = 1, in which case the one blue-eyed person would not know it until the announcement) becomes common knowledge, the blue-eyed people on this island eventually deduce their status and leave.

In particular:

  1. $E_G^n B$ is free (i.e. known prior to the outsider's statement) iff $n \leq k - 1$; this is checked mechanically in the sketch after this list.
  2. $E_G^n B$, together with a passing day on which no one leaves, implies $E_G^{n+1} B$ the next day.
  3. $E_G^n B$ for $n \geq k$ is thus reached iff it is reached for $n = k$.
  4. The outsider's announcement gives $C_G B$, and with it $E_G^n B$ for $n = k$.
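
The first item can be verified by brute force in the same possible-worlds model as before. The sketch below (illustrative code, not from the cited literature) iterates the "everyone knows" operator over all eye-color assignments, with no announcement made, and confirms that the actual world satisfies $E_G^n B$ exactly for $n \leq k - 1$:

```python
from itertools import product

def iterate_everyone_knows(n_agents, event, depth):
    """Apply the 'everyone knows' operator E_G `depth` times, over all
    2^n eye-color assignments and with no public announcement made."""
    worlds = set(product([False, True], repeat=n_agents))

    def K(i, e):
        # Agent i knows e at w iff e holds at every world that agrees
        # with w everywhere except possibly at coordinate i.
        return {w for w in worlds
                if all(v in e for v in worlds
                       if all(v[j] == w[j]
                              for j in range(n_agents) if j != i))}

    e = set(event)
    for _ in range(depth):
        e = set.intersection(*(K(i, e) for i in range(n_agents)))
    return e

n = 4
B = {w for w in product([False, True], repeat=n) if any(w)}
actual = (True, True, True, False)  # k = 3 blue-eyed islanders
for depth in range(5):
    print(depth, actual in iterate_everyone_knows(n, B, depth))
# True for depth 0, 1, 2 (= k - 1); False from depth 3 onward
```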

Formalization

Common knowledge can be given a logical definition in multi-modal logic systems in which the modal operators are interpreted epistemically. At the propositional level, such systems are extensions of propositional logic. The extension consists of the introduction of a group G of agents and of n modal operators $K_i$ (with i = 1, ..., n), with the intended meaning that "agent i knows". Thus $K_i \varphi$ (where $\varphi$ is a formula of the logical calculus) is read "agent i knows $\varphi$". We can define an operator $E_G$ with the intended meaning of "everyone in group G knows" by defining it with the axiom

$E_G \varphi \Leftrightarrow \bigwedge_{i \in G} K_i \varphi$

By abbreviating the expression $E_G E_G^{n-1} \varphi$ with $E_G^n \varphi$ and defining $E_G^1 \varphi = E_G \varphi$, common knowledge could then be defined with the axiom

$C_G \varphi \Leftrightarrow \bigwedge_{n=1}^{\infty} E_G^n \varphi$

There is, however, a complication. The languages of epistemic logic are usually finitary, whereas the axiom above defines common knowledge as an infinite conjunction of formulas (one conjunct for each of the $\aleph_0$, i.e. countably many, levels), hence not a well-formed formula of the language. To overcome this difficulty, a fixed-point definition of common knowledge can be given. Intuitively, common knowledge is thought of as the fixed point of the "equation" $C_G \varphi \Leftrightarrow [\varphi \wedge E_G(C_G \varphi)]$. In this way, it is possible to find a formula $\psi$ implying $E_G(\varphi \wedge \psi)$, from which, in the limit, we can infer common knowledge of $\varphi$.

From this definition it can be seen that if $\varphi$ is common knowledge, then $E_G \varphi$ is also common knowledge ($C_G \varphi \Rightarrow C_G E_G \varphi$).

This syntactic characterization is given semantic content through so-called Kripke structures. A Kripke structure is given by a set of states (or possible worlds) S, n accessibility relations $R_1, \dots, R_n$ defined on $S \times S$, intuitively representing what states agent i considers possible from any given state, and a valuation function $\pi$ assigning a truth value, in each state, to each primitive proposition in the language. The Kripke semantics for the knowledge operator is given by stipulating that $K_i \varphi$ is true at state s iff $\varphi$ is true at all states t such that $(s, t) \in R_i$. The semantics for the common knowledge operator, then, is given by taking, for each group of agents G, the reflexive (modal axiom T) and transitive (modal axiom 4) closure of the union of the relations $R_i$ for all agents i in G, calling this relation $R_G$, and stipulating that $C_G \varphi$ is true at state s iff $\varphi$ is true at all states t such that $(s, t) \in R_G$.
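
For finite structures these clauses can be executed directly. The Python sketch below (a toy written for this article under the definitions above; the function names are invented for the example) evaluates $K_i$, $E_G$, and $C_G$ on an explicit Kripke structure, computing the reflexive transitive closure by iteration. The three-state model at the end reproduces the private-announcement situation described earlier: at state 0 everyone knows p, yet p is not common knowledge.

```python
def knows(R, phi, states):
    """K_i: the states s at which phi holds at every t with (s, t) in R."""
    return {s for s in states if all(t in phi for (u, t) in R if u == s)}

def everyone_knows(relations, phi, states):
    """E_G: the intersection of K_i(phi) over all agents in the group."""
    e = set(states)
    for R in relations:
        e &= knows(R, phi, states)
    return e

def common_knowledge(relations, phi, states):
    """C_G: K along the reflexive transitive closure of the union of
    the agents' accessibility relations."""
    closure = {(s, s) for s in states} | set().union(*relations)
    while True:
        step = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if step <= closure:
            return knows(closure, phi, states)
        closure |= step

# Three states; p holds at 0 and 1. Agent 1 can distinguish state 0
# from {1, 2}; agent 2 can distinguish {0, 1} from 2 (the equivalence
# relations are written out as sets of pairs).
states = {0, 1, 2}
p = {0, 1}
R1 = {(0, 0), (1, 1), (1, 2), (2, 1), (2, 2)}
R2 = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}
print(everyone_knows([R1, R2], p, states))    # {0}: at state 0 all know p
print(common_knowledge([R1, R2], p, states))  # set(): p is nowhere common knowledge
```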

Set-theoretic (semantic) characterization

Alternatively (yet equivalently), common knowledge can be formalized using set theory (this was the path taken by the Nobel laureate Robert Aumann in his seminal 1976 paper). Start with a set of states S. An event E can then be defined as a subset of the set of states S. For each agent i, define a partition $P_i$ of S. This partition represents the state of knowledge of the agent in each state. Intuitively, if two states s1 and s2 are elements of the same cell of agent i's partition, it means that s1 and s2 are indistinguishable to that agent. In general, in state s, agent i knows that one of the states in $P_i(s)$ obtains, but not which one. (Here $P_i(s)$ denotes the unique element of $P_i$ containing s. This model excludes cases in which agents know things that are not true.)

A knowledge function K can now be defined in the following way:

$K_i(e) = \{ s \in S \mid P_i(s) \subseteq e \}$

That is, $K_i(e)$ is the set of states in which the agent knows that the event e obtains; it is a subset of e.

Similar to the modal logic formulation above, an operator for the idea that "everyone in G knows e" can be defined as

$E(e) = \bigcap_{i} K_i(e)$

As with the modal operator, we iterate the E function, setting $E^1(e) = E(e)$ and $E^{n+1}(e) = E(E^n(e))$. Using this, we can then define a common knowledge function,

$C(e) = \bigcap_{n=1}^{\infty} E^n(e).$
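
For finite state spaces these definitions can be transcribed almost verbatim. The sketch below (illustrative code mirroring the three-state Kripke example above, with partitions in place of relations) computes $K_i(e)$, $E(e)$, and the fixed point $C(e)$:

```python
def K(partition, e):
    """Aumann's knowledge operator: the union of the partition cells
    that are contained in the event e."""
    return {s for cell in partition if cell <= e for s in cell}

def E(partitions, e):
    """'Everyone knows e': the intersection of K_i(e) over all agents."""
    out = set(e)
    for P in partitions:
        out &= K(P, e)
    return out

def C(partitions, e):
    """Common knowledge of e. Since K_i(e) is a subset of e, the chain
    e, E(e), E(E(e)), ... decreases and, on a finite state space,
    stabilizes; the stable set is C(e)."""
    cur = set(e)
    while True:
        nxt = E(partitions, cur)
        if nxt == cur:
            return cur
        cur = nxt

P1 = [{0}, {1, 2}]   # agent 1's information partition
P2 = [{0, 1}, {2}]   # agent 2's information partition
e = {0, 1}
print(E([P1, P2], e))  # {0}: at state 0, both agents know e
print(C([P1, P2], e))  # set(): e never becomes common knowledge
```

In this example the finest common coarsening of P1 and P2 is the single cell {0, 1, 2}, which is not contained in e; this is exactly why C(e) is empty, matching the finitary characterization discussed next.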

The equivalence with the syntactic approach sketched above can easily be seen: consider an Aumann structure as the one just defined. We can define a corresponding Kripke structure by taking the same space S, accessibility relations $R_i$ that define the equivalence classes corresponding to the partitions $P_i$, and a valuation function which makes the primitive proposition p true in all and only the states s such that $s \in e_p$, where $e_p$ is the event of the Aumann structure corresponding to p. It is not difficult to see that the common knowledge accessibility relation $R_G$ defined in the previous section corresponds to the finest common coarsening of the partitions $P_i$ for all $i \in G$, which is the finitary characterization of common knowledge also given by Aumann in the 1976 article.

Applications

Common knowledge was used by David Lewis in his pioneering game-theoretical account of convention. In this sense, common knowledge is still a central concept for linguists and philosophers of language (see Clark 1996) who maintain a Lewisian, conventionalist account of language.

Robert Aumann introduced a set-theoretical formulation of common knowledge (theoretically equivalent to the one given above) and proved the so-called agreement theorem: if two agents have a common prior probability over a certain event, and their posterior probabilities are common knowledge, then the posterior probabilities are equal. A result based on the agreement theorem and proven by Milgrom and Stokey shows that, given certain conditions on market efficiency and information, speculative trade is impossible.
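
As a toy illustration of the theorem's statement (the numbers and partitions below are invented for the example, not drawn from Aumann's paper), consider two agents who share a uniform prior over four states. Each agent's posterior for the event A happens to be constant across states, hence trivially common knowledge, and the two posteriors indeed coincide:

```python
from fractions import Fraction

states = {0, 1, 2, 3}
prior = {s: Fraction(1, 4) for s in states}   # the shared (common) prior
P1 = [{0, 1}, {2, 3}]   # what agent 1 can distinguish
P2 = [{0, 2}, {1, 3}]   # what agent 2 can distinguish
A = {0, 3}              # the event of interest

def cell(partition, s):
    return next(c for c in partition if s in c)

def posterior(partition, s):
    """The agent's conditional probability of A given their cell at s."""
    c = cell(partition, s)
    return sum(prior[t] for t in c & A) / sum(prior[t] for t in c)

# Each agent's posterior equals 1/2 in every state, so each posterior
# is (trivially) common knowledge, and the two posteriors coincide,
# as the agreement theorem requires.
for s in sorted(states):
    print(s, posterior(P1, s), posterior(P2, s))
```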

The concept of common knowledge is central in game theory. For several years it was thought that the assumption of common knowledge of rationality for the players in the game was fundamental. It turns out (Aumann and Brandenburger 1995) that, in two-player games, common knowledge of rationality is not needed as an epistemic condition for Nash equilibrium strategies.

Computer scientists use languages incorporating epistemic logics (and common knowledge) to reason about distributed systems. Such systems can be based on logics more complicated than simple propositional epistemic logic; see Wooldridge, Reasoning about Rational Agents (2000), in which he uses a first-order logic incorporating epistemic and temporal operators, or van der Hoek et al., "Alternating Time Epistemic Logic".

In his 2007 book, The Stuff of Thought: Language as a Window into Human Nature, Steven Pinker uses the notion of common knowledge to analyze the kind of indirect speech involved in innuendoes.

The comedy movie Hot Lead and Cold Feet has an example of a chain of logic that is collapsed by common knowledge. The Denver Kid tells his allies that Rattlesnake is in town, but that he [the Kid] has "the edge": "He's here and I know he's here, and he knows I know he's here, but he doesn't know I know he knows I know he's here." So both protagonists know the main fact (Rattlesnake is here), but it is not "common knowledge". Note that this holds even if the Kid is wrong: Rattlesnake may well know that the Kid knows that he knows he's here, but the chain still breaks, because the Kid does not know that. Moments later, Rattlesnake confronts the Kid, and we see the Kid realizing that his carefully constructed "edge" has collapsed into common knowledge.


Notes

  1. See the textbooks Reasoning About Knowledge by Fagin, Halpern, Moses, and Vardi (1995) and Epistemic Logic for Computer Science by Meyer and van der Hoek (1995).
  2. A structurally identical problem is provided by Herbert Gintis (2000); he calls it "The Women of Sevitan".

References

  1. Osborne, Martin J., and Ariel Rubinstein. A Course in Game Theory. Cambridge, MA: MIT Press, 1994.
  2. Friedell, Morris. "On the Structure of Shared Awareness." Behavioral Science 14 (1969): 28–39.
  3. Stewart, Ian. "I Know That You Know That...". In Math Hysteria. Oxford: Oxford University Press, 2004.
  4. Schiffer, Stephen. Meaning. 2nd ed. Oxford: Oxford University Press, 1988 (first edition 1972). For a discussion of both Lewis's and Schiffer's notions, see Russell Dale, The Theory of Meaning (1996).
