Condensation lemma

In set theory, a branch of mathematics, the condensation lemma is a result about sets in the constructible universe.

It states that if X is a transitive set and is an elementary submodel of some level of the constructible hierarchy Lα, that is, (X, ∈) ≺ (Lα, ∈), then in fact there is some ordinal β ≤ α such that X = Lβ.
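
In symbols, the lemma can be stated as follows (a standard rendering, with ≺ denoting elementary submodelhood):

\[
X \ \text{transitive and} \ (X, \in) \prec (L_\alpha, \in)
\;\Longrightarrow\;
\exists\, \beta \leq \alpha \;\; X = L_\beta .
\]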

More can be said: If X is not transitive, then its transitive collapse is equal to some Lβ, and the hypothesis of elementarity can be weakened to elementarity only for formulas which are Σ1 in the Lévy hierarchy.[1] Also, Devlin showed that the assumption that X be transitive holds automatically when X is a Σ1-elementary submodel of Lω1.[2]
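
A sketch of the strengthened form, using the usual notation: π denotes the Mostowski (transitive) collapse and ≺Σ1 denotes Σ1-elementary submodelhood.

\[
(X, \in) \prec_{\Sigma_1} (L_\alpha, \in)
\;\Longrightarrow\;
\exists\, \beta \leq \alpha \;\; \pi(X) = L_\beta .
\]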

The lemma was formulated and proved by Kurt Gödel in his proof that the axiom of constructibility implies the generalized continuum hypothesis (GCH).
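
A brief sketch of how condensation enters that proof (the standard argument, stated informally): given an infinite cardinal κ and a constructible a ⊆ κ, the downward Löwenheim–Skolem theorem yields an elementary submodel X ≺ Lγ of cardinality κ with κ ∪ {κ, a} ⊆ X; by the lemma its transitive collapse is some Lβ with β < κ⁺ that still contains a, so

\[
\mathcal{P}(\kappa) \cap L \subseteq L_{\kappa^{+}} ,
\qquad \text{hence} \qquad
(2^{\kappa})^{L} = \kappa^{+} .
\]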

Related Research Articles

<span class="mw-page-title-main">Context-free grammar</span> Type of formal grammar

In formal language theory, a context-free grammar (CFG) is a formal grammar whose production rules can be applied to a nonterminal symbol regardless of its context. In particular, in a context-free grammar, each production rule is of the form

<span class="mw-page-title-main">Transfinite induction</span> Mathematical concept

Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers. Its correctness is a theorem of ZFC.
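
The principle can be written schematically (one standard formulation, for a property P of ordinals):

\[
\forall \alpha \,\bigl( \forall \beta < \alpha \; P(\beta) \;\rightarrow\; P(\alpha) \bigr)
\;\Longrightarrow\;
\forall \alpha \; P(\alpha) .
\]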

In the mathematical discipline of set theory, forcing is a technique for proving consistency and independence results. Intuitively, forcing can be thought of as a technique to expand the set-theoretical universe to a larger universe by introducing a new "generic" object.

In the mathematical discipline of set theory, 0# is the set of true formulae about indiscernibles and order-indiscernibles in the Gödel constructible universe. It is often encoded as a subset of the natural numbers, or as a subset of the hereditarily finite sets, or as a real number. Its existence is unprovable in ZFC, the standard form of axiomatic set theory, but follows from a suitable large cardinal axiom. It was first introduced as a set of formulae in Silver's 1966 thesis, later published as Silver (1971), where it was denoted by Σ, and rediscovered by Solovay, who considered it as a subset of the natural numbers and introduced the notation O#.
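
One common rendering, following Jech's presentation and assuming 0# exists (so that ℵ1, ℵ2, … are Silver indiscernibles for the relevant constructible levels):

\[
0^{\#} \;=\; \bigl\{ \ulcorner \varphi \urcorner \;:\; L_{\aleph_\omega} \models \varphi(\aleph_1, \ldots, \aleph_n) \bigr\} .
\]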

In set theory, a branch of mathematics, a Q-indescribable cardinal is a certain kind of large cardinal number that is hard to axiomatize in some language Q. There are many different types of indescribable cardinals corresponding to different choices of languages Q. They were introduced by Hanf & Scott (1961).
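
For example, taking Q to be the class of Π1m formulas, one standard definition reads: a cardinal κ is Π1m-indescribable when, for every Π1m sentence φ and every U ⊆ Vκ,

\[
(V_\kappa, \in, U) \models \varphi
\;\Longrightarrow\;
\exists\, \alpha < \kappa \;\; (V_\alpha, \in, U \cap V_\alpha) \models \varphi .
\]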

In set theory, a Woodin cardinal is a cardinal number λ such that for all functions f : λ → λ, there exists a cardinal κ < λ closed under f and an elementary embedding j from the von Neumann universe V into a transitive inner model M with critical point κ and Vj(f)(κ) ⊆ M.
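
In symbols, λ is Woodin when (with j ranging over elementary embeddings and M over transitive inner models):

\[
\forall f : \lambda \to \lambda \;\; \exists \kappa < \lambda \;
\Bigl( f''\kappa \subseteq \kappa \;\wedge\; \exists\, j : V \to M \;
\bigl( \operatorname{crit}(j) = \kappa \;\wedge\; V_{j(f)(\kappa)} \subseteq M \bigr) \Bigr) .
\]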

In mathematics, in set theory, the constructible universe, denoted by L, is a particular class of sets that can be described entirely in terms of simpler sets. L is the union of the constructible hierarchy Lα. It was introduced by Kurt Gödel in his 1938 paper "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". In this paper, he proved that the constructible universe is an inner model of ZF set theory, and also that the axiom of choice and the generalized continuum hypothesis are true in the constructible universe. This shows that both propositions are consistent with the basic axioms of set theory, if ZF itself is consistent. Since many other theorems only hold in systems in which one or both of the propositions is true, their consistency is an important result.
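
The hierarchy itself is defined by transfinite recursion, with Def(·) denoting the set of first-order definable subsets with parameters:

\[
L_0 = \emptyset, \qquad
L_{\alpha+1} = \operatorname{Def}(L_\alpha), \qquad
L_\lambda = \bigcup_{\alpha < \lambda} L_\alpha \ \ \text{for limit } \lambda, \qquad
L = \bigcup_{\alpha \in \mathrm{Ord}} L_\alpha .
\]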

In set theory and related branches of mathematics, the von Neumann universe, or von Neumann hierarchy of sets, denoted by V, is the class of hereditary well-founded sets. This collection, which is formalized by Zermelo–Fraenkel set theory (ZFC), is often used to provide an interpretation or motivation of the axioms of ZFC. The concept is named after John von Neumann, although it was first published by Ernst Zermelo in 1930.
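
The levels of V are likewise obtained by transfinite recursion, using the full power set operation at successor stages:

\[
V_0 = \emptyset, \qquad
V_{\alpha+1} = \mathcal{P}(V_\alpha), \qquad
V_\lambda = \bigcup_{\alpha < \lambda} V_\alpha \ \ \text{for limit } \lambda .
\]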

The Kripke–Platek set theory (KP) is an axiomatic set theory developed by Saul Kripke and Richard Platek. The theory can be thought of as roughly the predicative part of ZFC and is considerably weaker than ZFC.

In set theory, a branch of mathematics, a reflection principle says that it is possible to find sets that, with respect to any given property, resemble the class of all sets. There are several different forms of the reflection principle depending on exactly what is meant by "resemble". Weak forms of the reflection principle are theorems of ZF set theory due to Montague (1961), while stronger forms can be new and very powerful axioms for set theory.
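
The Lévy–Montague form provable in ZF states, for each formula φ(x1, …, xn), that there are arbitrarily large ordinals α with

\[
\forall x_1, \ldots, x_n \in V_\alpha \;
\bigl( \varphi(x_1, \ldots, x_n) \;\leftrightarrow\; \varphi^{V_\alpha}(x_1, \ldots, x_n) \bigr) ,
\]

where φ^{Vα} denotes φ with its quantifiers relativized to Vα.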

In the mathematical discipline of set theory, there are many ways of describing specific countable ordinals. The smallest ones can be usefully and non-circularly expressed in terms of their Cantor normal forms. Beyond that, many ordinals of relevance to proof theory still have computable ordinal notations. However, it is not possible to decide effectively whether a given putative ordinal notation is a notation or not; various more-concrete ways of defining ordinals that definitely have notations are available.
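
The Cantor normal form mentioned here expresses every ordinal α > 0 uniquely as a finite decreasing sum of powers of ω:

\[
\alpha = \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k ,
\qquad \beta_1 > \beta_2 > \cdots > \beta_k , \quad 0 < c_i < \omega .
\]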

In set theory, an ordinal number α is an admissible ordinal if Lα is an admissible set; in other words, α is admissible when α is a limit ordinal and Lα ⊧ Σ0-collection. The term was coined by Richard Platek in 1966.
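
The Σ0-collection schema in question asserts, for every Σ0 (bounded-quantifier) formula φ:

\[
\forall x \in a \;\, \exists y \;\, \varphi(x, y)
\;\longrightarrow\;
\exists b \;\, \forall x \in a \;\, \exists y \in b \;\, \varphi(x, y) .
\]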

In set theory, Silver machines are devices used for bypassing the use of fine structure in proofs of statements holding in L. They were invented by the set theorist Jack Silver as a means of proving that the global square principle holds in the constructible universe.

In recursion theory, α recursion theory is a generalisation of recursion theory to subsets of admissible ordinals α. An admissible set is closed under Σ1(Lα) functions, where Lξ denotes a rank of Gödel's constructible hierarchy. α is an admissible ordinal if Lα is a model of Kripke–Platek set theory. In what follows α is considered to be fixed.
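
The basic notions then read (standard definitions in the subject):

\[
A \subseteq \alpha \ \text{is } \alpha\text{-recursively enumerable}
\;\iff\;
A \ \text{is } \Sigma_1\text{-definable over } L_\alpha ,
\]

and A is α-recursive when both A and α ∖ A are α-recursively enumerable.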

In set theory, a mathematical discipline, the Jensen hierarchy or J-hierarchy is a modification of Gödel's constructible hierarchy, L, that circumvents certain technical difficulties that exist in the constructible hierarchy. The J-hierarchy figures prominently in fine structure theory, a field pioneered by Ronald Jensen, for whom the Jensen hierarchy is named. Rudimentary functions describe a method for iterating through the Jensen hierarchy.
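
In one common presentation, with rud(X) denoting the closure of X ∪ {X} under the rudimentary functions, the levels are:

\[
J_0 = \emptyset, \qquad
J_{\alpha+1} = \operatorname{rud}(J_\alpha), \qquad
J_\lambda = \bigcup_{\alpha < \lambda} J_\alpha \ \ \text{for limit } \lambda .
\]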

In mathematics, particularly set theory, non-recursive ordinals are large countable ordinals greater than all the recursive ordinals, and therefore can not be expressed using recursive ordinal notations.
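
The least non-recursive ordinal is the Church–Kleene ordinal ω1CK, the supremum of the order types of all recursive well-orderings of the natural numbers:

\[
\omega_1^{\mathrm{CK}} \;=\; \sup \bigl\{ \operatorname{otp}(R) \;:\; R \ \text{a recursive well-ordering of } \mathbb{N} \bigr\} .
\]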

In mathematics, primitive recursive set functions or primitive recursive ordinal functions are analogs of primitive recursive functions, defined for sets or ordinals rather than natural numbers. They were introduced by Jensen & Karp (1971).

This is a glossary of set theory.

<span class="mw-page-title-main">Suffix automaton</span> Deterministic finite automaton accepting set of all suffixes of particular string

In computer science, a suffix automaton is an efficient data structure for representing the substring index of a given string which allows the storage, processing, and retrieval of compressed information about all its substrings. The suffix automaton of a string is the smallest directed acyclic graph with a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.

References

Inline citations

  1. R. B. Jensen, "The Fine Structure of the Constructible Hierarchy" (1972), p. 246. Accessed 13 January 2023.
  2. W. Marek, M. Srebrny, "Gaps in the Constructible Universe" (1973), p. 364.