Scope (logic)

In logic, the scope of a quantifier or connective is the shortest formula in which it occurs, [1] determining the range in the formula to which the quantifier or connective applies. [2] [3] [4] The notions of a free variable and bound variable are defined in terms of whether an occurrence of a variable falls within the scope of a quantifier, [2] [5] and the notions of a dominant connective and subordinate connective are defined in terms of whether one connective occurs within the scope of another. [6] [7]

Connectives

The scope of a logical connective occurring within a formula is the smallest well-formed formula that contains the connective in question. [2] [6] [8] The connective with the largest scope in a formula is called its dominant connective, [9] [10] main connective, [6] [8] [7] main operator, [2] major connective, [4] or principal connective; [4] a connective within the scope of another connective is said to be subordinate to it. [6]

For instance, in the formula ((P → Q) ∨ ¬R) ↔ (S ∧ ¬¬T), the dominant connective is ↔, and all other connectives are subordinate to it; the → is subordinate to the ∨, but not to the ∧; the first ¬ is also subordinate to the ∨, but not to the →; the second ¬ is subordinate to the ∧, but not to the ∨ or the →; and the third ¬ is subordinate to the second ¬, as well as to the ∧, but not to the ∨ or the →. [6] If an order of precedence is adopted for the connectives, viz., with ¬ applying first, then ∧ and ∨, then →, and finally ↔, this formula may be written in the less parenthesized form (P → Q) ∨ ¬R ↔ S ∧ ¬¬T, which some may find easier to read. [6]
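
The dominant connective can also be located mechanically. The following Python sketch, offered only as an illustration and not drawn from the cited sources, finds it for a fully parenthesized formula: because the dominant connective is the one whose scope is the entire formula, it is the unique binary connective standing at parenthesis depth zero once any redundant outer parentheses are stripped, and a leading ¬ is dominant when no such binary connective exists. The function names main_connective and matching_paren are invented for this example.

# A sketch only: assumes formulas are fully parenthesized except for ¬,
# e.g. "((P→Q)∨¬R)↔(S∧¬¬T)"; precedence conventions are not handled.

BINARY = {"∧", "∨", "→", "↔"}

def matching_paren(s: str, i: int) -> int:
    """Return the index of the ')' matching the '(' at position i."""
    depth = 0
    for j in range(i, len(s)):
        if s[j] == "(":
            depth += 1
        elif s[j] == ")":
            depth -= 1
            if depth == 0:
                return j
    raise ValueError("unbalanced parentheses")

def main_connective(formula: str) -> str:
    """Return the dominant (main) connective of a well-formed formula."""
    s = formula.replace(" ", "")
    # Strip redundant outer parentheses: "(P∧Q)" -> "P∧Q".
    while s.startswith("(") and matching_paren(s, 0) == len(s) - 1:
        s = s[1:-1]
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch in BINARY and depth == 0:
            return ch  # the binary connective whose scope is all of s
    if s.startswith("¬"):
        return "¬"  # no depth-0 binary connective, so negation is dominant
    raise ValueError("atomic or not well formed")

print(main_connective("((P → Q) ∨ ¬R) ↔ (S ∧ ¬¬T)"))  # ↔

Applied to either side of the biconditional separately, the same function returns ∨ and ∧, matching the subordination relations described above.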

Quantifiers

The scope of a quantifier is the part of a logical expression over which the quantifier exerts control. [3] It is the shortest full sentence [5] written right after the quantifier, [3] [5] often in parentheses; [3] some authors [11] describe this as including the variable written right after the universal or existential quantifier. In the formula ∀xP, for example, P [5] (or xP) [11] is the scope of the quantifier ∀x [5] (or ∀). [11]

This gives rise to the following definitions: [note 1]

- An occurrence of a variable x in a formula φ is bound if it falls within the scope of a quantifier ∀x or ∃x; otherwise, that occurrence of x is free. [12]
- A formula in which every occurrence of a variable is bound is called a closed formula, or sentence; a formula with at least one free occurrence of a variable is called an open formula. [13]
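
To see the definitions at work, the following Python sketch, again offered only as an illustration and not drawn from the cited sources, computes the free variables of a formula represented as a nested tuple in which each quantifier node carries its scope as a subformula; the representation and the name free_vars are invented for this example.

# A sketch only. Formulas are tuples:
#   ("pred", name, *variables)            atomic formula, e.g. ("pred", "P", "x")
#   ("not", f)                            negation
#   ("and" | "or" | "imp" | "iff", f, g)  binary connectives
#   ("forall" | "exists", x, f)           quantifier whose scope is f

def free_vars(f):
    """Return the set of variables occurring free in formula f."""
    op = f[0]
    if op == "pred":
        return set(f[2:])  # in an atomic formula, every occurrence is free
    if op == "not":
        return free_vars(f[1])
    if op in ("and", "or", "imp", "iff"):
        return free_vars(f[1]) | free_vars(f[2])
    if op in ("forall", "exists"):
        # occurrences of f[1] inside the scope f[2] are bound by this quantifier
        return free_vars(f[2]) - {f[1]}
    raise ValueError("unknown operator: " + str(op))

# ∀x (P(x) → Q(x, y)): x is bound by the quantifier, y occurs free.
phi = ("forall", "x", ("imp", ("pred", "P", "x"), ("pred", "Q", "x", "y")))
print(free_vars(phi))  # {'y'}: the formula is open, not a sentence

A formula is closed, i.e. a sentence, exactly when free_vars returns the empty set.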

Notes

  1. These definitions follow the common practice of using Greek letters as metalogical symbols which may stand for symbols in a formal language for propositional or predicate logic. In particular, φ and ψ are used to stand for any formulae whatsoever, whereas separate letters are used to stand for propositional variables. [1]

References

  1. Bostock, David (1997). Intermediate logic. Oxford; New York: Clarendon Press; Oxford University Press. pp. 8, 79. ISBN 978-0-19-875141-0.
  2. Cook, Roy T. (March 20, 2009). Dictionary of Philosophical Logic. Edinburgh University Press. pp. 99, 180, 254. ISBN 978-0-7486-3197-1.
  3. Rich, Elaine; Cline, Alan Kaylor. Quantifier Scope.
  4. Makridis, Odysseus (February 21, 2022). Symbolic Logic. Springer Nature. pp. 93–95. ISBN 978-3-030-67396-3.
  5. "3.3.2: Quantifier Scope, Bound Variables, and Free Variables". Humanities LibreTexts. January 21, 2017. Retrieved June 10, 2024.
  6. Lemmon, Edward John (1998). Beginning logic. Boca Raton, FL: Chapman & Hall/CRC. pp. 45–48. ISBN 978-0-412-38090-7.
  7. Gillon, Brendan S. (March 12, 2019). Natural Language Semantics: Formation and Valuation. MIT Press. pp. 250–253. ISBN 978-0-262-03920-8.
  8. "Examples | Logic Notes - ANU". users.cecs.anu.edu.au. Retrieved June 10, 2024.
  9. Suppes, Patrick; Hill, Shirley (April 30, 2012). First Course in Mathematical Logic. Courier Corporation. pp. 23–26. ISBN 978-0-486-15094-9.
  10. Kirk, Donna (March 22, 2023). "2.2 Compound Statements". Contemporary Mathematics. OpenStax.
  11. Bell, John L.; Machover, Moshé (April 15, 2007). "Chapter 1. Beginning mathematical logic". A Course in Mathematical Logic. Elsevier Science Ltd. p. 17. ISBN 978-0-7204-2844-5.
  12. Uzquiano, Gabriel (2022). "Quantifiers and Quantification". In Zalta, Edward N.; Nodelman, Uri (eds.). The Stanford Encyclopedia of Philosophy (Winter 2022 ed.). Metaphysics Research Lab, Stanford University. Retrieved June 10, 2024.
  13. Allen, Colin; Hand, Michael (2001). Logic primer (2nd ed.). Cambridge, Mass.: MIT Press. p. 66. ISBN 978-0-262-51126-1.