Well-formed formula

In mathematical logic, propositional logic and predicate logic, a well-formed formula, abbreviated WFF or wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language. [1]

The abbreviation wff is pronounced "woof", or sometimes "wiff", "weff", or "whiff". [12]

A formal language can be identified with the set of formulas in the language. A formula is a syntactic object that can be given a semantic meaning by means of an interpretation. Two key uses of formulas are in propositional logic and predicate logic.

Introduction

A key use of formulas is in propositional logic and predicate logics such as first-order logic. In those contexts, a formula is a string of symbols φ for which it makes sense to ask "is φ true?", once any free variables in φ have been instantiated. In formal logic, proofs can be represented by sequences of formulas with certain properties, and the final formula in the sequence is what is proven.

Although the term "formula" may be used for written marks (for instance, on a piece of paper or chalkboard), it is more precisely understood as the sequence of symbols being expressed, with the marks being a token instance of the formula. This distinction between the vague notion of "property" and the inductively-defined notion of well-formed formula has roots in Weyl's 1910 paper "Über die Definitionen der mathematischen Grundbegriffe". [13] Thus the same formula may be written more than once, and a formula might in principle be so long that it cannot be written at all within the physical universe.

Formulas themselves are syntactic objects. They are given meanings by interpretations. For example, in a propositional formula, each propositional variable may be interpreted as a concrete proposition, so that the overall formula expresses a relationship between these propositions. A formula need not be interpreted, however, to be considered solely as a formula.

Propositional calculus

The formulas of propositional calculus, also called propositional formulas, [14] are expressions such as (p ∧ (q ∨ r)). Their definition begins with the arbitrary choice of a set V of propositional variables. The alphabet consists of the letters in V along with the symbols for the propositional connectives and parentheses "(" and ")", all of which are assumed not to be in V. The formulas will be certain expressions (that is, strings of symbols) over this alphabet.

The formulas are inductively defined as follows:

  1. Each propositional variable is, on its own, a formula.
  2. If φ is a formula, then ¬φ is a formula.
  3. If φ and ψ are formulas, and • is any binary connective, then (φ • ψ) is a formula. Here • could be (but is not limited to) the usual operators ∨, ∧, →, or ↔.

This definition can also be written as a formal grammar in Backus–Naur form, provided the set of variables is finite:

<alpha set> ::= p | q | r | s | t | u | ... (the arbitrary finite set of propositional variables)
<form> ::= <alpha set> | ¬<form> | (<form>∧<form>) | (<form>∨<form>) | (<form>→<form>) | (<form>↔<form>)
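
For illustration, the clauses of this inductive definition can be mirrored by a recursive data type. The following Python sketch is illustrative only (the class and function names are not part of any standard library), and it assumes exactly the four binary connectives listed in the grammar above.

from dataclasses import dataclass
from typing import Union

# Each constructor mirrors one clause of the inductive definition / BNF above.
@dataclass(frozen=True)
class Var:            # a propositional variable from V
    name: str

@dataclass(frozen=True)
class Not:            # ¬<form>
    operand: "Formula"

@dataclass(frozen=True)
class Binary:         # (<form>∧<form>), (<form>∨<form>), (<form>→<form>), (<form>↔<form>)
    connective: str   # one of "∧", "∨", "→", "↔"
    left: "Formula"
    right: "Formula"

Formula = Union[Var, Not, Binary]

def to_string(f: Formula) -> str:
    """Render a formula with the full parenthesization required by the grammar."""
    if isinstance(f, Var):
        return f.name
    if isinstance(f, Not):
        return "¬" + to_string(f.operand)
    return "(" + to_string(f.left) + f.connective + to_string(f.right) + ")"

# (((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s)) built from the constructors:
example = Binary("∨",
                 Binary("∧", Binary("→", Var("p"), Var("q")),
                             Binary("→", Var("r"), Var("s"))),
                 Binary("∧", Not(Var("q")), Not(Var("s"))))
print(to_string(example))  # (((p→q)∧(r→s))∨(¬q∧¬s))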

Using this grammar, the sequence of symbols

(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))

is a formula, because it is grammatically correct. The sequence of symbols

((p → q) ∧ (q → q))p))

is not a formula, because it does not conform to the grammar.
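
Such a conformance check can be carried out mechanically. The following Python sketch is a minimal recursive-descent recognizer for the grammar above; the names are illustrative choices made here, and spaces are omitted from the input strings.

# A sketch of a recognizer for the grammar <form> ::= variable | ¬<form> | (<form> op <form>).
VARS = set("pqrstu")
BINARY = set("∧∨→↔")

def is_formula(s: str) -> bool:
    """Return True iff the whole string s conforms to the grammar."""
    ok, rest = parse_form(s)
    return ok and rest == ""

def parse_form(s):
    """Try to read one <form> from the front of s; return (success, remainder)."""
    if not s:
        return False, s
    if s[0] in VARS:
        return True, s[1:]
    if s[0] == "¬":
        return parse_form(s[1:])
    if s[0] == "(":
        ok, rest = parse_form(s[1:])
        if not ok or not rest or rest[0] not in BINARY:
            return False, s
        ok, rest = parse_form(rest[1:])
        if not ok or not rest or rest[0] != ")":
            return False, s
        return True, rest[1:]
    return False, s

print(is_formula("(((p→q)∧(r→s))∨(¬q∧¬s))"))  # True: the string conforms to the grammar
print(is_formula("((p→q)∧(q→q))p))"))          # False: symbols remain after a complete formula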

A complex formula may be difficult to read, owing to, for example, the proliferation of parentheses. To alleviate this, precedence rules (akin to the standard mathematical order of operations) are assumed among the operators, making some operators more binding than others. For example, assuming the precedence (from most binding to least binding) 1. ¬  2. →  3. ∧  4. ∨, the formula

(((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s))

may be abbreviated as

p → q ∧ r → s ∨ ¬q ∧ ¬s

This is, however, only a convention used to simplify the written representation of a formula. If the precedence was assumed, for example, to be left-right associative, in the following order: 1. ¬  2. ∧  3. ∨  4. →, then the same formula above (without parentheses) would be rewritten as

(p → (q ∧ r)) → (s ∨ (¬q ∧ ¬s))
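
To make the effect of such conventions concrete, the following Python sketch parses the unparenthesized string under both precedence orders and prints the fully parenthesized reading. The function names and the encoding of precedence as numeric binding powers are illustrative choices, not part of any standard.

def parse(tokens, prec):
    """Parse a token list into a nested tuple, treating binary connectives as
    left-associative and giving ¬ the highest precedence."""
    pos = 0

    def parse_atom():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "¬":
            return ("¬", parse_atom())
        return tok                          # a propositional variable

    def parse_expr(min_power):
        nonlocal pos
        lhs = parse_atom()
        while pos < len(tokens) and prec.get(tokens[pos], -1) >= min_power:
            op = tokens[pos]; pos += 1
            rhs = parse_expr(prec[op] + 1)  # + 1 gives left associativity
            lhs = (op, lhs, rhs)
        return lhs

    return parse_expr(0)

def show(f):
    """Fully parenthesized string of a parse tree."""
    if isinstance(f, str):
        return f
    if f[0] == "¬":
        return "¬" + show(f[1])
    return "(" + show(f[1]) + f[0] + show(f[2]) + ")"

tokens = ["p", "→", "q", "∧", "r", "→", "s", "∨", "¬", "q", "∧", "¬", "s"]
# First convention: ¬, →, ∧, ∨ (most to least binding).
print(show(parse(tokens, {"→": 3, "∧": 2, "∨": 1})))   # (((p→q)∧(r→s))∨(¬q∧¬s))
# Second convention: ¬, ∧, ∨, →.
print(show(parse(tokens, {"∧": 3, "∨": 2, "→": 1})))   # ((p→(q∧r))→(s∨(¬q∧¬s)))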

Predicate logic

The definition of a formula in first-order logic is relative to the signature of the theory at hand. This signature specifies the constant symbols, predicate symbols, and function symbols of the theory, along with the arities of the function and predicate symbols.

The definition of a formula comes in several parts. First, the set of terms is defined recursively. Terms, informally, are expressions that represent objects from the domain of discourse.

  1. Any variable is a term.
  2. Any constant symbol from the signature is a term.
  3. An expression of the form f(t1, ..., tn), where f is an n-ary function symbol and t1, ..., tn are terms, is again a term.
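
These three clauses can likewise be mirrored by a recursive data type. The Python sketch below is illustrative only; the class names and the example signature (a binary function symbol f, a unary g, and a constant symbol c) are assumptions made here for the example.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Variable:            # clause 1: any variable is a term
    name: str

@dataclass(frozen=True)
class Constant:            # clause 2: any constant symbol of the signature is a term
    name: str

@dataclass(frozen=True)
class FunctionApp:         # clause 3: f(t1, ..., tn) for an n-ary function symbol f
    symbol: str
    args: Tuple["Term", ...]

Term = Union[Variable, Constant, FunctionApp]

def term_to_string(t: Term) -> str:
    """Render a term in the usual f(t1, ..., tn) notation."""
    if isinstance(t, (Variable, Constant)):
        return t.name
    return t.symbol + "(" + ", ".join(term_to_string(a) for a in t.args) + ")"

# The term f(x, g(c)), built from a variable x, a constant c,
# a unary function symbol g and a binary function symbol f:
print(term_to_string(FunctionApp("f", (Variable("x"), FunctionApp("g", (Constant("c"),))))))  # f(x, g(c))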

The next step is to define the atomic formulas.

  1. If t1 and t2 are terms, then t1 = t2 is an atomic formula.
  2. If R is an n-ary predicate symbol and t1, ..., tn are terms, then R(t1, ..., tn) is an atomic formula.

Finally, the set of formulas is defined to be the smallest set containing the set of atomic formulas such that the following holds:

  1. ¬φ is a formula when φ is a formula;
  2. (φ ∧ ψ), (φ ∨ ψ), (φ → ψ) and (φ ↔ ψ) are formulas when φ and ψ are formulas;
  3. ∀x φ is a formula when x is a variable and φ is a formula;
  4. ∃x φ is a formula when x is a variable and φ is a formula (alternatively, ∃x φ could be defined as an abbreviation for ¬∀x ¬φ).

If a formula has no occurrences of ∃x or ∀x, for any variable x, then it is called quantifier-free. An existential formula is a formula starting with a sequence of existential quantifiers followed by a quantifier-free formula.
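
Putting the pieces together, the following Python sketch mirrors the atomic formulas and the four formation clauses, and checks whether a formula is quantifier-free. It is illustrative only: the class names are not standard, and terms are kept as plain strings for brevity.

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Atom:                 # R(t1, ..., tn) or t1 = t2; terms kept as plain strings here
    predicate: str
    args: Tuple[str, ...]

@dataclass(frozen=True)
class Not:                  # ¬φ
    operand: "Formula"

@dataclass(frozen=True)
class Binary:               # (φ ∧ ψ), (φ ∨ ψ), (φ → ψ), (φ ↔ ψ)
    connective: str
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Quantified:           # ∀x φ or ∃x φ
    quantifier: str         # "∀" or "∃"
    variable: str
    body: "Formula"

Formula = Union[Atom, Not, Binary, Quantified]

def quantifier_free(f: Formula) -> bool:
    """True iff the formula contains no occurrence of ∀ or ∃."""
    if isinstance(f, Atom):
        return True
    if isinstance(f, Not):
        return quantifier_free(f.operand)
    if isinstance(f, Binary):
        return quantifier_free(f.left) and quantifier_free(f.right)
    return False            # a Quantified node contains a quantifier

# ∃x R(x, c) is not quantifier-free; its body R(x, c) is.
phi = Quantified("∃", "x", Atom("R", ("x", "c")))
print(quantifier_free(phi), quantifier_free(phi.body))   # False True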

Atomic and open formulas

An atomic formula is a formula that contains no logical connectives or quantifiers, or equivalently a formula that has no strict subformulas. The precise form of atomic formulas depends on the formal system under consideration; for propositional logic, for example, the atomic formulas are the propositional variables. For predicate logic, the atoms are predicate symbols together with their arguments, each argument being a term.

According to some terminology, an open formula is formed by combining atomic formulas using only logical connectives, to the exclusion of quantifiers. [15] This is not to be confused with a formula which is not closed.

Closed formulas

A closed formula, also called a ground formula or sentence, is a formula in which there are no free occurrences of any variable. If A is a formula of a first-order language in which the variables v1, …, vn have free occurrences, then A preceded by ∀v1 ⋯ ∀vn is a universal closure of A.
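
The notions of free occurrence, sentence, and universal closure can be computed recursively. The Python sketch below uses an ad-hoc nested-tuple encoding of formulas chosen here for brevity; the encoding and names are assumptions of this example, not a standard representation.

# Formulas are encoded as nested tuples:
#   ("atom", R, (v1, ..., vn))    atomic formula R(v1, ..., vn) with variable arguments
#   ("not", f), ("and", f, g)     connectives (the other binary connectives work the same way)
#   ("forall", x, f), ("exists", x, f)

def free_vars(f):
    """The set of variables with a free occurrence in f."""
    tag = f[0]
    if tag == "atom":
        return set(f[2])
    if tag == "not":
        return free_vars(f[1])
    if tag in ("and", "or", "implies", "iff"):
        return free_vars(f[1]) | free_vars(f[2])
    return free_vars(f[2]) - {f[1]}          # quantifiers bind their variable in the body

def is_sentence(f):
    """A closed formula (sentence) has no free occurrences of any variable."""
    return not free_vars(f)

def universal_closure(f):
    """Prefix ∀v for every free variable v of f (in some fixed order)."""
    for v in sorted(free_vars(f)):
        f = ("forall", v, f)
    return f

A = ("and", ("atom", "R", ("x", "y")), ("exists", "y", ("atom", "S", ("y",))))
print(free_vars(A))                        # {'x', 'y'}: the first y is free, the second is bound
print(is_sentence(universal_closure(A)))   # True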

Properties applicable to formulas

  1. A formula A in a language Q is valid if it is true for every interpretation of Q.
  2. A formula A in a language Q is satisfiable if it is true for some interpretation of Q.
  3. A formula A of the language of arithmetic is decidable if it represents a decidable set, i.e. if there is an effective method which, given a substitution of the free variables of A, says that either the resulting instance of A is provable or its negation is.

Usage of the terminology

In earlier works on mathematical logic (e.g. by Church [16] ), formulas referred to any strings of symbols and among these strings, well-formed formulas were the strings that followed the formation rules of (correct) formulas.

Several authors simply say formula. [17] [18] [19] [20] Modern usages (especially in the context of computer science with mathematical software such as model checkers, automated theorem provers, interactive theorem provers) tend to retain only the algebraic concept of a formula and to treat the question of well-formedness, i.e. the concrete string representation of formulas (using this or that symbol for connectives and quantifiers, using this or that parenthesizing convention, using Polish or infix notation, etc.), as a mere notational problem.

The expression "well-formed formulas" (WFF) also crept into popular culture. WFF is part of an esoteric pun used in the name of the academic game "WFF 'N PROOF: The Game of Modern Logic", by Layman Allen, [21] developed while he was at Yale Law School (he was later a professor at the University of Michigan). The suite of games is designed to teach the principles of symbolic logic to children (in Polish notation). [22] Its name is an echo of whiffenpoof, a nonsense word used as a cheer at Yale University made popular in The Whiffenpoof Song and The Whiffenpoofs. [23]

Notes

  1. Formulas are a standard topic in introductory logic, and are covered by all introductory textbooks, including Enderton (2001), Gamut (1990), and Kleene (1967)
  2. Gensler, Harry (2002-09-11). Introduction to Logic. Routledge. p. 35. ISBN   978-1-134-58880-0.
  3. Hall, Cordelia; O'Donnell, John (2013-04-17). Discrete Mathematics Using a Computer. Springer Science & Business Media. p. 44. ISBN   978-1-4471-3657-6.
  4. Agler, David W. (2013). Symbolic Logic: Syntax, Semantics, and Proof. Rowman & Littlefield. p. 41. ISBN   978-1-4422-1742-3.
  5. Simpson, R. L. (2008-03-17). Essentials of Symbolic Logic - Third Edition. Broadview Press. p. 14. ISBN   978-1-77048-495-5.
  6. Laderoute, Karl (2022-10-24). A Pocket Guide to Formal Logic. Broadview Press. p. 59. ISBN   978-1-77048-868-7.
  7. Maurer, Stephen B.; Ralston, Anthony (2005-01-21). Discrete Algorithmic Mathematics, Third Edition. CRC Press. p. 625. ISBN   978-1-56881-166-6.
  8. Martin, Robert M. (2002-05-06). The Philosopher's Dictionary - Third Edition. Broadview Press. p. 323. ISBN   978-1-77048-215-9.
  9. Date, Christopher (2008-10-14). The Relational Database Dictionary, Extended Edition. Apress. p. 211. ISBN   978-1-4302-1042-9.
  10. Date, C. J. (2015-12-21). The New Relational Database Dictionary: Terms, Concepts, and Examples. O'Reilly Media, Inc. p. 241. ISBN   978-1-4919-5171-2.
  11. Simpson, R. L. (1998-12-10). Essentials of Symbolic Logic. Broadview Press. p. 12. ISBN   978-1-55111-250-3.
  12. All sources supported "woof". The sources cited for "wiff", "weff", and "whiff" gave these pronunciations as alternatives to "woof". The Gensler source gives "wood" and "woofer" as examples of how to pronounce the vowel in "woof".
  13. W. Dean, S. Walsh, The Prehistory of the Subsystems of Second-order Arithmetic (2016), p. 6.
  14. Melvin Fitting, First-Order Logic and Automated Theorem Proving, Springer, 1996.
  15. Handbook of the History of Logic (Vol. 5: Logic from Russell to Church), "Tarski's logic" by Keith Simmons, D. Gabbay and J. Woods (eds.), p. 568.
  16. Alonzo Church (1996) [1944], Introduction to Mathematical Logic, p. 49.
  17. Hilbert, David; Ackermann, Wilhelm (1950) [1937], Principles of Mathematical Logic, New York: Chelsea
  18. Hodges, Wilfrid (1997), A shorter model theory, Cambridge University Press, ISBN   978-0-521-58713-6
  19. Barwise, Jon, ed. (1982), Handbook of Mathematical Logic, Studies in Logic and the Foundations of Mathematics, Amsterdam: North-Holland, ISBN   978-0-444-86388-1
  20. Cori, Rene; Lascar, Daniel (2000), Mathematical Logic: A Course with Exercises, Oxford University Press, ISBN   978-0-19-850048-3
  21. Ehrenburg 2002
  22. More technically, propositional logic using the Fitch-style calculus.
  23. Allen (1965) acknowledges the pun.

