Formal proof


In logic and mathematics, a formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. It differs from a natural language argument in that it is rigorous, unambiguous, and mechanically verifiable. [1] If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system. The notion of theorem is not in general effective; therefore, there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concepts of Fitch-style proof, sequent calculus and natural deduction are generalizations of the concept of proof. [2] [3]


The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must be the result of applying a rule of the deductive apparatus (of some formal system) to the previous well-formed formulas in the proof sequence.
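For example, taking p → q and p as assumptions, the following three-line sequence is a formal proof of q in any system whose deductive apparatus includes modus ponens (an illustrative toy derivation, not tied to a particular calculus):

  1. p → q    (assumption)
  2. p        (assumption)
  3. q        (from 1 and 2 by modus ponens)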

Formal proofs are often constructed with the help of computers in interactive theorem proving (e.g., through the use of proof checkers and automated theorem provers). [4] Significantly, these proofs can also be checked automatically, by computer. Checking formal proofs is usually simple, whereas the problem of finding proofs (automated theorem proving) is usually computationally intractable and/or only semi-decidable, depending upon the formal system in use.
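As a minimal sketch of what such a machine-checked proof looks like, here is a small theorem in the Lean 4 proof assistant (the statement and proof are illustrative, not drawn from the article); the proof script is written by a human, often interactively, and Lean's kernel then verifies it mechanically:

  -- Illustrative statement; Lean's kernel checks the proof automatically.
  theorem swap_and (p q : Prop) : p ∧ q → q ∧ p := by
    intro h                    -- assume a proof h of p ∧ q
    exact ⟨h.right, h.left⟩    -- reassemble it as a proof of q ∧ p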

Background

Formal language

A formal language is a set of finite sequences of symbols. Such a language can be defined without reference to any meanings of any of its expressions; it can exist before any interpretation is assigned to it, that is, before it has any meaning. Formal proofs are expressed in some formal language.
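As a minimal sketch, with an invented two-symbol alphabet, a formal language can be modelled in Lean 4 as a set of finite symbol strings, with no meaning attached to them:

  -- Illustrative alphabet with two symbols; the names are invented for this sketch.
  inductive Sym
    | a
    | b

  -- Strings over the alphabet are finite sequences (lists) of symbols.
  abbrev Str := List Sym

  -- An example formal language: the strings that begin with the symbol `a`.
  -- Membership depends only on the shape of the string, not on any meaning.
  def inLanguage : Str → Bool
    | Sym.a :: _ => true
    | _          => false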

Formal grammar

A formal grammar (also called formation rules) is a precise description of the well-formed formulas of a formal language. It determines the set of strings over the alphabet of the formal language that constitute well-formed formulas. However, it does not describe their semantics (i.e. what they mean).
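As a minimal sketch, the formation rules of a small propositional language can be given as an inductive definition in Lean 4 (constructor names are illustrative):

  -- Well-formed formulas of a toy propositional language, built from
  -- numbered atoms by negation, conjunction and implication.
  inductive Formula
    | atom : Nat → Formula                -- atomic propositions p₀, p₁, p₂, …
    | not  : Formula → Formula            -- if φ is a formula, so is ¬φ
    | and  : Formula → Formula → Formula  -- if φ, ψ are formulas, so is (φ ∧ ψ)
    | imp  : Formula → Formula → Formula  -- if φ, ψ are formulas, so is (φ → ψ)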

Formal systems

A formal system (also called a logical calculus, or a logical system) consists of a formal language together with a deductive apparatus (also called a deductive system). The deductive apparatus may consist of a set of transformation rules (also called inference rules) or a set of axioms, or have both. A formal system is used to derive one expression from one or more other expressions.
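Continuing the Formula sketch above, a deductive apparatus can be modelled as an inductively defined derivability relation; in this sketch the axioms are left abstract and modus ponens is the only inference rule (an illustration, not a standard presentation):

  -- Derivability over the Formula type from the previous sketch:
  -- a formula is derivable if it is an axiom, or follows by modus ponens.
  inductive Derivable (axioms : Formula → Prop) : Formula → Prop
    | ax {φ}   : axioms φ → Derivable axioms φ
    | mp {φ ψ} : Derivable axioms (Formula.imp φ ψ) →
                 Derivable axioms φ → Derivable axioms ψ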

Interpretations

An interpretation of a formal system is the assignment of meanings to the symbols, and truth values to the sentences of a formal system. The study of interpretations is called formal semantics. Giving an interpretation is synonymous with constructing a model.
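As a minimal sketch, an interpretation of the toy propositional language above is a valuation of the atoms, extended to all formulas; note that nothing about this evaluation was needed to state the formation or inference rules:

  -- A valuation v assigns a truth value to each atom; eval extends it
  -- to every well-formed formula of the Formula sketch above.
  def eval (v : Nat → Bool) : Formula → Bool
    | Formula.atom n  => v n
    | Formula.not φ   => !(eval v φ)
    | Formula.and φ ψ => eval v φ && eval v ψ
    | Formula.imp φ ψ => !(eval v φ) || eval v ψ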


Related Research Articles

Formal language: Sequence of words formed by specific rules

In logic, mathematics, computer science, and linguistics, a formal language consists of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules.

First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
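Written symbolically, one common rendering of the article's example is:

  ∃x (x = Socrates ∧ Man(x))

where ∃ is the existential quantifier and x is a variable ranging over the domain of discourse.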

Gödel's completeness theorem: Fundamental theorem in mathematical logic

Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.
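In symbols, for a set of first-order sentences Γ and a sentence φ, the theorem states that

  Γ ⊨ φ   if and only if   Γ ⊢ φ

i.e. φ is true in every model of Γ exactly when φ is derivable from Γ.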

Propositional calculus is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions and relations between propositions, including the construction of arguments based on them. Compound propositions are formed by connecting propositions by logical connectives. Propositions that contain no logical connectives are called atomic propositions.

In logic, more precisely in deductive reasoning, an argument is sound if it is both valid in form and its premises are true. Soundness also has a related meaning in mathematical logic, wherein logical systems are sound if and only if every formula that can be proved in the system is logically valid with respect to the semantics of the system.
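In symbols, the mathematical-logic sense of soundness says: if Γ ⊢ φ then Γ ⊨ φ; that is, every formula derivable from Γ holds in every model of Γ.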

Theorem: In mathematics, a statement that has been proved

In mathematics, a theorem is a statement that has been proved, or can be proved. The proof of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems.

Curry's paradox is a paradox in which an arbitrary claim F is proved from the mere existence of a sentence C that says of itself "If C, then F", requiring only a few apparently innocuous logical deduction rules. Since F is arbitrary, any logic having these rules allows one to prove everything. The paradox may be expressed in natural language and in various logics, including certain forms of set theory, lambda calculus, and combinatory logic.
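A sketch of the derivation (illustrative, using only the kinds of rules the paradox requires): assume a sentence C with C ↔ (C → F). Then:

  1. C → (C → F)    (from the definition of C)
  2. C → F          (from 1 by contraction)
  3. (C → F) → C    (from the definition of C)
  4. C              (from 2 and 3 by modus ponens)
  5. F              (from 2 and 4 by modus ponens)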

Proof theory is a major branch of mathematical logic that represents proofs as formal mathematical objects, facilitating their analysis by mathematical techniques. Proofs are typically presented as inductively-defined data structures such as lists, boxed lists, or trees, which are constructed according to the axioms and rules of inference of the logical system. Consequently, proof theory is syntactic in nature, in contrast to model theory, which is semantic in nature.

In the philosophy of logic, a rule of inference, inference rule or transformation rule is a logical form consisting of a function which takes premises, analyzes their syntax, and returns a conclusion. For example, the rule of inference called modus ponens takes two premises, one in the form "If p then q" and another in the form "p", and returns the conclusion "q". The rule is valid with respect to the semantics of classical logic, in the sense that if the premises are true, then so is the conclusion.

In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.

In mathematical logic, a sequent is a very general kind of conditional assertion.
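For example, a sequent is commonly written

  A₁, …, Aₙ ⊢ B₁, …, Bₘ

and, in classical sequent calculus, is read: if all of A₁, …, Aₙ hold, then at least one of B₁, …, Bₘ holds.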

In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory.

Metalogic is the study of the metatheory of logic. Whereas logic studies how logical systems can be used to construct valid and sound arguments, metalogic studies the properties of logical systems. Logic concerns the truths that may be derived using a logical system; metalogic concerns the truths that may be derived about the languages and systems that are used to express truths.

A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules. These rules, which are used for carrying out the inference of theorems from axioms, are the logical calculus of the formal system. A formal system is essentially an "axiomatic system".

Syntax (logic): Rules used for constructing or transforming the symbols and words of a language

In logic, syntax is anything having to do with formal languages or formal systems without regard to any interpretation or meaning given to them. Syntax is concerned with the rules used for constructing or transforming the symbols and words of a language, as contrasted with the semantics of a language, which is concerned with its meaning.

In mathematical logic, a theory is a set of sentences in a formal language. In most scenarios, a deductive system is first understood from context, after which an element of a deductively closed theory is then called a theorem of the theory. In many deductive systems there is a subset that is called "the set of axioms" of the theory, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.

In logic, a metatheorem is a statement about a formal system proven in a metalanguage. Unlike theorems proved within a given formal system, a metatheorem is proved within a metatheory, and may reference concepts that are present in the metatheory but not the object theory.

Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics, and to a lesser extent of computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify correct and incorrect inferences. Logicians study the criteria for the evaluation of arguments.

In mathematical logic, formation rules are rules for describing which strings of symbols formed from the alphabet of a formal language are syntactically valid within the language. These rules address only the location and manipulation of the strings of the language; they do not describe anything else about a language, such as its semantics.

References

  1. Kassios, Yannis (February 20, 2009). "Formal Proof" (PDF). cs.utoronto.ca. Retrieved 2019-12-12.
  2. The Cambridge Dictionary of Philosophy, s.v. "deduction".
  3. Barwise, Jon; Etchemendy, John (1999). Language, Proof and Logic (1st ed.). Seven Bridges Press and CSLI.
  4. Harrison, John (December 2008). "Formal Proof—Theory and Practice" (PDF). ams.org. Retrieved 2019-12-12.