Object theory

Object theory is a theory in mathematical logic concerning objects and the statements that can be made about objects.

In some cases "objects" can be concretely thought of as symbols and strings of symbols, here illustrated by the string "←←↑↓←→←↓" composed from the 4-symbol alphabet { ←, ↑, →, ↓ }. When they are "known only through the relationships of the system [in which they appear], the system is [said to be] abstract ... what the objects are, in any respect other than how they fit into the structure, is left unspecified." (Kleene 1952:25) A further specification of the objects results in a model or representation of the abstract system, "i.e. a system of objects which satisfy the relationships of the abstract system and have some further status as well" (ibid).

A system, in its general sense, is a collection of objects O = { o1, o2, ..., on, ... } and (a specification of) the relationship or relationships r1, r2, ..., rn between the objects.

Example: Given a simple system ( { ←, ↑, →, ↓ }, => ) with a very simple relationship between the objects, signified by the symbol =>: [1]
→ => ↑, ↑ => ←, ← => ↓, ↓ => →

A model of this system would occur when we assign, for example, the familiar natural numbers { 0, 1, 2, 3 } to the symbols { ←, ↑, →, ↓ }, i.e. in this manner: → = 0, ↑ = 1, ← = 2, ↓ = 3. Here the symbol => indicates the "successor function" (often written as an apostrophe ' to distinguish it from +) operating on a collection of only 4 objects, thus 0' = 1, 1' = 2, 2' = 3, 3' = 0.

Or, we might specify that => represents 90-degree counter-clockwise rotations of a simple object →.
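
That the assignment of numbers really is a model can be checked mechanically. The following is a minimal sketch in Python; the names rel, assignment, and successor_mod4 are inventions of this illustration, not part of the source:

    # The abstract relation => as a set of ordered pairs (see note 1).
    rel = {"→": "↑", "↑": "←", "←": "↓", "↓": "→"}

    # The proposed assignment of numbers to symbols: → = 0, ↑ = 1, ← = 2, ↓ = 3.
    assignment = {"→": 0, "↑": 1, "←": 2, "↓": 3}

    def successor_mod4(n):
        """The successor function ' on the collection { 0, 1, 2, 3 }."""
        return (n + 1) % 4

    # The assignment is a model: whenever a => b in the abstract system,
    # the successor of assignment[a] is assignment[b].
    assert all(successor_mod4(assignment[a]) == assignment[b]
               for a, b in rel.items())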

The genetic versus axiomatic method

The following is an example of the genetic or constructive method of making objects in a system, the other being the axiomatic or postulational method. Kleene states that a genetic method is intended to "generate" all the objects of the system and thereby "determine the abstract structure of the system completely" and uniquely (and thus define the system categorically). If axioms rather than a genetic method are used, an axiom-set that determines the structure of the system uniquely in this way is said to be categorical. [2]

Unlike the example above, the following creates an unbounded number of objects. The fact that O is a set, and □ is an element of O, and ■ is an operation, must be specified at the outset; this is being done in the language of the metatheory (see below):

Given the system ( O, □, ■ ): O = { □, ■□, ■■□, ■■■□, ■■■■□, ■■■■■□, ..., ■n□, etc. }
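
The genetic method can be pictured as a generator that produces each new object by concatenating one ■ onto the previous object. A minimal sketch in Python (the name generate_objects is an invention of this illustration):

    from itertools import islice

    def generate_objects():
        """Genetically generate the objects □, ■□, ■■□, ... without bound."""
        obj = "□"              # the basis object
        while True:
            yield obj
            obj = "■" + obj    # tack one more ■ onto the front

    # The first five objects of the system O:
    print(list(islice(generate_objects(), 5)))
    # ['□', '■□', '■■□', '■■■□', '■■■■□']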

Abbreviations

The object ■n□ demonstrates the use of "abbreviation", a way to simplify the notation for objects, and consequently discussion about them, once they have been created "officially". Done correctly, the definition would proceed as follows:

■□ ≡ ■1□, ■■□ ≡ ■2□, ■■■□ ≡ ■3□, etc., where the notions of ≡ ("defined as") and "number" are presupposed to be understood intuitively in the metatheory.
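
In a programming notation the same abbreviation becomes a function that expands ■n□ into its official form; a one-line sketch in Python (the name expand is an invention of this illustration):

    def expand(n):
        """Expand the abbreviation ■n□ into its official form: n copies of ■, then □."""
        return "■" * n + "□"

    assert expand(3) == "■■■□"   # i.e. ■■■□ ≡ ■3□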

Kurt Gödel in 1931 constructed virtually the entire proof of his incompleteness theorems (actually he proved Theorem VI and sketched a proof of Theorem XI) by use of this tactic, proceeding from his axioms using substitution, concatenation, and deduction by modus ponens to produce a collection of 45 "definitions" (more accurately, derivations or theorems) from the axioms.

A more familiar tactic is perhaps the design of named subroutines, e.g. in Excel the function "=INT(A1)", which returns to the cell where it is typed (e.g. cell B1) the number it finds in cell A1 rounded down to the nearest integer.
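
The analogous move in Python is a small named function that can thereafter be used in place of its definition; a sketch (the name int_part is an invention of this illustration):

    import math

    def int_part(x):
        """Like Excel's =INT(): round x down to the nearest integer."""
        return math.floor(x)

    assert int_part(3.7) == 3
    assert int_part(-3.7) == -4   # INT rounds toward negative infinity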

Models

A model of the above example is a left-ended Post–Turing machine tape with its fixed "head" located on the left-end square; the system's relation is equivalent to: "To the left end, tack on a new square □, right-shift the tape, then print ■ on the new square". Another model is the natural numbers as created by the "successor" function. Because the objects in the two systems e.g. ( □, ■□, ■■□, ■■■□ ... ) and (0, 0′, 0′′, 0′′′, ...) can be put into a 1-1 correspondence, the systems are said to be (simply) isomorphic (meaning "same shape"). Yet another isomorphic model is the little sequence of instructions for a counter machine e.g. "Do the following in sequence: (1) Dig a hole. (2) Into the hole, throw a pebble. (3) Go to step 2."
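
The claimed isomorphism can be exhibited as an explicit 1-1 correspondence between the tape objects and the numerals; a minimal sketch in Python, with the function names tape_object and numeral_object invented for this illustration:

    def tape_object(n):
        """nth object of the tape model: n marked squares, then the blank square."""
        return "■" * n + "□"

    def numeral_object(n):
        """nth object of the successor model: 0 followed by n strokes."""
        return "0" + "′" * n

    # The pairing of n marks with n strokes is one-to-one, and it preserves
    # the relation "apply the system's operation once":
    for n in range(4):
        assert tape_object(n + 1) == "■" + tape_object(n)        # the tape relation
        assert numeral_object(n + 1) == numeral_object(n) + "′"  # the successor relation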

As long as their objects can be placed in one-to-one correspondence ("while preserving the relationships"), models can be considered "equivalent" no matter how their objects are generated (e.g. genetically or axiomatically):

"Any two simply isomorphic systems constitute representations [models] of the same abstract system, which is obtained by abstracting from either of them, i.e. by leaving out of account all relationships and properties except the ones to be considered for the abstract system." (Kleene 1935:25)

Tacit assumptions, tacit knowledge

An alert reader may have noticed that writing symbols □, ■□, ■■□, ■■■□, etc. by concatenating a marked square, i.e. ■, to an existing string is different from writing the completed symbols one after another on a Turing-machine tape. Another entirely possible scenario would be to generate the symbol-strings one after another on different sections of tape e.g. after three symbols: ■■■□■■□■□□. The proof that these two possibilities are different is easy: they require different "programs". But in a sense both versions create the same objects; in the second case the objects are preserved on the tape. In the same way, if a person were to write 0, then erase it, write 1 in the same place, then erase it, write 2, erase it, ad infinitum, the person is generating the same objects as if they were writing down 0 1 2 3 ... writing one symbol after another to the right on the paper.
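
The point that two different "programs" can generate the same objects can be made concrete; a sketch in Python, assuming the two scenarios described above (both generator names are inventions of this illustration):

    def overwrite_version(limit):
        """Write an object, erase it, write the next one in the same place."""
        cell = None
        for n in range(limit):
            cell = "■" * n + "□"     # the previous content is lost
            yield cell

    def preserve_version(limit):
        """Write each new object to the right of the previous one; all are kept."""
        tape = ""
        for n in range(limit):
            obj = "■" * n + "□"
            tape += obj              # the objects accumulate on the tape
            yield obj

    # Different "programs", but the same objects are generated:
    assert list(overwrite_version(4)) == list(preserve_version(4))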

Once the step has been taken to write down the symbols 3 2 1 0 one after another on a piece of paper (writing the new symbol on the left this time), or writing ∫∫∫※∫∫※∫※※ in a similar manner, then putting them in 1-1 correspondence with the Turing-tape symbols seems obvious. Digging holes one after the other, starting with a hole at "the origin", then a hole to its left with one pebble in it, then a hole to its left with two pebbles in it, ad infinitum, raises practical questions, but in the abstract it too can be seen to be conducive to the same 1-1 correspondence.

However, nothing in particular in the definition of genetic versus axiomatic methods clears this up—these are issues to be discussed in the metatheory. The mathematician or scientist is to be held responsible for sloppy specifications. Breger cautions that axiomatic methods are susceptible to tacit knowledge, in particular, the sort that involves "know-how of a human being" (Breger 2000:227).

A formal system

In general, in mathematics a formal system or "formal theory" consists of "objects" in a structure: the symbols, and the (well-formed) strings of symbols built from them, that serve as the objects of the theory; certain undefined (primitive) objects; and axioms that specify the behavior of the objects. Each of these notions is expanded upon below.

Informal theory, object theory, and metatheory

A metatheory exists outside the formalized object theory—the meaningless symbols and relations and (well-formed-) strings of symbols. The metatheory comments on (describes, interprets, illustrates) these meaningless objects using "intuitive" notions and "ordinary language". Like the object theory, the metatheory should be disciplined, perhaps even quasi-formal itself, but in general the interpretations of objects and rules are intuitive rather than formal. Kleene requires that the methods of a metatheory (at least for the purposes of metamathematics) be finite, conceivable, and performable; these methods cannot appeal to the completed infinite. "Proofs of existence shall give, at least implicitly, a method for constructing the object which is being proved to exist." [3] (p. 64)

Kleene summarizes this as follows: "In the full picture there will be three separate and distinct "theories""

"(a) the informal theory of which the formal system constitutes a formalization
"(b) the formal system or object theory, and
"(c) the metatheory, in which the formal system is described and studied" (p. 65)

He goes on to say that object theory (b) is not a "theory" in the conventional sense, but rather is "a system of symbols and of objects built from symbols (described from (c))".

Expansion of the notion of formal system

Well-formed objects

If a collection of objects (symbols and symbol-sequences) is to be considered "well-formed", an algorithm must exist to determine, by halting with a "yes" or "no" answer, whether or not the object is well-formed (in mathematics a wff abbreviates well-formed formula). This algorithm, in the extreme, might require (or be) a Turing machine or Turing-equivalent machine that "parses" the symbol-string presented as "data" on its tape; before a universal Turing machine can execute an instruction on its tape, it must parse the symbols to determine the exact nature of the instruction and/or datum encoded there. In simpler cases a finite state machine or a pushdown automaton can do the job. Enderton describes the use of "trees" to determine whether or not a logic formula (in particular a string of symbols with parentheses) is well formed. [4] Alonzo Church 1936 [5] describes the construction of "formulas" (again: sequences of symbols) as written in his λ-calculus by use of a recursive description of how to start a formula and then build on the starting-symbol using concatenation and substitution.
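
For the simplest of these cases, whether a string of parentheses nests properly, a single counter suffices (the job is within reach of a pushdown automaton). A minimal sketch in Python, with the function name balanced invented for this illustration:

    def balanced(s):
        """Decide, halting with a yes/no answer, whether the parentheses in s nest properly."""
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1        # push
            elif ch == ")":
                if depth == 0:
                    return False  # a ")" with no matching "("
                depth -= 1        # pop
        return depth == 0         # every "(" must have been matched

    assert balanced("(()())") and not balanced("(()")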

Example: Church specified his λ-calculus as follows (the following is a simplified version that leaves out the notions of free and bound variables). This example shows how an object theory begins with a specification of an object system of symbols and relations (in particular by use of concatenation of symbols); a recognizer sketch follows the list:

(1) Declare the symbols: {, }, (, ), λ, [, ] plus an infinite number of variables a, b, c, ..., x, ...
(2) Define formula: a sequence of symbols
(3) Define the notion of "well-formed formula" (wff) recursively, starting with the "basis" (3.1):
  • (3.1) (basis) A variable x is a wff
  • (3.2) If F and X are wffs, then {F}(X) is a wff; if x occurs in F or X then it is said to be a variable in {F}(X).
  • (3.3) If M is well-formed and x occurs in M then λx[M] is a wff.
(4) Define various abbreviations:
  • {F}(X) abbreviates to F(X) if F is a single symbol
  • {{F}(X)}(Y) abbreviates to {F}(X, Y) or F(X, Y) if F is a single symbol
  • λx1λx2[...λxn[M]...] abbreviates to λx1x2...xn•M
  • λab•a(b) abbreviates to 1
  • λab•a(a(b)) abbreviates to 2, etc.
(5) Define the notion of "substitution" of formula N for variable x throughout M [6] (Church 1936)
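
As promised above, here is a minimal recognizer for rules (3.1)-(3.3) in Python, assuming single-character variables and ignoring the abbreviations of step (4); the names is_wff and parse are inventions of this sketch:

    VARIABLES = set("abcdefghijklmnopqrstuvwxyz")

    def is_wff(s):
        """Decide whether the string s is well formed under rules (3.1)-(3.3)."""
        ok, rest = parse(s)
        return ok and rest == ""

    def parse(s):
        """Try to parse one wff at the front of s; return (success, remaining input)."""
        if s[:1] in VARIABLES:                   # rule (3.1): a variable is a wff
            return True, s[1:]
        if s[:1] == "{":                         # rule (3.2): {F}(X)
            ok, rest = parse(s[1:])
            if ok and rest[:2] == "}(":
                ok2, rest2 = parse(rest[2:])
                if ok2 and rest2[:1] == ")":
                    return True, rest2[1:]
            return False, s
        if s[:1] == "λ" and s[1:2] in VARIABLES and s[2:3] == "[":
            ok, rest = parse(s[3:])              # rule (3.3): λx[M]
            body = s[3:len(s) - len(rest)]
            if ok and rest[:1] == "]" and s[1] in body:   # x must occur in M
                return True, rest[1:]
            return False, s
        return False, s

    assert is_wff("x")                # by (3.1)
    assert is_wff("{f}(x)")           # by (3.2)
    assert is_wff("λx[{f}(x)]")       # by (3.3)
    assert not is_wff("λx[y]")        # x does not occur in the body y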

Undefined (primitive) objects

Certain objects may be "undefined" or "primitive" and receive their definition (in terms of their behaviors) by the introduction of the axioms.

In the next example, the undefined symbols will be { ※, ↀ, ∫ }. The axioms will describe their behaviors.

Axioms

Kleene observes that the axioms are made up of two sets of symbols: (i) the undefined or primitive objects and (ii) those that are previously known. In the following example, it is previously known in the system ( O, ※, ↀ, ∫ ) that O constitutes a set of objects (the "domain"), ※ is an object in the domain, ↀ and ∫ are symbols for relations between the objects, => indicates the "IF THEN" logical operator, ε is the symbol that indicates "is an element of the set O", and "n" will be used to indicate an arbitrary element of the set of objects O.

After (i) a definition of "string S" (an object that is either the single symbol ※ or a concatenation of the symbols ※, ↀ and ∫), and (ii) a definition of "well-formed" strings ((basis) ※ is well-formed, and ↀS and ∫S are well-formed where S is any string), come the axioms:

  • ↀ※ => ※
  • ↀ∫n => n
  • ∫ↀn => n

So what might be the (intended) interpretation [7] of these symbols, definitions, and axioms?

If we define ※ as "0", ∫ as "successor", and ↀ as "predecessor", then ↀ※ => ※ indicates "proper subtraction" (sometimes designated by the symbol ∸, where "predecessor" subtracts a unit from a number, thus 0 ∸ 1 = 0). The string "ↀ∫n => n" indicates that if first the successor ∫ is applied to an arbitrary object n and then the predecessor ↀ is applied to ∫n, the original n results.

Is this set of axioms "adequate"? The proper answer would be a question: "Adequate to describe what, in particular?" "The axioms determine to which systems, defined from outside the theory, the theory applies." (Kleene 1952:27). In other words, the axioms may be sufficient for one system but not for another.

In fact, it is easy to see that this axiom set is not a very good one—in fact, it is inconsistent (that is, it yields inconsistent outcomes, no matter what its interpretation):

Example: Define ※ as 0, ∫※ as 1, and ↀ∫※ as 0. From the first axiom, ↀ※ => ※, i.e. ↀ※ = 0, so ∫ↀ※ = ∫0 = 1. But the last axiom specifies that for any arbitrary n, including ※ = 0, ∫ↀn => n; so this axiom stipulates that ∫ↀ※ => ※, i.e. ∫ↀ0 => 0, not 1.
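
The clash can be checked mechanically by evaluating well-formed strings under the intended interpretation; a minimal sketch in Python (the function evaluate and its conventions are inventions of this illustration):

    def evaluate(s):
        """Evaluate a well-formed string over { ※, ↀ, ∫ } under the intended
        interpretation: ※ = 0, ∫ = successor, ↀ = predecessor (proper subtraction)."""
        value = 0                          # the string ends in ※, interpreted as 0
        for symbol in reversed(s[:-1]):    # apply the operators innermost-first
            if symbol == "∫":
                value = value + 1          # successor
            elif symbol == "ↀ":
                value = max(value - 1, 0)  # predecessor, with 0 ∸ 1 = 0
        return value

    # The first axiom ↀ※ => ※ holds under this interpretation ...
    assert evaluate("ↀ※") == evaluate("※") == 0
    # ... but the last axiom ∫ↀn => n fails at n = ※:
    assert evaluate("∫ↀ※") == 1    # the axiom demands 0; the interpretation gives 1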

Observe also that the axiom set does not specify that ∫n ≠ n, nor that ↀn ≠ n (excepting the case n = ※). If we were to include these two axioms we would need to describe the intuitive notions "equals" (symbolized by =) and "not-equals" (symbolized by ≠).

Notes

  1. Abstractly, the relationship is defined by the collection of ordered pairs { ( →, ↑ ), ( ↑, ← ), ( ←, ↓ ), (↓, →) }
  2. Kleene 1952:26. This distinction between the constructive and axiomatic methods, and the words used to describe them, are Kleene's per his reference to Hilbert 1900.
  3. This is an intuitionist requirement: It formally proscribes the use of the law of excluded middle over infinite collections (sets) of objects.
  4. Enderton 2002:30
  5. Church 1936, reprinted in Davis 1965:88ff
  6. The substitution gets complicated and requires more information (e.g. definitions of "free-" and "bound-" variables and three varieties of substitution) than has been given in this brief example.
  7. Kleene defines the intended interpretation as "the meanings which are intended to be attached to the symbols, formulas, etc. of a given formal system, in consideration of the system as a formalization of an informal theory..." (p. 64)
