Free variables and bound variables

In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer science, a variable may be said to be either free or bound. Some older books use the terms real variable and apparent variable for free variable and bound variable, respectively. A free variable is a notation (symbol) that specifies places in an expression where substitution may take place and is not a parameter of this or any container expression. The idea is related to a placeholder (a symbol that will later be replaced by some value), or a wildcard character that stands for an unspecified symbol.

In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function. The term non-local variable is often a synonym in this context.
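As a small sketch (the function names make_scaler and scale are invented for this illustration), the following Python snippet shows a variable that is free in an inner function because it is neither a local variable nor a parameter of that function:

```python
def make_scaler(factor):
    # Inside 'scale', the name 'factor' is a free (non-local) variable:
    # it is neither a local variable nor a parameter of 'scale' itself,
    # but is supplied by the enclosing function.
    def scale(x):  # 'x' is a parameter, so it is not free in 'scale'
        return factor * x
    return scale

double = make_scaler(2)
print(double(21))  # 42
```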

An instance of a variable symbol is bound, in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as, "... where n is a positive integer"). A variable symbol overall is bound if at least one occurrence of it is bound. [1] pp. 142–143 Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound, [1] p. 78 hence "free" and "bound" are at first defined for occurrences and then generalized over all occurrences of the variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value be a truth value or the numerical result of a calculation, or, more generally, an element of an image set of a function.

While the domain of discourse in many contexts is understood, when an explicit range of values for the bound variable has not been given, it may be necessary to specify the domain in order to properly evaluate the expression. For example, consider the following expression in which both variables are bound by logical quantifiers:

\forall y\,\exists x\,\left(x={\sqrt {y}}\right)

This expression evaluates to false if the domain of x and y is the real numbers, but true if the domain is the complex numbers.

The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis. [2] p.17

Examples

Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps make these two concepts clearer than the definition would:

In the expression

\sum _{k=1}^{10}f(k,n)

n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called k on which it could depend.
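A corresponding sketch in Python (f and the values chosen here are hypothetical): the bound variable k becomes the loop variable of a generator expression, while the free variable n must be supplied from the enclosing scope:

```python
def f(k, n):
    # A made-up summand, purely for illustration.
    return k * n

n = 5  # the free variable: the value of the sum depends on it
total = sum(f(k, n) for k in range(1, 11))  # k is bound by the generator expression
print(total)  # 275 = (1 + 2 + ... + 10) * 5
```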

In the expression

\int _{0}^{\infty }x^{y-1}e^{-x}\,dx

y is a free variable and x is a bound variable; consequently the value of this expression depends on the value of y, but there is nothing called x on which it could depend.

In the expression

\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}

x is a free variable and h is a bound variable; consequently the value of this expression depends on the value of x, but there is nothing called h on which it could depend.

In the expression

\forall x\ \exists y\ \left[\varphi (x,y,z)\right]

z is a free variable and x and y are bound variables, associated with logical quantifiers; consequently the logical value of this expression depends on the value of z, but there is nothing called x or y on which it could depend.

More widely, in most proofs, bound variables are used. For example, the following proof shows that all squares of positive even integers are divisible by 4:

Let n be a positive even integer. Then there is an integer k such that n = 2k. Since n^2 = 4k^2, we have n^2 divisible by 4.

Here, not only k but also n have been used as bound variables in the proof as a whole.

Variable-binding operators

The following

\sum _{x\in S},\quad \prod _{x\in S},\quad \int _{0}^{\infty }\cdots \,dx,\quad \lim _{x\to 0},\quad \forall x,\quad \exists x

are some common variable-binding operators. Each of them binds the variable x for some set S.

Many of these are operators which act on functions of the bound variable. In more complicated contexts, such notations can become awkward and confusing. It can be useful to switch to notations which make the binding explicit, such as

\sum _{1}^{10}\left(k\mapsto f(k,n)\right)

for sums or

D\left(x\mapsto x^{2}+2x+1\right)

for differentiation.
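As a rough Python analogue (the helper names summation and derivative are invented, and the derivative is a numerical approximation rather than symbolic differentiation), the binding can be made explicit by passing a function of the bound variable to a higher-order operator:

```python
def summation(lo, hi, f):
    # Sum f(k) for k = lo, ..., hi; k is bound by the function passed in.
    return sum(f(k) for k in range(lo, hi + 1))

def derivative(f, x, h=1e-6):
    # Numerically approximate the derivative of f at x (symmetric difference quotient).
    return (f(x + h) - f(x - h)) / (2 * h)

n = 3
print(summation(1, 10, lambda k: k * n))        # n stays free; k is bound by the lambda
print(derivative(lambda x: x**2 + 2*x + 1, 5))  # approximately 12
```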

Formal explanation

[Figure: tree summarizing the syntax of the expression \forall x\,((\exists y\,A(x))\vee B(z)).]

Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science. In all cases, however, they are purely syntactic properties of expressions and variables in them. For this section we can summarize syntax by identifying an expression with a tree whose leaf nodes are variables, constants, function constants or predicate constants and whose non-leaf nodes are logical operators. This expression can then be determined by doing an inorder traversal of the tree. Variable-binding operators are logical operators that occur in almost every formal language. A binding operator Q takes two arguments: a variable v and an expression P, and when applied to its arguments produces a new expression Q(v, P). The meaning of binding operators is supplied by the semantics of the language and does not concern us here.

Variable binding relates three things: a variable v, a location a for that variable in an expression and a non-leaf node n of the form Q(v, P). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding occurs when that location is below the node n.

In the lambda calculus, x is a bound variable in the term M = λx. T and a free variable in the term T. We say x is bound in M and free in T. If T contains a subterm λx. U then x is rebound in this term. This nested, inner binding of x is said to "shadow" the outer binding. Occurrences of x in U are free occurrences of the new x. [3]
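As a sketch of the syntactic side of this, the following Python snippet (a minimal, hypothetical lambda-term representation, not a standard library) computes the free variables of a term; subtracting the binder's variable at each abstraction is exactly what makes an inner λx shadow an outer one:

```python
from dataclasses import dataclass

# Minimal lambda-term syntax: variables, abstractions (binders) and applications.
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def free_vars(term):
    """Return the set of variable names occurring free in `term`."""
    if isinstance(term, Var):
        return {term.name}
    if isinstance(term, Lam):
        # The binder removes its own variable; this is also what makes an
        # inner binder shadow an outer one with the same name.
        return free_vars(term.body) - {term.param}
    if isinstance(term, App):
        return free_vars(term.fn) | free_vars(term.arg)
    raise TypeError(f"not a term: {term!r}")

T = App(Var("x"), Var("y"))       # T = x y: both x and y are free
M = Lam("x", T)                   # M = λx. x y: x is bound, y remains free
S = Lam("x", Lam("x", Var("x")))  # λx. λx. x: the inner λx shadows the outer one

print(free_vars(T))  # {'x', 'y'}
print(free_vars(M))  # {'y'}
print(free_vars(S))  # set(): a closed term (no free variables)
```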

Variables bound at the top level of a program are technically free variables within the terms to which they are bound but are often treated specially because they can be compiled as fixed addresses. Similarly, an identifier bound to a recursive function is also technically a free variable within its own body but is treated specially.

A closed term is one containing no free variables.

Function expressions

To give an example from mathematics, consider an expression which defines a function

(x_{1},\ldots ,x_{n})\mapsto t

where t is an expression. t may contain some, all or none of the x_1, …, x_n and it may contain other variables. In this case we say that the function definition binds the variables x_1, …, x_n.

In this manner, function definition expressions of the kind shown above can be thought of as the variable-binding operator, analogous to the lambda expressions of lambda calculus. Other binding operators, like the summation sign, can be thought of as higher-order functions applying to a function. So, for example, the expression

\sum _{x\in S}x^{2}

could be treated as a notation for

\sum _{S}\left(x\mapsto x^{2}\right)

where \sum_S is an operator with two parameters: a one-parameter function, and a set to evaluate that function over. The other operators listed above can be expressed in similar ways; for example, the universal quantifier can be thought of as an operator that evaluates to the logical conjunction of the Boolean-valued function P applied over the (possibly infinite) set S.
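A minimal Python sketch of the same idea (the names Sum and Forall are invented for this illustration): the operator takes a set and a one-parameter function, so the binding is carried entirely by the function argument:

```python
def Sum(S, f):
    # Higher-order summation: apply the one-parameter function f to each
    # element of the set S and add the results.
    return sum(f(x) for x in S)

def Forall(S, P):
    # Universal quantification as the conjunction of the Boolean-valued
    # function P over the set S.
    return all(P(x) for x in S)

S = {1, 2, 3, 4}
print(Sum(S, lambda x: x**2))      # 30, the analogue of "sum of x^2 for x in S"
print(Forall(S, lambda x: x > 0))  # True
```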

Natural language

When analyzed in formal semantics, natural languages can be seen to have free and bound variables. In English, personal pronouns like he, she, they, etc. can act as free variables.

Lisa found her book.

In the sentence above, the possessive pronoun her is a free variable. It may refer to the previously mentioned Lisa or to any other female. In other words, her book could be referring to Lisa's book (an instance of coreference) or to a book that belongs to a different female (e.g. Jane's book). The referent of her can be established according to the situational (i.e. pragmatic) context. The identity of the referent can be shown using coindexing subscripts, where i indicates one referent and j indicates a second referent (different from i). Thus, the sentence Lisa found her book has the following interpretations:

Lisa_i found her_i book. (interpretation #1: her = of Lisa)
Lisa_i found her_j book. (interpretation #2: her = of a female that is not Lisa)

The distinction is not purely of academic interest, as some languages do actually have different forms for her_i and her_j: for example, Norwegian and Swedish translate coreferent her_i as sin and noncoreferent her_j as hennes.

English does allow specifying coreference, but it is optional, as both interpretations of the previous example are valid (the ungrammatical interpretation is indicated with an asterisk):

Lisa_i found her_i own book. (interpretation #1: her = of Lisa)
*Lisa_i found her_j own book. (interpretation #2: her = of a female that is not Lisa)

However, reflexive pronouns, such as himself, herself, themselves, etc., and reciprocal pronouns, such as each other, act as bound variables. In a sentence like the following:

Jane hurt herself.

the reflexive herself can only refer to the previously mentioned antecedent, in this case Jane, and can never refer to a different female person. In this example, the variable herself is bound to the noun Jane that occurs in subject position. Indicating the coindexation, the first interpretation with Jane and herself coindexed is permissible, but the other interpretation where they are not coindexed is ungrammatical:

Jane_i hurt herself_i. (interpretation #1: herself = Jane)
*Jane_i hurt herself_j. (interpretation #2: herself = a female that is not Jane)

The coreference binding can be represented using a lambda expression as mentioned in the previous Formal explanation section. The sentence with the reflexive could be represented as

(λx. x hurt x) Jane

in which Jane is the subject referent argument and λx. x hurt x is the predicate function (a lambda abstraction), with the lambda notation and x indicating that both the semantic subject and the semantic object of the sentence are bound. This returns the semantic interpretation JANE hurt JANE, with JANE being the same person.
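A minimal Python rendering of this lambda abstraction (hurt here is a made-up stand-in for the semantic predicate):

```python
def hurt(agent, patient):
    # Made-up stand-in for the semantic predicate; returns a readable interpretation.
    return f"{agent.upper()} hurt {patient.upper()}"

# (λx. x hurt x) applied to Jane: the single bound variable x supplies both
# the semantic subject and the semantic object.
print((lambda x: hurt(x, x))("Jane"))  # JANE hurt JANE
```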

Pronouns can also behave in a different way. In the sentence below

Ashley hit her.

the pronoun her can only refer to a female that is not Ashley. This means that it can never have a reflexive meaning equivalent to Ashley hit herself. The grammatical and ungrammatical interpretations are:

*Ashley_i hit her_i. (interpretation #1: her = Ashley)
Ashley_i hit her_j. (interpretation #2: her = a female that is not Ashley)

The first interpretation is impossible. Only the second interpretation is permitted by the grammar.

Thus, it can be seen that reflexives and reciprocals are bound variables (known technically as anaphors), while true pronouns are free variables in some grammatical structures but variables that cannot be bound in other grammatical structures. The binding phenomena found in natural languages were particularly important to the syntactic government and binding theory (see also: Binding (linguistics)).


References

  1. W. V. O. Quine, Mathematical Logic (1981). Harvard University Press, ISBN 0-674-55451-5.
  2. Robert S. Wolf, A Tour through Mathematical Logic (2005). ISBN 978-0-88385-036-7.
  3. Thompson 1991, p. 33.
