In the lambda calculus, a term is in beta normal form if no beta reduction is possible. [1] A term is in beta-eta normal form if neither a beta reduction nor an eta reduction is possible. A term is in head normal form if there is no beta redex in head position. The normal form of a term, if one exists, is unique (as a corollary of the Church–Rosser theorem). [2] However, a term may have more than one head normal form.
In the lambda calculus, a beta redex is a term of the form: [3] [4]
(λx. A) M
A redex is in head position in a term t, if t has the following shape (note that application has higher priority than abstraction, and that the formula below is meant to be a lambda-abstraction, not an application):
λx₁ … λxₙ. (λx. A) M₁ M₂ … Mₘ, where n ≥ 0 and m ≥ 1.
A beta reduction is an application of the following rewrite rule to a beta redex contained in a term:
(λx. A) M → A[x := M]
where A[x := M] is the result of substituting the term M for the variable x in the term A.
A head beta reduction is a beta reduction applied in head position, that is, of the following form:
λx₁ … λxₙ. (λx. A) M₁ M₂ … Mₘ → λx₁ … λxₙ. A[x := M₁] M₂ … Mₘ, where n ≥ 0 and m ≥ 1.
Any other reduction is an internal beta reduction.
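The definitions above can be made concrete with a small Haskell sketch; the names Term, freeVars, freshName, subst and betaStep below are illustrative choices, not notation from the text.

-- A minimal sketch of the definitions above.
data Term
  = Var String            -- a variable
  | Lam String Term       -- a lambda abstraction  \x. A
  | App Term Term         -- an application        F A
  deriving (Show, Eq)

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (Lam x b) = filter (/= x) (freeVars b)
freeVars (App f a) = freeVars f ++ freeVars a

-- Pick a name not occurring in the given list (by priming).
freshName :: String -> [String] -> String
freshName y used
  | y `elem` used = freshName (y ++ "'") used
  | otherwise     = y

-- Capture-avoiding substitution: subst x m a computes A[x := M].
subst :: String -> Term -> Term -> Term
subst x m (Var y)
  | y == x    = m
  | otherwise = Var y
subst x m (App f a) = App (subst x m f) (subst x m a)
subst x m (Lam y b)
  | y == x              = Lam y b                  -- x is shadowed: nothing to substitute
  | y `elem` freeVars m = Lam y' (subst x m b')    -- rename the binder to avoid capture
  | otherwise           = Lam y (subst x m b)
  where
    y' = freshName y (x : freeVars m ++ freeVars b)
    b' = subst y (Var y') b

-- The beta rule applied to a top-level redex:  (\x. A) M  ->  A[x := M].
betaStep :: Term -> Maybe Term
betaStep (App (Lam x a) m) = Just (subst x m a)
betaStep _                 = Nothing

For instance, betaStep (App (Lam "x" (Var "x")) (Var "y")) yields Just (Var "y"), i.e. (λx. x) y reduces to y.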
A normal form is a term that does not contain any beta redex, [3] [5] i.e. one that cannot be further reduced. A head normal form is a term that does not contain a beta redex in head position, i.e. one that cannot be further reduced by a head reduction. When considering the simple lambda calculus (viz. without the addition of constant or function symbols meant to be reduced by additional delta rules), head normal forms are the terms of the following shape:
λx₁ … λxₙ. x M₁ M₂ … Mₘ, where x is a variable, n ≥ 0 and m ≥ 0.
A head normal form is not always a normal form, [5] because the arguments M₁, …, Mₘ need not be in normal form. However, the converse is true: any normal form is also a head normal form. [5] In fact, the normal forms are exactly the head normal forms in which the arguments M₁, …, Mₘ are themselves normal forms. This gives an inductive syntactic description of normal forms.
There is also the notion of weak head normal form: a term in weak head normal form is either a term in head normal form or a lambda abstraction. [6] This means that a redex may still appear inside the body of the abstraction; weak head reduction does not evaluate under a lambda.
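These three syntactic characterizations can be phrased as predicates on the Term type from the sketch above (the predicate names are again illustrative, not standard library functions):

-- Head normal form:  \x1 ... xn. x M1 ... Mm   (n >= 0, m >= 0, x a variable).
-- The arguments M1 ... Mm may still contain redexes.
isHeadNormal :: Term -> Bool
isHeadNormal (Lam _ b) = isHeadNormal b
isHeadNormal t         = isVarApplication t
  where
    isVarApplication (Var _)   = True
    isVarApplication (App f _) = isVarApplication f
    isVarApplication (Lam _ _) = False

-- Normal form: a head normal form whose arguments are themselves normal forms.
isNormal :: Term -> Bool
isNormal (Lam _ b) = isNormal b
isNormal t         = spineNormal t
  where
    spineNormal (Var _)   = True
    spineNormal (App f a) = spineNormal f && isNormal a
    spineNormal (Lam _ _) = False   -- an abstraction applied to something is a redex

-- Weak head normal form: any abstraction, or a head normal form.
isWeakHeadNormal :: Term -> Bool
isWeakHeadNormal (Lam _ _) = True
isWeakHeadNormal t         = isHeadNormal t

For example, λx. x ((λy. y) z) is a head normal form but not a normal form, while λx. (λy. y) z is a weak head normal form but not a head normal form; the predicates above agree with this classification.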
In general, a given term can contain several redexes, so several different beta reductions could be applied. We may specify a strategy to choose which redex to reduce next. Two common strategies are normal-order reduction, which always reduces the leftmost outermost redex first, and applicative-order reduction, which always reduces the leftmost innermost redex first, i.e. which fully reduces the arguments of a function before substituting them.
Normal-order reduction is complete, in the sense that if a term has a head normal form, then normal-order reduction will eventually reach it. By the syntactic description of normal forms above, this entails the same statement for a "fully" normal form (this is the standardization theorem). By contrast, applicative order reduction may not terminate, even when the term has a normal form. For example, using applicative order reduction, the following sequence of reductions is possible:
(λx. z) ((λw. w w w) (λw. w w w))
→ (λx. z) ((λw. w w w) (λw. w w w) (λw. w w w))
→ (λx. z) ((λw. w w w) (λw. w w w) (λw. w w w) (λw. w w w))
→ …
But using normal-order reduction, the same starting point reduces quickly to normal form:
(λx. z) ((λw. w w w) (λw. w w w))
→ z
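As a sketch of how the two strategies differ operationally, the following Haskell fragment (reusing Term, subst and betaStep from the earlier sketch; normalStep and normalize are illustrative names) implements one leftmost-outermost step and iterates it with a step bound:

-- Normal-order step: reduce the leftmost outermost redex, if any.
normalStep :: Term -> Maybe Term
normalStep t@(App (Lam _ _) _) = betaStep t             -- outermost redex first
normalStep (App f a) =
  case normalStep f of
    Just f' -> Just (App f' a)                          -- then inside the function part
    Nothing -> App f <$> normalStep a                   -- then inside the argument
normalStep (Lam x b) = Lam x <$> normalStep b           -- finally under the binder
normalStep (Var _)   = Nothing

-- Iterate normal-order steps with a step bound, since reduction may diverge.
normalize :: Int -> Term -> Maybe Term
normalize 0 _ = Nothing
normalize n t = maybe (Just t) (normalize (n - 1)) (normalStep t)

-- The example from the text: (\x. z) ((\w. w w w) (\w. w w w)).
omegaLike, example :: Term
omegaLike = App w3 w3
  where w3 = Lam "w" (App (App (Var "w") (Var "w")) (Var "w"))
example = App (Lam "x" (Var "z")) omegaLike

-- ghci> normalize 10 example
-- Just (Var "z")

An applicative-order variant that insisted on normalizing the argument before firing the outer redex would loop forever on this term, mirroring the divergent sequence shown above.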
Sinot's director strings are one method by which the computational complexity of beta reduction can be improved.