In programming languages, defunctionalization is a compile-time transformation which eliminates higher-order functions, replacing them by a single first-order apply function. The technique was first described by John C. Reynolds in his 1972 paper, "Definitional Interpreters for Higher-Order Programming Languages". Reynolds' observation was that a given program contains only finitely many function abstractions, so that each can be assigned and replaced by a unique identifier. Every function application within the program is then replaced by a call to the apply function with the function identifier as the first argument. The apply function's only job is to dispatch on this first argument, and then perform the instructions denoted by the function identifier on the remaining arguments.
One complication to this basic idea is that function abstractions may reference free variables. In such situations, defunctionalization must be preceded by closure conversion (lambda lifting), so that any free variables of a function abstraction are passed as extra arguments to apply. In addition, if closures are supported as first-class values, it becomes necessary to represent these captured bindings by creating data structures.
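For example, in a minimal Haskell sketch (the names Fn, AddN, and applyFn are invented for illustration), a function that captures a free variable becomes a constructor whose field carries the captured value:

-- Hypothetical source program: the closure \x -> x + n captures n.
data Fn = AddN Int           -- one constructor per abstraction; the field holds the captured binding

applyFn :: Fn -> Int -> Int
applyFn (AddN n) x = x + n   -- the free variable n arrives as a field, not from an enclosing scope

-- addFive corresponds to the source-program closure \x -> x + 5:
addFive :: Int -> Int
addFive = applyFn (AddN 5)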
Instead of having a single apply function dispatch on all function abstractions in a program, various kinds of control flow analysis (including simple distinctions based on arity or type signature) can be employed to determine which function(s) may be called at each function application site, and a specialized apply function may be referenced instead. Alternatively, the target language may support indirect calls through function pointers, which may be more efficient and extensible than a dispatch-based approach.
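As an illustrative sketch of such splitting (all names here are hypothetical), abstractions can be grouped by type signature, with each group getting its own datatype and its own specialized apply function:

-- Suppose a program contains the abstractions \x -> x + 1 and
-- \x -> x * 2 (of type Int -> Int) and \x -> x > 0 (of type Int -> Bool).
data FnII = Inc | Twice
data FnIB = Positive

applyII :: FnII -> Int -> Int
applyII Inc   x = x + 1
applyII Twice x = x * 2

applyIB :: FnIB -> Int -> Bool
applyIB Positive x = x > 0
-- Each call site now dispatches only over the abstractions that could
-- actually flow to it, rather than over all abstractions in the program.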
Besides its use as a compilation technique for higher-order functional languages, defunctionalization has been studied (particularly by Olivier Danvy and collaborators) as a way of mechanically transforming interpreters into abstract machines. Defunctionalization is also related to the technique from object-oriented programming of representing functions by function objects (as an alternative to closures).
This is an example given by Olivier Danvy, translated to Haskell:
Given the Tree datatype:
data Tree a = Leaf a | Node (Tree a) (Tree a)
We will defunctionalize the following program:
cons :: a -> [a] -> [a]
cons x xs = x : xs

o :: (b -> c) -> (a -> b) -> a -> c
o f g x = f (g x)

flatten :: Tree t -> [t]
flatten t = walk t []

walk :: Tree t -> [t] -> [t]
walk (Leaf x)     = cons x
walk (Node t1 t2) = o (walk t1) (walk t2)
We defunctionalize by replacing all higher-order functions (in this case, o is the only higher-order function) with a value of the Lam datatype, and instead of calling them directly, we introduce an apply function that interprets the datatype:
data Lam a = LamCons a | LamO (Lam a) (Lam a)

apply :: Lam a -> [a] -> [a]
apply (LamCons x)  xs = x : xs
apply (LamO f1 f2) xs = apply f1 (apply f2 xs)

cons_def :: a -> Lam a
cons_def x = LamCons x

o_def :: Lam a -> Lam a -> Lam a
o_def f1 f2 = LamO f1 f2

flatten_def :: Tree t -> [t]
flatten_def t = apply (walk_def t) []

walk_def :: Tree t -> Lam t
walk_def (Leaf x)     = cons_def x
walk_def (Node t1 t2) = o_def (walk_def t1) (walk_def t2)
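Applied to a sample tree, the defunctionalized program computes the same result as the original; apply re-enacts exactly the calls the higher-order version would have made:

flatten_def (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)))
  = apply (LamO (LamCons 1) (LamO (LamCons 2) (LamCons 3))) []
  = apply (LamCons 1) (apply (LamCons 2) (apply (LamCons 3) []))
  = 1 : 2 : 3 : []
  = [1, 2, 3]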
In mathematics and computer science, mutual recursion is a form of recursion where two mathematical or computational objects, such as functions or datatypes, are defined in terms of each other. Mutual recursion is very common in functional programming and in some problem domains, such as recursive descent parsers, where the datatypes are naturally mutually recursive.
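A standard Haskell illustration: each function (and each datatype) refers to the other in its definition:

-- Mutually recursive functions (assuming a non-negative argument):
isEven :: Int -> Bool
isEven 0 = True
isEven n = isOdd (n - 1)

isOdd :: Int -> Bool
isOdd 0 = False
isOdd n = isEven (n - 1)

-- Mutually recursive datatypes, as arise naturally in parse trees:
data RoseTree a = RoseNode a (Forest a)
data Forest a   = NilF | ConsF (RoseTree a) (Forest a)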
Standard ML (SML) is a general-purpose, modular, functional programming language with compile-time type checking and type inference. It is popular for writing compilers, for programming language research, and for developing theorem provers.
Generic programming is a style of computer programming in which algorithms are written in terms of data types to-be-specified-later that are then instantiated when needed for specific types provided as parameters. This approach, pioneered by the ML programming language in 1973, permits writing common functions or types that differ only in the set of types on which they operate when used, thus reducing duplicate code.
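A small Haskell sketch of the idea: the definition mentions only type parameters, and the concrete types are fixed at each use site:

-- One definition serves every pair type; a and b are specified later.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- At use sites the parameters are instantiated to concrete types:
--   swap (1 :: Int, "one")  :: (String, Int)
--   swap (True, 'c')        :: (Char, Bool)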
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: it takes one or more functions as arguments, or it returns a function as its result.
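Both criteria are illustrated by this Haskell sketch:

-- mapList takes a function as an argument:
mapList :: (a -> b) -> [a] -> [b]
mapList _ []       = []
mapList f (x : xs) = f x : mapList f xs

-- twice both takes a function and returns one:
twice :: (a -> a) -> (a -> a)
twice f = f . f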
In computer science, the treap and the randomized binary search tree are two closely related forms of binary search tree data structures that maintain a dynamic set of ordered keys and allow binary searches among the keys. After any sequence of insertions and deletions of keys, the shape of the tree is a random variable with the same probability distribution as a random binary tree; in particular, with high probability its height is proportional to the logarithm of the number of keys, so that each search, insertion, or deletion operation takes logarithmic time to perform.
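A minimal Haskell sketch of treap insertion (hypothetical: priorities are supplied explicitly here to keep the code pure, whereas a real treap draws them from a random number generator; deletion is omitted):

-- Each node carries a key and a priority; keys obey binary search
-- order, priorities obey max-heap order.
data Treap k = Tip | TNode (Treap k) k Int (Treap k)

insert :: Ord k => k -> Int -> Treap k -> Treap k
insert k p Tip = TNode Tip k p Tip
insert k p t@(TNode l k' p' r)
  | k < k'    = rebalance (TNode (insert k p l) k' p' r)
  | k > k'    = rebalance (TNode l k' p' (insert k p r))
  | otherwise = t  -- key already present
  where
    -- rotate a child with a higher priority above its parent
    rebalance (TNode (TNode ll lk lp lr) nk np nr)
      | lp > np = TNode ll lk lp (TNode lr nk np nr)  -- right rotation
    rebalance (TNode nl nk np (TNode rl rk rp rr))
      | rp > np = TNode (TNode nl nk np rl) rk rp rr  -- left rotation
    rebalance n = n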
In computer programming, especially functional programming and type theory, an algebraic data type (ADT) is a kind of composite type, i.e., a type formed by combining other types.
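For example, in Haskell, a type combining two alternatives, each of which is a product of fields:

-- Shape is a sum of two products: a Circle (one field) or a
-- Rectangle (two fields).
data Shape = Circle Double | Rectangle Double Double

area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h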
System F is a typed lambda calculus that introduces, to simply typed lambda calculus, a mechanism of universal quantification over types. System F formalizes parametric polymorphism in programming languages, thus forming a theoretical basis for languages such as Haskell and ML. It was discovered independently by logician Jean-Yves Girard (1972) and computer scientist John C. Reynolds.
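In Haskell, System F's universal quantification surfaces as explicit forall types (the RankNTypes extension exposes quantifiers beyond what Hindley-Milner inference allows):

{-# LANGUAGE RankNTypes #-}

-- The System F identity, written Λa. λ(x : a). x:
identity :: forall a. a -> a
identity x = x

-- A rank-2 type: the argument itself must be polymorphic, so it
-- can be used at two different types in one body.
useAtTwoTypes :: (forall a. a -> a) -> (Int, Bool)
useAtTwoTypes f = (f 1, f True)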
In computer science, a programming language is said to have first-class functions if it treats functions as first-class citizens. This means the language supports passing functions as arguments to other functions, returning them as the values from other functions, and assigning them to variables or storing them in data structures. Some programming language theorists require support for anonymous functions as well. In languages with first-class functions, the names of functions do not have any special status; they are treated like ordinary variables with a function type. The term was coined by Christopher Strachey in the context of "functions as first-class citizens" in the mid-1960s.
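A Haskell sketch exercising each capability in turn:

double :: Int -> Int
double = \x -> 2 * x               -- a function bound to an ordinary name

pipeline :: [Int -> Int]
pipeline = [double, (+ 1), (* 3)]  -- functions stored in a data structure

applyAll :: [Int -> Int] -> Int -> Int
applyAll fs x = foldr ($) x fs     -- functions passed as arguments

adder :: Int -> (Int -> Int)
adder n = \x -> x + n              -- a function returned as a value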
In computer science, function composition is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole.
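In Haskell this is the built-in operator (.), which can be defined directly:

-- (compose f g) x first applies g, then passes its result to f.
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g = \x -> f (g x)

-- counting words by composing two simple functions:
countWords :: String -> Int
countWords = length `compose` words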
In computing, a meta-circular evaluator (MCE) or meta-circular interpreter (MCI) is an interpreter which defines each feature of the interpreted language using a similar facility of the interpreter's host language. For example, interpreting a lambda application may be implemented using function application. Meta-circular evaluation is most prominent in the context of Lisp. A self-interpreter is a meta-circular interpreter where the interpreted language is nearly identical to the host language; the two terms are often used synonymously.
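A minimal Haskell sketch of the idea (names are illustrative): object-language functions are represented by host-language functions, so interpreting a lambda application reduces to a host application:

data Expr = Var String | Lam String Expr | App Expr Expr

-- Object-language functions are host-language functions.
data Value = VFun (Value -> Value)

type Env = [(String, Value)]

interp :: Env -> Expr -> Value
interp env (Var x)   = case lookup x env of
                         Just v  -> v
                         Nothing -> error ("unbound variable: " ++ x)
interp env (Lam x b) = VFun (\v -> interp ((x, v) : env) b)
interp env (App f a) = case interp env f of
                         VFun g -> g (interp env a)  -- a host-level application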
In computer science, weight-balanced binary trees (WBTs) are a type of self-balancing binary search tree that can be used to implement dynamic sets, dictionaries (maps) and sequences. These trees were introduced by Nievergelt and Reingold in the 1970s as trees of bounded balance, or BB[α] trees. Their more common name is due to Knuth.
In functional programming, fold refers to a family of higher-order functions that analyze a recursive data structure and through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure's hierarchy, using the function in a systematic way.
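For instance, Haskell's foldr over lists: the combining function replaces each (:) and the default value replaces []:

foldrList :: (a -> b -> b) -> b -> [a] -> b
foldrList _ z []       = z
foldrList f z (x : xs) = f x (foldrList f z xs)

-- foldrList f z [a, b, c] = f a (f b (f c z)); summing a list:
total :: [Int] -> Int
total = foldrList (+) 0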
In functional programming, a generalized algebraic data type (GADT) is a generalization of parametric algebraic data types.
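In Haskell (with the GADTs extension), each constructor may refine the result type, which lets an evaluator return differently typed results:

{-# LANGUAGE GADTs #-}

data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines a: matching IntLit proves a ~ Int.
evalT :: Expr a -> a
evalT (IntLit n)  = n
evalT (BoolLit b) = b
evalT (Add x y)   = evalT x + evalT y
evalT (If c t e)  = if evalT c then evalT t else evalT e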
In computer programming, an anonymous function is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types.
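In Haskell, a lambda expression passed straight to a higher-order function, never bound to a name:

squares :: [Int]
squares = map (\x -> x * x) [1, 2, 3]  -- evaluates to [1, 4, 9]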
In computer science, Peter Landin's J operator is a programming construct that post-composes a lambda expression with the continuation to the current lambda-context. The resulting “function” is first-class and can be passed on to subsequent functions, where if applied it will return its result to the continuation of the function in which it was created.
In programming language semantics, normalisation by evaluation (NBE) is a method of obtaining the normal form of terms in the λ-calculus by appealing to their denotational semantics. A term is first interpreted into a denotational model of the λ-term structure, and then a canonical (β-normal and η-long) representative is extracted by reifying the denotation. Such an essentially semantic, reduction-free, approach differs from the more traditional syntactic, reduction-based, description of normalisation as reductions in a term rewrite system where β-reductions are allowed deep inside λ-terms.
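A minimal untyped sketch in Haskell (β-normalization only, assuming closed terms; the full technique also η-expands): eval interprets terms into a semantic domain where functions are host functions, and reify extracts normal forms, tracking binder depth with de Bruijn levels:

-- Terms use de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term

-- Semantic domain: functions are host functions; stuck terms are
-- neutral (a variable applied to arguments).
data Sem = SLam (Sem -> Sem) | SVar Int | SApp Sem Sem

eval :: [Sem] -> Term -> Sem
eval env (Var i)   = env !! i
eval env (Lam t)   = SLam (\v -> eval (v : env) t)
eval env (App t u) = app (eval env t) (eval env u)
  where app (SLam f) v = f v
        app n        v = SApp n v

-- Reification: read a semantic value back as a normal-form term,
-- converting de Bruijn levels to indices.
reify :: Int -> Sem -> Term
reify d (SLam f)   = Lam (reify (d + 1) (f (SVar d)))
reify d (SVar k)   = Var (d - k - 1)
reify d (SApp n v) = App (reify d n) (reify d v)

normalize :: Term -> Term
normalize t = reify 0 (eval [] t)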
Irvine Dataflow (Id) is a general-purpose parallel programming language, started at the University of California at Irvine in 1975 by Arvind and K. P. Gostelow. Arvind continued work with Id at MIT into the 1990s.
In mathematical logic, a term denotes a mathematical object while a formula denotes a mathematical fact. In particular, terms appear as components of a formula. This is analogous to natural language, where a noun phrase refers to an object and a whole sentence refers to a fact.
In theoretical computer science, the Krivine machine is an abstract machine. As an abstract machine, it shares features with Turing machines and the SECD machine. The Krivine machine explains how to compute a recursive function. More specifically, it aims to define rigorously the head normal form reduction of a lambda term using call-by-name reduction. Thanks to its formalism, it describes in detail how a kind of reduction works, and it sets a theoretical foundation for the operational semantics of functional programming languages. The Krivine machine implements call by name because it evaluates the function part of a β-redex before applying it to its argument: in an expression (λx. t) u, it first evaluates λx. t before applying it to u. In functional programming, this means that to evaluate a function applied to an argument, the machine first evaluates the function, without evaluating the argument beforehand.
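A compact Haskell sketch of the machine (using de Bruijn indices; a state is a closure plus a stack of pending, unevaluated arguments):

data Term = Var Int | Lam Term | App Term Term

-- A closure pairs a term with the environment it was built in.
data Closure = Closure Term [Closure]

-- Reduce a closed term to weak head normal form, call by name.
krivine :: Closure -> [Closure] -> Closure
krivine (Closure (Var i) env)   stack = krivine (env !! i) stack
krivine (Closure (App t u) env) stack =
  krivine (Closure t env) (Closure u env : stack)  -- push the argument unevaluated
krivine (Closure (Lam t) env) (c : stack) =
  krivine (Closure t (c : env)) stack              -- bind it, still unevaluated
krivine c [] = c                                   -- a lambda with no arguments left: done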
A CEK machine is an abstract machine invented by Matthias Felleisen and Daniel P. Friedman that implements left-to-right call by value. It is generally implemented as an interpreter for functional programming languages, but can also be used to implement simple imperative programming languages. A state in a CEK machine includes a control statement, environment and continuation. The control statement is the term being evaluated at that moment, the environment is (usually) a map from variable names to values, and the continuation stores another state, or a special halt case. It is a simplified form of another abstract machine called the SECD machine.
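A Haskell sketch of the machine (names are illustrative): the three state components are the control term, the environment, and the continuation, and evaluation proceeds left to right, call by value:

data Term = Var String | Lam String Term | App Term Term

type Env = [(String, Value)]
data Value = Closure String Term Env

-- The continuation: halt, evaluate an operand next, or call a function.
data Kont = Halt
          | Arg Term Env Kont  -- the operand still to evaluate
          | Fun Value Kont     -- the operator, already a value

cek :: Term -> Env -> Kont -> Value
cek (Var x)   env k = continue k (lookupVar x env)
cek (Lam x b) env k = continue k (Closure x b env)
cek (App t u) env k = cek t env (Arg u env k)  -- operator first: left to right

continue :: Kont -> Value -> Value
continue Halt                      v = v
continue (Arg u env k)             v = cek u env (Fun v k)       -- now evaluate the operand
continue (Fun (Closure x b env) k) v = cek b ((x, v) : env) k    -- call by value: v is already evaluated

lookupVar :: String -> Env -> Value
lookupVar x env = case lookup x env of
  Just v  -> v
  Nothing -> error ("unbound variable: " ++ x)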