In computing, a meta-circular evaluator (MCE) or meta-circular interpreter (MCI) is an interpreter which defines each feature of the interpreted language using a similar facility of the interpreter's host language. For example, interpreting a lambda application may be implemented using function application. [1] Meta-circular evaluation is most prominent in the context of Lisp. [1] [2] A self-interpreter is a meta-circular interpreter where the interpreted language is nearly identical to the host language; the two terms are often used synonymously. [3]
The dissertation of Corrado Böhm [4] describes the design of a self-hosting compiler. [5] Due to the difficulty of compiling higher-order functions, many languages were instead defined via interpreters, most prominently Lisp. [1] [6] The term "meta-circular" itself was coined by John C. Reynolds, [1] and popularized through its use in the book Structure and Interpretation of Computer Programs. [3] [7]
A self-interpreter is a meta-circular interpreter where the host language is also the language being interpreted. [8] A self-interpreter displays a universal function for the language in question, and can be helpful in learning certain aspects of the language. [2] However, a self-interpreter provides a circular, vacuous definition of most language constructs and thus offers little insight into the interpreted language's semantics, for example its evaluation strategy. Addressing these issues produces the more general notion of a "definitional interpreter". [1]
This part is based on Section 3.2.4 of Danvy's thesis. [9]
Here is the core of a self-evaluator for the λ-calculus. The abstract syntax of the calculus is implemented as follows in OCaml, representing variables with their de Bruijn index, i.e., with their lexical offset (starting from 0):
type term = IND of int          (* de Bruijn index *)
          | ABS of term
          | APP of term * term
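For illustration (the following bindings are examples added here, not part of the cited development), a few familiar terms read as follows in this representation:

(* Example encodings of familiar λ-terms. *)
let id = ABS (IND 0)            (* λx. x *)
let k  = ABS (ABS (IND 1))      (* λx. λy. x: x skips the intervening binder λy, hence index 1 *)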
The evaluator uses an environment:
type value = FUN of (value -> value)

let rec eval (t : term) (e : value list) : value =
  match t with
  | IND n -> List.nth e n
  | ABS t' -> FUN (fun v -> eval t' (v :: e))
  | APP (t0, t1) -> apply (eval t0 e) (eval t1 e)
and apply (FUN f : value) (a : value) = f a

let main (t : term) : value = eval t []
Values (of type value) conflate expressible values (the result of evaluating an expression in an environment) and denotable values (the values denoted by variables in the environment), a terminology that is due to Christopher Strachey. [10] [11]
Environments are represented as lists of denotable values.
The core evaluator has three clauses: one for variables, which are looked up in the environment by their de Bruijn index; one for abstractions, which evaluate to functions that extend the environment with the value of their argument when applied; and one for applications, which evaluate both subterms and apply the resulting function value to the resulting argument value.
This evaluator is compositional in that each of its recursive calls is made over a proper sub-part of the given term. It is also higher order since the domain of values is a function space.
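As a minimal usage sketch (the example term is chosen here for illustration, reusing the encodings above), evaluating the identity applied to itself yields a FUN value wrapping an OCaml function:

(* Evaluating (λx. x) (λx. x) in the empty environment. *)
let () =
  match main (APP (id, id)) with
  | FUN _ -> print_endline "evaluated to a function value"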
In "Definitional Interpreters", Reynolds answered the question as to whether such a self-interpreter is well defined. He answered in the negative because the evaluation strategy of the defined language (the source language) is determined by the evaluation strategy of the defining language (the meta-language). If the meta-language follows call by value (as OCaml does), the source language follows call by value. If the meta-language follows call by name (as Algol 60 does), the source language follows call by name. And if the meta-language follows call by need (as Haskell does), the source language follows call by need.
In "Definitional Interpreters", Reynolds made a self-interpreter well defined by making it independent of the evaluation strategy of its defining language. He fixed the evaluation strategy by transforming the self-interpreter into continuation-passing style, which is evaluation-strategy independent, as later captured in Gordon Plotkin's Independence Theorems. [12]
Furthermore, because logical relations had yet to be discovered, Reynolds made the resulting continuation-passing evaluator first order by (1) closure-converting it and (2) defunctionalizing the continuation.
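A sketch of the resulting first-order evaluator, under the same assumptions and again with hypothetical names: closure conversion replaces OCaml function values by closures (a term paired with its environment), and defunctionalization replaces the continuations by a data type together with a function that interprets it:

(* First-order sketch: closures instead of host functions, and a data type
   of defunctionalized continuations interpreted by the continue function. *)
type mvalue = CLO of term * mvalue list        (* closure: code paired with its environment *)

type cont =
  | HALT                                       (* initial continuation *)
  | ARG of term * mvalue list * cont           (* the argument remains to be evaluated *)
  | CALL of mvalue * cont                      (* the function value awaits its argument *)

let rec meval (t : term) (e : mvalue list) (k : cont) : mvalue =
  match t with
  | IND n -> continue k (List.nth e n)
  | ABS t' -> continue k (CLO (t', e))
  | APP (t0, t1) -> meval t0 e (ARG (t1, e, k))

and continue (k : cont) (v : mvalue) : mvalue =
  match k with
  | HALT -> v
  | ARG (t1, e, k') -> meval t1 e (CALL (v, k'))
  | CALL (CLO (t', e), k') -> meval t' (v :: e) k'

let mmain (t : term) : mvalue = meval t [] HALT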
He pointed out the "machine-like quality" of the resulting interpreter, which is the origin of the CEK machine since Reynolds's CPS transformation was for call by value. [13] For call by name, these transformations map the self-interpreter to an early instance of the Krivine machine. [14] The SECD machine and many other abstract machines can be inter-derived this way. [15] [16]
It is remarkable that the three most famous abstract machines for the λ-calculus functionally correspond to the same self-interpreter.
Total functional programming languages that are strongly normalizing cannot be Turing complete, otherwise one could solve the halting problem by seeing if the program type-checks. That means that there are computable functions that cannot be defined in the total language. [17] In particular it is impossible to define a self-interpreter in a total programming language, for example in any of the typed lambda calculi such as the simply typed lambda calculus, Jean-Yves Girard's System F, or Thierry Coquand's calculus of constructions. [18] [19] Here, by "self-interpreter" we mean a program that takes a source term representation in some plain format (such as a string of characters) and returns a representation of the corresponding normalized term.

This impossibility result does not hold for other definitions of "self-interpreter". For example, some authors have referred to functions of type ∀α. ⌜α⌝ → α as self-interpreters, where ⌜α⌝ is the type of representations of α-typed terms. To avoid confusion, we will refer to these functions as self-recognizers. Brown and Palsberg showed that self-recognizers could be defined in several strongly-normalizing languages, including System F and System Fω. [20] This turned out to be possible because having the types of encoded terms reflected in the types of their representations prevents the construction of a diagonal argument.

In their paper, Brown and Palsberg claim to disprove the "conventional wisdom" that self-interpretation is impossible (and they refer to Wikipedia as an example of the conventional wisdom), but what they actually disprove is the impossibility of self-recognizers, a distinct concept. In their follow-up work, they switch to the more specific "self-recognizer" terminology used here, notably distinguishing these from "self-evaluators", of type ∀α. ⌜α⌝ → ⌜α⌝. [21] They also recognize that implementing self-evaluation seems harder than self-recognition, and leave the implementation of the former in a strongly-normalizing language as an open problem.
In combination with an existing language implementation, meta-circular interpreters provide a baseline system from which to extend a language, either upwards by adding more features or downwards by compiling away features rather than interpreting them. [22] They are also useful for writing tools that are tightly integrated with the programming language, such as sophisticated debuggers.[ citation needed ] A language designed with a meta-circular implementation in mind is often more suited for building languages in general, even ones completely different from the host language.[ citation needed ]
Many languages have one or more meta-circular implementations. A partial list follows.
Some languages with a meta-circular implementation designed from the bottom up, in grouped chronological order:
Some languages with a meta-circular implementation via third parties:
In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
Lisp is a family of programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in the late 1950s, it is the second-oldest high-level programming language still in common use, after Fortran. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best-known general-purpose Lisp dialects are Common Lisp, Scheme, Racket, and Clojure.
Lambda calculus is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940.
Scheme is a dialect of the Lisp family of programming languages. Scheme was created during the 1970s at the MIT Computer Science and Artificial Intelligence Laboratory and released by its developers, Guy L. Steele and Gerald Jay Sussman, via a series of memos now known as the Lambda Papers. It was the first dialect of Lisp to choose lexical scope and the first to require implementations to perform tail-call optimization, giving stronger support for functional programming and associated techniques such as recursive algorithms. It was also one of the first programming languages to support first-class continuations. It had a significant influence on the effort that led to the development of Common Lisp.
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system. Type theory is the academic study of type systems.
In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:
In combinatory logic for computer science, a fixed-point combinator is a higher-order function that returns some fixed point of its argument function, if one exists.
In computer programming, a type system is a logical system comprising a set of rules that assigns a property called a type to every term. Usually the terms are various language constructs of a computer program, such as variables, expressions, functions, or modules. A type system dictates the operations that can be performed on a term. For variables, the type system determines the allowed values of that term.
ISWIM is an abstract computer programming language devised by Peter Landin and first described in his article "The Next 700 Programming Languages", published in the Communications of the ACM in 1966.
In type theory, a typing rule is an inference rule that describes how a type system assigns a type to a syntactic construction. These rules may be applied by the type system to determine if a program is well-typed and what type expressions have. A prototypical example of the use of typing rules is in defining type inference in the simply typed lambda calculus, which is the internal language of Cartesian closed categories.
In computer science, a continuation is an abstract representation of the control state of a computer program. A continuation implements (reifies) the program control state, i.e. the continuation is a data structure that represents the computational process at a given point in the process's execution; the created data structure can be accessed by the programming language, instead of being hidden in the runtime environment. Continuations are useful for encoding other control mechanisms in programming languages such as exceptions, generators, coroutines, and so on.
In functional programming, continuation-passing style (CPS) is a style of programming in which control is passed explicitly in the form of a continuation. This is contrasted with direct style, which is the usual style of programming. Gerald Jay Sussman and Guy L. Steele, Jr. coined the phrase in AI Memo 349 (1975), which sets out the first version of the Scheme programming language. John C. Reynolds gives a detailed account of the numerous discoveries of continuations.
The simply typed lambda calculus, a form of type theory, is a typed interpretation of the lambda calculus with only one type constructor that builds function types. It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical use of the untyped lambda calculus.
Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of formal languages known as programming languages. Programming language theory is closely related to other fields including mathematics, software engineering, and linguistics. There are a number of academic conferences and journals in the area.
In programming language semantics, normalisation by evaluation (NBE) is a method of obtaining the normal form of terms in the λ-calculus by appealing to their denotational semantics. A term is first interpreted into a denotational model of the λ-term structure, and then a canonical (β-normal and η-long) representative is extracted by reifying the denotation. Such an essentially semantic, reduction-free, approach differs from the more traditional syntactic, reduction-based, description of normalisation as reductions in a term rewrite system where β-reductions are allowed deep inside λ-terms.
In type theory, an intersection type can be allocated to values that can be assigned both the type σ and the type τ. This value can be given the intersection type σ ∩ τ in an intersection type system. Generally, if the ranges of values of two types overlap, then a value belonging to the intersection of the two ranges can be assigned the intersection type of these two types. Such a value can be safely passed as argument to functions expecting either of the two types. For example, in Java the class Boolean implements both the Serializable and the Comparable interfaces. Therefore, an object of type Boolean can be safely passed to functions expecting an argument of type Serializable and to functions expecting an argument of type Comparable.
A Hindley–Milner (HM) type system is a classical type system for the lambda calculus with parametric polymorphism. It is also known as Damas–Milner or Damas–Hindley–Milner. It was first described by J. Roger Hindley and later rediscovered by Robin Milner. Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis.
In theoretical computer science, the Krivine machine is an abstract machine. As an abstract machine, it shares features with Turing machines and the SECD machine. The Krivine machine explains how to compute a recursive function. More specifically it aims to define rigorously head normal form reduction of a lambda term using call-by-name reduction. Thanks to its formalism, it describes in detail how a kind of reduction works and sets the theoretical foundation of the operational semantics of functional programming languages. On the other hand, the Krivine machine implements call by name because it evaluates the body of a β-redex before it applies the body to its parameter. In other words, in an expression (λx. t) u it evaluates first λx. t before applying it to u. In functional programming, this would mean that in order to evaluate a function applied to a parameter, it evaluates first the function before applying it to the parameter.
A CEK Machine is an abstract machine invented by Matthias Felleisen and Daniel P. Friedman that implements left-to-right call by value. It is generally implemented as an interpreter for functional programming languages, but can also be used to implement simple imperative programming languages. A state in a CEK machine includes a control statement, environment and continuation. The control statement is the term being evaluated at that moment, the environment is (usually) a map from variable names to values, and the continuation stores another state, or a special halt case. It is a simplified form of another abstract machine called the SECD machine.
In computer science, refocusing is a program transformation used to implement a reduction semantics—i.e., a small-step operational semantics with an explicit representation of the reduction context—more efficiently. It is a step towards implementing a deterministic semantics as a deterministic abstract machine.