Operational semantics

Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics). Operational semantics are classified into two categories: structural operational semantics (or small-step semantics) formally describes how the individual steps of a computation take place in a computer-based system, whereas natural semantics (or big-step semantics) describes how the overall results of executions are obtained. Other approaches to providing a formal semantics of programming languages include axiomatic semantics and denotational semantics.

Overview

The operational semantics for a programming language describes how a valid program is interpreted as sequences of computational steps. These sequences then are the meaning of the program. In the context of functional programming, the final step in a terminating sequence returns the value of the program. (In general there can be many return values for a single program, because the program could be nondeterministic, and even for a deterministic program there can be many computation sequences since the semantics may not specify exactly what sequence of operations arrives at that value.)
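
As a small illustration (the expression and notation here are chosen for exposition rather than taken from a particular language definition), a terminating computation sequence for an arithmetic program can be pictured as a chain of configurations, each obtained from the previous one by a single computational step:

    ⟨(1 + 2) * 3, σ⟩ → ⟨3 * 3, σ⟩ → ⟨9, σ⟩

Each configuration pairs the program that remains to be run with the current state σ; the last configuration carries the program's value, 9.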

Perhaps the first formal incarnation of operational semantics was the use of the lambda calculus to define the semantics of Lisp. [1] Abstract machines in the tradition of the SECD machine are also closely related.

History

The concept of operational semantics was used for the first time in defining the semantics of Algol 68. The following statement is a quote from the revised ALGOL 68 report:

The meaning of a program in the strict language is explained in terms of a hypothetical computer which performs the set of actions that constitute the elaboration of that program. (Algol68, Section 2)

The first use of the term "operational semantics" in its present meaning is attributed to Dana Scott (Plotkin04). What follows is a quote from Scott's seminal paper on formal semantics, in which he mentions the "operational" aspects of semantics.

It is all very well to aim for a more ‘abstract’ and a ‘cleaner’ approach to semantics, but if the plan is to be any good, the operational aspects cannot be completely ignored. (Scott70)

Approaches

Gordon Plotkin introduced structural operational semantics, Matthias Felleisen and Robert Hieb reduction semantics, [2] and Gilles Kahn natural semantics.

Small-step semantics

Structural operational semantics

Structural operational semantics (SOS, also called structured operational semantics or small-step semantics) was introduced by Gordon Plotkin in (Plotkin81) as a logical means to define operational semantics. The basic idea behind SOS is to define the behavior of a program in terms of the behavior of its parts, thus providing a structural, i.e., syntax-oriented and inductive, view on operational semantics. An SOS specification defines the behavior of a program in terms of a (set of) transition relation(s). SOS specifications take the form of a set of inference rules that define the valid transitions of a composite piece of syntax in terms of the transitions of its components.

For a simple example, we consider part of the semantics of a simple programming language; proper illustrations are given in Plotkin81 and Hennessy90, and other textbooks. Let C range over programs of the language, and let σ range over states (e.g. functions from memory locations to values). If we have expressions (ranged over by E), values (V) and locations (L), then a memory update command L := E would have semantics:

    ⟨E, σ⟩ ⇒ V
    ─────────────────────────
    ⟨L := E, σ⟩ → σ[L ↦ V]

Informally, the rule says that "if the expression E in state σ reduces to value V, then the program L := E will update the state σ with the assignment L = V".

The semantics of sequencing can be given by the following three rules:

    ⟨C₁, σ⟩ → σ′
    ────────────────────────
    ⟨C₁ ; C₂, σ⟩ → ⟨C₂, σ′⟩

    ⟨C₁, σ⟩ → ⟨C₁′, σ′⟩
    ──────────────────────────────
    ⟨C₁ ; C₂, σ⟩ → ⟨C₁′ ; C₂, σ′⟩

    ⟨skip, σ⟩ → σ

Informally, the first rule says that, if program C₁ in state σ finishes in state σ′, then the program C₁ ; C₂ in state σ will reduce to the program C₂ in state σ′. (You can think of this as formalizing "You can run C₁, and then run C₂ using the resulting memory store.") The second rule says that if the program C₁ in state σ can reduce to the program C₁′ with state σ′, then the program C₁ ; C₂ in state σ will reduce to the program C₁′ ; C₂ in state σ′. (You can think of this as formalizing the principle for an optimizing compiler: "You are allowed to transform C₁ as if it were stand-alone, even if it is just the first part of a program.") The third rule says that the empty program skip leaves the state unchanged. The semantics is structural, because the meaning of the sequential program C₁ ; C₂ is defined by the meaning of C₁ and the meaning of C₂.

If we also have Boolean expressions over the state, ranged over by B, then we can define the semantics of the while command:

    ⟨while B do C, σ⟩ → ⟨if B then (C ; while B do C) else skip, σ⟩

Such a definition allows formal analysis of the behavior of programs, permitting the study of relations between programs. Important relations include simulation preorders and bisimulation. These are especially useful in the context of concurrency theory.
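
Rules like these translate almost directly into an executable interpreter. The following is a minimal sketch in Haskell of a small-step interpreter for the toy language above; the constructor names, the Map-based store, and the direct evaluation of expressions and Boolean tests are illustrative assumptions, not part of any particular published definition.

    -- Minimal small-step (SOS-style) interpreter sketch for the toy language above.
    -- All names and representation choices here are assumptions made for illustration.
    import qualified Data.Map as Map

    type Loc   = String
    type State = Map.Map Loc Int          -- sigma: memory locations to values

    data Expr = Lit Int | Get Loc | Add Expr Expr                      -- expressions E
    data BExp = Leq Expr Expr                                          -- Boolean expressions B
    data Com  = Skip | Assign Loc Expr | Seq Com Com | While BExp Com  -- commands C

    -- Expressions and Boolean tests are evaluated in one go, to keep the sketch short.
    evalE :: State -> Expr -> Int
    evalE _ (Lit n)   = n
    evalE s (Get l)   = Map.findWithDefault 0 l s
    evalE s (Add a b) = evalE s a + evalE s b

    evalB :: State -> BExp -> Bool
    evalB s (Leq a b) = evalE s a <= evalE s b

    -- One small step of a command: either a final state or a new configuration.
    step :: (Com, State) -> Either State (Com, State)
    step (Skip, s)       = Left s                                      -- skip rule
    step (Assign l e, s) = Left (Map.insert l (evalE s e) s)           -- memory update rule
    step (Seq c1 c2, s)  = case step (c1, s) of
      Left s'         -> Right (c2, s')                                -- first sequencing rule
      Right (c1', s') -> Right (Seq c1' c2, s')                        -- second sequencing rule
    step (While b c, s)                                                -- while unfolds the loop
      | evalB s b = Right (Seq c (While b c), s)
      | otherwise = Right (Skip, s)

    -- Iterating 'step' yields exactly the computation sequences described above.
    run :: (Com, State) -> State
    run cfg = either id run (step cfg)

Iterating step until a final state is reached produces the kind of computation sequence described in the overview.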

Thanks to its intuitive look and easy-to-follow structure, SOS has gained great popularity and has become a de facto standard in defining operational semantics. As a sign of this success, the original report (the so-called Aarhus report) on SOS (Plotkin81) has attracted more than 1000 citations according to CiteSeer, making it one of the most cited technical reports in computer science.

Reduction semantics

Reduction semantics is an alternative presentation of operational semantics. Its key ideas were first applied to purely functional call by name and call by value variants of the lambda calculus by Gordon Plotkin in 1975 [3] and generalized to higher-order functional languages with imperative features by Matthias Felleisen in his 1987 dissertation. [4] The method was further elaborated by Matthias Felleisen and Robert Hieb in 1992 into a fully equational theory for control and state. [2] The phrase “reduction semantics” itself was first coined by Felleisen and Daniel P. Friedman in a PARLE 1987 paper. [5]

Reduction semantics are given as a set of reduction rules that each specify a single potential reduction step. For example, the following reduction rule states that an assignment statement can be reduced if it sits immediately beside its variable declaration:

To get an assignment statement into such a position it is "bubbled up" through function applications and the right-hand side of assignment statements until it reaches the proper point. Since intervening expressions may declare distinct variables, the calculus also demands an extrusion rule for expressions. Most published uses of reduction semantics define such "bubble rules" with the convenience of evaluation contexts. For example, the grammar of evaluation contexts in a simple call by value language can be given as

    E ::= [ ]  |  (E e)  |  (v E)

where e denotes arbitrary expressions and v denotes fully-reduced values. Each evaluation context includes exactly one hole [ ] into which a term is plugged in a capturing fashion. The shape of the context, together with the position of its hole, indicates where reduction may occur. To describe "bubbling" with the aid of evaluation contexts, a single axiom suffices:

This single reduction rule is the lift rule from Felleisen and Hieb's lambda calculus for assignment statements. The evaluation contexts restrict this rule to certain terms, but it is freely applicable in any term, including under lambdas.

Following Plotkin, showing the usefulness of a calculus derived from a set of reduction rules demands (1) a Church-Rosser lemma for the single-step relation, which induces an evaluation function, and (2) a Curry-Feys standardization lemma for the transitive-reflexive closure of the single-step relation, which replaces the non-deterministic search in the evaluation function with a deterministic left-most/outermost search. Felleisen showed that imperative extensions of this calculus satisfy these theorems. Consequences of these theorems are that the equational theory—the symmetric-transitive-reflexive closure—is a sound reasoning principle for these languages. However, in practice, most applications of reduction semantics dispense with the calculus and use the standard reduction only (and the evaluator that can be derived from it).
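
To make the role of evaluation contexts concrete, the following Haskell sketch shows how a context-based decomposition yields the deterministic left-most/outermost (standard) reduction for a plain call by value lambda calculus; it does not model assignment or the Felleisen and Hieb lift rule, and all names in it are illustrative assumptions.

    -- Sketch: evaluation contexts and standard reduction for call-by-value
    -- lambda calculus (assumed names; not the calculus of the cited papers).
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- E ::= [ ] | (E e) | (v E), as in the grammar given above.
    data Ctx = Hole | AppL Ctx Term | AppR Term Ctx

    isValue :: Term -> Bool
    isValue (Lam _ _) = True
    isValue _         = False

    -- Decompose a term into an evaluation context and the redex sitting in its hole.
    decompose :: Term -> Maybe (Ctx, Term)
    decompose t@(App f a)
      | isValue f && isValue a = Just (Hole, t)                       -- found the redex
      | isValue f              = fmap (\(c, r) -> (AppR f c, r)) (decompose a)
      | otherwise              = fmap (\(c, r) -> (AppL c a, r)) (decompose f)
    decompose _ = Nothing

    -- Plug a term back into the hole of a context.
    plug :: Ctx -> Term -> Term
    plug Hole       t = t
    plug (AppL c a) t = App (plug c t) a
    plug (AppR f c) t = App f (plug c t)

    -- Naive substitution; adequate for closed terms without shadowing.
    subst :: String -> Term -> Term -> Term
    subst x v (Var y)   = if x == y then v else Var y
    subst x v (Lam y b) = if x == y then Lam y b else Lam y (subst x v b)
    subst x v (App f a) = App (subst x v f) (subst x v a)

    -- One standard (left-most/outermost) call-by-value reduction step.
    stepCBV :: Term -> Maybe Term
    stepCBV t = do
      (ctx, App (Lam x b) v) <- decompose t
      pure (plug ctx (subst x v b))

Iterating stepCBV is the kind of evaluator that, by the standardization lemma, computes the same answers as the full, non-deterministic calculus.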

Reduction semantics are particularly useful given the ease by which evaluation contexts can model state or unusual control constructs (e.g., first-class continuations). In addition, reduction semantics have been used to model object-oriented languages, [6] contract systems, exceptions, futures, call-by-need, and many other language features. A thorough, modern treatment of reduction semantics that discusses several such applications at length is given by Matthias Felleisen, Robert Bruce Findler and Matthew Flatt in Semantics Engineering with PLT Redex. [7]

Big-step semantics

Natural semantics

Big-step structural operational semantics is also known under the names natural semantics, relational semantics and evaluation semantics. [8] Big-step operational semantics was introduced under the name natural semantics by Gilles Kahn when presenting Mini-ML, a pure dialect of ML.

One can view big-step definitions as definitions of functions, or more generally of relations, interpreting each language construct in an appropriate domain. Its intuitiveness makes it a popular choice for semantics specification in programming languages, but it has some drawbacks that make it inconvenient or impossible to use in many situations, such as languages with control-intensive features or concurrency. [9]

A big-step semantics describes in a divide-and-conquer manner how final evaluation results of language constructs can be obtained by combining the evaluation results of their syntactic counterparts (subexpressions, substatements, etc.).
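
A minimal sketch of such a big-step evaluator in Haskell, for the same toy imperative language as in the small-step sketch above (again with assumed names and representations), relates each command and initial state directly to a final state.

    -- Big-step (natural-semantics style) evaluator sketch; the syntax and
    -- store representation repeat the assumptions of the small-step sketch.
    import qualified Data.Map as Map

    type Loc   = String
    type State = Map.Map Loc Int

    data Expr = Lit Int | Get Loc | Add Expr Expr
    data BExp = Leq Expr Expr
    data Com  = Skip | Assign Loc Expr | Seq Com Com | While BExp Com

    evalE :: State -> Expr -> Int
    evalE _ (Lit n)   = n
    evalE s (Get l)   = Map.findWithDefault 0 l s
    evalE s (Add a b) = evalE s a + evalE s b

    evalB :: State -> BExp -> Bool
    evalB s (Leq a b) = evalE s a <= evalE s b

    -- exec combines the results of sub-statements into the result of the whole
    -- command; a loop that never terminates simply makes exec diverge, the
    -- executable counterpart of a derivation tree that does not exist.
    exec :: Com -> State -> State
    exec Skip         s = s
    exec (Assign l e) s = Map.insert l (evalE s e) s
    exec (Seq c1 c2)  s = exec c2 (exec c1 s)
    exec (While b c)  s
      | evalB s b       = exec (While b c) (exec c s)
      | otherwise       = s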

Comparison

There are a number of distinctions between small-step and big-step semantics that influence whether one or the other forms a more suitable basis for specifying the semantics of a programming language.

Big-step semantics have the advantage of often being simpler (needing fewer inference rules) and of often corresponding directly to an efficient implementation of an interpreter for the language (hence Kahn's name for them, "natural"). Both properties can lead to simpler proofs, for example when proving the preservation of correctness under some program transformation. [10]

The main disadvantage of big-step semantics is that non-terminating (diverging) computations do not have an inference tree, making it impossible to state and prove properties about such computations. [10]

Small-step semantics give more control over the details and order of evaluation. In the case of instrumented operational semantics, this allows the operational semantics to track, and the semanticist to state and prove, more accurate theorems about the run-time behaviour of the language. These properties make small-step semantics more convenient when proving type soundness of a type system against an operational semantics. [10]


References

  1. McCarthy, John. "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". Archived from the original on 2013-10-04. Retrieved 2006-10-13.
  2. Felleisen, M.; Hieb, R. (1992). "The Revised Report on the Syntactic Theories of Sequential Control and State". Theoretical Computer Science. 103 (2): 235–271. doi:10.1016/0304-3975(92)90014-7.
  3. Plotkin, Gordon (1975). "Call-by-name, call-by-value and the λ-calculus" (PDF). Theoretical Computer Science. 1 (2): 125–159. doi:10.1016/0304-3975(75)90017-1. Retrieved July 22, 2021.
  4. Felleisen, Matthias (1987). The calculi of Lambda-v-CS conversion: a syntactic theory of control and state in imperative higher-order programming languages (PDF) (PhD). Indiana University. Retrieved July 22, 2021.
  5. Felleisen, Matthias; Friedman, Daniel P. (1987). "A Reduction Semantics for Imperative Higher-Order Languages". Proceedings of the Parallel Architectures and Languages Europe. International Conference on Parallel Architectures and Languages Europe. Vol. 1. Springer-Verlag. pp. 206–223. doi:10.1007/3-540-17945-3_12.
  6. Abadi, M.; Cardelli, L. (8 September 2012). A Theory of Objects. ISBN 9781441985989.
  7. Felleisen, Matthias; Findler, Robert Bruce; Flatt, Matthew (2009). Semantics Engineering with PLT Redex. The MIT Press. ISBN 978-0-262-06275-6.
  8. University of Illinois CS422.
  9. Nipkow, Tobias; Klein, Gerwin (2014). Concrete Semantics (PDF). pp. 101–102. doi:10.1007/978-3-319-10542-0. Retrieved Mar 13, 2024.
  10. Xavier Leroy. "Coinductive big-step operational semantics".

Further reading