Abstract rewriting machine


The Abstract Rewriting Machine (ARM) is a virtual machine which implements term rewriting for minimal term rewriting systems.

Minimal term rewriting systems are left-linear term rewriting systems in which each rule takes on one of six forms:

Continuation
Return
Match
Add
Delete
Ident

Each of these six forms is mapped (in ARM) to one or a few processor instructions on most contemporary microprocessors. Accordingly, minimal term rewriting is achieved at tens to hundreds of clock cycles per reduction step, i.e. millions of reduction steps per second.

ARM implements general term rewriting, in that every single-sorted unconditional left-linear term rewriting system can be transformed (compiled) into a minimal term rewriting system that gives rise to the same normal form relation.
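
As an illustration of the kind of rewriting being compiled (a minimal sketch, not ARM's actual instruction forms or the Epic language), the following Python fragment normalizes terms of a small left-linear system by leftmost-innermost rewriting; the tuple encoding of terms and the Peano-addition rules are hypothetical choices made only for this example.

```python
# Minimal innermost rewriting for a left-linear term rewriting system.
# Terms are tuples (symbol, arg1, ..., argN); variables are plain strings.
# The rule set (Peano addition) is illustrative only.

RULES = [
    (("add", ("0",), "y"), "y"),                           # add(0, y)    -> y
    (("add", ("s", "x"), "y"), ("s", ("add", "x", "y"))),  # add(s(x), y) -> s(add(x, y))
]

def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst):
    """Extend `subst` so that pattern instantiated by subst equals term (left-linear case)."""
    if is_var(pattern):
        subst[pattern] = term          # left-linearity: each variable is bound at most once
        return True
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return False
    return all(match(p, t, subst) for p, t in zip(pattern[1:], term[1:]))

def instantiate(term, subst):
    if is_var(term):
        return subst[term]
    return (term[0],) + tuple(instantiate(a, subst) for a in term[1:])

def normalize(term):
    """Leftmost-innermost rewriting to normal form."""
    if not is_var(term):
        term = (term[0],) + tuple(normalize(a) for a in term[1:])   # reduce arguments first
    for lhs, rhs in RULES:
        subst = {}
        if match(lhs, term, subst):
            return normalize(instantiate(rhs, subst))
    return term                        # no rule applies: term is in normal form

# add(s(s(0)), s(0))  ==>  s(s(s(0)))
print(normalize(("add", ("s", ("s", ("0",))), ("s", ("0",)))))
```

Each reduction step here amounts to a pattern match followed by construction of the right-hand side; the compilation described above turns such rules into the six minimal forms listed earlier.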

An overview with references to this compilation process for innermost rewriting, as well as a detailed overview of ARM, can be found in "Within ARM's reach: compilation of left-linear rewrite systems via minimal rewrite systems". A description for lazy (non-innermost) rewriting can be found in "Lazy rewriting on eager machinery".

A documented implementation of ARM (with the term rewriting language Epic) is available here. Note that the site and software are no longer being actively maintained.

Related Research Articles

<span class="mw-page-title-main">Associative property</span> Property of a mathematical operation

In mathematics, the associative property is a property of some binary operations, which means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.

In logic and computer science, unification is an algorithmic process of solving equations between symbolic expressions. For example, using x,y,z as variables, the singleton equation set { cons(x,cons(x,nil)) = cons(2,y) } is a syntactic first-order unification problem that has the substitution { x ↦ 2, y ↦ cons(2,nil) } as its only solution.
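
The following sketch (a hypothetical helper written for this example, not tied to any particular library) solves exactly this kind of syntactic first-order unification problem; terms are encoded as tuples and variables as plain strings.

```python
# First-order syntactic unification (illustrative helper, not from any
# particular library). Terms are tuples (symbol, args...); variables are
# plain strings; constants are one-element tuples such as ("2",).

def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    """Follow variable bindings until an unbound variable or a non-variable term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if is_var(t):
        return v == t
    return any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst):
    """Return a substitution extending `subst` that unifies s and t, or None."""
    s, t = walk(s, subst), walk(t, subst)
    if is_var(s):
        if s == t:
            return subst
        if occurs(s, t, subst):
            return None                   # occurs check fails: no finite unifier
        return {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):
        return None                       # clash of function symbols or arities
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

def resolve(t, subst):
    """Apply the substitution exhaustively, yielding the solved term."""
    t = walk(t, subst)
    if is_var(t):
        return t
    return (t[0],) + tuple(resolve(a, subst) for a in t[1:])

# cons(x, cons(x, nil)) = cons(2, y)
lhs = ("cons", "x", ("cons", "x", ("nil",)))
rhs = ("cons", ("2",), "y")
subst = unify(lhs, rhs, {})
print({v: resolve(v, subst) for v in ("x", "y")})
# {'x': ('2',), 'y': ('cons', ('2',), ('nil',))}, i.e. x -> 2 and y -> cons(2, nil)
```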

In mathematics, computer science, and logic, rewriting covers a wide range of methods of replacing subterms of a formula with other terms. Such methods may be achieved by rewriting systems. In their most basic form, they consist of a set of objects, plus relations on how to transform those objects.

A first class constraint is a dynamical quantity in a constrained Hamiltonian system whose Poisson bracket with all the other constraints vanishes on the constraint surface in phase space. To calculate the first class constraint, one assumes that there are no second class constraints, or that they have been calculated previously, and their Dirac brackets generated.

Bunched logic is a variety of substructural logic proposed by Peter O'Hearn and David Pym. Bunched logic provides primitives for reasoning about resource composition, which aid in the compositional analysis of computer and other systems. It has category-theoretic and truth-functional semantics, which can be understood in terms of an abstract concept of resource, and a proof theory in which the contexts Γ in an entailment judgement Γ ⊢ A are tree-like structures (bunches) rather than lists or (multi)sets as in most proof calculi. Bunched logic has an associated type theory, and its first application was in providing a way to control the aliasing and other forms of interference in imperative programs. The logic has seen further applications in program verification, where it is the basis of the assertion language of separation logic, and in systems modelling, where it provides a way to decompose the resources used by components of a system.

Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.

For supervised learning applications in machine learning and statistical learning theory, generalization error is a measure of how accurately an algorithm is able to predict outcome values for previously unseen data. Because learning algorithms are evaluated on finite samples, the evaluation of a learning algorithm may be sensitive to sampling error. As a result, measurements of prediction error on the current data may not provide much information about predictive ability on new data. Generalization error can be minimized by avoiding overfitting in the learning algorithm. The performance of a machine learning algorithm is visualized by plots that show values of estimates of the generalization error through the learning process, which are called learning curves.

In abstract rewriting, an object is in normal form if it cannot be rewritten any further, i.e. it is irreducible. Depending on the rewriting system, an object may rewrite to several normal forms or none at all. Many properties of rewriting systems relate to normal forms.
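
For example (a minimal sketch with a hypothetical finite rewrite relation given as a successor table), the normal forms reachable from an object can be collected by following the relation until irreducible objects are reached:

```python
# Normal forms in abstract rewriting. The rewrite relation is a plain dict
# from an object to its possible successors (illustrative example system).

REWRITES = {
    "a": ["b", "c"],   # a rewrites to b or to c
    "c": ["d"],        # c rewrites to d
    # "b" and "d" have no successors: they are irreducible (normal forms)
}

def normal_forms(obj, seen=None):
    """Collect every normal form reachable from `obj` (finite systems only)."""
    seen = set() if seen is None else seen
    if obj in seen:
        return set()            # cycle already visited: nothing new to add
    seen.add(obj)
    succs = REWRITES.get(obj, [])
    if not succs:
        return {obj}            # irreducible: obj is itself a normal form
    result = set()
    for s in succs:
        result |= normal_forms(s, seen)
    return result

print(normal_forms("a"))   # {'b', 'd'}: one object, two distinct normal forms
```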

Orthogonality as a property of term rewriting systems (TRSs) describes systems whose reduction rules are all left-linear, that is, each variable occurs at most once on the left-hand side of each reduction rule, and which have no overlap between rules, i.e. the TRS has no critical pairs. For example, a rule such as f(x, x) → x is not left-linear, since the variable x occurs twice on its left-hand side.
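
The left-linearity part of this definition is straightforward to check mechanically. The following sketch (reusing the hypothetical tuple encoding of terms from the examples above) counts variable occurrences on a left-hand side:

```python
# Left-linearity check for a rewrite rule's left-hand side.
# Terms are tuples (symbol, args...); variables are plain strings.

from collections import Counter

def variables(term, counts=None):
    """Count how often each variable occurs in a term."""
    counts = Counter() if counts is None else counts
    if isinstance(term, str):
        counts[term] += 1
    else:
        for arg in term[1:]:
            variables(arg, counts)
    return counts

def is_left_linear(lhs):
    """A rule is left-linear if no variable occurs more than once in its left-hand side."""
    return all(n <= 1 for n in variables(lhs).values())

print(is_left_linear(("add", ("s", "x"), "y")))   # True:  x and y each occur once
print(is_left_linear(("f", "x", "x")))            # False: x occurs twice, so f(x, x) -> ... is not left-linear
```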

In mathematics, computer science and logic, overlap, as a property of the reduction rules in a term rewriting system, describes a situation where a number of different reduction rules specify potentially contradictory ways of reducing a reducible expression, also known as a redex, within a term.

Limited-memory BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize f(x) over unconstrained values of the real vector x, where f is a differentiable scalar function.
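
A minimal usage sketch, assuming SciPy's scipy.optimize.minimize with method="L-BFGS-B" (a bound-constrained L-BFGS variant, used here without bounds) and a simple quadratic objective chosen only for illustration:

```python
# Minimizing a differentiable scalar function f over an unconstrained real
# vector x with a limited-memory BFGS implementation.

import numpy as np
from scipy.optimize import minimize

TARGET = np.array([1.0, -2.0, 3.0])

def f(x):
    """Shifted quadratic objective with minimum at TARGET."""
    return float(np.sum((x - TARGET) ** 2))

def grad_f(x):
    return 2.0 * (x - TARGET)

result = minimize(f, x0=np.zeros(3), jac=grad_f, method="L-BFGS-B")
print(result.x)   # approximately [ 1. -2.  3.]
```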

In mathematics and economics, transportation theory or transport theory is a name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781.

<span class="mw-page-title-main">Eddy diffusion</span> Mixing of fluids due to eddy currents

In fluid dynamics, eddy diffusion, eddy dispersion, or turbulent diffusion is a process by which fluid substances mix together due to eddy motion. These eddies can vary widely in size, from subtropical ocean gyres down to the small Kolmogorov microscales, and occur as a result of turbulence. The theory of eddy diffusion was first developed by Sir Geoffrey Ingram Taylor.

<span class="mw-page-title-main">Coulomb wave function</span>

In mathematics, a Coulomb wave function is a solution of the Coulomb wave equation, named after Charles-Augustin de Coulomb. They are used to describe the behavior of charged particles in a Coulomb potential and can be written in terms of confluent hypergeometric functions or Whittaker functions of imaginary argument.

<span class="mw-page-title-main">Light front quantization</span> Technique in computational quantum field theory

The light-front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where x⁺ = ct + z plays the role of time and the corresponding spatial coordinate is x⁻ = ct − z. Here, t is the ordinary time, z is one Cartesian coordinate, and c is the speed of light. The other two Cartesian coordinates, x and y, are untouched and often called transverse or perpendicular, denoted by symbols of the type x⊥ = (x, y). The choice of the frame of reference where the time t and z-axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others.

In mathematics, a partial cyclic order is a ternary relation that generalizes a cyclic order in the same way that a partial order generalizes a linear order.

Anti-unification is the process of constructing a generalization common to two given symbolic expressions. As in unification, several frameworks are distinguished depending on which expressions are allowed, and which expressions are considered equal. If variables representing functions are allowed in an expression, the process is called "higher-order anti-unification", otherwise "first-order anti-unification". If the generalization is required to have an instance literally equal to each input expression, the process is called "syntactical anti-unification", otherwise "E-anti-unification", or "anti-unification modulo theory".
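
The following sketch implements first-order syntactic anti-unification (a least general generalization); the tuple encoding of terms and the variable-naming scheme are hypothetical choices made for this example.

```python
# First-order syntactic anti-unification (least general generalization).
# Terms are tuples (symbol, args...); generalization variables are strings.

def lgg(s, t, pairs):
    """Generalize terms s and t; `pairs` maps each disagreeing (s, t) pair of
    subterms to the variable already introduced for it."""
    if (not isinstance(s, str) and not isinstance(t, str)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b, pairs) for a, b in zip(s[1:], t[1:]))
    if (s, t) not in pairs:                 # disagreeing subterms become a shared fresh variable
        pairs[(s, t)] = "v%d" % len(pairs)
    return pairs[(s, t)]

# f(g(a), a)  and  f(g(b), b)  generalize to  f(g(v0), v0)
s = ("f", ("g", ("a",)), ("a",))
t = ("f", ("g", ("b",)), ("b",))
print(lgg(s, t, {}))   # ('f', ('g', 'v0'), 'v0')
```

Substituting a for v0 (or b for v0) in the computed generalization yields an instance literally equal to the corresponding input, as syntactic anti-unification requires.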

In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. Some authors use the term to refer to an evaluation strategy.
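
The following sketch contrasts two common strategies, leftmost-outermost and leftmost-innermost, on the same small left-linear rule set (an illustrative system, not drawn from any particular source); terms are again encoded as tuples with variables as strings.

```python
# Leftmost-outermost vs. leftmost-innermost reduction on the same rules.

RULES = [
    (("fst", "x", "y"), "x"),                                # fst(x, y)    -> x
    (("add", ("0",), "y"), "y"),                             # add(0, y)    -> y
    (("add", ("s", "x"), "y"), ("s", ("add", "x", "y"))),    # add(s(x), y) -> s(add(x, y))
]

def is_var(t):
    return isinstance(t, str)

def match(p, t, s):
    if is_var(p):
        s[p] = t
        return True
    return (not is_var(t) and p[0] == t[0] and len(p) == len(t)
            and all(match(a, b, s) for a, b in zip(p[1:], t[1:])))

def inst(t, s):
    return s[t] if is_var(t) else (t[0],) + tuple(inst(a, s) for a in t[1:])

def rewrite_root(t):
    """Rewrite at the root if some rule matches there, else return None."""
    for lhs, rhs in RULES:
        s = {}
        if match(lhs, t, s):
            return inst(rhs, s)
    return None

def step(t, outermost):
    """One leftmost-outermost or leftmost-innermost step, or None if t is a normal form."""
    if outermost:
        r = rewrite_root(t)
        if r is not None:
            return r
    if not is_var(t):
        for i, a in enumerate(t[1:], start=1):
            r = step(a, outermost)
            if r is not None:
                return t[:i] + (r,) + t[i + 1:]
    if not outermost:
        return rewrite_root(t)
    return None

def count_steps(t, outermost):
    n = 0
    while (r := step(t, outermost)) is not None:
        t, n = r, n + 1
    return t, n

term = ("fst", ("0",), ("add", ("s", ("0",)), ("s", ("0",))))   # fst(0, add(s(0), s(0)))
print(count_steps(term, outermost=True))    # (('0',), 1): the discarded argument is never evaluated
print(count_steps(term, outermost=False))   # (('0',), 3): innermost first evaluates add(s(0), s(0))
```

The outermost strategy never evaluates the argument that fst discards, illustrating why the choice of strategy affects both the amount of work done and, in general, whether a normal form is reached at all.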

Implicit computational complexity (ICC) is a subfield of computational complexity theory that characterizes algorithms by constraints on the way in which they are constructed, without reference to a specific underlying machine model or to explicit bounds on computational resources unlike conventional complexity theory. ICC was developed in the 1990s and employs the techniques of proof theory, substructural logic, model theory and recursion theory to prove bounds on the expressive power of high-level formal languages. ICC is also concerned with the practical realization of functional programming languages, language tools and type theory that can control the resource usage of programs in a formally verifiable sense.

Polarization gradient cooling is a technique in laser cooling of atoms. It was proposed to explain the experimental observation of cooling below the Doppler limit. Shortly after the theory was introduced, experiments were performed that verified the theoretical predictions. While Doppler cooling allows atoms to be cooled to hundreds of microkelvin, PG cooling allows atoms to be cooled to a few microkelvin or less.
