Warren Abstract Machine

In 1983, David H. D. Warren designed an abstract machine for the execution of Prolog consisting of a memory architecture and an instruction set. [1] [2] [3] This design became known as the Warren Abstract Machine (WAM) and has become the de facto standard target for Prolog compilers.

Purpose

Compiling Prolog code to the lower-level WAM code makes subsequent interpretation of the Prolog program more efficient. Prolog code is reasonably easy to translate into WAM instructions, which can be interpreted more efficiently than the source program. In addition, later code improvements and compilation to native code are often easier to perform on the lower-level representation.

A basic understanding of how the WAM works can be advantageous when writing efficient Prolog programs. Some of the most important WAM concepts are first-argument indexing and its relation to choice points, tail call optimization, and memory reclamation on failure.
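
For instance, a minimal sketch of first-argument indexing (the capital/2 predicate is an illustrative example, and the exact choice-point behaviour depends on the Prolog system): when every clause has a distinct atomic first argument, a call with that argument bound can be dispatched to a single clause, while a call with the argument unbound must keep a choice point between answers.

capital(france, paris).
capital(italy, rome).
capital(spain, madrid).

?- capital(italy, C).
   % First-argument indexing jumps straight to the second clause: C = rome,
   % and no choice point is left behind.
?- capital(Country, C).
   % With the first argument unbound, the clauses are tried in order and a
   % choice point remains until the last clause is reached.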

Memory areas

The WAM has the following memory areas: the code area, which holds the compiled WAM instructions; the heap (or global stack), which stores compound terms and other long-lived data; the local stack, which holds environment frames and choice points; the trail, which records variable bindings that must be undone on backtracking; and the push-down list (PDL), a small auxiliary stack used by the unification algorithm.

Example

Here is a piece of Prolog code:

girl(sally).
girl(jane).
boy(B) :- \+ girl(B).

A WAM-based Prolog compiler will compile this into WAM instructions similar to the following:

predicate(girl/1):
    switch_on_term(2,1,fail,fail,fail),
label(1):
    switch_on_atom([(sally,3),(jane,5)])
label(2):
    try_me_else(4)
label(3):
    get_atom(sally,0)
    proceed
label(4):
    trust_me_else_fail
label(5):
    get_atom(jane,0)
    proceed

predicate(boy/1):
    get_variable(x(1),0)
    put_structure(girl/1,0)
    unify_local_value(x(1))
    execute((\+)/1)

An important characteristic of this code is its ability to cope with the various modes in which the predicates can be invoked: any argument might be a variable, a ground term, or a partially instantiated term. The "switch" instructions handle the different cases.
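
As a sketch of how the listing above dispatches on the call mode (clause labels as in the generated code; details vary between compilers):

?- girl(sally).
   % The first argument is the atom sally: switch_on_term selects the constant
   % case, and switch_on_atom jumps directly to label(3); no choice point is created.
?- girl(X).
   % The first argument is unbound: switch_on_term branches to label(2), and the
   % two clauses are tried in turn via try_me_else(4) and trust_me_else_fail.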

Related Research Articles

In computer programming, assembly language, often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels for memory locations and registers, and macros are generally also supported.

Prolog is a logic programming language that has its origins in artificial intelligence, automated theorem proving and computational linguistics.

An optimizing compiler is a compiler designed to generate code that is optimized in aspects such as minimizing program execution time, memory usage, storage size, and power consumption. Optimization is generally implemented as a sequence of optimizing transformations, algorithms that transform code to produce semantically equivalent code optimized for some aspect.

In computer science, threaded code is a programming technique where the code has a form that essentially consists entirely of calls to subroutines. It is often used in compilers, which may generate code in that form or be implemented in that form themselves. The code may be processed by an interpreter or it may simply be a sequence of machine code call instructions.

In computer science, an instruction set architecture (ISA) is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA.

In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:

  1. Parse the source code and perform its behavior directly;
  2. Translate source code into some efficient intermediate representation or object code and immediately execute that;
  3. Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's virtual machine.
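
As an illustrative sketch in Prolog (solve/1 is the conventional name of the classic "vanilla" meta-interpreter, not something defined in this article), the first strategy can be seen in an interpreter that walks the clauses of the interpreted program and performs their behaviour directly:

solve(true).
solve((A, B)) :- solve(A), solve(B).
solve(Goal)   :- clause(Goal, Body), solve(Body).

% With the interpreted predicates accessible to clause/2 (e.g. declared dynamic),
% ?- solve(girl(X)). enumerates the same answers as ?- girl(X).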

In computer science, an abstract machine is a theoretical model that allows for a detailed and precise analysis of how a computer system functions. It is similar to a mathematical function in that it receives inputs and produces outputs based on predefined rules. Abstract machines differ from literal machines in that they are expected to perform correctly independently of hardware. Abstract machines are "machines" because they allow step-by-step execution of programs; they are "abstract" because they ignore many aspects of actual (hardware) machines. A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. They can be used for purely theoretical reasons as well as models for real-world computer systems. In the theory of computation, abstract machines are often used in thought experiments regarding computability or to analyse the complexity of algorithms; models such as finite state machines, Mealy machines, push-down automata, and Turing machines are fundamental to computational complexity theory.

In computer science, computer engineering and programming language implementations, a stack machine is a computer processor or a virtual machine in which the primary interaction is moving short-lived temporary values to and from a push-down stack. In the case of a hardware processor, a hardware stack is used. The use of a stack significantly reduces the required number of processor registers. Stack machines extend push-down automata with additional load/store operations or multiple stacks and hence are Turing-complete.

Datalog is a declarative logic programming language. While it is syntactically a subset of Prolog, Datalog generally uses a bottom-up rather than top-down evaluation model. This difference yields significantly different behavior and properties from Prolog. It is often used as a query language for deductive databases. Datalog has been applied to problems in data integration, networking, program analysis, and more.
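
A small sketch of the difference (the edge/path program is illustrative, not taken from the article):

edge(a, b).
edge(b, a).

path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% Bottom-up (Datalog) evaluation computes the finite set of path/2 facts once,
% so every query terminates even though the edge relation is cyclic; Prolog's
% top-down, depth-first evaluation of ?- path(a, Y). can instead loop forever
% while enumerating answers over the cycle.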

In computer science, a tail call is a subroutine call performed as the final action of a procedure. If the target of a tail call is the same subroutine, the subroutine is said to be tail recursive, which is a special case of direct recursion. Tail recursion is particularly useful, and is often easy to optimize in implementations.
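
A minimal sketch in Prolog (sum_acc/3 is an illustrative predicate name): the recursive call is the last goal in the clause body, so a WAM-based system can apply last call optimization and reuse the current environment frame.

sum_acc([], Acc, Acc).
sum_acc([X|Xs], Acc0, Sum) :-
    Acc1 is Acc0 + X,
    sum_acc(Xs, Acc1, Sum).

?- sum_acc([1,2,3], 0, Sum).
   % Sum = 6, computed in constant stack space on systems with first-argument
   % indexing and last call optimization.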

SWI-Prolog is a free implementation of the programming language Prolog, commonly used for teaching and semantic web applications. It has a rich set of features and libraries, including constraint logic programming, multithreading, unit testing, GUI development, interfaces to Java and ODBC, literate programming, a web server, support for SGML, RDF and RDFS, developer tools, and extensive documentation.

ECLiPSe is a software system for the development and deployment of constraint logic programming applications, e.g., in the areas of optimization, planning, scheduling, resource allocation, timetabling, transport, etc. It is also suited for teaching most aspects of combinatorial problem solving, e.g., problem modeling, constraint programming, mathematical programming, and search techniques. It contains constraint solver libraries, a high-level modeling and control language, interfaces to third-party solvers, an integrated development environment and interfaces for embedding into host environments.

In computer science, run-time algorithm specialization is a methodology for creating efficient algorithms for costly computation tasks of certain kinds. The methodology originates in the field of automated theorem proving and, more specifically, in the Vampire theorem prover project.

GNU Prolog is a compiler developed by Daniel Diaz with an interactive debugging environment for Prolog available for Unix, Windows, Mac OS X and Linux. It also supports some extensions to Prolog including constraint programming over a finite domain, parsing using definite clause grammars, and an operating system interface.

Compiler Description Language (CDL) is a programming language based on affix grammars. It is very similar to Backus–Naur form (BNF) notation. It was designed for the development of compilers. It is very limited in its capabilities and control flow, and intentionally so. The benefits of these limitations are twofold.

The Algebraic Logic Functional (ALF) programming language combines functional and logic programming techniques. Its foundation is Horn clause logic with equality, which consists of predicates and Horn clauses for logic programming, and functions and equations for functional programming.

The syntax and semantics of Prolog, a programming language, are the sets of rules that define how a Prolog program is written and how it is interpreted, respectively. The rules are laid out in the ISO standard ISO/IEC 13211, although there are differences between Prolog implementations.

In computer programming, a function is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times.

A high-level language computer architecture (HLLCA) is a computer architecture designed to be targeted by a specific high-level programming language (HLL), rather than the architecture being dictated by hardware considerations. It is accordingly also termed language-directed computer design, coined in McKeeman (1967) and primarily used in the 1960s and 1970s. HLLCAs were popular in the 1960s and 1970s, but largely disappeared in the 1980s. This followed the dramatic failure of the Intel 432 (1981) and the emergence of optimizing compilers and reduced instruction set computer (RISC) architectures and RISC-like complex instruction set computer (CISC) architectures, and the later development of just-in-time compilation (JIT) for HLLs. A detailed survey and critique can be found in Ditzel & Patterson (1980).

References

  1. David H. D. Warren (October 1983). An abstract Prolog instruction set (PDF). Menlo Park, CA, USA: Artificial Intelligence Center at SRI International. Archived (PDF) from the original on 2022-06-19.
  2. Hassan Aït-Kaci (February 18, 1999). Warren's Abstract Machine: A Tutorial Reconstruction (PDF). Archived from the original on 2003-02-13.
  3. Hassan Aït-Kaci. "Warren's Abstract Machine: A Tutorial Reconstruction; the book, errata and slides". Archived from the original on 19 January 2022. Retrieved 7 March 2011.