Graph reduction machine

A graph reduction machine is a special-purpose computer built to perform combinator calculations by graph reduction.

Examples include the SKIM ("S-K-I machine") computer, built at the University of Cambridge Computer Laboratory,[1] the multiprocessor GRIP ("Graph Reduction In Parallel") computer, built at University College London,[2][3] and the Reduceron, which was implemented on an FPGA with the single purpose of executing Haskell.[4][5]
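
The reduction rules that such machines implement directly in hardware can be sketched in a few lines of Haskell. The sketch below is purely illustrative and does not correspond to the instruction set of SKIM, GRIP or the Reduceron: it rewrites a term as a tree, whereas a graph reduction machine represents the term as a graph of heap nodes and overwrites each reduced node in place, so that a shared subterm is reduced at most once.

    -- Illustrative only: a tree-rewriting interpreter for the S, K, I rules.
    -- A real graph reduction machine keeps application nodes in a heap and
    -- overwrites a reduced node in place, so shared arguments (such as x in
    -- the S rule) are evaluated at most once.
    data Term = S | K | I | App Term Term
      deriving (Show, Eq)

    -- One normal-order (leftmost-outermost) reduction step, if a redex exists:
    --   I x      ->  x
    --   K x y    ->  x
    --   S f g x  ->  f x (g x)
    step :: Term -> Maybe Term
    step (App I x)                 = Just x
    step (App (App K x) _)         = Just x
    step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
    step (App f a)                 = case step f of
      Just f' -> Just (App f' a)       -- reduce the function position first
      Nothing -> App f <$> step a      -- otherwise try the argument
    step _                         = Nothing

    -- Repeat single steps until no redex remains.
    normalise :: Term -> Term
    normalise t = maybe t normalise (step t)

    -- Example: S K K behaves like the identity combinator, so
    -- normalise (App (App (App S K) K) I) evaluates to I.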

References

  1. Clarke, T. J. W.; Gladstone, P. J. S.; MacLean, C. D.; Norman, A. C. (25 August 1980). "SKIM - the S, K, I reduction machine". Proceedings of the 1980 ACM Conference on LISP and Functional Programming (LFP '80). New York, NY, USA: Association for Computing Machinery. pp. 128–135. doi:10.1145/800087.802798. ISBN 978-1-4503-7396-8. S2CID 10189254.
  2. "Reduction Machines". 31 July 2002. Archived from the original on 31 July 2002. Retrieved 1 July 2023.
  3. Peyton Jones, Simon L.; Clack, Chris; Salkild, Jon; Hardie, Mark (1987). "GRIP — a high-performance architecture for parallel graph reduction". In Kahn, Gilles (ed.). Functional Programming Languages and Computer Architecture. Lecture Notes in Computer Science. Vol. 274. Berlin, Heidelberg: Springer. pp. 98–112. doi:10.1007/3-540-18317-5_7. ISBN 978-3-540-47879-9.
  4. Naylor, Matthew; Runciman, Colin (2012). "The Reduceron reconfigured and re-evaluated". Journal of Functional Programming. 22 (4–5): 574–613. doi:10.1017/S0956796812000214. ISSN 1469-7653. S2CID 1310090.
  5. Naylor, Matthew; Runciman, Colin (2008). "The Reduceron: Widening the von Neumann Bottleneck for Graph Reduction Using an FPGA". In Chitil, Olaf; Horváth, Zoltán; Zsók, Viktória (eds.). Implementation and Application of Functional Languages. Lecture Notes in Computer Science. Vol. 5083. Berlin, Heidelberg: Springer. pp. 129–146. doi:10.1007/978-3-540-85373-2_8. ISBN 978-3-540-85373-2.
