Real RAM

In computing, especially computational geometry, a real RAM (random-access machine) is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed-point or floating-point numbers used by most actual computers. The real RAM was formulated by Michael Ian Shamos in his 1978 Ph.D. dissertation.[1]

Model

The "RAM" part of the real RAM model name stands for "random-access machine". This is a model of computing that resembles a simplified version of a standard computer architecture. It consists of a stored program, a computer memory unit consisting of an array of cells, and a central processing unit with a bounded number of registers. Each memory cell or register can store a real number. Under the control of the program, the real RAM can transfer real numbers between memory and registers, and perform arithmetic operations on the values stored in the registers.

The allowed operations typically include addition, subtraction, multiplication, and division, as well as comparisons, but not modulus or rounding to integers. These operations are excluded because allowing them would give the real RAM an unreasonable amount of computational power, enabling it to solve PSPACE-complete problems in polynomial time.[2]

When analyzing algorithms for the real RAM, each allowed operation is assumed to take constant time, regardless of the magnitudes of its operands.
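
The following C++ sketch is purely illustrative and not part of any formal definition of the model: it mirrors the register/memory organization and the restricted instruction set described above, using the built-in double type as a stand-in for the exact real numbers of the abstract machine, and counting operations to reflect the unit-cost assumption.

    #include <array>
    #include <cstddef>
    #include <iostream>

    // Toy real RAM: "double" stands in for the exact reals of the model.
    struct RealRAM {
        std::array<double, 8>    reg{};  // bounded set of CPU registers
        std::array<double, 1024> mem{};  // memory cells, one real each
        std::size_t ops = 0;             // unit-cost operation counter

        // Transfers between memory and registers.
        void load(int r, std::size_t a)  { reg[r] = mem[a]; ++ops; }
        void store(int r, std::size_t a) { mem[a] = reg[r]; ++ops; }

        // Allowed arithmetic: +, -, *, / on registers.
        // Note: no floor, rounding, or modulus operations are provided.
        void add(int d, int a, int b) { reg[d] = reg[a] + reg[b]; ++ops; }
        void sub(int d, int a, int b) { reg[d] = reg[a] - reg[b]; ++ops; }
        void mul(int d, int a, int b) { reg[d] = reg[a] * reg[b]; ++ops; }
        void div(int d, int a, int b) { reg[d] = reg[a] / reg[b]; ++ops; }

        // Comparison, used by the program for branching.
        bool less(int a, int b) { ++ops; return reg[a] < reg[b]; }
    };

    int main() {
        RealRAM m;
        m.mem[0] = 1.5;
        m.mem[1] = 2.25;
        m.load(0, 0);        // r0 <- mem[0]
        m.load(1, 1);        // r1 <- mem[1]
        m.add(2, 0, 1);      // r2 <- r0 + r1
        m.store(2, 2);       // mem[2] <- r2
        std::cout << m.mem[2] << " computed in " << m.ops
                  << " unit-cost operations\n";
    }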

Implementation

Software libraries such as LEDA have been developed that allow programmers to write computer programs that behave as if they were running on a real RAM. These libraries represent real values using data structures that allow them to perform arithmetic and comparisons with the same results a real RAM would produce. For example, in LEDA, real numbers are represented using the leda_real datatype, which supports k-th roots for any natural number k, the rational arithmetic operations, and comparisons.[3] The time analysis of the underlying real RAM algorithm using these real datatypes can be interpreted as counting the number of library calls needed by a given algorithm.[4]
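
As a rough sketch of what this looks like in practice, the program below compares two sums of square roots exactly. It assumes a LEDA installation; the header path and namespace follow recent LEDA releases and may differ in older ones.

    // Exact comparison of two sums of square roots with leda_real.
    #include <LEDA/numbers/real.h>
    #include <iostream>

    int main() {
        using leda::real;
        // Two radical expressions that agree to several decimal digits.
        real a = sqrt(real(10)) + sqrt(real(11));
        real b = sqrt(real(5))  + sqrt(real(18));
        // The comparison below is exact: the library refines its internal
        // approximations until the sign of a - b is certain, so the result
        // matches what a real RAM would compute.
        if (a < b)
            std::cout << "sqrt(10) + sqrt(11) < sqrt(5) + sqrt(18)\n";
        else
            std::cout << "sqrt(10) + sqrt(11) >= sqrt(5) + sqrt(18)\n";
    }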

Comparison to other computational models

The Blum–Shub–Smale machine, introduced by Blum, Shub, and Smale,[6] is a closely related model of computation over the real numbers: like the real RAM, it is a random-access machine whose registers can store arbitrary real numbers, but it can evaluate a rational function of its register contents in a single time step. For algorithms on integer data, an analogous role is played by the transdichotomous model, in which the machine word size is assumed to match the problem size.

Related Research Articles

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that solve it.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage and relating these classes to each other. A computational problem is a task solved by a computer, solvable by the mechanical application of mathematical steps, such as an algorithm.

<span class="mw-page-title-main">Discrete mathematics</span> Study of discrete mathematical structures

Discrete mathematics is the study of mathematical structures that can be considered "discrete" rather than "continuous". Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics".

Magma is a computer algebra system designed to solve problems in algebra, number theory, geometry and combinatorics. It is named after the algebraic structure magma. It runs on Unix-like operating systems, as well as Windows.

Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with a history stretching back to antiquity.

<span class="mw-page-title-main">Real computation</span> Concept in computability theory

In computability theory, the theory of real computation deals with hypothetical computing machines using infinite-precision real numbers. They are given this name because they operate on the set of real numbers. Within this theory, it is possible to prove interesting statements such as "The complement of the Mandelbrot set is only partially decidable."

<span class="mw-page-title-main">Time complexity</span> Estimate of time taken for running an algorithm

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

In computer science, arbitrary-precision arithmetic, also called bignum arithmetic, multiple-precision arithmetic, or sometimes infinite-precision arithmetic, indicates that calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system. This contrasts with the faster fixed-precision arithmetic found in most arithmetic logic unit (ALU) hardware, which typically offers between 8 and 64 bits of precision.

<span class="mw-page-title-main">Ravindran Kannan</span>

Ravindran Kannan is a Principal Researcher at Microsoft Research India, where he leads the algorithms research group. He is also the first adjunct faculty member of the Computer Science and Automation Department of the Indian Institute of Science.

In computation theory, the Blum–Shub–Smale machine, or BSS machine, is a model of computation introduced by Lenore Blum, Michael Shub, and Stephen Smale, intended to describe computations over the real numbers. Essentially, a BSS machine is a random-access machine with registers that can store arbitrary real numbers and that can compute rational functions over the reals in a single time step. It is closely related to the real RAM model.

The Library of Efficient Data types and Algorithms (LEDA) is a proprietarily licensed software library providing C++ implementations of a broad variety of algorithms for graph theory and computational geometry. It was originally developed at the Max Planck Institute for Informatics in Saarbrücken. Since 2001, LEDA has been further developed and distributed by Algorithmic Solutions Software GmbH.

In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines. The term was introduced by Mark Burgin, whose book "Super-recursive algorithms" develops their theory and presents several mathematical models. Turing machines and other mathematical models of conventional algorithms allow researchers to find properties of recursive algorithms and their computations. In a similar way, mathematical models of super-recursive algorithms, such as inductive Turing machines, allow researchers to find properties of super-recursive algorithms and their computations.

In computational complexity theory, there is an open problem of whether some information about a sum of radicals may be computed in polynomial time in the input size, i.e., the number of bits necessary to represent this sum. It is important for many problems in computational geometry, since the computation of the Euclidean distance between two points in the general case involves the computation of a square root, and therefore the perimeter of a polygon or the length of a polygonal chain takes the form of a sum of radicals.
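
The short, self-contained C++ snippet below (an illustration, not drawn from the cited sources) shows the difficulty: a pair of radical expressions that are exactly equal is indistinguishable in floating point from a pair that is merely very close.

    #include <cmath>
    #include <cstdio>

    int main() {
        // sqrt(3) + sqrt(12) and sqrt(27) are both exactly 3 * sqrt(3).
        double lhs = std::sqrt(3.0) + std::sqrt(12.0);
        double rhs = std::sqrt(27.0);
        std::printf("lhs  = %.17g\n", lhs);
        std::printf("rhs  = %.17g\n", rhs);
        std::printf("diff = %.3g\n", lhs - rhs);
        // The difference is at roundoff level. That is consistent with
        // equality, but two distinct radical sums could be just as close;
        // deciding the sign exactly, and bounding the precision needed to
        // do so, is the open question described above.
    }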

In computational complexity theory, and more specifically in the analysis of algorithms with integer data, the transdichotomous model is a variation of the random-access machine in which the machine word size is assumed to match the problem size. The model was proposed by Michael Fredman and Dan Willard, who chose its name "because the dichotomy between the machine model and the problem size is crossed in a reasonable manner."

In mathematical logic, computational complexity theory, and computer science, the existential theory of the reals is the set of all true sentences of the form ∃X₁ ⋯ ∃Xₙ φ(X₁, …, Xₙ), where φ is a quantifier-free formula involving equalities and inequalities of real polynomials.

<span class="mw-page-title-main">Michael Shub</span> American mathematician

Michael Ira Shub is an American mathematician who has done research into dynamical systems and the complexity of real number algorithms.

In mathematics, specifically in computational geometry, geometric nonrobustness is a problem wherein branching decisions in computational geometry algorithms are based on approximate numerical computations, leading to various forms of unreliability including ill-formed output and software failure through crashing or infinite loops.
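
The following small C++ example is hypothetical, constructed only to illustrate the failure mode: a point intended to be collinear with two others acquires a rounding error upstream, and a branching decision based on the sign of the orientation determinant then misclassifies the configuration.

    #include <cstdio>

    // Sign of the determinant (bx-ax)(cy-ay) - (by-ay)(cx-ax):
    // > 0 left turn, < 0 right turn, == 0 collinear, in exact arithmetic.
    double orient2d(double ax, double ay, double bx, double by,
                    double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    int main() {
        // b is intended to lie on the segment from a = (0,0) to c = (1,1):
        // its x-coordinate comes from an upstream computation 0.1 + 0.2,
        // which in binary floating point is not exactly 0.3.
        double bx = 0.1 + 0.2;   // 0.30000000000000004...
        double d  = orient2d(0.0, 0.0, bx, 0.3, 1.0, 1.0);
        std::printf("orientation = %.3g (intended geometry says 0)\n", d);
        // An algorithm branching on d > 0 treats three collinear points
        // as a left turn; cascades of such misclassifications are what
        // lead to ill-formed output, crashes, or infinite loops.
    }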

Complexity and Real Computation is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed in 1991 by Roger Penrose in The Emperor's New Mind: "is the Mandelbrot set computable?"

<span class="mw-page-title-main">Felipe Cucker</span> Uruguayan mathematician

Juan Felipe Cucker Farkas is an Uruguayan mathematician and theoretical computer scientist who has done research into the complexity theory of the Blum–Shub–Smale computational model and the complexity of numerical algorithms in linear programming and numerical algebraic geometry.

References

  1. Shamos, Michael Ian (1978), Computational Geometry, Ph.D. dissertation, Yale University.
  2. Schönhage, Arnold (1979), "On the power of random access machines", Proceedings of the Sixth International Colloquium on Automata, Languages and Programming (ICALP '79), Lecture Notes in Computer Science, vol. 71, Springer, pp. 520–529, doi:10.1007/3-540-09510-1_42, ISBN 978-3-540-09510-1, MR 0573259.
  3. Mehlhorn, Kurt; Näher, Stefan (1999). The LEDA Platform of Combinatorial and Geometric Computing. Cambridge University Press. Retrieved 12 November 2019.
  4. Mehlhorn, Kurt; Schirra, Stefan (2001), "Exact computation with leda_real—theory and geometric applications" (PDF), Symbolic Algebraic Methods and Verification Methods (Dagstuhl, 1999), Springer, pp. 163–172, doi:10.1007/978-3-7091-6280-4_16, ISBN 978-3-211-83593-7, MR 1832422.
  5. Grötschel, M.; Lovász, L.; Schrijver, A. (1981-06-01). "The ellipsoid method and its consequences in combinatorial optimization". Combinatorica. 1 (2): 169–197. doi:10.1007/BF02579273. ISSN 1439-6912.
  6. Blum, Lenore; Shub, Mike; Smale, Steve (1989), "On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines", Bulletin of the American Mathematical Society, 21 (1): 1–46, doi:10.1090/S0273-0979-1989-15750-9, Zbl 0681.03020.