LOOP (programming language)

LOOP is a simple register language that precisely captures the primitive recursive functions. [1] The language is derived from the counter-machine model. Like the counter machines, the LOOP language comprises a set of one or more unbounded registers, each of which can hold a single non-negative integer. A few arithmetic instructions (like 'CleaR', 'INCrement', 'DECrement', 'CoPY', ...) operate on the registers. The only control flow instruction is 'LOOP x DO ... END'. It causes the instructions within its scope to be repeated x times. (Changes to the content of register x during the execution of the loop do not affect the number of passes.)

History

The LOOP language was formulated in a 1967 paper by Albert R. Meyer and Dennis M. Ritchie. [2] They showed the correspondence between the LOOP language and primitive recursive functions.

The language also was the topic of the unpublished PhD thesis of Ritchie. [3] [4]

It was also presented by Uwe Schöning, along with GOTO and WHILE. [5]

Design philosophy and features

In contrast to GOTO programs and WHILE programs, LOOP programs always terminate. [6] Therefore, the set of functions computable by LOOP programs is a proper subset of the computable functions (and thus a proper subset of the functions computable by WHILE and GOTO programs). [7]

Meyer & Ritchie proved that each primitive recursive function is LOOP-computable and vice versa. [2] [5]

An example of a total computable function that is not LOOP computable is the Ackermann function. [8]

Formal definition

Syntax

LOOP-programs consist of the symbols LOOP, DO, END, :=, + and ; as well as any number of variables and constants. LOOP-programs have the following syntax in modified Backus–Naur form:

P ::= xi := 0
    | xi := xi + 1
    | P; P
    | LOOP xi DO P END

Here, the xi are variable names drawn from x0, x1, x2, ...

Semantics

If P is a LOOP program, P is equivalent to a function f : N^k → N. The variables x1 through xk in a LOOP program correspond to the arguments of the function f, and are initialized before program execution with the appropriate values. All other variables are given the initial value zero. The variable x0 corresponds to the value that f takes when given the argument values from x1 through xk.

A statement of the form

xi := 0

means the value of the variable xi is set to 0.

A statement of the form

xi := xi + 1

means the value of the variable xi is incremented by 1.

A statement of the form

P1; P2

represents the sequential execution of the sub-programs P1 and P2, in that order.

A statement of the form

LOOP x DO P END

means the repeated execution of the partial program P a total of x times, where the value that x has at the beginning of the execution of the statement is used. Even if P changes the value of x, it won't affect how many times P is executed in the loop. If x has the value zero, then P is not executed inside the LOOP statement. This allows for branches in LOOP programs, where the conditional execution of a partial program depends on whether a variable has value zero or one.
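These semantics can be illustrated with a small Python sketch. The dictionary-of-registers representation and the names run_loop and body are illustrative assumptions, not part of the formal definition; the point is that the trip count is fixed when the loop is entered.

```python
# Illustrative sketch (names are ours, not from the source): the LOOP
# statement's trip count is fixed at loop entry.
def run_loop(registers, x, body):
    """Execute `body` registers[x] times; assigning to x inside the body
    does not change the number of passes."""
    n = registers[x]              # trip count is read once, on entry
    for _ in range(n):
        body(registers)

def body(r):
    r["x0"] += 1                  # x0 := x0 + 1
    r["x1"] = 0                   # clearing x1 mid-loop has no effect

regs = {"x0": 0, "x1": 3}
run_loop(regs, "x1", body)        # body still runs 3 times
```

Reading the trip count once, before the first pass, is exactly what distinguishes LOOP from a WHILE construct and guarantees termination.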

Creating "convenience instructions"

From the base syntax one can create "convenience instructions". These are not subroutines in the conventional sense but rather LOOP programs created from the base syntax and given a mnemonic. In a formal sense, to use these programs one needs either (i) to "expand" them into the base code (they require the use of temporary or "auxiliary" variables, so this must be taken into account), or (ii) to design the syntax with the instructions 'built in'.

Example

The k-ary projection function extracts the i-th coordinate from an ordered k-tuple.

In their seminal paper [2] Meyer & Ritchie made the assignment xj := xi a basic statement. As the example shows, the assignment can be derived from the list of basic statements.

To create the instruction xj := xi, use the block of code below:

xj := 0;
LOOP xi DO
  xj := xj + 1
END

Again, all of this is for convenience only; none of this increases the model's intrinsic power.

Example Programs

Addition

Addition is recursively defined as:

a + 0 = a,
a + S(b) = S(a + b).

Here, S should be read as "successor".

In the hyperoperation sequence it is the function hyper_1.

It can be implemented by the LOOP program ADD(x1, x2):

LOOP x1 DO
  x0 := x0 + 1
END;
LOOP x2 DO
  x0 := x0 + 1
END
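The ADD program can be mirrored in Python, with each for-loop standing in for one LOOP block (the function name loop_add is our illustrative choice, not from the source):

```python
# Sketch of ADD(x1, x2): x0 accumulates x1 increments, then x2 increments.
def loop_add(x1, x2):
    x0 = 0
    for _ in range(x1):   # LOOP x1 DO x0 := x0 + 1 END
        x0 += 1
    for _ in range(x2):   # LOOP x2 DO x0 := x0 + 1 END
        x0 += 1
    return x0
```

Note that only increment and a fixed-count loop are used, exactly the primitives the base syntax provides.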

Multiplication

Multiplication is the hyperoperation function hyper_2.

It can be implemented by the LOOP program MULT(x1, x2):

x0 := 0;
LOOP x2 DO
  x0 := ADD(x1, x0)
END

The program uses the ADD() program as a "convenience instruction". Expanded, the MULT program is a LOOP-program with two nested LOOP instructions; the expansion of ADD accounts for the inner one.
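The same construction in Python, reusing the addition sketch as the "convenience instruction" (names loop_add and loop_mult are illustrative):

```python
# Sketch of MULT(x1, x2): repeated addition, x2 times.
def loop_add(x1, x2):
    x0 = 0
    for _ in range(x1):           # LOOP x1 DO x0 := x0 + 1 END
        x0 += 1
    for _ in range(x2):           # LOOP x2 DO x0 := x0 + 1 END
        x0 += 1
    return x0

def loop_mult(x1, x2):
    x0 = 0
    for _ in range(x2):           # LOOP x2 DO
        x0 = loop_add(x1, x0)     #   x0 := ADD(x1, x0)
    return x0                     # END
```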

More hyperoperators

Given a LOOP program for a hyperoperation function hyper_n, one can construct a LOOP program for the next level hyper_(n+1).

For instance, hyper_3 (which stands for exponentiation) can be implemented by the LOOP program POWER(x1, x2):

x0 := 1;
LOOP x2 DO
  x0 := MULT(x1, x0)
END

The exponentiation program, expanded, has three nested LOOP instructions.
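The whole chain up to POWER can be sketched in Python; each level wraps the one below in one more loop, matching the three nested LOOP instructions of the expansion (all function names here are illustrative):

```python
# Sketch of the hyperoperation ladder: ADD -> MULT -> POWER.
def loop_add(x1, x2):
    x0 = 0
    for _ in range(x1):           # LOOP x1 DO x0 := x0 + 1 END
        x0 += 1
    for _ in range(x2):           # LOOP x2 DO x0 := x0 + 1 END
        x0 += 1
    return x0

def loop_mult(x1, x2):
    x0 = 0
    for _ in range(x2):           # x0 := ADD(x1, x0), x2 times
        x0 = loop_add(x1, x0)
    return x0

def loop_power(x1, x2):
    x0 = 1                        # x0 := 1
    for _ in range(x2):           # x0 := MULT(x1, x0), x2 times
        x0 = loop_mult(x1, x0)
    return x0
```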

Predecessor

The predecessor function is defined as

pred(0) = 0,
pred(n + 1) = n.

This function can be computed by the following LOOP program, which sets the variable x0 to pred(x1).

/* precondition: x2 = 0 */
LOOP x1 DO
  x0 := x2;
  x2 := x2 + 1
END

Expanded, this is the program

/* precondition: x2 = 0 */
LOOP x1 DO
  x0 := 0;
  LOOP x2 DO
    x0 := x0 + 1
  END;
  x2 := x2 + 1
END

This program can be used as a subroutine in other LOOP programs. The LOOP syntax can be extended with the following statement, equivalent to calling the above as a subroutine:

x0 := x1 ∸ 1

Remark: Again one has to mind the side effects. The predecessor program changes the variable x2, which might be in use elsewhere. To expand the statement x0 := x1 ∸ 1, one could initialize the variables xn, xn+1 and xn+2 (for a big enough n) to 0, x1 and 0 respectively, run the code on these variables and copy the result (xn) to x0. A compiler can do this.
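The trick in the predecessor program, letting x0 lag one step behind a counter x2, reads naturally in Python (the name loop_pred is our illustrative choice):

```python
# Sketch of the predecessor program: x0 trails the counter x2 by one step.
def loop_pred(x1):
    x0, x2 = 0, 0        # precondition: x2 = 0
    for _ in range(x1):  # LOOP x1 DO
        x0 = x2          #   x0 := x2
        x2 += 1          #   x2 := x2 + 1
    return x0            # END
```

After the loop, x2 equals x1 while x0 holds the value x2 had on the last pass, i.e. x1 - 1 (or 0 if the loop never ran).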

Cut-off subtraction

If, in the 'addition' program above, the second loop decrements x0 instead of incrementing it, the program computes the difference (cut off at 0) of the variables x1 and x2.

x0 := x1;
LOOP x2 DO
  x0 := x0 ∸ 1
END

Like before, we can extend the LOOP syntax with the statement:

x0 := x1 ∸ x2
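A Python sketch of cut-off subtraction (monus); here the inner x0 := x0 ∸ 1 is modeled with max(..., 0) as shorthand for the predecessor subroutine, and the name monus is our illustrative choice:

```python
# Sketch of cut-off subtraction: x1 ∸ x2 = max(x1 - x2, 0).
def monus(x1, x2):
    x0 = x1                  # x0 := x1
    for _ in range(x2):      # LOOP x2 DO
        x0 = max(x0 - 1, 0)  #   x0 := x0 ∸ 1 (predecessor, cut off at zero)
    return x0                # END
```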

If then else

An if-then-else statement with if x1 > x2 then P1 else P2:

xn1 := x1 ∸ x2;
xn2 := 0;
xn3 := 1;
LOOP xn1 DO
  xn2 := 1;
  xn3 := 0
END;
LOOP xn2 DO
  P1
END;
LOOP xn3 DO
  P2
END;
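The branching idiom above can be sketched in Python; the variable names xn1, xn2, xn3 follow the LOOP code, while if_gt itself is our illustrative wrapper:

```python
# Sketch of "if x1 > x2 then P1 else P2" built from LOOP primitives.
def if_gt(x1, x2, p1, p2):
    xn1 = max(x1 - x2, 0)     # xn1 := x1 ∸ x2 (positive iff x1 > x2)
    xn2, xn3 = 0, 1
    for _ in range(xn1):      # any pass at all flips the two flags
        xn2, xn3 = 1, 0
    out = []
    for _ in range(xn2):      # runs P1 exactly once iff x1 > x2
        out.append(p1())
    for _ in range(xn3):      # runs P2 exactly once otherwise
        out.append(p2())
    return out[0]
```

Exactly one of the last two loops has a trip count of 1 and the other 0, so exactly one branch executes.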

Notes and references

  1. Enderton 2012.
  2. Meyer & Ritchie 1967.
  3. "Discovering Dennis Ritchie's Lost Dissertation". CHM. 2020-06-19. Retrieved 2020-07-14.
  4. "Program structure and computational complexity" (draft, 1967). Computer History Museum, catalog no. 102790971. Retrieved 2020-07-14.
  5. Schöning 2008, p. 105.
  6. Schöning 2008, p. 93.
  7. Schöning 2001, p. 122.
  8. Schöning 2008, p. 112.

Bibliography

Enderton, Herbert B. (2012). Computability Theory. Academic Press.

Meyer, Albert R.; Ritchie, Dennis M. (1967). "The complexity of loop programs". Proceedings of the ACM 22nd National Conference. pp. 465–469.

Schöning, Uwe (2001). Theoretische Informatik – kurz gefasst (4th ed.). Spektrum Akademischer Verlag.

Schöning, Uwe (2008). Theoretische Informatik – kurz gefasst (5th ed.). Spektrum Akademischer Verlag.
