Lucid (programming language)

Lucid
Paradigm: Dataflow
Designed by: Edward A. Ashcroft, William W. Wadge
First appeared: 1976
Typing discipline: Typeless
Major implementations: pLucid, GIPSY
Dialects: Granular Lucid, Indexical Lucid, Tensor Lucid, Forensic Lucid, Lucx, JOOIPL
Influenced by: ISWIM
Influenced: SISAL, Pure Data, Lustre

Lucid is a dataflow programming language designed to experiment with non-von Neumann programming models. It was designed by Bill Wadge and Ed Ashcroft and described in the 1985 book Lucid, the Dataflow Programming Language.[1]

pLucid was the first interpreter for Lucid.

Model

Lucid uses a demand-driven model for data computation. Each statement can be understood as an equation defining a network of processors and the communication lines between them, through which data flows. Each variable is an infinite stream of values, and every function is a filter or a transformer. Iteration is simulated by 'current' values and the 'fby' (read as 'followed by') operator, which allows streams to be composed.
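
As a minimal sketch of this model (using only constructs that appear in the examples below), the single equation here defines a tiny cyclic network: a node emits the constant 0 first and thereafter re-emits its own previous output incremented by 1:

n
  where
    n = 0 fby n + 1;
  end

Demanding successive values of n yields 0, 1, 2, 3, ...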

Lucid is based on an algebra of histories, a history being an infinite sequence of data items. Operationally, a history can be thought of as a record of the changing values of a variable; history operations such as first and next can be understood in ways suggested by their names. Lucid was originally conceived as a disciplined, mathematically pure, single-assignment language in which verification would be simplified. However, the dataflow interpretation has been an important influence on the direction in which Lucid has evolved.
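
For instance (a sketch in the same style as the examples below), if x is the history 0, 1, 2, ..., then first x is the constant history 0, 0, 0, ... and next x is the shifted history 1, 2, 3, ...:

s
  where
    x = 0 fby x + 1;
    s = first x + next x;
  end

Here s is the history 1, 2, 3, ...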

Details

In Lucid (and other dataflow languages) an expression that contains a variable that has not yet been bound waits until the variable has been bound before proceeding. An expression like x + y will wait until both x and y are bound before returning the result of the expression. An important consequence of this is that explicit logic for updating related values is avoided, which results in substantial code reduction compared to mainstream languages.
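
As an illustration (a hypothetical sketch built from the constructs of the examples below), each value of z is produced only once the corresponding values of both x and y are available:

z
  where
    x = 1 fby x + 1;
    y = 0 fby y + 2;
    z = x + y;
  end

Here x is 1, 2, 3, ..., y is 0, 2, 4, ..., and z is their pointwise sum 1, 4, 7, ...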

Each variable in Lucid is a stream of values. The expression n = 1 fby n + 1 defines a stream using the operator 'fby' (a mnemonic for "followed by"): fby specifies what comes after the previous expression, so in this instance the stream produces 1, 2, 3, .... The values in a stream can be addressed by the following operators (illustrated in the sketch after the list), assuming x is the variable being used:

'first x' - fetches the first value in the stream x,

'x' - the current value of the stream,

'next x' - fetches the next value in the stream,

'x asa p' - fetches the value of x 'as soon as' the condition p first becomes true; the result is that value, repeated forever as a constant stream,

'x whenever p' - filters the stream x, keeping only those values that occur when the corresponding value of the stream p is true (this operator appears in several of the examples below),

'x upon p' - repeats the current value of the stream x, advancing to its next value only when the stream p makes a true value available; it serves to slow down the stream x. That is, x upon p is the stream x with new values appearing upon the truth of p.
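
A short sketch exercising these operators (hypothetical, but built only from constructs that appear in the examples below):

evens
  where
    n = 1 fby n + 1;
    evens = n whenever (n mod 2 eq 0);
  end

Here evens is 2, 4, 6, ...; over the same n, the expression n asa n > 5 would denote the constant stream 6, 6, 6, ...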

The computation is carried out by defining filters or transformation functions that act on these time-varying streams of data.
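
For instance, a user-defined transformer (the names diff and d are hypothetical) that maps a stream to the stream of differences between consecutive values:

d
  where
    x = 1 fby 2 * x;
    diff(y) = next y - y;
    d = diff(x);
  end

Since x is the stream 1, 2, 4, 8, ..., the output d is 1, 2, 4, 8, ... as well.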

Examples

Factorial

fac
  where
    n = 0 fby (n + 1);
    fac = 1 fby (fac * (n + 1));
  end

This produces the stream of factorials 1, 1, 2, 6, 24, ...; the value of fac at time t is t!.

Fibonacci sequence

fib
  where
    fib = 0 fby (1 fby fib + next fib);
  end

This produces the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, ...

Total of a sequence

total
  where
    total = 0 fby total + x;
  end;

Here x is the input stream whose running total is computed.

Running average

running_avg
  where
    sum = first(input) fby sum + next(input);
    n = 1 fby n + 1;
    running_avg = sum / n;
  end;

Prime numbers

prime
  where
    prime = 2 fby (n whenever isprime(n));
    n = 3 fby n + 1;
    isprime(n) = not(divs) asa divs or prime * prime > N
      where
        N is current n;
        divs = N mod prime eq 0;
      end;
  end

Dataflow diagram

[Figure: Prime numbers sequence dataflow diagram]

Quick sort

qsort(a) = if eof(first a) then a else follow(qsort(b0), qsort(b1)) fi
  where
    p = first a < a;
    b0 = a whenever p;
    b1 = a whenever not p;
    follow(x, y) = if xdone then y upon xdone else x fi
      where
        xdone = iseod x fby xdone or iseod x;
      end
  end

Dataflow diagram

[Figure: Quick sort dataflow diagram. The input stream feeds first and less nodes that compute the pivot test; the test and its negation gate two whenever filters, each filtered stream passes through a recursive qsort node, the results are joined by conc, and ifthenelse selects the output using an iseod test on the first element.]

Root mean square

sqroot(avg(square(a)))
  where
    square(x) = x * x;
    avg(y) = mean
      where
        n = 1 fby n + 1;
        mean = first y fby mean + d;
        d = (next y - mean) / (n + 1);
      end;
    sqroot(z) = approx asa err < 0.0001
      where
        Z is current z;
        approx = Z/2 fby (approx + Z/approx) / 2;
        err = abs(square(approx) - Z);
      end;
  end

Hamming problem

h
  where
    h = 1 fby merge(merge(2 * h, 3 * h), 5 * h);
    merge(x, y) = if xx <= yy then xx else yy fi
      where
        xx = x upon xx <= yy;
        yy = y upon yy <= xx;
      end;
  end;

This produces the stream of Hamming numbers (numbers of the form 2^i 3^j 5^k): 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, ...

Dataflow diagram

[Figure: Hamming problem dataflow diagram]

References

  1. Wadge, William W.; Ashcroft, Edward A. (1985). Lucid, the Dataflow Programming Language. Academic Press. ISBN 0-12-729650-6.