POP-2

Paradigm: Multi-paradigm: structured, reflective, procedural
Family: Lisp: POP
Designed by: Robin Popplestone; Rod Burstall, Steve Hardy; Robert Rae, Allan Ramsay
Developers: University of Edinburgh, University of Sussex
First appeared: 1970
Stable release: 1975
Typing discipline: dynamic
Implementation language: assembly
Platform: Elliott 4130, ICT 1909, BESM-6, PDP-10, PDP-11
OS: George, TOPS-10, Unix
License: Proprietary
Major implementations: WPOP
Dialects: POP-10
Influenced by: Lisp, ALGOL 60, COWSEL (renamed POP-1)
Influenced: POP-11

POP-2 (also called POP2) is a programming language developed around 1970 by Robin Popplestone and Rod Burstall at the University of Edinburgh, from the earlier language POP-1 (developed by Popplestone in 1968, originally named COWSEL). It drew on several sources: the languages Lisp and ALGOL 60, and theoretical ideas from Peter J. Landin. It used an incremental compiler, which gave it some of the flexibility of an interpreted language, including the ability to define new functions and modify existing function definitions while a program runs (both features of dynamic compilation), without the overhead of an interpreted language. [1]

Description

Stack

POP-2's syntax is ALGOL-like, except that assignments are in reverse order: instead of writing

a := 3;

one writes

3 -> a;

The reason for this is that the language has an explicit notion of an operand stack. Thus, the prior assignment can be written as two separate statements:

3;

which evaluates the value 3 and leaves it on the stack, and

-> a;

which pops the top value off the stack and assigns it to the variable 'a'. Similarly, the function call

f(x, y, z);

can be written as

x, y, z; f();

(commas and semicolons being largely interchangeable) or even

x, y, z.f;

or

(x, y, z).f;

Because of the stack-based paradigm, there is no need to distinguish between statements and expressions; thus, the two constructs

if a > b then
    c -> e
else
    d -> e
close;

and

if a > b then
    c
else
    d
close -> e;

are equivalent (close is used because endif had not yet become a common notation for ending an if clause).
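
Because the operand stack is explicit, it can also be manipulated directly. For example, two variables can be exchanged without a temporary variable, using only the push and pop constructs shown above. The following is a minimal sketch; the ;;; end-of-line comments are an annotation in the style of later Pop dialects, added here for explanation:

x, y;    ;;; push the values of x and y onto the stack
-> x;    ;;; pop the old value of y into x
-> y;    ;;; pop the old value of x into y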

Arrays and doublet functions

There are no special language constructs to create arrays or record structures as they are commonly understood: instead, these are created with the aid of special built-in functions, e.g., newarray [2] (for arrays that can contain any type of item) and newanyarray [3] (for arrays restricted to certain types of items).
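
For example, an array might be created as follows. This is a minimal sketch in the style documented in the cited help file [2], assuming that newarray takes a bounds list and an initial value; the exact POP-2 surface syntax may have differed:

vars a;
newarray([1 10], 0) -> a;    ;;; one-dimensional array indexed 1 to 10, every element initially 0
a(4) =>                      ;;; prints 0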

Thus, array element and record field accessors are simply special cases of a doublet function: a function that has another function attached as its updater, [4] which is called when the function application appears on the receiving side of an assignment. Thus, if the variable a contains an array, then

3 -> a(4);

is equivalent to

updater(a)(3, 4);

where the built-in function updater returns the updater of the doublet. updater is itself a doublet, so it can be used to change the updater component of a doublet.
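
An ordinary function can likewise be turned into a doublet by assigning an updater to it. The following minimal sketch assumes that zero-argument lambda expressions are permitted (as in later Pop dialects) and uses the same comment convention as above:

vars count getcount;
0 -> count;
lambda; count end -> getcount;                    ;;; reader: getcount() returns the stored value
lambda n; n -> count end -> updater(getcount);    ;;; attach an updater, making getcount a doublet
7 -> getcount();                                  ;;; invokes the updater, i.e. updater(getcount)(7)
getcount() =>                                     ;;; prints 7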

Functions

Variables can hold values of any type, including functions, which are first-class objects. Thus, the following constructs

function max x y; if x > y then x else y close end;

and

vars max;
lambda x y; if x > y then x else y close end -> max;

are equivalent.
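
Since functions are ordinary values, they can also be passed as arguments to other functions. The following sketch, using only constructs already shown (and the max function defined above), applies a function supplied as a parameter:

function apply2 f x y; f(x, y) end;
apply2(max, 3, 7) =>    ;;; prints 7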

An interesting operation on functions is partial application (sometimes termed currying). In partial application, some number of the rightmost arguments of the function (the last ones placed on the stack before the function is invoked) are frozen to given values, to produce a new function of fewer arguments, which is a closure of the original function. For instance, consider a function for computing general second-degree polynomials:

function poly2 x a b c; a * x * x + b * x + c end;

This can be bound, for instance as

vars less1squared;
poly2(% 1, -2, 1 %) -> less1squared;

such that the expression

less1squared(3)

applies the closure of poly2 with three arguments frozen, to the argument 3, returning the square of (3 - 1), which is 4. The application of the partially applied function causes the frozen values (in this case 1, -2, 1) to be added to whatever is already on the stack (in this case 3), after which the original function poly2 is invoked. It then uses the top four items on the stack, producing the same result as

poly2(3, 1, -2, 1)

i.e.

1*3*3 + (-2)*3 + 1
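
Because the frozen values are always the rightmost arguments, partially applying a non-commutative function fixes its trailing parameters. In the following sketch, the second argument of a subtraction is frozen:

function subtract x y; x - y end;
vars minus5;
subtract(% 5 %) -> minus5;    ;;; freeze the rightmost argument, y = 5
minus5(12) =>                 ;;; prints 7, the same as subtract(12, 5)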

Operator definition

In POP-2, it was possible to define new operations (operators in modern terms). [5]

vars operation 3 +*;
lambda x y; x * x + y * y end -> nonop +*;

The first line declares a new operation +* with precedence (priority) 3. The second line creates a function f(x,y)=x*x+y*y, and assigns it to the newly declared operation +*.
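
Once declared, the new operation can then be written infix like a built-in operator, for example (a sketch):

3 +* 4 =>    ;;; prints 25, i.e. 3*3 + 4*4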

History

The original version of POP-2 was implemented on an Elliott 4130 computer at the University of Edinburgh (with only 64 KB of RAM, doubled to 128 KB in 1972). [6]

POP-2 was ported to the ICT 1900 series on a 1909 at Lancaster University by John Scott in 1968.

In the mid-1970s, POP-2 was ported to BESM-6 (POPLAN System).

Later versions were implemented for the Computer Technology Limited (CTL) Modular One, the PDP-10, and the ICL 1900 series (running the George operating system). Julian Davies, in Edinburgh, implemented an extended version of POP-2, which he named POP-10, on the PDP-10 computer running TOPS-10. This was the first dialect of POP-2 to treat case as significant in identifier names, to use lower case for most system identifiers, and to support long identifiers of more than 8 characters.

Shortly after that, a new implementation known as WPOP (for WonderPop) was implemented by Robert Rae and Allan Ramsay in Edinburgh, on a research-council funded project. That version introduced caged address spaces, some compile-time syntactic typing (e.g., for integers and reals), and some pattern matching constructs for use with a variety of data structures.

In parallel with that, Steve Hardy at the University of Sussex implemented a subset of POP-2, which he named POP-11, running on a Digital Equipment Corporation (DEC) PDP-11/40 computer. It was originally designed to run under the DEC operating system RSX-11D, in time-shared mode for teaching, but that caused so many problems that an early version of Unix was installed and used instead. That version of Pop-11 was written in Unix assembly language, and code was incrementally compiled to an intermediate bytecode, which was interpreted. The port was completed around 1976, and as a result Pop-11 was used in several places for teaching. To support its use in teaching, many of the syntactic features of POP-2 were modified, e.g., replacing function ... end with define ... enddefine and adding a wider variety of looping constructs with closing brackets matching their opening brackets, instead of the use of close for all loops as in POP-2. Pop-11 also introduced a pattern matcher for list structures, making it far easier to teach artificial intelligence (AI) programming.

Around 1980, Pop-11 was ported to a VAX-11/780 computer by Steve Hardy and John Gibson, and soon after that the interpreted implementation was replaced by a full incremental compiler (producing machine code instead of interpreted intermediate code). The existence of the compiler and all its subroutines at run time made it possible to support far richer language extensions than are possible with macros, and as a result Pop-11 was used (by Steve Hardy, Chris Mellish, and John Gibson) to produce an implementation of Prolog, using the standard syntax of Prolog. The combined system became known as Poplog, to which Common Lisp and Standard ML were added later. This version was later ported to a variety of machines and operating systems, and as a result Pop-11 became the dominant dialect of POP-2, still available in the Poplog system.

Around 1986, a new AI company, Cognitive Applications Ltd., collaborated with members of the University of Sussex to produce a variant of Pop-11 named AlphaPop, running on Apple Mac computers with integrated graphics. It was used for many commercial projects, and to teach AI programming in several universities. The fact that it was implemented in an early dialect of C, using an idiosyncratic compiler, made it very hard to maintain and upgrade to new versions of the Mac operating system. Also, AlphaPop was not "32-bit clean": high address bits were used as tag bits to signify the types of objects, which was incompatible with the use of memory above 8 MB on later Macintoshes.

See also


References

General
Inline
  1. Burstall, R.M.; Collins, J.S.; Popplestone, R.J. (1968). POP-2 Papers (PDF). London: The Round Table.
  2. Rubinstein, Mark; Sloman, A. (October 1985 – April 1989). "Help Newarray". University of Birmingham. Retrieved 22 March 2024.
  3. Hardy, Steven; Williams, John; Sloman, A. (January 1978 – April 1986). "Help Newanyarray". University of Birmingham. Retrieved 22 March 2024.
  4. Sloman, A. (April 1985). "Help Updater". University of Birmingham. Retrieved 21 March 2024.
  5. POP-2 Reference Manual, page 217, and An Introduction to the Study of Programming Languages, by David William Barron, page 75
  6. Dunn, Raymond D. (February 1970). "POP-2/4100 Users' Manual" (PDF). School of Artificial Intelligence. University of Edinburgh. Retrieved 3 June 2022.