DAP FORTRAN


DAP FORTRAN was an extension of the non-I/O parts of FORTRAN with constructs that supported parallel computing for the ICL Distributed Array Processor (DAP). The DAP had a Single Instruction Multiple Data (SIMD) architecture with a 64x64 array of single-bit processors.

Among DAP FORTRAN's major features was direct support for whole vectors and matrices as objects of single statements. In a declaration, either one or two extents could be omitted, as in:

C     Multiply vector by matrix
      REAL M(,),V(),R()
      R=SUM(M*MATR(A))
C     Converge to a Laplace potential in an area
      REAL P(,),OLD_P(,)
      LOGICAL INSIDE(,)
      DO 1 K=1,ITERATIONS
      OLD_P=P
      P(INSIDE)=0.25*(P(,+)+P(,-)+P(+,)+P(-,))
      IF(MAX(ABS(P-OLD_P)).LT.EPS)RETURN
    1 CONTINUE

The omitted dimension was taken as 64, the size of one side of the DAP. The speed of arithmetic operations depended strongly on the number of bits in the value. INTEGER*n reserved 8n bits where n is 1 to 8, and REAL*n reserved 8n bits where n is 3 to 8. LOGICAL reserved a single bit.
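To make the parallel constructs above concrete, the Laplace fragment can be read as the following serial FORTRAN 77 sketch. This is an illustration added here rather than anything from the DAP documentation: it assumes that P(,+), P(,-), P(+,) and P(-,) denote the four nearest-neighbour shifts and that INSIDE marks the interior grid points, and it invents the program name, the initial values and a fixed count of 100 sweeps.

      PROGRAM LAPDEM
C     Serial FORTRAN 77 sketch of the masked Laplace sweep above
C     (an illustration only, not DAP FORTRAN itself)
      REAL P(64,64), OLDP(64,64)
      LOGICAL INSIDE(64,64)
      INTEGER I, J, K
C     Boundary points are held at 1.0; interior points start at 0.0
      DO 10 J = 1, 64
      DO 10 I = 1, 64
      INSIDE(I,J) = I.GT.1 .AND. I.LT.64 .AND. J.GT.1 .AND. J.LT.64
      P(I,J) = 1.0
      IF (INSIDE(I,J)) P(I,J) = 0.0
   10 CONTINUE
C     Each sweep replaces every interior point by the mean of
C     its four neighbours, the serial reading of the DAP statement
C     P(INSIDE)=0.25*(P(,+)+P(,-)+P(+,)+P(-,))
      DO 40 K = 1, 100
      DO 20 J = 1, 64
      DO 20 I = 1, 64
      OLDP(I,J) = P(I,J)
   20 CONTINUE
      DO 30 J = 2, 63
      DO 30 I = 2, 63
      IF (INSIDE(I,J)) P(I,J) = 0.25*(OLDP(I,J+1)+OLDP(I,J-1)
     +                               +OLDP(I+1,J)+OLDP(I-1,J))
   30 CONTINUE
   40 CONTINUE
      WRITE (*,*) 'Centre value after 100 sweeps: ', P(32,32)
      END

On the DAP, all 64x64 elements of the masked assignment were evaluated under a single instruction stream, so the two nested loops of the serial sketch collapse into the one statement P(INSIDE)=0.25*(P(,+)+P(,-)+P(+,)+P(-,)).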

However, DAP FORTRAN fell between two conflicting objectives. It needed to exploit the DAP facilities effectively, but it also had to be accessible to the scientific computing community, whose primary language, FORTRAN, had a design closely tied to serial architectures. The dialect used was ICL's 2900-series FORTRAN, which was based on an early version of the FORTRAN 77 standard and had mismatches with both FORTRAN 77 and the older FORTRAN 66 standard.

DAP FORTRAN was significantly different from either FORTRAN standard, and the machine was not capable of accepting or optimising standard FORTRAN programs. On the other hand, compared with other contemporary languages which were extensible by design (notably ALGOL 68), FORTRAN was less than well suited to this kind of extension. The result was noticeably inelegant and required a great deal of new learning. Operationally, there was an overhead in transferring data into and out of the array, and problems which did not fit the 64x64 matrix imposed additional complexity at the boundaries: 65x65 was perhaps the worst case, since it needed four 64x64 sheets while leaving almost three quarters of the allocated elements unused. For problems which suited the architecture, however, it could outperform the contemporary Cray pipelined architectures by two orders of magnitude.

A later version of the DAP used Fortran-Plus instead, which was based on FORTRAN 77 and had more flexible indexing. In particular, it automatically mapped user-sized arrays onto the underlying hardware.

Related Research Articles

In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called one-dimensional array.

The bit is a basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, +/−, or on/off are common.

Fortran

Fortran is a general-purpose, compiled imperative programming language that is especially suited to numeric computation and scientific computing.

SIMD

Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but only a single process (instruction) at a given moment. SIMD is particularly applicable to common tasks such as adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions to improve the performance of multimedia applications. SIMD is not to be confused with SIMT, which utilizes threads.

Parallel computing

Parallel computing is a type of computation where many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Message Passing Interface (MPI) is a portable message-passing standard designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open-source or in the public domain. These fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.

Sparse matrix

In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition of how many elements need to be zero for a matrix to be considered sparse, but a common criterion is that the number of non-zero elements is roughly the number of rows or columns. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is sometimes referred to as the sparsity of the matrix.

In computer programming, array slicing is an operation that extracts a subset of elements from an array and packages them as another array, possibly in a different dimension from the original.
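As a small illustration (added here, not part of the article above), slicing in free-form Fortran 90 extracts whole rows, columns or sub-blocks in single expressions; the program name and the values used are invented:

program slice_demo
  ! Minimal sketch of array slicing (illustrative only)
  implicit none
  real :: m(4,4), row(4), corner(2,2)
  m = 0.0
  m(2,:) = (/ 1.0, 2.0, 3.0, 4.0 /)   ! assign a whole row at once
  row    = m(2,:)                     ! a rank-2 section becomes a rank-1 array
  corner = m(1:2, 3:4)                ! a 2x2 sub-block of m
  print *, row
  print *, corner
end program slice_demo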

CDC Cyber

The CDC Cyber range of mainframe-class supercomputers were the primary products of Control Data Corporation (CDC) during the 1970s and 1980s. In their day, they were the computer architecture of choice for scientific and mathematically intensive computing. They were used for modeling fluid flow, material science stress analysis, electrochemical machining analysis, probabilistic analysis, energy and academic computing, radiation shielding modeling, and other applications. The lineup also included the Cyber 18 and Cyber 1000 minicomputers. Like their predecessor, the CDC 6600, they were unusual in using the ones' complement binary representation.

In computer science, array programming refers to solutions which allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings.

LAPACK

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision.

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.

The Distributed Array Processor (DAP) produced by International Computers Limited (ICL) was the world's first commercial massively parallel computer. The original paper study was completed in 1972 and building of the prototype began in 1974. The first machine was delivered to Queen Mary College in 1979.

ICL 2900 Series

The ICL 2900 Series was a range of mainframe computer systems announced by the UK manufacturer ICL on 9 October 1974. The company had started development, under the name "New Range", immediately on its formation in 1968. The range was not designed to be compatible with any previous machines produced by the company, nor with any competitor's machines: rather, it was conceived as a synthetic option combining the best ideas available from a variety of sources.

The NAG Numerical Library is a software product developed and sold by The Numerical Algorithms Group. It is a software library of numerical analysis routines, containing more than 1,900 mathematical and statistical algorithms. Areas covered by the library include linear algebra, optimization, quadrature, the solution of ordinary and partial differential equations, regression analysis, and time series analysis.

This is an overview of Fortran 95 language features. Included are the additional features of TR-15581: Enhanced Data Type Facilities, which have been universally implemented. Old features that have been superseded by new ones are not described – few of those historic features are used in modern programs although most have been retained in the language to maintain backward compatibility. The current standard is Fortran 2018; many of its new features are still being implemented in compilers. The additional features of Fortran 2003, Fortran 2008 and Fortran 2018 are described by Metcalf, Reid and Cohen.
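As a hedged illustration of how these later array features relate to the DAP FORTRAN constructs shown earlier, the Laplace update can be written in standard Fortran 90/95 using WHERE and EOSHIFT. The grid size, iteration count and variable names below are assumptions of this sketch, not taken from the article:

program laplace_f95
  ! Illustrative only: the DAP statement P(INSIDE)=0.25*(P(,+)+P(,-)+P(+,)+P(-,))
  ! expressed with standard Fortran 90/95 array features.
  implicit none
  real    :: p(64,64), old_p(64,64)
  logical :: inside(64,64)
  integer :: k

  p = 1.0                      ! boundary value everywhere
  inside = .false.
  inside(2:63,2:63) = .true.   ! mark the interior points
  where (inside) p = 0.0       ! interior starts at zero

  do k = 1, 100
     old_p = p
     ! WHERE plays the role of the DAP logical-mask index P(INSIDE);
     ! EOSHIFT plays the role of the +/- shifted indexing.
     where (inside)
        p = 0.25 * (eoshift(old_p, 1, dim=1) + eoshift(old_p, -1, dim=1) &
                  + eoshift(old_p, 1, dim=2) + eoshift(old_p, -1, dim=2))
     end where
  end do

  print *, 'centre value after 100 sweeps:', p(32,32)
end program laplace_f95

EOSHIFT is used rather than CSHIFT so that values do not wrap around the edges of the grid; the boundary elements themselves are excluded by the mask.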

An array is a systematic arrangement of similar objects, usually in rows and columns.

In computer science, an array type is a data type that represents a collection of elements, each selected by one or more indices that can be computed at run time during program execution. Such a collection is usually called an array variable, array value, or simply array. By analogy with the mathematical concepts vector and matrix, array types with one and two indices are often called vector type and matrix type, respectively. More generally, a multidimensional array type can be called a tensor type.

The ICL DRS was a range of departmental computers from International Computers Limited (ICL). Standing originally for Distributed Resource System, the full name was later dropped in favour of the abbreviation.