Paradigms | Parallel, functional, purely functional, declarative |
---|---|
Designed by | Dr. Daniel Cooke, Dr. Nelson Rushton, Dr. Brad Nemanich |
Developers | Texas Tech University, Texas Multicore Technologies |
First appeared | 1989 |
Typing discipline | Static, type inference |
Platform | x86, Power, ARM |
OS | Windows, macOS, Linux |
License | Proprietary [1] |
Website | texasmulticore |
SequenceL is a general-purpose functional programming language and auto-parallelizing compiler and tool set, whose primary design objectives are performance on multi-core processor hardware, ease of programming, platform portability/optimization, and code clarity and readability. Its main advantage is that it can be used to write straightforward code that automatically takes full advantage of all the processing power available, without programmers needing to be concerned with identifying parallelism, specifying vectorization, avoiding race conditions, and the other challenges of manual directive-based programming approaches such as OpenMP.
Programs written in SequenceL can be compiled to multithreaded code that runs in parallel, with no explicit indications from a programmer of how or what to parallelize. As of 2015, versions of the SequenceL compiler generate parallel code in C++ and OpenCL, which allows it to work with most popular programming languages, including C, C++, C#, Fortran, Java, and Python. A platform-specific runtime manages the threads safely, automatically providing parallel performance according to the number of cores available, currently supporting x86, POWER8, and ARM platforms.
SequenceL was initially developed over a 20-year period starting in 1989, mostly at Texas Tech University. Primary funding was from NASA, which originally wanted to develop a specification language which was "self-verifying"; that is, once written, the requirements could be executed, and the results verified against the desired outcome.
The principal researcher on the project was initially Dr. Daniel Cooke, [2] who was soon joined by Dr. Nelson Rushton (another Texas Tech professor) and later Dr. Brad Nemanich (then a PhD student under Cooke). The goal of creating a language that was simple enough to be readable, but unambiguous enough to be executable, drove the inventors to settle on a functional, declarative language approach, where a programmer describes desired results, rather than the means to achieve them. The language is then free to solve the problem in the most efficient manner that it can find.
As the language evolved, the researchers developed new computational approaches, including consume-simplify-produce (CSP). [3] In 1998, research began to apply SequenceL to parallel computing. This culminated in 2004, when it took its more complete form with the addition of the normalize-transpose (NT) semantic, [4] [5] which coincided with the major vendors of central processing units (CPUs) making a major shift to multi-core processors rather than continuing to increase clock speeds. NT is the semantic workhorse, used to simplify and decompose structures based on a dataflow-like execution strategy similar to GAMMA [6] and NESL. [7] The NT semantic achieves a goal similar to that of Lämmel and Peyton Jones' boilerplate elimination. [8] [9] All other features of the language are definable from these two laws, including recursion, subscripting structures, function references, and evaluation of function bodies. [10] [11]
Though it was not the original intent, these new approaches allowed the language to parallelize a large fraction of the operations it performed, transparently to the programmer. In 2006, a prototype auto-parallelizing compiler was developed at Texas Tech University. In 2009, Texas Tech licensed the intellectual property to Texas Multicore Technologies (TMT), [12] for follow-on commercial development. In January 2017 TMT released v3, which includes a free Community Edition for download in addition to the commercial Professional Edition.
SequenceL is designed to be as simple as possible to learn and use, focusing on the algorithmic code where it adds value; for example, the inventors chose not to reinvent I/O, since C already handles it well. As a result, the full language reference for SequenceL is only 40 pages, with copious examples, and its formal grammar has around 15 production rules. [13]
SequenceL is strictly evaluated (like Lisp), statically typed with type inference (like Haskell), and uses a combination of infix and prefix operators that resemble standard, informal mathematical notation (like C, Pascal, Python, etc.). It is a purely declarative language, meaning that a programmer defines functions, in the mathematical sense, without giving instructions for their implementation. For example, the mathematical definition of matrix multiplication is as follows: the (i, j)'th entry of the product A·B is the sum over k of A[i,k]·B[k,j], where k ranges from 1 to the number of rows of B (equivalently, the number of columns of A).
The SequenceL definition mirrors that definition more or less exactly:
matmul(A(2), B(2))[i,j] :=
    let k := 1...size(B);
    in sum( A[i,k] * B[k,j] );
The subscripts following each parameter A and B on the left-hand side of the definition indicate that A and B are depth-2 structures (i.e., lists of lists of scalars), which are here thought of as matrices. From this formal definition, SequenceL infers the dimensions of the defined product from the formula for its (i, j)'th entry (as the set of pairs (i, j) for which the right-hand side is defined) and computes each entry by the same formula as in the informal definition above. Notice there are no explicit instructions for iteration in this definition, or for the order in which operations are to be carried out. Because of this, the SequenceL compiler can perform the operations in any order (including in parallel) that satisfies the defining equation. In this example, computation of the coordinates of the product will be parallelized in a way that, for large matrices, scales linearly with the number of processors.
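To make the behavior concrete, a hypothetical interpreter session (using the cmd:> prompt style of the prime example below, and assuming the bracketed sequence-literal notation used in the outputs shown later) might look as follows; the numeric result simply follows from the definition above:

cmd:>matmul([[1,2],[3,4]], [[5,6],[7,8]])
[[19,22],[43,50]]

Each of the four entries of the product can be computed independently of the others, which is exactly what the compiler exploits when it parallelizes the definition.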
As noted above, SequenceL has no built-in constructs for input/output (I/O) since it was designed to work in an additive manner with other programming languages. The decision to compile to multithreaded C++ and support the 20+ Simplified Wrapper and Interface Generator (SWIG) languages (C, C++, C#, Java, Python, etc.) means it easily fits into extant design flows, training, and tools. It can be used to enhance extant applications, create multicore libraries, and even create standalone applications by linking the resulting code with other code which performs I/O tasks. SequenceL functions can also be queried from an interpreter with given inputs, like Python and other interpreted languages.
The main non-scalar construct of SequenceL is the sequence, which is essentially a list. Sequences may be nested to any level. To avoid the routine use of recursion common in many purely functional languages, SequenceL uses a technique termed normalize–transpose (NT), in which scalar operations are automatically distributed over the elements of a sequence. [14] For example, in SequenceL we have
[1,2,3] + 10 = [11,12,13]
This results not from overloading the '+' operator, but from the effect of NT, which extends to all operations, both built-in and user-defined. As another example, if f() is a 3-argument function whose arguments are scalars, then for any appropriate x and z we will have
f(x, [1,2,3], z) = [f(x,1,z), f(x,2,z), f(x,3,z)]
The NT construct can be used for multiple arguments at once, as in, for example
f([1,2,3], [4,5,6], z) = [f(1,4,z), f(2,5,z), f(3,6,z)]
It also works when the expected argument is a non-scalar of any type T, and the actual argument is a list of objects of type T (or, in greater generality, any data structure whose coordinates are of type T). For example, if A is a matrix and Xs is a list of matrices [X1, ..., Xn], and given the above definition of matrix multiply, in SequenceL we would have
matmul(A,Xs) = [matmul(A,X1),...,matmul(A,Xn)]
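The same mechanism applies to functions a programmer defines. As a sketch using only the NT behavior described above (square is a hypothetical user-defined function, not a built-in), a one-line scalar definition distributes over a sequence with no explicit loop:

square(x) := x * x;

cmd:>square(1...5)
[1,4,9,16,25]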
As a rule, NTs eliminate the need for iteration, recursion, or higher-order functional operators to apply an operation defined for arguments of some type across a data structure whose components are of that type: distributing a scalar operation over every element of a sequence, or applying a function that expects a non-scalar argument to a list of such arguments. This tends to account for most uses of iteration and recursion.
A good example that demonstrates the above concepts is finding prime numbers. A prime number is defined as an integer greater than 1 whose only divisors are 1 and itself. So a positive integer z is prime if no number from 2 through z-1, inclusive, divides it evenly. SequenceL allows this problem to be programmed by literally transcribing the above definition into the language.
In SequenceL, a sequence of the numbers from 2 through z-1, inclusive, is just (2...(z-1)), so a program to find all of the primes between 100 and 200 can be written:
prime(z) := z when none(z mod (2...(z-1)) = 0);
Which, in English, just says: the value of prime(z) is z whenever none of the values z mod 2, z mod 3, ..., z mod (z-1) is 0, that is, whenever no number from 2 through z-1 divides z evenly. If that condition isn't met, the function returns nothing (empty). As a result, running this program yields
cmd:>prime(17)
17
cmd:>prime(18)
empty
The string "between 100 and 200" doesn't appear in the program. Rather, a programmer will typically pass that part in as the argument. Since the program expects a scalar as an argument, passing it a sequence of numbers instead will cause SequenceL to perform the operation on each member of the sequence automatically. Since the function returns empty for failing values, the result will be the input sequence, but filtered to return only those numbers that satisfy the criteria for primes:
cmd:>prime(100...200)
[101,103,107,109,113,127,131,137,139,149,151,157,163,167,173,179,181,191,193,197,199]
In addition to solving this problem with a very short and readable program, SequenceL performs the evaluation over the elements of the input sequence in parallel.
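The same when-plus-NT pattern generalizes to other filters. As a sketch using only constructs already shown above (divisibleByThree is a hypothetical function name), a filter keeping multiples of three could be written and queried in the same way:

divisibleByThree(z) := z when z mod 3 = 0;

cmd:>divisibleByThree(1...10)
[3,6,9]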
The following software components are available and supported by TMT for use in writing SequenceL code. All components are available on x86 platforms running Windows, macOS, and most varieties of Linux (including CentOS, RedHat, openSUSE, and Ubuntu), and on ARM and IBM Power platforms running most varieties of Linux.
A command-line interpreter allows writing code directly into a command shell, or loading code from prewritten text files. This code can be executed, and the results evaluated, to assist in checking code correctness, or finding a quick answer. It is also available via the popular Eclipse integrated development environment (IDE). Code executed in the interpreter does not run in parallel; it executes in one thread.
A command-line compiler reads SequenceL code and generates highly parallelized, vectorized C++ code (and optionally OpenCL), which must be linked with the SequenceL runtime library to execute.
The runtime environment is a pre-compiled set of libraries that works with the compiled, parallelized C++ code to execute optimally on the target platform. It builds on Intel Threading Building Blocks (TBB) [15] and handles concerns such as cache optimization, memory management, work-stealing queues, and performance monitoring.
An Eclipse integrated development environment plug-in provides standard editing abilities (function rollup, chromacoding, etc.) and a SequenceL debugging environment. This plug-in runs against the SequenceL interpreter, so it cannot be used to debug the multithreaded code; however, because parallelization is automatic, debugging parallel SequenceL code reduces to verifying the correctness of the sequential SequenceL code. That is, if it runs correctly sequentially, it should run correctly in parallel, so debugging in the interpreter is sufficient.
Various math and other standard function libraries are included as SequenceL source code to streamline the programming process and serve as best practice examples. These may be imported, in much the same way that C or C++ libraries are #included.