| Paradigm | multi-paradigm: imperative, parallel |
| --- | --- |
| Designed by | Thinking Machines |
| Developer | Thinking Machines |
| First appeared | 1987 |
| Stable release | 6.x / August 27, 1993 |
| Typing discipline | static, weak, manifest |
| OS | Connection Machine |
| Filename extensions | .cs |
| Influenced by | ANSI C, *Lisp |
| Influenced | Dataparallel-C |
C* (or C-star) is a data-parallel superset of ANSI C with synchronous semantics.
It was developed in 1987 as an alternative to *Lisp and CM-Fortran for the Connection Machine CM-2 and above. C* extends C with a "domain" data type and a selection statement that executes code in parallel across the elements of a domain.
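The following sketch illustrates the flavor of these two additions. It is a hedged reconstruction of early (1987) C* style, not code verified against a C* compiler, and the type and member names are invented for the example.

```c
/* Hedged sketch of early C* (1987) style; names are invented and the
 * syntax is reconstructed from descriptions, not compiler-verified. */
domain cell { int value; };      /* per-processing-element data type */

domain cell grid[65536];         /* one instance per virtual processor */

void step(void)
{
    [domain cell].{              /* selection statement: activate all
                                    instances of the domain in parallel */
        value = value + 1;       /* runs synchronously on every cell */
    }
}
```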
For the CM-2 models, the C* compiler translated C* code into serial C with calls to PARIS (PARallel Instruction Set) functions, then passed the result to the front-end computer's native compiler. The resulting executable ran on the front-end computer, with the PARIS calls executing on the Connection Machine.
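Conceptually, the emitted serial C looked something like the fragment below. The `CM_*` names here are hypothetical stand-ins for real PARIS library entry points (which used a similar `CM_` naming style), and the header name is assumed.

```c
/* Hedged sketch of the CM-2 compilation model: serial C as emitted by
 * the C* compiler for the front end. CM_set_context and
 * CM_s_add_constant are hypothetical stand-ins for PARIS entry points. */
#include "paris.h"                      /* PARIS interface header (assumed name) */

#define VALUE_FIELD 0                   /* hypothetical per-element field id */

void step(void)
{
    /* Each call runs on the front end, which broadcasts the matching
     * instruction to every Connection Machine processing element. */
    CM_set_context();                   /* hypothetical: select all elements */
    CM_s_add_constant(VALUE_FIELD, 1);  /* hypothetical: value += 1 everywhere */
}
```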
On the CM-5 and CM-5E, parallel C* code executed in SIMD fashion on the processing elements, while serial code executed on the Partition Manager (PM) node, with the PM acting as the "front end" in the sense of the CM-2 design. The latest version of C*, released on 27 August 1993, is 6.x. An unimplemented language dubbed "Parallel C" (not to be confused with Unified Parallel C) influenced the design of C*, and Dataparallel-C was in turn based on C*.
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
In computing, a compiler is a computer program that translates computer code written in one programming language into another language. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language to create an executable program.
Lisp is a family of programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in the late 1950s, it is the second-oldest high-level programming language still in common use, after Fortran. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best-known general-purpose Lisp dialects are Common Lisp, Scheme, Racket, and Clojure.
A programming language is a system of notation for writing computer programs.
In computing, source code, or simply code or source, is a plain-text computer program written in a programming language. A programmer writes the human-readable source code to control the behavior of a computer.
A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence (AI) and symbolic processing, but later versions found greater success in the field of computational science.
In computer science, a compiler-compiler or compiler generator is a programming tool that creates a parser, interpreter, or compiler from some form of formal description of a programming language and machine.
Thinking Machines Corporation was a supercomputer manufacturer and artificial intelligence (AI) company, founded in Waltham, Massachusetts, in 1983 by Sheryl Handler and W. Daniel "Danny" Hillis to turn Hillis's doctoral work at the Massachusetts Institute of Technology (MIT) on massively parallel computing architectures into a commercial product named the Connection Machine. The company moved in 1984 from Waltham to Kendall Square in Cambridge, Massachusetts, close to the MIT AI Lab. Thinking Machines made some of the most powerful supercomputers of the time, and by 1993 the four fastest computers in the world were Connection Machines. The firm filed for bankruptcy in 1994; its hardware and parallel computing software divisions were eventually acquired by Sun Microsystems.
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
Guy Lewis Steele Jr. is an American computer scientist who has played an important role in designing and documenting several computer programming languages and technical standards.
OpenMP is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
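As a concrete sketch of what those compiler directives look like, the following C fragment parallelizes a loop with OpenMP's `parallel for` directive; the function name, array, and scale factor are arbitrary examples.

```c
/* Minimal OpenMP example: the pragma asks the compiler to divide the
 * loop iterations among a team of threads sharing the array.
 * Build with an OpenMP-enabled compiler, e.g. gcc -fopenmp. */
#include <omp.h>

void scale(double *a, int n, double factor)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        a[i] *= factor;          /* iterations execute concurrently */
    }
}
```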
Cilk, Cilk++, Cilk Plus and OpenCilk are general-purpose programming languages designed for multithreaded parallel computing. They are based on the C and C++ programming languages, which they extend with constructs to express parallel loops and the fork–join idiom.
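The fork–join idiom can be sketched as follows; `cilk_spawn` and `cilk_sync` are Cilk's actual keywords, while the Fibonacci function is simply the traditional illustration.

```c
/* Fork-join recursion in Cilk: cilk_spawn forks the call as a child
 * task; cilk_sync waits for all children spawned in this function. */
#include <cilk/cilk.h>

long fib(long n)
{
    if (n < 2)
        return n;
    long x = cilk_spawn fib(n - 1);  /* child may run in parallel */
    long y = fib(n - 2);             /* continues in the parent strand */
    cilk_sync;                       /* join before reading x */
    return x + y;
}
```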
In computer programming, dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture. Dataflow programming languages share some features of functional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the term datastream instead of dataflow to avoid confusion with dataflow computing or dataflow architecture, based on an indeterministic machine paradigm. Dataflow programming was pioneered by Jack Dennis and his graduate students at MIT in the 1960s.
Fortress is a discontinued experimental programming language for high-performance computing, created by Sun Microsystems with funding from DARPA's High Productivity Computing Systems project. One of the language designers was Guy L. Steele Jr., whose previous work includes Scheme, Common Lisp, and Java.
*Lisp is a programming language, a dialect of the language Lisp. It was conceived of in 1985 by two employees of the Thinking Machines Corporation, Cliff Lasser and Steve Omohundro, as a way to provide an efficient yet high-level language for programming the nascent Connection Machine (CM).
New Implementation of LISP (NIL) is a programming language, a dialect of the language Lisp, developed at the Massachusetts Institute of Technology (MIT) during the 1970s, and intended to be the successor to the language Maclisp. It is a 32-bit implementation, and was in part a response to Digital Equipment Corporation's (DEC) VAX computer. The project was headed by Jon L White, with a stated goal of maintaining compatibility with MacLisp while fixing many of its problems.
Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied to regular data structures like arrays and matrices by working on each element in parallel. It contrasts with task parallelism, another form of parallelism.
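To make the contrast concrete, here is a hedged sketch using OpenMP: the first loop distributes the data (array elements) across threads, while the `sections` construct distributes distinct tasks. `compute_stats` and `write_log` are hypothetical placeholders, not real library functions.

```c
/* Data parallelism vs. task parallelism, sketched with OpenMP.
 * compute_stats and write_log are hypothetical placeholder tasks. */
#include <omp.h>

void compute_stats(double *a, int n);  /* placeholder declaration */
void write_log(double *a, int n);      /* placeholder declaration */

void example(double *a, int n)
{
    /* Data parallelism: the same operation on different elements. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        a[i] = a[i] * a[i];

    /* Task parallelism: different operations running concurrently. */
    #pragma omp parallel sections
    {
        #pragma omp section
        compute_stats(a, n);
        #pragma omp section
        write_log(a, n);
    }
}
```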
For several years parallel hardware was available only in distributed-computing settings, but it has recently become common in low-end computers as well, so software programmers can no longer avoid writing parallel applications. Programmers naturally tend to think sequentially, and are therefore less practiced at writing multi-threaded or parallel-processing applications, which require handling issues such as synchronization and deadlock avoidance on top of expertise in the application domain. For this reason programmers prefer to write sequential code, which most popular programming languages support, allowing them to concentrate on the application itself. There is therefore a need for automated tools that convert sequential applications into parallel ones, a need that is far from trivial given the large amount of legacy code written over the past few decades that must be reused and parallelized.
This glossary of computer science is a list of definitions of terms and concepts used in computer science, its sub-disciplines, and related fields, including terms relevant to software, data science, and computer programming.