C*

Paradigm: multi-paradigm (object-oriented, imperative, parallel)
Designed by: Thinking Machines
Developer: Thinking Machines
First appeared: 1987
Stable release: 6.x / August 27, 1993
Typing discipline: static, weak, manifest
OS: Connection Machine
Filename extensions: .cs
Influenced by: ANSI C, *Lisp
Influenced: Dataparallel-C

C* (or C-star) is an object-oriented, data-parallel superset of ANSI C with synchronous semantics.

History

C* was developed in 1987 as an alternative to *Lisp and CM-Fortran for the Connection Machine CM-2 and above. The language adds to C a "domain" data type and a selection statement for parallel execution in domains.
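
The flavor of these extensions, in a minimal sketch using the original (pre-6.0) notation; this is reconstructed from published descriptions of the language, so details may not match any particular compiler release:

    /* A domain is declared much like a struct; each instance lives in
       its own virtual processor on the Connection Machine. */
    domain cell { int value; };
    domain cell grid[8192];    /* 8192 parallel instances of the domain */

    /* The selection statement activates all instances of the domain and
       runs the body synchronously in every active instance at once. */
    [domain cell].{
        value = value * 2;     /* executes in each cell in parallel */
    }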

For the CM-2 models the C* compiler translated the code into serial C, calling PARIS (Parallel Instruction Set) functions, and passed the resulting code to the front-end computer's native compiler. The resulting executables were executed on the front-end computer, with PARIS calls being executed on the Connection Machine.
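
Purely as an illustration of that pipeline, the parallel doubling sketched above might be lowered to serial C along the following lines. The paris_* names, field_t, and CELL_DOMAIN are invented placeholders for this sketch; the real PARIS library exposed CM_-prefixed calls whose exact signatures are not reproduced here.

    /* Hypothetical output of the CM-2 C* compiler: plain serial C that
       the front-end computer's native compiler can build. Each call
       ships one PARIS instruction to the Connection Machine, where it
       executes on all active processors simultaneously. */
    void double_all_cells(field_t value)
    {
        paris_select_domain(CELL_DOMAIN);  /* activate every cell (hypothetical name) */
        paris_mult_constant(value, 2);     /* value *= 2, elementwise (hypothetical)  */
    }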

On the CM-5 and CM-5E, parallel C* code was executed in SIMD fashion on the processing elements, whereas serial code was executed on the Partition Manager (PM) node, the PM playing a role analogous to the CM-2's front end. The latest version of C*, as of August 27, 1993, is 6.x. An unimplemented language dubbed "Parallel C" (not to be confused with Unified Parallel C) influenced the design of C*. Dataparallel-C was based on C*.

Related Research Articles

Computer programming is the process of performing a particular computation, usually by designing and building an executable computer program. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms. The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer, often for solving a given problem. Proficient programming thus usually requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.

In computing, a compiler is a computer program that translates computer code written in one programming language into another language. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language to create an executable program.

In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been compiled into a machine-language program beforehand. An interpreter generally uses one of the following strategies for program execution:

  1. Parse the source code and perform its behavior directly (a minimal sketch of this strategy follows the list);
  2. Translate source code into some efficient intermediate representation or object code and immediately execute that;
  3. Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's virtual machine.
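
As a minimal sketch of the first strategy, assuming a toy grammar of non-negative integers joined by '+' (the grammar and code are illustrative only, not tied to any real interpreter):

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse the source string and perform its behavior directly:
       no intermediate representation is ever built. */
    static long interpret(const char *src)
    {
        char *end;
        long total = strtol(src, &end, 10);        /* first operand        */
        while (*end == '+')                        /* parse each '+' and   */
            total += strtol(end + 1, &end, 10);    /* apply it immediately */
        return total;
    }

    int main(void)
    {
        printf("%ld\n", interpret("1+2+40"));      /* prints 43 */
        return 0;
    }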

A Connection Machine (CM) is a member of a series of massively parallel supercomputers that grew out of doctoral research on alternatives to the traditional von Neumann architecture of computers by Danny Hillis at Massachusetts Institute of Technology (MIT) in the early 1980s. Starting with CM-1, the machines were intended originally for applications in artificial intelligence (AI) and symbolic processing, but later versions found greater success in the field of computational science.

In computer science, a compiler-compiler or compiler generator is a programming tool that creates a parser, interpreter, or compiler from some form of formal description of a programming language and machine.

Thinking Machines Corporation was a supercomputer manufacturer and artificial intelligence (AI) company, founded in Waltham, Massachusetts, in 1983 by Sheryl Handler and W. Daniel "Danny" Hillis to turn Hillis's doctoral work at the Massachusetts Institute of Technology (MIT) on massively parallel computing architectures into a commercial product named the Connection Machine. The company moved in 1984 from Waltham to Kendall Square in Cambridge, Massachusetts, close to the MIT AI Lab. Thinking Machines made some of the most powerful supercomputers of the time, and by 1993 the four fastest computers in the world were Connection Machines. The firm filed for bankruptcy in 1994; its hardware and parallel computing software divisions were eventually acquired by Sun Microsystems.

Guy Lewis Steele Jr. is an American computer scientist who has played an important role in designing and documenting several computer programming languages and technical standards.

OpenMP is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
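
A minimal OpenMP example in C; the pragma is standard OpenMP, while the compiler invocation (e.g. gcc -fopenmp) varies by toolchain:

    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;

        /* The directive splits the loop iterations across threads;
           the reduction clause combines per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / i;

        printf("sum = %f\n", sum);
        return 0;
    }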

Cilk, Cilk++, Cilk Plus and OpenCilk are general-purpose programming languages designed for multithreaded parallel computing. They are based on the C and C++ programming languages, which they extend with constructs to express parallel loops and the fork–join idiom.
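
A classic illustration of the fork-join idiom, using the keyword spelling of Cilk Plus/OpenCilk (an OpenCilk-capable compiler is assumed):

    #include <cilk/cilk.h>
    #include <stdio.h>

    long fib(long n)
    {
        if (n < 2)
            return n;
        long x = cilk_spawn fib(n - 1);   /* may run in another worker */
        long y = fib(n - 2);              /* continues in this worker  */
        cilk_sync;                        /* join before reading x     */
        return x + y;
    }

    int main(void)
    {
        printf("fib(30) = %ld\n", fib(30));
        return 0;
    }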

In computer programming, dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture. Dataflow programming languages share some features of functional languages, and were generally developed in order to bring some functional concepts to a language more suitable for numeric processing. Some authors use the term datastream instead of dataflow to avoid confusion with dataflow computing or dataflow architecture, based on an indeterministic machine paradigm. Dataflow programming was pioneered by Jack Dennis and his graduate students at MIT in the 1960s.

In computer science, a tail call is a subroutine call performed as the final action of a procedure. If the target of a tail call is the same subroutine, the subroutine is said to be tail recursive, which is a special case of direct recursion. Tail recursion is particularly useful, and is often easy to optimize in implementations.
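
A small C example of the idea; whether the stack frame is actually reused depends on the compiler and optimization level:

    #include <stdio.h>

    /* The recursive call is the final action, so a compiler performing
       tail-call optimization can reuse the current stack frame,
       effectively turning the recursion into a loop. */
    static unsigned long factorial_acc(unsigned long n, unsigned long acc)
    {
        if (n <= 1)
            return acc;
        return factorial_acc(n - 1, n * acc);   /* tail call */
    }

    int main(void)
    {
        printf("10! = %lu\n", factorial_acc(10, 1));
        return 0;
    }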

Fortress is a discontinued experimental programming language for high-performance computing, created by Sun Microsystems with funding from DARPA's High Productivity Computing Systems project. One of the language designers was Guy L. Steele Jr., whose previous work includes Scheme, Common Lisp, and Java.

*Lisp is a programming language, a dialect of the language Lisp. It was conceived of in 1985 by two employees of the Thinking Machines Corporation, Cliff Lasser and Steve Omohundro, as a way to provide an efficient yet high-level language for programming the nascent Connection Machine (CM).

In computer programming, a programming language implementation is a system for executing computer programs. There are two general approaches to programming language implementation: interpretation and compilation.

New Implementation of LISP (NIL) is a programming language, a dialect of the language Lisp, developed at the Massachusetts Institute of Technology (MIT) during the 1970s, and intended to be the successor to the language Maclisp. It is a 32-bit implementation, and was in part a response to Digital Equipment Corporation's (DEC) VAX computer. The project was headed by Jon L White, with a stated goal of maintaining compatibility with MacLisp while fixing many of its problems.

Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied to regular data structures like arrays and matrices by working on each element in parallel. It contrasts with task parallelism, another form of parallelism.
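
Data parallelism in its simplest form, sketched in C: the same operation is applied independently to every array element, so the iterations could be distributed across processing elements:

    #include <stddef.h>

    /* No iteration depends on any other, so a data-parallel system can
       give each processor its own slice of the array. */
    void scale(float *a, size_t n, float k)
    {
        for (size_t i = 0; i < n; i++)
            a[i] *= k;
    }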

This article sets out the similarities and differences between various programming paradigms, as a summary in both graphical and tabular format, with links to separate discussions of these similarities and differences in extant Wikipedia articles.

For several years parallel hardware was available only for distributed computing, but recently it has become available for low-end computers as well, so writing parallel applications has become unavoidable for software programmers. Programmers naturally think sequentially, however, and are less acquainted with writing multi-threaded or parallel applications, which require handling issues such as synchronization and deadlock avoidance and thus demand expertise beyond the application domain itself. Programmers therefore prefer to write sequential code, which most popular programming languages support and which lets them concentrate on the application. Hence there is a need to convert sequential applications to parallel ones with the help of automated tools; the need is far from trivial, because a large amount of legacy code written over the past few decades must be reused and parallelized.

David A. Moon is a programmer and computer scientist, known for his work on the Lisp programming language, as co-author of the Emacs text editor, as the inventor of ephemeral garbage collection, and as one of the designers of the Dylan programming language. Guy L. Steele Jr. and Richard P. Gabriel (1993) name him as a leader of the Common Lisp movement and describe him as "a seductively powerful thinker, quiet and often insulting, whose arguments are almost impossible to refute".
