Real-Time Cmix

Real-Time Cmix (RTcmix) is one of the MUSIC-N family of computer music programming languages. RTcmix is descended from the MIX program, developed by Paul Lansky at Princeton University in 1978 to perform algorithmic composition using digital audio soundfiles on an IBM 3031 mainframe computer. After synthesis functions were added, the program was renamed Cmix in the 1980s. Real-time capability was added by David Topper, John Gibson, Brad Garton, and Douglas Scott in the mid-1990s, along with support for TCP socket connectivity, interactive control of the scheduler, and the ability to embed the synthesis engine in fully featured applications such as Max/MSP.

Over the years Cmix/RTcmix has run on a variety of computer platforms and operating systems, including NeXT, Sun Microsystems, IRIX, Linux, and Mac OS X. It is, and has always been, an open-source project, which differentiates it from commercial synthesizers and music software. It is currently developed by a group of computer music researchers, both academic (at Princeton University, Columbia University, and Indiana University Bloomington) and private.

RTcmix has a number of unique (or at least highly unusual) features compared with other synthesis and signal-processing languages. For one, it has a built-in parser for MINC, which lets the user write C-style code within the score file, extending RTcmix's capability for algorithmic composition and making it closer in some respects to later music software such as SuperCollider and Max/MSP. RTcmix uses a single script file (the score), with synthesis and signal-processing routines (called instruments) loaded as shared libraries. This differs from MUSIC-N languages such as Csound, where the instruments live in a second file written in a specification language that builds the routines out of simple building blocks (organized as opcodes or unit generators). RTcmix nevertheless offers functionality similar to that of Csound and other computer music languages, and their shared lineage means that scripts written for one language will look extremely familiar (if not immediately comprehensible) to users of the other.
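As a minimal sketch of this scoring style (the note parameters and table size here are illustrative, not canonical), the MINC script below loads the standard WAVETABLE instrument, which ships with RTcmix as a shared library, and then uses a C-style loop to compute a short sequence of notes instead of writing each one out by hand:

    // set the sampling rate and channel count, then load an instrument
    rtsetparams(44100, 2)
    load("WAVETABLE")

    // build a wavetable containing a single sine partial
    wave = maketable("wave", 1000, 1)

    // schedule ten short notes with rising pitch
    start = 0
    freq = 220
    i = 0
    while (i < 10) {
        // pfields: start time, duration, amplitude, frequency, pan, wavetable
        WAVETABLE(start, 0.2, 20000, freq, 0.5, wave)
        start = start + 0.25
        freq = freq + 55
        i = i + 1
    }

Because the loop body is ordinary MINC code, a score can compute arbitrarily elaborate event lists rather than enumerating every note by hand.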

Related Research Articles

Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.

<span class="mw-page-title-main">Digital synthesizer</span> Synthesizer that uses digital signal processing to make sounds

A digital synthesizer is a synthesizer that uses digital signal processing (DSP) techniques to make musical sounds. This is in contrast to older analog synthesizers, which produce music using analog electronics, and samplers, which play back digital recordings of acoustic, electric, or electronic instruments. Some digital synthesizers emulate analog synthesizers; others include sampling capability in addition to digital synthesis.

<span class="mw-page-title-main">Electronic musical instrument</span> Musical instrument that uses electronic circuits to generate sound

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument outputs an electrical, electronic, or digital audio signal that is ultimately fed into a power amplifier, which drives a loudspeaker, creating the sound heard by the performer and listener.

Paul Lansky is an American composer.

<span class="mw-page-title-main">Music technology (electronic and digital)</span>

Digital music technology encompasses the use of digital instruments, computers, electronic effects units, software, and digital audio equipment by a performer, composer, sound engineer, DJ, or record producer to produce, perform, or record music. The term refers to electronic devices, instruments, computer hardware, and software used in the performance, playback, recording, composition, mixing, analysis, and editing of music.

Csound is a domain-specific computer programming language for audio programming. It is called Csound because it is written in C, as opposed to some of its predecessors.

Wavetable synthesis is a sound synthesis technique used to create quasi-periodic waveforms often used in the production of musical tones or notes.

Granular synthesis is a sound synthesis method that operates on the microsound time scale.

<span class="mw-page-title-main">ChucK</span> Audio programming language

ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance, which runs on Linux, Mac OS X, Microsoft Windows, and iOS. It is designed to favor readability and flexibility for the programmer over other considerations such as raw performance. It natively supports deterministic concurrency and multiple, simultaneous, dynamic control rates. Another key feature is the ability to live code: adding, removing, and modifying code on the fly, while the program is running, without stopping or restarting. It has a highly precise timing/concurrency model, allowing for arbitrarily fine granularity. It offers composers and researchers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs and real-time interactive control.

MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music on a digital computer, and was certainly the first to gain wide acceptance in the music research community as viable for that task. The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to a speaker; it did not produce standard digital audio with PCM samples, as the MUSIC-series programs did.

<span class="mw-page-title-main">Max (software)</span> Visual programming language

Max, also known as Max/MSP/Jitter, is a visual programming language for music and multimedia developed and maintained by San Francisco-based software company Cycling '74. Over its more than thirty-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.

<span class="mw-page-title-main">Max Mathews</span> American pioneer in computer music

Max Vernon Mathews was an American pioneer of computer music.

<span class="mw-page-title-main">Pure Data</span> Visual programming language

Pure Data (Pd) is a visual programming language developed by Miller Puckette in the 1990s for creating interactive computer music and multimedia works. While Puckette is the main author of the program, Pd is an open-source project with a large developer base working on new extensions. It is released under the BSD-3-Clause license. It runs on Linux, macOS, iOS, Android, and Windows; ports exist for FreeBSD and IRIX.

JSyn is a free API for developing interactive sound applications in Java. Developed by Phil Burk and others, it is distributed through Burk's company, Mobileer Inc. JSyn has a flexible, unit generator-based synthesis and DSP architecture that allows developers to create synthesizers, audio playback routines, and effects processing algorithms within a Java framework that allows for easy integration with other Java routines. A plugin is available for web browsers to run JSyn-enabled applets distributed over the World Wide Web.

The Synthesis Toolkit (STK) is an open source API for real-time audio synthesis with an emphasis on classes that facilitate the development of physical modelling synthesizers. It is written in C++ and is developed and maintained by Perry Cook at Princeton University and Gary Scavone at McGill University. It contains both low-level synthesis and signal-processing classes and higher-level instrument classes that contain examples of most of the physical modelling algorithms in use today. STK is free software, but a number of its classes, particularly some physical modelling algorithms, are covered by patents held by Stanford University and Yamaha.

<span class="mw-page-title-main">Visual programming language</span> Programming language written graphically by a user

In computing, a visual programming language, also known as diagrammatic programming, graphical programming or block coding, is a programming language that lets users create programs by manipulating program elements graphically rather than by specifying them textually. A VPL allows programming with visual expressions, spatial arrangements of text and graphic symbols, used either as elements of syntax or secondary notation. For example, many VPLs are based on the idea of "boxes and arrows", where boxes or other screen objects are treated as entities, connected by arrows, lines or arcs which represent relations.

Russell Pinkston is a professor of composition and the director of the electronic music studios at the University of Texas at Austin School of Music.

Richard Charles Boulanger is a composer, author, and electronic musician. He is a key figure in the development of the audio programming language Csound, and is associated with computer music pioneers Max Mathews and Barry Vercoe.