MUSIC-N

MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. [1] MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music (in actuality, sound) on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task. The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. [2] However, CSIRAC produced sound by sending raw pulses to a speaker rather than by generating standard PCM digital audio samples, as the MUSIC-series programs do.

Design

All MUSIC-N derivative programs share a (more or less) common design: a library of functions built around simple signal-processing and synthesis routines, written as "opcodes" or unit generators. The user combines these opcodes into an instrument (usually through a text-based instruction file, but increasingly through a graphical interface) that defines a sound, which is then "played" by a second file, called the score, specifying the notes, durations, pitches, amplitudes, and other parameters relevant to the musical informatics of the piece. Some variants of the language merge the instrument and score, though most still distinguish between control-level functions (which operate on the music) and functions that run at the sampling rate of the audio being generated (which operate on the sound). A notable exception is ChucK, which unifies audio-rate and control-rate timing into a single framework, allowing arbitrarily fine time granularity and a single mechanism to manage both. This yields more flexible and readable code, at the cost of reduced system performance.
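
As a rough illustration of this design, the sketch below (written in Python, not in any actual MUSIC-N dialect) builds a toy "instrument" from two simple routines standing in for unit generators, a sine oscillator and a crude envelope, and then "performs" a small score of note events into a single stream of samples written to disk. All names, values, and the output file path are illustrative assumptions.

    # Minimal sketch of the instrument/score split described above (illustrative only).
    import math
    import struct
    import wave

    SR = 44100  # audio sampling rate

    def instrument(dur, amp, freq):
        """Toy 'instrument': a sine oscillator shaped by a linear attack/decay envelope.
        In a real MUSIC-N system both parts would be library unit generators (opcodes)."""
        n = int(dur * SR)
        attack = int(0.01 * SR)
        return [amp * min(1.0, i / attack, (n - i) / attack) *
                math.sin(2.0 * math.pi * freq * i / SR) for i in range(n)]

    # The 'score': one note event per entry, giving start time, duration, amplitude, frequency.
    score = [
        (0.0, 1.0, 0.5, 440.0),
        (1.5, 2.0, 0.4, 330.0),
    ]

    # 'Performance': schedule each event in musical time and mix it into one output buffer.
    out = [0.0] * int(max(start + dur for start, dur, _, _ in score) * SR)
    for start, dur, amp, freq in score:
        offset = int(start * SR)
        for i, x in enumerate(instrument(dur, amp, freq)):
            out[offset + i] += x

    # Write 16-bit mono samples to a file, much as the early programs wrote sample
    # streams to disk or tape for later digital-to-analog conversion.
    with wave.open("out.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(b"".join(struct.pack("<h", int(max(-1.0, min(1.0, x)) * 32767))
                               for x in out))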

MUSIC-N and its derivatives have mostly been distributed as complete, self-contained programs, with user interfaces ranging from text-based to graphical. Csound and RTcmix, however, have since evolved into software libraries that can be accessed through a variety of frontends and programming languages, such as C, C++, Java, Python, Tcl, Lua, Lisp and Scheme, as well as through other music systems such as Pure Data and Max/MSP and the plugin frameworks LADSPA and VST.
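
As a sketch of this library-style usage, the following fragment assumes the ctcsound Python bindings distributed with recent Csound releases are installed; the method names follow that binding and may differ in other frontends. The short orchestra and score strings mirror the instrument/score split described above.

    # Hedged sketch: driving Csound as a library from Python via the ctcsound bindings.
    import ctcsound

    orc = """
    sr = 44100
    ksmps = 32
    nchnls = 1
    0dbfs = 1
    instr 1
      asig oscili p4, p5, 1   ; table-lookup oscillator: amplitude, frequency, table 1
      out asig
    endin
    """

    sco = """
    f 1 0 8192 10 1           ; function table 1: a single sine partial
    i 1 0 2 0.5 440           ; instr 1, start 0 s, duration 2 s, amp 0.5, 440 Hz
    e
    """

    cs = ctcsound.Csound()
    cs.setOption("-odac")     # send output to the default audio device
    cs.compileOrc(orc)
    cs.readScore(sco)
    cs.start()
    while cs.performKsmps() == 0:   # render one control period at a time
        pass
    cs.cleanup()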

MUSIC and its descendants embody a number of highly original, and to this day largely unchallenged, assumptions about the best way to create sound on a computer. Many of Mathews' design decisions, such as the use of pre-calculated arrays for waveform and envelope storage and a scheduler that runs in musical time rather than at audio rate, remain the norm for most hardware and software synthesis and audio DSP systems today.
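
The short Python sketch below (illustrative, not drawn from any MUSIC source) shows two of these ideas in isolation: a waveform computed once into a pre-calculated table and read back by phase increment, and an envelope that is updated only once per control block rather than on every sample.

    # Illustrative sketch: pre-calculated wavetable plus block-wise ("control-rate") envelope.
    import math

    SR = 44100          # audio sampling rate
    BLOCK = 64          # samples per control period
    TABLE_SIZE = 4096

    # The waveform is computed once into a table and reused by every note.
    sine_table = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def render_note(dur, amp, freq):
        phase = 0.0
        incr = freq * TABLE_SIZE / SR            # table increment per output sample
        out = []
        n_blocks = int(dur * SR / BLOCK)
        for b in range(n_blocks):
            env = amp * (1.0 - b / n_blocks)     # control-rate work: once per block
            for _ in range(BLOCK):               # audio-rate work: once per sample
                out.append(env * sine_table[int(phase) % TABLE_SIZE])
                phase += incr
        return out

    samples = render_note(dur=1.0, amp=0.5, freq=440.0)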

Family

MUSIC was followed by a series of variants, MUSIC II through MUSIC V, developed by Mathews and his colleagues at Bell Labs.

Derivatives of MUSIC IV include: [3]
  • MUSIC 4BF was developed by H. Howe and G. Winham on an IBM System/360 at Princeton University in 1967 [4]
  • MUSIC 360 was developed by Barry Vercoe on an IBM System/360 at Princeton University in 1969 [4]
  • MUSIC 11 was developed by B. Vercoe, S. Haflich, R. Hale, and C. Howe on a DEC PDP-11 at MIT in 1973 [4]
  • Csound (descended from MUSIC 11 and in wide use today)
MUSIC V was considerably augmented at IRCAM in Paris by John Gardner and Jean-Louis Richer to enable it to process digitized sounds as well as to synthesize them.

Structured Audio Orchestra Language (SAOL), developed by Eric Scheirer, is an imperative MUSIC-N-style programming language that forms part of the MPEG-4 Audio standard.

Less obviously, MUSIC can also be seen as the parent of later audio programming environments that adopt its unit-generator model, such as the graphical systems Max/MSP and Pure Data.

MUSIC IV

MUSIC IV was a computer music synthesis software package written by Max Mathews. The program was an expansion of earlier packages written by Mathews to produce music by direct digital computation, which could be heard by converting samples to audible sound using a digital-to-analog converter (DAC). MUSIC IV was further expanded [3] by Godfrey Winham and Hubert Howe into MUSIC IVB, and then into MUSIC IVBF, a more portable version written in FORTRAN. It is a precursor to Csound.

MUSIC IV allowed the programmer to enter a musical score as a text file and have each note played by a particular "musical instrument", implemented as a software algorithm. Some instruments were supplied with the package, but the programmer could also supply new instruments in the form of FORTRAN code, to be compiled and called by the MUSIC IV package to generate output.

As designed, the package was not intended for real-time generation of music in the manner of a modern portable electronic keyboard; instead, entire songs or musical pieces were encoded and processed into a digital file on disk or tape containing the stream of samples. Prior to the advent of low-cost digital audio gear in the late 1980s, the samples were typically sent to a DAC and recorded on analog tape.

Related Research Articles

Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.

<span class="mw-page-title-main">Electronic musical instrument</span> Musical instrument that uses electronic circuits to generate sound

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener.

<span class="mw-page-title-main">Music technology (electronic and digital)</span>

Digital music technology encompasses the use of digital instruments, computers, electronic effects units, software and digital audio equipment by a performer, composer, sound engineer, DJ or record producer to produce, perform or record music. The term refers to electronic devices, instruments, computer hardware and software used in the performance, playback, recording, composition, mixing, analysis and editing of music.

<span class="mw-page-title-main">IBM 704</span> Vacuum-tube computer system

The IBM 704 is a large digital mainframe computer introduced by IBM in 1954. It was the first mass-produced computer with hardware for floating-point arithmetic. The IBM 704 Manual of operation states:

The type 704 Electronic Data-Processing Machine is a large-scale, high-speed electronic calculator controlled by an internally stored program of the single address type.

A music sequencer is a device or application software that can record, edit, or play back music, by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control (OSC), and possibly audio and automation data for digital audio workstations (DAWs) and plug-ins.

Csound is a domain-specific computer programming language for audio. It is called Csound because it is written in C, as opposed to some of its predecessors.

A software synthesizer or softsynth is a computer program that generates digital audio, usually for music. Computer software that can create sounds or music is not new, but advances in processing speed now allow softsynths to accomplish the same tasks that previously required the dedicated hardware of a conventional synthesizer. Softsynths may be readily interfaced with other music software such as music sequencers typically in the context of a digital audio workstation. Softsynths are usually less expensive and can be more portable than dedicated hardware.

Wavetable synthesis is a sound synthesis technique used to create quasi-periodic waveforms often used in the production of musical tones or notes.

Granular synthesis is a sound synthesis method that operates on the microsound time scale.

Real-Time Cmix (RTcmix) is one of the MUSIC-N family of computer music programming languages. RTcmix is descended from the MIX program developed by Paul Lansky at Princeton University in 1978 to perform algorithmic composition using digital audio soundfiles on an IBM 3031 mainframe computer. After synthesis functions were added, the program was renamed Cmix in the 1980s. Real-time capability was added by Brad Garton and David Topper in the mid-1990s, with support for TCP socket connectivity, interactive control of the scheduler, and object-oriented embedding of the synthesis engine into fully featured applications.

<span class="mw-page-title-main">Max (software)</span> Visual programming language

Max, also known as Max/MSP/Jitter, is a visual programming language for music and multimedia developed and maintained by San Francisco-based software company Cycling '74. Over its more than thirty-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.

<span class="mw-page-title-main">Max Mathews</span> American pioneer in computer music

Max Vernon Mathews was an American pioneer of computer music.

Barry Lloyd Vercoe is a New Zealand-born computer scientist and composer. He is best known as the inventor of Csound, a music synthesis language with wide usage among computer music composers. SAOL, the underlying language for the MPEG-4 Structured Audio standard, is also historically derived from Csound.

Unit generators are the basic formal units in many MUSIC-N-style computer music programming languages. They are sometimes called opcodes, though this expression is not accurate in that these are not machine-level instructions.

Programming is a form of music production and performance using electronic devices and computer software, such as sequencers, workstations, hardware synthesizers and samplers, to generate the sounds of musical instruments. These sounds are created with music coding languages of varying complexity. Music programming is frequently used in modern pop and rock music from various regions of the world, and sometimes in jazz and contemporary classical music. It gained popularity in the 1950s and has continued to develop ever since.

Nyquist is a programming language for sound synthesis and analysis based on the Lisp programming language. It is an extension of the XLISP dialect of Lisp, and is named after Harry Nyquist.

The Bell Labs Digital Synthesizer, better known as the Alles Machine or Alice, was an experimental additive synthesizer designed by Hal Alles at Bell Labs during the 1970s. It was a computer-controlled 16-bit digital synthesizer operating at 30,000 samples per second, with 32 frequency-modulated sine-wave oscillators. The Alles Machine has been called the first true digital additive synthesizer, following earlier Bell experiments that were partially or wholly implemented as software on large computers. Only one full-length composition was recorded for the machine before it was disassembled and donated to Oberlin Conservatory's TIMARA department in 1981. Several commercial synthesizers based on the Alles design were released during the 1980s, including the Atari AMY sound chip.

Richard Charles Boulanger is a composer, author, and electronic musician. He is a key figure in the development of the audio programming language Csound, and is associated with computer music pioneers Max Mathews and Barry Vercoe.

References

  1. Manning, Peter (1993). Computer and Electronic Music. Oxford University Press.
  2. The music of CSIRAC Archived 2008-07-05 at the Wayback Machine
  3. Roads, Curtis; Mathews, Max (Winter 1980). "Interview with Max Mathews". Computer Music Journal 4 (4): 15–22. doi:10.2307/3679463. JSTOR 3679463.
  4. Roads, Curtis (1996). The Computer Music Tutorial. MIT Press. p. 789. ISBN 9780262680820.
