Hierarchical Music Specification Language

The Hierarchical Music Specification Language (HMSL) is a music programming language written in the 1980s by Larry Polansky, Phil Burk, and David Rosenboom at Mills College. Built on top of Forth, it allows for the creation of real-time interactive music performance systems, algorithmic composition software, and any other kind of program that requires a high degree of musical informatics. It was distributed by Frog Peak Music and runs with a very light memory footprint (about one megabyte) on Macintosh and Amiga systems.

Unlike Csound and other languages for audio synthesis, HMSL is primarily a language for making music. As such, it interfaces with sound-making devices through built-in MIDI classes, and it has a high degree of built-in understanding of music performance practice, tuning systems, and score reading. Its main interface for manipulating musical parameters is the metaphor of shapes, which can be created, altered, and combined to form a musical texture, either on their own or in response to real-time or scheduled events in a score.
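The shapes metaphor can be sketched conceptually: a shape is, roughly, a table of musical parameter values that can be transformed and combined into larger textures. The class and method names below are hypothetical teaching code, not actual HMSL or JMSL API.

```java
// Conceptual sketch of an HMSL-style "shape": a list of
// (duration, pitch) elements with transform and combine operations.
// All names here are illustrative, not real HMSL/JMSL API.
import java.util.ArrayList;
import java.util.List;

class Shape {
    // each element is {durationTicks, midiPitch}
    final List<int[]> elements = new ArrayList<>();

    void add(int duration, int pitch) {
        elements.add(new int[]{duration, pitch});
    }

    // alter: shift every pitch by a fixed interval
    Shape transpose(int semitones) {
        Shape out = new Shape();
        for (int[] e : elements) out.add(e[0], e[1] + semitones);
        return out;
    }

    // combine: sequence this shape followed by another
    Shape append(Shape other) {
        Shape out = new Shape();
        out.elements.addAll(this.elements);
        out.elements.addAll(other.elements);
        return out;
    }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Shape motif = new Shape();
        motif.add(240, 60);  // quarter note, middle C
        motif.add(240, 64);  // quarter note, E
        Shape answer = motif.transpose(7);   // same contour, up a fifth
        Shape phrase = motif.append(answer); // combined texture
        for (int[] e : phrase.elements)
            System.out.println(e[0] + " " + e[1]);
    }
}
```

In real HMSL the transformed shape could then be handed to the scheduler for MIDI playback, or mutated on the fly in response to performer input.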

HMSL has been widely used by composers working in algorithmic composition for over twenty years. In addition to the authors (who are also composers), HMSL has been used in pieces by Nick Didkovsky, The Hub, James Tenney, Tom Erbe, and Pauline Oliveros.

A Java port of HMSL was developed by Nick Didkovsky under the name JMSL, and it is designed to interface with the JSyn API.

JSyn is a free API for developing interactive sound applications in Java. Developed by Phil Burk and others, it is distributed through Burk's company, Mobileer Inc. JSyn has a flexible, unit generator-based synthesis and DSP architecture that allows developers to create synthesizers, audio playback routines, and effects-processing algorithms within a Java framework, allowing easy integration with other Java routines. A plugin is available for web browsers to run JSyn-enabled applets distributed over the World Wide Web.
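The unit-generator architecture mentioned above can be illustrated with a small sketch: each generator produces samples, and generators are chained so one's output feeds the next. These classes are hypothetical teaching code written to show the concept; JSyn's actual API differs, so consult its documentation for real usage.

```java
// Illustrative sketch of unit-generator chaining (not the JSyn API):
// a sine oscillator feeding a gain stage, pulled one sample at a time.

interface UnitGenerator {
    double next(); // produce one output sample
}

class SineOsc implements UnitGenerator {
    final double freq, rate;
    double phase = 0;
    SineOsc(double freq, double rate) { this.freq = freq; this.rate = rate; }
    public double next() {
        double s = Math.sin(phase);
        phase += 2 * Math.PI * freq / rate;
        return s;
    }
}

class Gain implements UnitGenerator {
    final UnitGenerator input;
    final double amount;
    Gain(UnitGenerator input, double amount) { this.input = input; this.amount = amount; }
    public double next() { return input.next() * amount; }
}

public class UgenDemo {
    public static void main(String[] args) {
        // 440 Hz sine at 44.1 kHz, scaled to half amplitude
        UnitGenerator chain = new Gain(new SineOsc(440.0, 44100.0), 0.5);
        double peak = 0;
        for (int i = 0; i < 44100; i++) // one second of samples
            peak = Math.max(peak, Math.abs(chain.next()));
        System.out.println("peak=" + Math.round(peak * 100) / 100.0);
    }
}
```

The design point is that effects and synthesis modules share one interface, so arbitrary processing graphs can be assembled by composition, which is what lets JMSL drive synthesis from the same scheduling machinery that drives MIDI.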
