Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics. [1] The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century. [2]
Much of the work on computer music has drawn on the relationship between music and mathematics, a relationship that has been noted since the Ancient Greeks described the "harmony of the spheres".
The first musical melodies generated by a computer were produced on the machine originally named the CSIR Mark 1 (later renamed CSIRAC) in Australia in 1950. There had been newspaper reports from America and England, both early and more recent, suggesting that computers may have played music earlier, but thorough research has debunked these stories: no evidence supports them, and some were speculative. Research has shown that people speculated about computers playing music, possibly because computers make noises, [3] but there is no evidence that any actually did so. [4] [5]
The world's first computer to play music was the CSIR Mark 1 (later named CSIRAC), which was designed and built by Trevor Pearcey and Maston Beard in the late 1940s. The mathematician Geoff Hill programmed the CSIR Mark 1 to play popular musical melodies from the very early 1950s. In 1950 the CSIR Mark 1 was used to play music, the first known use of a digital computer for that purpose. The music was never recorded, but it has been accurately reconstructed. [6] [7] In 1951 it publicly played the "Colonel Bogey March", [8] of which only the reconstruction exists. However, the CSIR Mark 1 played only standard repertoire and was not used to extend musical thinking or composition practice, as Max Mathews later did and as is current computer-music practice.
The first music performed by a computer in England was the British National Anthem, programmed by Christopher Strachey on the Ferranti Mark 1 late in 1951. Later that year, a BBC outside broadcasting unit recorded short extracts of three pieces played by the machine: the National Anthem, "Baa, Baa, Black Sheep", and "In the Mood"; this is recognized as the earliest recording of music played by a computer, since the CSIRAC music was never recorded. The recording can be heard at the Manchester University site. [9] Researchers at the University of Canterbury, Christchurch, declicked and restored this recording in 2016, and the results may be heard on SoundCloud. [10] [11] [6]
Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Amongst other pioneers, the musical chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the Illiac Suite for string quartet. [12] Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularising computer music through a 1963 article in Science. [13] The first professional composer to work with digital synthesis was James Tenney, who created a series of digitally synthesized and/or algorithmically composed pieces at Bell Labs using Mathews' MUSIC III system, beginning with Analog #1 (Noise Study) (1961). [14] [15] After Tenney left Bell Labs in 1964, he was replaced by composer Jean-Claude Risset, who conducted research on the synthesis of instrumental timbres and composed Computer Suite from Little Boy (1968).
Early computer-music programs typically did not run in real time, although the first experiments on CSIRAC and the Ferranti Mark 1 did operate in real time. From the late 1950s, with increasingly sophisticated programming, programs would run for hours or days on multimillion-dollar computers to generate a few minutes of music. [16] [17] One way around this was to use a 'hybrid system', in which a digital computer controlled an analog synthesiser; early examples were Max Mathews' GROOVE system (1969) and Peter Zinovieff's MUSYS (1969).
Until then, computers had been used only partially for musical research into the substance and form of sound (convincing examples being those of Hiller and Isaacson in Urbana, Illinois, US; Iannis Xenakis in Paris; and Pietro Grossi in Florence, Italy). [18]
In May 1967 the first experiments in computer music in Italy were carried out by the S 2F M studio in Florence [19] in collaboration with General Electric Information Systems Italy. [20] An Olivetti-General Electric GE 115 (Olivetti S.p.A.) was used by Grossi as a performer, and three programs were prepared for the experiments. The programs were written by Ferruccio Zulian [21] and used by Pietro Grossi to play works by Bach, Paganini, and Webern and to study new sound structures. [22]
John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis, [23] eventually leading to the development of the affordable FM synthesis-based Yamaha DX7 digital synthesizer, released in 1983. [24]
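In its simplest form, FM synthesis modulates the phase of a carrier oscillator with a second, modulating oscillator; the modulation index controls how many audible sidebands appear, so a single pair of oscillators can produce rich, evolving spectra far more cheaply than additive synthesis. The following is a minimal sketch in Python; the frequencies, index, and envelope are illustrative values, not taken from Chowning's work or the DX7.

```python
import numpy as np

def fm_tone(carrier_hz=220.0, modulator_hz=110.0, index=5.0,
            seconds=2.0, sr=44100):
    """Simple two-operator FM: the carrier's phase is modulated by a sine
    at modulator_hz, scaled by the modulation index."""
    t = np.linspace(0.0, seconds, int(sr * seconds), endpoint=False)
    modulator = index * np.sin(2 * np.pi * modulator_hz * t)
    envelope = np.exp(-3.0 * t)  # simple percussive decay
    return envelope * np.sin(2 * np.pi * carrier_hz * t + modulator)

samples = fm_tone()  # float array in [-1, 1], ready to write to a WAV file
```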
Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound. [25]
In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer. This resulted in a piece entitled TOSBAC Suite, influenced by the Illiac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70 and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" in 1970. Since then, Japanese research in computer music has largely been carried out for commercial purposes in popular music, though some of the more serious Japanese musicians used large computer systems such as the Fairlight in the 1970s. [26]
In the late 1970s these systems became commercialized, notably the Roland MC-8 Microcomposer, released in 1978, in which a microprocessor-based system controls an analog synthesizer. [26] In addition to the Yamaha DX7, the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music. [24] In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes. [26] By the early 1990s, the performance of microprocessor-based computers had reached the point that real-time generation of computer music using more general programs and algorithms became possible. [27]
Advances in computing power and software for manipulation of digital media have dramatically affected the way computer music is generated and performed. Current-generation microcomputers are powerful enough to perform very sophisticated audio synthesis using a wide variety of algorithms and approaches. Computer music systems and approaches are now ubiquitous, and so firmly embedded in the process of creating music that we hardly give them a second thought: computer-based synthesizers, digital mixers, and effects units have become so commonplace that use of digital rather than analog technology to create and record music is the norm, rather than the exception. [28]
There is considerable activity in the field of computer music as researchers continue to pursue new and interesting computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the area of computer and electronic music study and research, including CCRMA (Center for Computer Research in Music and Acoustics, Stanford, USA), ICMA (International Computer Music Association), C4DM (Centre for Digital Music), IRCAM, GRAME, SEAMUS (Society for Electro-Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world.
Later, composers such as Gottfried Michael Koenig and Iannis Xenakis had computers generate the sounds of the composition as well as the score. Koenig produced algorithmic composition programs which were a generalization of his own serial composition practice. This differs from Xenakis' work, in which he used mathematical abstractions and examined how far he could explore them musically. Koenig's software translated the calculation of mathematical equations into codes which represented musical notation. This could be converted into musical notation by hand and then performed by human players. His programs Project 1 and Project 2 are examples of this kind of software. Later, he extended the same kind of principles into the realm of synthesis, enabling the computer to produce the sound directly. SSP is an example of a program which performs this kind of function. All of these programs were produced by Koenig at the Institute of Sonology in Utrecht in the 1970s. [29] In the 2000s, Andranik Tangian developed a computer algorithm to determine the time event structures for rhythmic canons and rhythmic fugues, which were then "manually" worked out into the harmonic compositions Eine kleine Mathmusik I and Eine kleine Mathmusik II, performed by computer; [30] [31] scores and recordings are available. [32]
Computers have also been used in an attempt to imitate the music of great composers of the past, such as Mozart. A present exponent of this technique is David Cope, whose computer programs analyse the works of other composers to produce new works in a similar style. Cope's best-known program is Emily Howell. [33] [34] [35]
Melomics, a research project from the University of Málaga (Spain), developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012 Iamus composed a full album, also named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra". [36] The group has also developed an API for developers to utilize the technology, and makes its music available on its website.
Computer-aided algorithmic composition (CAAC, pronounced "sea-ack") is the implementation and use of algorithmic composition techniques in software. This label is derived from the combination of two labels, each too vague for continued use. The label computer-aided composition lacks the specificity of using generative algorithms. Music produced with notation or sequencing software could easily be considered computer-aided composition. The label algorithmic composition is likewise too broad, particularly in that it does not specify the use of a computer. The term computer-aided, rather than computer-assisted, is used in the same manner as computer-aided design. [37]
Machine improvisation uses computer algorithms to create improvisation on existing music materials. This is usually done by sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. In order to achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic re-injection. This is different from other improvisation methods with computers that use algorithmic composition to generate new music without performing analysis of existing music examples. [38]
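As an illustration of the recombination idea (a simplified sketch, not any particular published system), a first-order Markov model over pitches can be trained on an existing melody and then sampled to produce new phrases "in the style" of the input; real systems work with much richer representations such as audio features or polyphonic segments.

```python
import random
from collections import defaultdict

def train_markov(melody):
    """Build a first-order transition table: pitch -> list of observed next pitches."""
    table = defaultdict(list)
    for current, nxt in zip(melody, melody[1:]):
        table[current].append(nxt)
    return table

def improvise(table, start, length=16, seed=None):
    """Random walk over the transition table, recombining learned continuations."""
    rng = random.Random(seed)
    phrase = [start]
    for _ in range(length - 1):
        choices = table.get(phrase[-1])
        if not choices:                      # dead end: restart from any known pitch
            choices = list(table.keys())
        phrase.append(rng.choice(choices))
    return phrase

corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]  # MIDI pitches
print(improvise(train_markov(corpus), start=60, seed=1))
```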
Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis' use of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, string searching, and more. [39] Style mixing is possible by blending models derived from several musical sources, with the first style mixing done by S. Dubnov in the piece NTrope Suite using a Jensen-Shannon joint source model. [40] Later, the factor oracle algorithm (a factor oracle is a finite-state automaton constructed incrementally in linear time and space) [41] was adopted for music by Assayag and Dubnov [42] and became the basis for several systems that use stylistic re-injection. [43]
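The incremental construction referred to above can be sketched in a few lines; the oracle's forward transitions and suffix links are what improvisation systems navigate to recombine the original sequence. This is a minimal sketch following the standard incremental algorithm; the symbols here are abstract characters, whereas in practice they would be notes or quantized audio features.

```python
def build_factor_oracle(sequence):
    """Incrementally build a factor oracle: one state per prefix of the sequence,
    plus forward transitions and suffix links."""
    n = len(sequence)
    transitions = [dict() for _ in range(n + 1)]  # state -> {symbol: target state}
    suffix_link = [-1] * (n + 1)                  # state 0 has no suffix link
    for i in range(1, n + 1):
        symbol = sequence[i - 1]
        transitions[i - 1][symbol] = i            # the new "spine" transition
        k = suffix_link[i - 1]
        while k > -1 and symbol not in transitions[k]:
            transitions[k][symbol] = i            # forward links from suffix states
            k = suffix_link[k]
        suffix_link[i] = 0 if k == -1 else transitions[k][symbol]
    return transitions, suffix_link

transitions, suffix_link = build_factor_oracle("abbbaab")
```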
The first implementation of statistical style modeling was the LZify method in Open Music, [44] followed by the Continuator system, developed by François Pachet at Sony CSL Paris in 2002, which implemented interactive machine improvisation by interpreting LZ incremental parsing in terms of Markov models and using it for real-time style modeling. [45] [46] [47] A Matlab implementation of Factor Oracle machine improvisation is available as part of the Computer Audition toolbox. There is also an NTCC implementation of Factor Oracle machine improvisation. [48]
OMax is a software environment developed at IRCAM. OMax uses OpenMusic and Max. It is based on research on stylistic modeling carried out by Gerard Assayag and Shlomo Dubnov and on research on improvisation with the computer by G. Assayag, M. Chemillier and G. Bloch (a.k.a. the OMax Brothers) in the IRCAM Music Representations group. [49] One of the problems in modeling audio signals with the factor oracle is the symbolization of features from continuous values into a discrete alphabet. This problem was addressed in the Variable Markov Oracle (VMO), available as a Python implementation, [50] which uses an information rate criterion to find the optimal or most informative representation. [51]
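A common, simplified way to obtain such a discrete alphabet (a sketch of the general idea, not the information-rate criterion used by VMO) is to cluster short-time feature frames, for example with k-means, and use the cluster indices as symbols for a sequence model such as a factor oracle.

```python
import numpy as np
from sklearn.cluster import KMeans

def symbolize(features, n_symbols=16, seed=0):
    """Map continuous feature frames (n_frames x n_dims), e.g. MFCCs,
    to a discrete alphabet by k-means clustering; returns one symbol per frame."""
    kmeans = KMeans(n_clusters=n_symbols, n_init=10, random_state=seed)
    return kmeans.fit_predict(features)

frames = np.random.rand(500, 13)   # placeholder for real audio features
symbols = symbolize(frames)        # integer sequence usable by a factor oracle
```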
The use of artificial intelligence to generate new melodies, [52] cover pre-existing music, [53] and clone artists' voices is a recent phenomenon that has been reported to disrupt the music industry. [54]
Live coding [55] (sometimes known as 'interactive programming', 'on-the-fly programming', [56] or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live. [57]
Electronic music broadly is a group of music genres that employ electronic musical instruments, circuitry-based music technology and software, or general-purpose electronics in their creation. It includes music made using both electronic and electromechanical means. Purely electronic instruments depend entirely on circuitry-based sound generation, for instance using devices such as an electronic oscillator, theremin, or synthesizer. Electromechanical instruments can have mechanical parts such as strings and hammers, and electric elements including magnetic pickups, power amplifiers and loudspeakers. Such electromechanical devices include the telharmonium, Hammond organ, electric piano and electric guitar.
An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument sounds by outputting an electrical, electronic or digital audio signal that ultimately is plugged into a power amplifier which drives a loudspeaker, creating the sound heard by the performer and listener.
Digital music technology encompasses the use of digital instruments to produce, perform or record music. These instruments vary, including computers, electronic effects units, software, and digital audio equipment. Digital music technology is used in performance, playback, recording, composition, mixing, analysis and editing of music by professionals in all parts of the music industry.
A music sequencer is a device or application software that can record, edit, or play back music, by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control, and possibly audio and automation data for digital audio workstations (DAWs) and plug-ins.
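As a minimal illustration of how note and timing information can be handled as MIDI data, the following sketch writes a short sequence to a standard MIDI file using the third-party mido library; the notes, velocities, and durations are arbitrary example values.

```python
import mido

mid = mido.MidiFile()                    # default resolution: 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

for note in (60, 64, 67):                # a C major arpeggio, one beat per note
    track.append(mido.Message('note_on', note=note, velocity=64, time=0))
    track.append(mido.Message('note_off', note=note, velocity=64, time=480))

mid.save('arpeggio.mid')                 # playable in any sequencer or DAW
```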
Granular synthesis is a sound synthesis method that operates on the microsound time scale.
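A naive sketch of the technique: short, windowed "grains" (typically on the order of 1-100 ms) are taken from a source buffer and scattered across the output by overlap-add. The sketch below assumes `source` is a mono NumPy array of samples; all parameter names and values are illustrative.

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=50, density=200, out_seconds=3.0, seed=0):
    """Naive granular synthesis: overlap-add Hann-windowed grains taken from
    random positions in `source` at random positions in the output buffer."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * out_seconds))
    for _ in range(int(density * out_seconds)):
        src_pos = rng.integers(0, len(source) - grain_len)
        dst_pos = rng.integers(0, len(out) - grain_len)
        out[dst_pos:dst_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```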
IRCAM is a French institute dedicated to the research of music and sound, especially in the fields of avant garde and electro-acoustical art music. It is situated next to, and is organisationally linked with, the Centre Pompidou in Paris. The extension of the building was designed by Renzo Piano and Richard Rogers. Much of the institute is located underground, beneath the fountain to the east of the buildings.
CSIRAC, originally known as CSIR Mk 1, was Australia's first digital computer, and the fifth stored program computer in the world. It is the oldest surviving first-generation electronic computer (the Zuse Z4 at the Deutsches Museum is older, but was electro-mechanical, not electronic), and was the first in the world to play digital music.
Laurie Spiegel is an American composer. She has worked at Bell Laboratories, in computer graphics, and is known primarily for her electronic music compositions and her algorithmic composition software Music Mouse. She is also a guitarist and lutenist.
MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task. The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to the speaker; it did not produce standard digital audio with PCM samples, as the MUSIC-series programs did.
Max Vernon Mathews was an American pioneer of computer music.
Algorithmic composition is the technique of using algorithms to create music.
David “Dave” Cope is an American author, composer, scientist, and Dickerson Emeriti Professor of Music at UC Santa Cruz. His primary area of research involves artificial intelligence and music; he writes programs and algorithms that can analyze existing music and create new compositions in the style of the original input music. He taught the groundbreaking summer Workshop in Algorithmic Computer Music (WACM), which was open to the public, as well as a general education course entitled Artificial Intelligence and Music for enrolled UCSC students. Cope is also co-founder and CTO Emeritus of Recombinant Inc., a music technology company.
John M. Chowning is an American composer, musician, discoverer, and professor best known for his work at Stanford University, the founding in 1975 of CCRMA (Center for Computer Research in Music and Acoustics), and his development while there of the digital implementation of FM synthesis and of digital sound spatialization.
OpenMusic (OM) is an object-oriented visual programming environment for musical composition based on Common Lisp. It may also be used as an all-purpose visual interface to Lisp programming. At a more specialized level, a set of provided classes and libraries make it a very convenient environment for music composition.
Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, talks about these systems — "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
Paul Doornbusch is an Australian composer and musician. He is the author of a book documenting the first computer music, made with the CSIRAC.
Computational creativity is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
Sergio Maltagliati is an Italian Internet-based artist, composer, and visual-digital artist. His first musical experience with the Gialdino Gialdini Musical Band was in the early 70s.
Pietro Grossi was an Italian composer and pioneer of computer music, a visual artist, and a hacker ahead of his time. He began experimenting with electronic techniques in Italy in the early sixties.
Shlomo Dubnov is an American-Israeli computer music researcher and composer. He is a professor in the Music Department and Affiliate Professor in Computer Science and Engineering, and a founding faculty member of the Halıcıoğlu Data Science Institute at the University of California, San Diego, where he has been since 2003. He is the Director of the Center for Research in Entertainment and Learning (CREL) at UC San Diego's Qualcomm Institute.
In 1957 the MUSIC program allowed an IBM 704 mainframe computer to play a 17-second composition by Mathews. Back then computers were ponderous, so synthesis would take an hour.
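To put those figures in perspective, an hour of computation for 17 seconds of audio corresponds to a ratio of roughly 3600 s ÷ 17 s ≈ 212, meaning synthesis ran on the order of two hundred times slower than real time.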
The generation of sound signals requires very high sampling rates.... A high speed machine such as the I.B.M. 7090 ... can compute only about 5000 numbers per second ... when generating a reasonably complex sound.