Max Vernon Mathews (November 13, 1926 – April 21, 2011) was an American pioneer of computer music.
Mathews was born in Columbus, Nebraska, to two science schoolteachers. His father taught physics, chemistry, and biology at Peru High School [1] in Nebraska, where he was also the principal [2] . His father allowed him to experiment in the school's physics, biology, and chemistry laboratories, where he enjoyed building everything from motors to mercury barometers. At the age of nine, well before students are usually introduced to algebra, he began studying the subject on his own with a few other students; most of the local population were farmers, and their children saw little use for algebra in everyday work, so it was not widely taught. He taught himself calculus in the same way, but he never graduated from high school [2] .
After a period as a radar repairman in the Navy, where he fell in love with electronics, Mathews decided to study electrical engineering at the California Institute of Technology and the Massachusetts Institute of Technology, receiving a Sc.D. in 1954. Working at Bell Labs, Mathews wrote MUSIC, the first widely used program for sound generation, in 1957. For the rest of the century, he remained a leader in digital audio research, synthesis, and human-computer interaction as it pertains to music performance. In 1968, Mathews and L. Rosler developed Graphic 1, an interactive graphical sound system on which one could draw figures with a light pen that were then converted into sound, simplifying the process of composing computer-generated music. [3] [4] In 1970, Mathews and F. R. Moore developed the GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) system, [5] the first fully developed music synthesis system for interactive composition and real-time performance, using 3C/Honeywell DDP-24 [6] (or DDP-224) [7] minicomputers. It used a CRT display to simplify the management of music synthesis in real time, a 12-bit D/A converter for real-time sound playback, an interface for analog devices, and several controllers, including a musical keyboard, knobs, and rotating joysticks, to capture real-time performance. [3] [7] [4]
Although MUSIC was not the first attempt to generate sound with a computer (an Australian CSIRAC computer played tunes as early as 1951), [8] Mathews fathered generations of digital music tools. He described his work in parental terms, in the following excerpt from "Horizons in Computer Music", March 8–9, 1997, Indiana University:
Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning's FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable. Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.
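The FM synthesis Mathews credits above with making real-time performance possible can be illustrated with a short sketch. This is a generic two-operator sine FM in the style Chowning described, not code from any MUSIC program; the function name and parameters are illustrative:

```python
import math

SR = 44100  # sample rate in Hz (an assumed, modern value)

def fm_tone(carrier_hz, mod_hz, mod_index, num_samples):
    """Two-operator frequency modulation: a modulator sine wave varies
    the phase of a carrier sine, producing rich sidebands from just two
    oscillators instead of one oscillator per partial."""
    out = []
    for t in range(num_samples):
        mod = mod_index * math.sin(2 * math.pi * mod_hz * t / SR)
        out.append(math.sin(2 * math.pi * carrier_hz * t / SR + mod))
    return out
```

The appeal for early digital hardware is that the spectrum's richness is controlled by a single parameter (the modulation index) rather than by dozens of separately computed partials.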
In 1961, Mathews arranged the accompaniment for the song "Daisy Bell" in an uncanny performance by a computer-synthesized human voice, using technology developed by John Kelly, Carol Lochbaum, Joan Miller, and Lou Gerstman of Bell Laboratories. Author Arthur C. Clarke happened to be visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility at the time of this remarkable speech-synthesis demonstration, and was so impressed that he later told Stanley Kubrick to use it in 2001: A Space Odyssey, in the climactic scene where the HAL 9000 computer sings as his cognitive functions are disabled. [9]
Mathews directed the Acoustical and Behavioral Research Center at Bell Laboratories from 1962 to 1985, which carried out research in speech communication, visual communication, human memory and learning, programmed instruction, analysis of subjective opinions, physical acoustics, and industrial robotics. From 1974 to 1980 he was Scientific Advisor to the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, France, and from 1987 he was Professor of Music (Research) at Stanford University. He served as Master of Ceremonies for the concert program of NIME-01, the inaugural conference on New Interfaces for Musical Expression.
Mathews was a member of the National Academy of Sciences and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences, the Acoustical Society of America, the IEEE, and the Audio Engineering Society. He received the Silver Medal in Musical Acoustics [10] from the Acoustical Society of America and was made a Chevalier of the Ordre des Arts et des Lettres by the République Française.
The Max portion of the software package Max/MSP is named after him (the MSP portion is named for Miller Puckette, who teaches at UC San Diego).
Mathews died on the morning of April 21, 2011, in San Francisco, California, of complications from pneumonia. He was 84. He was survived by his wife, Marjorie, his three sons, and six grandchildren.
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
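As a minimal illustration of processing in the digital domain, a gain change expressed in decibels corresponds to multiplying every sample by 10^(dB/20). The function below is a hypothetical sketch, not from any particular library:

```python
def apply_gain_db(samples, gain_db):
    """Scale digital audio samples by a gain given in decibels.

    A change of gain_db decibels multiplies the amplitude by
    10 ** (gain_db / 20): +6 dB roughly doubles the amplitude,
    -6 dB roughly halves it.
    """
    factor = 10 ** (gain_db / 20.0)
    return [s * factor for s in samples]
```

An analog processor would achieve the same result with an amplifier circuit acting on the voltage itself; the digital version is just arithmetic on the sample values.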
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, electrical engineering, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the first experiments and innovations with electronic instruments at the turn of the 20th century.
A digital synthesizer is a synthesizer that uses digital signal processing (DSP) techniques to make musical sounds. This is in contrast to older analog synthesizers, which produce music using analog electronics, and to samplers, which play back digital recordings of acoustic, electric, or electronic instruments. Some digital synthesizers emulate analog synthesizers; others include sampling capability in addition to digital synthesis.
Electronic music is a genre of music that employs electronic musical instruments, digital instruments, or circuitry-based music technology in its creation. It includes music made using both electronic and electromechanical means. Pure electronic instruments depend entirely on circuitry-based sound generation, using devices such as electronic oscillators, theremins, or synthesizers. Electromechanical instruments can have mechanical parts such as strings and hammers, along with electric elements including magnetic pickups, power amplifiers, and loudspeakers. Such electromechanical devices include the telharmonium, the Hammond organ, the electric piano, and the electric guitar.
An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument makes sound by outputting an electrical, electronic, or digital audio signal that is ultimately fed to a power amplifier driving a loudspeaker, creating the sound heard by the performer and listener.
Digital music technology encompasses the use of digital instruments, computers, electronic effects units, software, and digital audio equipment by a performer, composer, sound engineer, DJ, or record producer to produce, perform, or record music. The term refers to electronic devices, instruments, computer hardware, and software used in the performance, playback, recording, composition, mixing, analysis, and editing of music.
A music sequencer is a device or application software that can record, edit, or play back music, by handling note and performance information in several forms, typically CV/Gate, MIDI, or Open Sound Control (OSC), and possibly audio and automation data for digital audio workstations (DAWs) and plug-ins.
Csound is a domain-specific computer programming language for audio programming. It is called Csound because it is written in C, as opposed to some of its predecessors.
Wavetable synthesis is a sound synthesis technique used to create quasi-periodic waveforms often used in the production of musical tones or notes.
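The core of the technique is reading repeatedly through a stored single-cycle waveform at a rate set by the desired pitch. The sketch below is a generic phase-accumulator oscillator with linear interpolation; the constants and names are illustrative, not from any specific synthesizer:

```python
import math

SR = 44100          # sample rate in Hz (assumed)
TABLE_SIZE = 1024   # length of the single-cycle wavetable

# One cycle of a sine stored as the wavetable; any single-cycle
# waveform (a sawtooth, a recorded cycle, ...) could be stored instead.
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(freq, num_samples):
    """Generate num_samples of a quasi-periodic tone at freq Hz by
    stepping a phase accumulator through the table and linearly
    interpolating between adjacent entries."""
    phase = 0.0
    step = freq * TABLE_SIZE / SR  # table positions advanced per sample
    out = []
    for _ in range(num_samples):
        i = int(phase)
        frac = phase - i
        a = table[i]
        b = table[(i + 1) % TABLE_SIZE]
        out.append(a + frac * (b - a))  # linear interpolation
        phase = (phase + step) % TABLE_SIZE
    return out
```

Because the expensive waveform computation happens once, at table-fill time, per-sample cost is just a lookup and an interpolation, which is why the technique suited early digital hardware.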
Laurie Spiegel is an American composer. She has worked at Bell Laboratories, in computer graphics, and is known primarily for her electronic-music compositions and her algorithmic composition software Music Mouse. She also plays the guitar and lute.
MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC was the first computer program for generating digital audio waveforms through direct synthesis. It was one of the first programs for making music on a digital computer, and was certainly the first program to gain wide acceptance in the music research community as viable for that task. The world's first computer-controlled music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, CSIRAC produced sound by sending raw pulses to a speaker; it did not produce standard digital audio with PCM samples, as the MUSIC-series programs did.
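Direct synthesis of PCM audio, the innovation distinguishing MUSIC from CSIRAC's raw pulses, means computing the waveform one sample value at a time and quantizing it. The following is a minimal modern sketch of that idea, not Mathews's actual code; the sample rate and function names are assumptions:

```python
import math
import struct

SR = 8000  # sample rate in Hz (illustrative; not the IBM 704's actual rate)

def synthesize_tone(freq, seconds, amplitude=0.5):
    """Directly compute a waveform as a sequence of sample values,
    one number per instant of time: the core idea behind MUSIC."""
    n = int(SR * seconds)
    return [amplitude * math.sin(2 * math.pi * freq * t / SR)
            for t in range(n)]

def to_pcm16(samples):
    """Quantize floating-point samples in [-1, 1] to signed 16-bit
    little-endian PCM bytes, the format of standard digital audio."""
    clipped = [max(-1.0, min(1.0, s)) for s in samples]
    return struct.pack("<%dh" % len(clipped),
                       *(int(s * 32767) for s in clipped))
```

On the IBM 704 these samples could not be computed in real time; they were written to tape and converted to sound afterwards, which is why the early MUSIC programs were strictly studio tools.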
Barry Lloyd Vercoe is a New Zealand-born computer scientist and composer. He is best known as the inventor of Csound, a music synthesis language with wide usage among computer music composers. SAOL, the underlying language for the MPEG-4 Structured Audio standard, is also historically derived from Csound.
Karlheinz Essl is an Austrian composer, performer, sound artist, improviser, and composition teacher.
The Institute of Electronic Music and Acoustics (IEM) is a multidisciplinary research center within the University of Music and Performing Arts, Graz, (Austria).
The Bell Labs Digital Synthesizer, better known as the Alles Machine or Alice, was an experimental additive synthesizer designed by Hal Alles at Bell Labs during the 1970s. It was a computer-controlled 16-bit digital synthesizer operating at 30,000 samples per second with 32 FM sine-wave oscillators. The Alles Machine has been called the first true digital additive synthesizer, following earlier Bell experiments that were partially or wholly implemented as software on large computers. Only one full-length composition was recorded for the machine before it was disassembled and donated to Oberlin Conservatory's TIMARA department in 1981. Several commercial synthesizers based on the Alles design were released during the 1980s, including the Atari AMY sound chip.
Gareth Loy is an American author, composer, musician, and mathematician. Loy is the author of the two-volume series on the intersection of music and mathematics titled Musimathics. Loy was an early practitioner of music synthesis at Stanford, and wrote the first software compiler for the Systems Concepts Digital Synthesizer. More recently, Loy has published the freeware music programming language Musimat, designed specifically for subjects covered in Musimathics, available as a free download. Although Musimathics was first published in 2006 and 2007, the series continues to evolve with updates by the author and publishers. The texts are being used in numerous math and music classes at both the graduate and undergraduate level, with more recent reviews noting that the originally targeted academic distribution is now reaching a much wider audience. Music synthesis pioneer Max Mathews stated that Loy's books are a "guided tour-de-force of the mathematics of physics and music... Loy has always been a brilliantly clear writer. In Musimathics, he is also an encyclopedic writer. He covers everything needed to understand existing music and musical instruments, or to create new music or new instruments... Loy's book and John R. Pierce's famous The Science of Musical Sound belong on everyone's bookshelf, and the rest of the shelf can be empty." John Chowning states, in regard to Nekyia and the Samson Box, "After completing the software, Loy composed Nekyia, a beautiful and powerful composition in four channels that fully exploited the capabilities of the Samson Box. As an integral part of the community, Loy has paid back many times over all that he learned, by conceiving the (Samson) system with maximal generality such that it could be used for research projects in psychoacoustics as well as for hundreds of compositions by a host of composers having diverse compositional strategies."
The Brooklyn College Center for Computer Music (BC-CCM) located at Brooklyn College of the City University of New York (CUNY) was one of the first computer music centers at a public university in the United States. The BC-CCM is a community of artists and researchers that began in the 1970s.
The DDP-24 (1963) was a 24-bit computer designed and built by the Computer Control Company, also known as 3C, located in Framingham, Massachusetts. In 1966 the company was sold to Honeywell, which continued the DDP line into the 1970s.
Richard Charles Boulanger is a composer, author, and electronic musician. He is a key figure in the development of the audio programming language Csound, and is associated with computer music pioneers Max Mathews and Barry Vercoe.
TIMARA is a program at the Oberlin Conservatory of Music notable for its importance in the history of electronic music. Established in 1967, TIMARA is well known as the world's first conservatory program in electronic music. Department alumni have included Cory Arcangel, Christopher Rouse, Dary John Mizelle, Dan Forden and Amy X Neuburg.