Granular synthesis

Granular synthesis is a sound synthesis method that operates on the microsound time scale.

It is based on the same principle as sampling; however, the samples are split into small pieces of roughly 1 to 100 ms in duration, called grains. Multiple grains may be layered on top of one another and may play at different speeds, phases, volumes, and frequencies, among other parameters.

At low playback speeds, the result is a kind of soundscape, often described as a cloud, that can be manipulated in ways not possible with conventional sampling or other synthesis techniques. At high speeds, the result is heard as a note, or notes, of a novel timbre. By varying the waveform, envelope, duration, spatial position, and density of the grains, many different sounds can be produced.
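The grain parameters described above can be sketched in plain Python. This is an illustrative toy, not code from the cited literature; the function names (`make_grain`, `grain_cloud`) and the parameter ranges are assumptions chosen for the example:

```python
import math
import random

SR = 44100  # assumed sample rate

def make_grain(duration_s, freq_hz, amp, sr=SR):
    """One grain: a short sine burst shaped by a Hann envelope."""
    n = int(duration_s * sr)
    return [
        amp
        * math.sin(2 * math.pi * freq_hz * i / sr)
        * 0.5 * (1.0 - math.cos(2 * math.pi * i / (n - 1)))  # Hann window
        for i in range(n)
    ]

def grain_cloud(length_s, density_hz, sr=SR, seed=0):
    """Scatter grains with random onset, duration, pitch, and level,
    overlap-adding them into one output buffer (a 'cloud')."""
    rng = random.Random(seed)
    out = [0.0] * int(length_s * sr)
    for _ in range(int(length_s * density_hz)):
        grain = make_grain(
            duration_s=rng.uniform(0.01, 0.1),  # 10-100 ms, as above
            freq_hz=rng.uniform(200.0, 2000.0),
            amp=rng.uniform(0.1, 0.5),
            sr=sr,
        )
        onset = rng.randrange(len(out))
        for i, s in enumerate(grain):
            if onset + i < len(out):
                out[onset + i] += s  # overlapping grains simply sum
    return out

cloud = grain_cloud(length_s=1.0, density_hz=50)  # 50 grains over one second
```

Raising `density_hz` thickens the texture toward a continuous cloud, while shortening the grain durations pushes the result toward sparse clicks, reflecting the parameter trade-offs described above.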

Both kinds of result have been used for musical purposes: as sound effects, as raw material for further processing by other synthesis or digital signal processing effects, or as complete musical works in their own right. Conventional effects that can be achieved include amplitude modulation and time stretching. More experimental treatments include stereo or multichannel scattering, random reordering, disintegration, and morphing.
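Time stretching, one of the conventional effects mentioned, can be sketched as granulation in which overlapping windowed grains are read from the source more slowly than they are written to the output. This is a minimal sketch; the helper name `time_stretch` and the grain and hop sizes are illustrative assumptions, not a production algorithm:

```python
import math

def time_stretch(signal, stretch, grain_len=1024, hop=256):
    """Granular time stretch: read Hann-windowed grains from the source,
    advancing the read pointer 'stretch' times more slowly than the write
    pointer. Each grain plays at its original rate, so the duration
    changes while the pitch is roughly preserved."""
    win = [0.5 * (1.0 - math.cos(2 * math.pi * i / (grain_len - 1)))
           for i in range(grain_len)]
    out_len = int(len(signal) * stretch)
    out = [0.0] * (out_len + grain_len)    # headroom for the final grain
    write = 0
    while write < out_len:
        read = int(write / stretch)        # the slower read pointer
        for i, s in enumerate(signal[read:read + grain_len]):
            out[write + i] += s * win[i]   # overlap-add
        write += hop
    return out[:out_len]

tone = [math.sin(2 * math.pi * 440.0 * i / 44100.0) for i in range(4096)]
stretched = time_stretch(tone, stretch=2.0)  # twice the duration
```

Setting `stretch` below 1.0 compresses the sound instead; scattering the read pointer randomly rather than advancing it steadily gives the random-reordering effect mentioned above.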

History

Greek composer Iannis Xenakis is credited as the inventor of the granular synthesis technique. [1] He was the first to explicate a compositional theory for grains of sound, beginning from the following lemma: "All sound, even continuous musical variation, is conceived as an assemblage of a large number of elementary sounds adequately disposed in time. In the attack, body, and decline of a complex sound, thousands of pure sounds appear in a more or less short interval of time." Xenakis created granular sounds using analog tone generators and tape splicing; these appear in the composition Analogique A-B for string orchestra and tape (1959). [2]

Curtis Roads was the first to implement granular synthesis on a computer in 1974. [3]

Twelve years later, in 1986, the Canadian composer Barry Truax implemented real-time versions of the technique using the DMX-1000 Signal Processing Computer. [4] Truax went on to implement granular synthesis in several different ways. [2]

Microsound

Microsound includes all sounds on a time scale shorter than that of musical notes (the sound-object time scale) and longer than that of individual samples. Specifically, this means durations shorter than one tenth of a second and longer than 10 milliseconds, which takes in part of the audio frequency range (20 Hz to 20 kHz) as well as part of the infrasonic frequency range (below 20 Hz, the domain of rhythm). [5]

These sounds include transient audio phenomena and are known in acoustics and signal processing by various names including sound particles, quantum acoustics, sonal atom, grain, glisson, grainlet, trainlet, microarc, wavelet, chirplet, fof, time-frequency atom, pulsar, impulse, toneburst, tone pip, acoustic pixel, and others. In the frequency domain they may be named kernel, logon, and frame, among others. [5]

The physicist Dennis Gabor was an important pioneer of microsound. [5] Micromontage is musical montage carried out at the microsound time scale.

Microtime is the level of "sonic" or aural "syntax" or the "time-varying distribution of...spectral energy". [6]


References

  1. Xenakis, Iannis (1971). Formalized Music: Thought and Mathematics in Composition. Bloomington and London: Indiana University Press.
  2. Roads, Curtis (1996). The Computer Music Tutorial. Cambridge: The MIT Press. p. 169. ISBN 0-262-18158-4.
  3. Roads, Curtis (2001). Microsound. Cambridge, Massachusetts: The MIT Press. ISBN 0-262-18215-7.
  4. Truax, Barry (1988). "Real-Time Granular Synthesis with a Digital Signal Processor". Computer Music Journal. 12 (2): 14–26. doi:10.2307/3679938. JSTOR 3679938.
  5. Roads, Curtis (2001). Microsound. Cambridge, Massachusetts: The MIT Press. pp. vii, 20–28. ISBN 0-262-18215-7.
  6. Vaggione, Horacio (1996). "Articulating Microtime". Computer Music Journal. 20 (2): 33–38.
  7. "Software".
  8. "Understanding Clouds and Its Derivatives". After Later Audio. 21 August 2021. Retrieved 2022-11-09.
  9. "Morphagene". Signal Flux. 6 July 2019. Retrieved 2022-11-09.
  10. "Make Noise Co. | Morphagene". www.makenoisemusic.com. Retrieved 2022-11-09.
  11. "Tasty Chips GR-1". Sound on Sound.
