Sound object

In musique concrète and electronic music theory, the term sound object (originally l'objet sonore) refers to a primary unit of sonic material, often specifically recorded sound rather than music written in a score. The term was coined by Pierre Schaeffer in his publication Traité des objets musicaux (1966).

Definitions

Pierre Schaeffer

According to Schaeffer:

This unit of sound [sound-object] is the equivalent to a unit of breath or articulation, a unit of instrumental gesture. The sound object is therefore an acoustic action and intention of listening. [1]

Schaeffer believed that the sound object should be free from its sonic origin (its sound source, or source bonding) so that a listener could not identify it, a mode of hearing he termed acousmatic listening. Schaeffer's four functions of "what can be heard" are:

1. A sonic entity is detected when its signal is picked up by the autonomous mechanism of hearing (ouïr).

2. The detected sonic entity's 'sound character' is deciphered by the active perception of listening (écouter).

3. The signalled sonic entity is then subjected to a twofold focused attention that judges and then describes it.

4. The signalled sonic entity's significance is then understood by abstraction, comparison, deduction, and by linking it to different sources and types (either the initial meaning is confirmed or, if denied, an additional meaning is worked out). [2]

This leads to the acousmatic situation, which focuses on the subjective "listening itself which becomes the phenomena under study" [3] rather than on the sounding object or its source. Music theorist Brian Kane, in his book Sound Unseen, notes that Schaeffer states that "the sound object is never revealed clearly except in the acousmatic experience."

Schaeffer's theory of the acousmatic experience, the sound object, and the technique he called reduced listening (écoute réduite) utilize a phenomenological approach derived from the work of Edmund Husserl and Maurice Merleau-Ponty. According to Kane, a good grasp of Husserlian theory is required to fully comprehend the relationship between the three. [4]

Curtis Roads

Curtis Roads, in his 2001 book 'Microsound', while attributing the origin of the term to Pierre Schaeffer, describes the sound object as "a basic unit of musical structure, generalizing the traditional concept of note to include complex and mutating sound events on a time scale ranging from a fraction of a second to several seconds." This broader interpretation situates the sound object within Roads' hierarchy of musical time scales:

1. Infinite: The ideal time span of mathematical durations, such as the infinite sine waves of classical Fourier analysis.

2. Supra: A time scale beyond that of an individual composition, extending into months, years, decades, and centuries.

3. Macro: The time scale of overall musical architecture or form, measured in minutes or hours, or in extreme cases, days.

4. Meso: Divisions of form; groupings of sound objects into hierarchies of phrase structures of various sizes, measured in minutes or seconds.

5. Sound object: A basic unit of musical structure, generalizing the traditional concept of note to include complex and mutating sound events on a time scale ranging from a fraction of a second to several seconds.

6. Micro: Sound particles on a time scale that extends down to the threshold of auditory perception, measured in thousandths of a second (milliseconds).

7. Sample: The atomic level of digital audio systems: individual binary samples or numerical amplitude values, one following another at a fixed time interval. The period between samples is measured in millionths of a second (microseconds).

8. Subsample: Fluctuations on a time scale too brief to be properly recorded or perceived, measured in billionths of a second (nanoseconds) or less.

9. Infinitesimal: The ideal time span of mathematical durations, such as the infinitely brief delta functions. [5]
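The perceptible boundaries in this hierarchy can be made concrete with a little arithmetic. The sketch below is an illustration only, not part of Roads' text: it classifies a duration in seconds against approximate thresholds implied by the descriptions above, assuming a hypothetical 48 kHz sample rate and rounded boundary values.

```python
# Minimal sketch: classify a duration into Roads' perceptible time scales.
# The numeric boundaries are illustrative assumptions, not values from Microsound.

SAMPLE_RATE = 48_000  # Hz; hypothetical digital audio sample rate


def classify_duration(seconds: float) -> str:
    """Return the (approximate) Roads time scale for a duration in seconds."""
    sample_period = 1.0 / SAMPLE_RATE      # ~20.8 microseconds at 48 kHz
    if seconds < sample_period:
        return "subsample"                 # too brief to be recorded at this rate
    if seconds < 0.001:
        return "sample"                    # individual samples, microseconds apart
    if seconds < 0.1:
        return "micro"                     # sound particles near the threshold of perception
    if seconds < 10.0:
        return "sound object"              # a fraction of a second to several seconds
    if seconds < 600.0:
        return "meso"                      # phrase-level groupings of sound objects
    return "macro"                         # overall form, minutes or hours


if __name__ == "__main__":
    for d in (1e-7, 1 / SAMPLE_RATE, 0.005, 0.5, 30.0, 1800.0):
        print(f"{d:>10.6g} s -> {classify_duration(d)}")
```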

Trevor Wishart

English composer Trevor Wishart derives his own version of the sound object from Schaeffer's, but unlike Schaeffer, Wishart favours a materialist or physicalist notion, saying:

Given that we have established a coherent aural image of a real acoustic space, we may then begin to position sound-objects within the space. Imagine for a moment that we have established the acoustic space of a forest (width represented by the spread across a pair of stereo speakers, depth represented by decreasing amplitude and high-frequency components and increasing reverberation) then position the sounds of various birds and animals within this space. [6]
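Wishart's forest example can be read as a recipe for placing a sound object in a virtual stereo space. The sketch below is a hedged illustration of that recipe, not Wishart's own method: width is rendered as constant-power stereo panning, and depth as decreasing amplitude, a simple one-pole low-pass filter that removes high-frequency components, and an increasing share of a crude, stand-in reverberation. The panning law, attenuation curve, and filter mapping are all assumptions made for the example.

```python
# Minimal sketch of Wishart-style placement of a sound object in a stereo "acoustic space".
# All numeric mappings are illustrative assumptions, not values from On Sonic Art.
import numpy as np


def place_in_space(mono: np.ndarray, azimuth: float, distance: float,
                   sample_rate: int = 48_000) -> np.ndarray:
    """Render a mono sound object at a position in a virtual stereo space.

    azimuth:  -1.0 (hard left) .. +1.0 (hard right), constant-power panning.
    distance: 1.0 (close) and upwards; greater distance lowers amplitude,
              darkens the spectrum, and raises the reverberant (wet) share.
    """
    # Width: constant-power pan between the stereo speakers.
    theta = (azimuth + 1.0) * np.pi / 4.0            # map [-1, 1] -> [0, pi/2]
    left_gain, right_gain = np.cos(theta), np.sin(theta)

    # Depth 1: decreasing amplitude with distance (inverse-distance law).
    gain = 1.0 / max(distance, 1.0)

    # Depth 2: fewer high-frequency components -- a one-pole low-pass whose
    # cutoff falls as the source recedes (purely illustrative mapping).
    cutoff = 12_000.0 / max(distance, 1.0)           # Hz
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sample_rate)
    dry = np.zeros(len(mono))
    state = 0.0
    for i, x in enumerate(mono):
        state += alpha * (x - state)
        dry[i] = state

    # Depth 3: increasing reverberation -- stand-in "reverb" from a few decaying echoes.
    wet_mix = min(0.8, 0.1 * distance)
    wet = np.zeros_like(dry)
    for delay_s, decay in ((0.029, 0.6), (0.047, 0.4), (0.071, 0.25)):
        d = int(delay_s * sample_rate)
        wet[d:] += decay * dry[:-d]
    signal = (1.0 - wet_mix) * dry + wet_mix * wet

    return gain * np.stack([left_gain * signal, right_gain * signal], axis=-1)
```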

Denis Smalley

Denis Smalley, inspired by Schaeffer's theories, developed 'spectromorphology' (Smalley, 1997) as a tool for analysing sound materials. He states:

"I have developed the concepts and terminology of spectromorphologyas tools for describing and analysing listening experience…  A spectromorphological approach sets out spectral and morphological models and processes, and provides a framework for understanding structural relations and behaviours as experienced in the temporal flux of the music." [7]

An important aspect of spectromorphology is what Smalley calls 'source bonding', which he describes as the duality of any given listening situation. According to Smalley, a sound object has an extrinsic nature if its source bonding remains intact; if not, its sonic character is intrinsic in nature. Whether a sound object's source bonding is intrinsic or extrinsic depends on the experiences of the listener.
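Spectromorphological description is concerned with how a sound's spectrum changes shape over time. As a loose computational analogue (an assumption of this article, not a method Smalley proposes), one can track simple spectral descriptors frame by frame; the sketch below computes a short-time spectral centroid, a rough index of spectral "brightness" as it evolves, with assumed frame and hop sizes.

```python
# Minimal sketch: frame-by-frame spectral centroid as a rough computational stand-in
# for following a sound's spectral shape in time. Frame and hop sizes are assumptions.
import numpy as np


def spectral_centroid_track(signal: np.ndarray, sample_rate: int,
                            frame_size: int = 2048, hop: int = 512) -> np.ndarray:
    """Return the spectral centroid (Hz) of each analysis frame."""
    window = np.hanning(frame_size)
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
    centroids = []
    for start in range(0, len(signal) - frame_size, hop):
        frame = signal[start:start + frame_size] * window
        magnitude = np.abs(np.fft.rfft(frame))
        total = magnitude.sum()
        centroids.append((freqs * magnitude).sum() / total if total > 0 else 0.0)
    return np.array(centroids)


# Example: a tone gliding upward yields a rising centroid track.
sr = 48_000
t = np.linspace(0.0, 2.0, 2 * sr, endpoint=False)
glide = np.sin(2 * np.pi * (220.0 + 400.0 * t) * t)   # chirp rising from 220 Hz
print(spectral_centroid_track(glide, sr)[::20].round(1))
```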


References

  1. Schaeffer, Pierre (2002) [1966]. Traité des objets musicaux: Essai interdisciplines (in French) (2nd ed.). Paris: Éditions du Seuil. p. 271. ISBN 978-2-02-002608-6. OCLC 751268549. For an English translation, see: Schaeffer, Pierre (2012). In Search of a Concrete Music. Translated by North, Christine; Dack, John. London: University of California Press. ISBN 978-0-520-26573-8. OCLC 788263789.
  2. Schaeffer (2002), pp. 74-79.
  3. Schaeffer, Pierre (2017). Treatise on Musical Objects: An Essay across Disciplines. Translated by North, Christine; Dack, John. California: University of California Press. p. 65. ISBN 9780520294301.
  4. Kane, Brian (2014). Sound Unseen: Acousmatic Sound in Theory and Practice. Oxford: Oxford University Press. p. 17. ISBN 978-0-19-934784-1 (hardback); ISBN 978-0-19-934787-2 (online content).
  5. Roads, Curtis (2001). Microsound. Cambridge, MA: MIT Press. pp. 3, 409. ISBN 978-0-262-18215-7.
  6. Wishart, Trevor (1996). On Sonic Art. Amsterdam: Harwood. p. 146. ISBN   3-7186-5847-X.
  7. Smalley, Denis (1997). "Spectromorphology: explaining sound-shapes". Organised Sound. 2 (2): 107–126. Cambridge: Cambridge University Press.