Spectromorphology is the perceived sonic footprint of a sound spectrum as it manifests in time. A descriptive spectromorphological analysis of sound is sometimes used in the analysis of electroacoustic music, especially acousmatic music. The term was coined by Denis Smalley in 1986 and is considered the most adequate English term to designate the field of sound research associated with the French writer, composer, and academic Pierre Schaeffer.
Schaeffer's work in Paris, beginning in the late 1940s at the French radio studios that later became INA/GRM, culminated in the publication of the book Traité des objets musicaux in 1966. Smalley's notion of spectromorphology builds upon Schaeffer's theories relating to the use of a classification system for various categories of sound. [1]
Smalley's term refers to the descriptive analysis of perceived morphological developments in sound spectra over time, and it implies that the "spectro" cannot exist without the morphology: something has to be shaped and that something must have sonic content (Smalley, 1986, 1997).
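The pairing of spectrum and morphology maps naturally onto the short-time Fourier transform used in audio analysis: each analysis frame yields a spectrum, and the succession of frames shows how that spectrum is shaped in time. A minimal sketch (the decaying 440 Hz tone, frame size, and hop are illustrative assumptions, not part of Smalley's text):

```python
import numpy as np

def spectrogram(signal, frame_size=1024, hop=256):
    """Magnitude spectrum of each windowed frame: the 'spectro'
    (a column of frequency bins) unfolding as 'morphology' (across frames)."""
    window = np.hanning(frame_size)
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frames.append(np.abs(np.fft.rfft(signal[start:start + frame_size] * window)))
    return np.array(frames)  # shape: (num_frames, frame_size // 2 + 1)

# A 440 Hz tone with a decaying envelope: a minimal "shaped" spectrum.
sr = 8000
t = np.arange(sr) / sr  # one second of samples
tone = np.exp(-3 * t) * np.sin(2 * np.pi * 440 * t)
spec = spectrogram(tone)
```

Reading along a row of `spec` gives the spectrum at one moment; reading down a column shows how one spectral region evolves, which is the sense in which the spectrum "cannot exist without the morphology".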
The theoretical framework of spectromorphology is articulated in four main parts:
Smalley defines three spectral typologies along what he calls the noise-note continuum, which is subdivided into three principal elements:
Smalley also designates different morphological archetypes:
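The noise-note continuum above can be sketched numerically: a pure sine tone is a single spectral line, white noise fills the whole spectrum, and a crossfade between them passes through the intermediate region. A toy illustration (the crossfade, the spectral-flatness measure, and all parameter values are my own assumptions, not part of Smalley's typology):

```python
import numpy as np

def note_noise(mix, freq=440.0, sr=8000, dur=0.5, seed=0):
    """Crossfade a sine ('note' pole, mix=0) with white noise
    ('noise' pole, mix=1)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(sr * dur)) / sr
    note = np.sin(2 * np.pi * freq * t)
    noise = rng.standard_normal(t.size)
    return (1 - mix) * note + mix * noise

def spectral_flatness(signal):
    """Geometric over arithmetic mean of the power spectrum:
    near 0 for a pure tone, around 0.5-0.6 for white noise."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12
    return float(np.exp(np.log(power).mean()) / power.mean())

# Flatness rises as we move along the continuum from note to noise.
flatness = [spectral_flatness(note_noise(m)) for m in (0.0, 0.5, 1.0)]
```

Spectral flatness is only a crude proxy for position on the continuum, but it increases monotonically here as the noise component grows.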
Electronic music is music that employs electronic musical instruments, digital instruments, or circuitry-based music technology in its creation. It includes both music made using electronic means and music made using electromechanical means. Pure electronic instruments depend entirely on circuitry-based sound generation, for instance electronic oscillators, theremins, or synthesizers. Electromechanical instruments have mechanical parts such as strings and hammers, together with electric elements such as magnetic pickups, power amplifiers, and loudspeakers; such devices include the telharmonium, Hammond organ, electric piano, and electric guitar.
Musique concrète is a type of music composition that utilizes recorded sounds as raw material. Sounds are often modified through the application of audio effects and tape manipulation techniques, and may be assembled into a form of montage. It can feature sounds derived from recordings of musical instruments, the human voice, and the natural environment as well as those created using synthesizers and computer-based digital signal processing. Compositions in this idiom are not restricted to the normal musical rules of melody, harmony, rhythm, metre, and so on. It exploits acousmatic listening, meaning sound identities can often be intentionally obscured or appear unconnected to their source cause.
In linguistics, morphology is the study of words, how they are formed, and their relationship to other words in the same language. It analyzes the structure of words and parts of words such as stems, root words, prefixes, and suffixes. Morphology also looks at parts of speech, intonation and stress, and the ways context can change a word's pronunciation and meaning. Morphology differs from morphological typology, which is the classification of languages based on their use of words, and lexicology, which is the study of words and how they make up a language's vocabulary.
Phonology is a branch of linguistics that studies how languages or dialects systematically organize their sounds. The term also refers to the sound system of any particular language variety. At one time, the study of phonology only related to the study of the systems of phonemes in spoken languages. Now it may relate to any level of language at which sound (or sign) is structured to convey linguistic meaning.
A spectrum is a condition that is not limited to a specific set of values but can vary, without steps, across a continuum. The word was first used scientifically in optics to describe the rainbow of colors in visible light after passing through a prism. As scientific understanding of light advanced, it came to apply to the entire electromagnetic spectrum. It thereby became a mapping from a range of magnitudes (wavelengths) to a range of perceived qualities: the colors of the rainbow, together with properties corresponding to wavelengths outside the visible spectrum.
In music, timbre, also known as tone color or tone quality, is the perceived sound quality of a musical note, sound or tone. Timbre distinguishes different types of sound production, such as choir voices and musical instruments. It also enables listeners to distinguish different instruments in the same category.
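Much of what is perceived as timbre can be traced to the relative strength of a tone's harmonics, which is part of what a spectromorphological reading attends to. A hedged sketch (the amplitude profiles and the spectral-centroid brightness measure are illustrative choices, not a definition of timbre):

```python
import numpy as np

def harmonic_tone(f0, harmonic_amps, sr=8000, dur=0.25):
    """Sum of harmonics of f0 with the given amplitudes; the
    amplitude profile is a crude stand-in for timbre."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))

# Same fundamental (220 Hz), same pitch, different spectra: a dull,
# fundamental-heavy tone versus a brighter one with strong upper partials.
dull = harmonic_tone(220, [1.0, 0.1, 0.05])
bright = harmonic_tone(220, [0.5, 0.9, 0.8])

def centroid(signal, sr=8000):
    """Spectral centroid in Hz, a common correlate of brightness."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    return float(np.sum(freqs * mags) / np.sum(mags))
```

The two tones share a fundamental frequency, so a listener hears the same note, but the brighter tone's spectral centroid sits much higher, one measurable correlate of the difference in tone color.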
Noise music is a genre of music that is characterised by the expressive use of noise within a musical context. This type of music tends to challenge the distinction that is made in conventional musical practices between musical and non-musical sound. Noise music includes a wide range of musical styles and sound-based creative practices that feature noise as a primary aspect.
Electroacoustic music is a genre of Western art music in which composers use technology to manipulate the timbres of acoustic sounds, sometimes by using audio signal processing, such as reverb or harmonizing, on acoustical instruments. It originated around the middle of the 20th century, following the incorporation of electric sound production into compositional practice. The initial developments in electroacoustic music composition to fixed media during the 20th century are associated with the activities of the Groupe de recherches musicales at the ORTF in Paris, the home of musique concrète, the Studio for Electronic Music in Cologne, where the focus was on the composition of elektronische Musik, and the Columbia-Princeton Electronic Music Center in New York City, where tape music, electronic music, and computer music were all explored. Practical electronic music instruments began to appear in the early 1900s.
A soundscape is the acoustic environment as perceived by humans, in context. The term was originally coined by Michael Southworth, and popularised by R. Murray Schafer. There is a varied history of the use of soundscape depending on discipline, ranging from urban design to wildlife ecology to computer science. An important distinction is to separate soundscape from the broader acoustic environment. The acoustic environment is the combination of all the acoustic resources, natural and artificial, within a given area as modified by the environment. The International Organization for Standardization (ISO) standardized these definitions in 2014 (ISO 12913-1:2014).
A linguistic universal is a pattern that occurs systematically across natural languages, potentially true for all of them. For example, "All languages have nouns and verbs", or "If a language is spoken, it has consonants and vowels". Research in this area of linguistics is closely tied to the study of linguistic typology, and intends to reveal generalizations across languages, likely tied to cognition, perception, or other abilities of the mind. The field originates from discussions influenced by Noam Chomsky's proposal of a Universal Grammar, but was largely pioneered by the linguist Joseph Greenberg, who derived a set of forty-five basic universals, mostly dealing with syntax, from a study of some thirty languages.
In electronic music theory and electronic composition theory, a sound object refers to a primary unit of music that could be played on an instrument or sung by a vocalist. A sound object specifically refers to recorded sound rather than music written in manuscript or a score.
Acousmatic music is a form of electroacoustic music that is specifically composed for presentation using speakers, as opposed to a live performance. It stems from a compositional tradition that dates back to the introduction of musique concrète in the late 1940s. Unlike musical works that are realised using sheet music exclusively, compositions that are purely acousmatic often exist solely as fixed media audio recordings.
Categorical perception is the perception of distinct categories when a variable changes gradually along a continuum. It was originally observed for auditory stimuli but has since been found to apply to other perceptual modalities.
Robert Cogan was an American music theorist, composer and teacher.
In perception and psychophysics, auditory scene analysis (ASA) is a proposed model for the basis of auditory perception. This is understood as the process by which the human auditory system organizes sound into perceptually meaningful elements. The term was coined by psychologist Albert Bregman. The related concept in machine perception is computational auditory scene analysis (CASA), which is closely related to source separation and blind signal separation.
In linguistics, speech synthesis, and music, the pitch contour of a sound is a function or curve that tracks the perceived pitch of the sound over time. Pitch contour may include multiple sounds utilizing many pitches, and can relate the frequency function at one point in time to the frequency function at a later point.
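A pitch contour of the kind described above can be approximated by estimating one pitch value per analysis frame and stringing the estimates together. A crude sketch using the autocorrelation peak (the stepped test tone, frame sizes, and pitch-range limits are illustrative assumptions, not a standard algorithm specification):

```python
import numpy as np

def pitch_contour(signal, sr, frame=1024, hop=512, fmin=80.0, fmax=500.0):
    """Per-frame pitch estimate: the lag of the autocorrelation peak
    within the plausible period range, converted to Hz."""
    min_lag = int(sr / fmax)  # shortest period considered
    max_lag = int(sr / fmin)  # longest period considered
    contour = []
    for start in range(0, len(signal) - frame + 1, hop):
        x = signal[start:start + frame]
        x = x - x.mean()
        ac = np.correlate(x, x, mode='full')[frame - 1:]  # lags 0..frame-1
        lag = int(np.argmax(ac[min_lag:max_lag])) + min_lag
        contour.append(sr / lag)
    return np.array(contour)

# A tone that steps from 220 Hz up to 330 Hz: a rising pitch contour.
sr = 8000
t = np.arange(sr // 2) / sr  # half a second per segment
step = np.concatenate([np.sin(2 * np.pi * 220 * t),
                       np.sin(2 * np.pi * 330 * t)])
contour = pitch_contour(step, sr)
```

The returned array is the contour itself: a function from frame time to perceived pitch, which for this signal starts near 220 Hz and ends near 330 Hz.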
Experimental music is a general label for any music that pushes existing boundaries and genre definitions. Experimental compositional practice is defined broadly by exploratory sensibilities radically opposed to, and questioning of, institutionalized compositional, performing, and aesthetic conventions in music. Elements of experimental music include indeterminate music, in which the composer introduces the elements of chance or unpredictability with regard to either the composition or its performance. Artists may also approach a hybrid of disparate styles or incorporate unorthodox and unique elements.
The phonology of Navajo, the sound system of the Navajo language, is intimately connected to its morphology. For example, the entire range of contrastive consonants is found only at the beginning of word stems. In stem-final position and in prefixes, the number of contrasts is drastically reduced. Similarly, vowel contrasts found outside of the stem are significantly neutralized. For details about the morphology of Navajo, see Navajo grammar.
In physics, sound is a vibration that propagates as an acoustic wave, through a transmission medium such as a gas, liquid or solid.
Live electronic music is a form of music that can include traditional electronic sound-generating devices, modified electric musical instruments, hacked sound generating technologies, and computers. Initially the practice developed in reaction to sound-based composition for fixed media such as musique concrète, electronic music and early computer music. Musical improvisation often plays a large role in the performance of this music. The timbres of various sounds may be transformed extensively using devices such as amplifiers, filters, ring modulators and other forms of circuitry. Real-time generation and manipulation of audio using live coding is now commonplace.