In music, timbre, also known as tone color or tone quality (from psychoacoustics), is the perceived sound quality of a musical note, sound or tone. Timbre distinguishes different types of sound production, such as choir voices and musical instruments. It also enables listeners to distinguish different instruments in the same category (e.g., an oboe and a clarinet, both woodwind instruments).
In simple terms, timbre is what makes a particular musical instrument or human voice have a different sound from another, even when they play or sing the same note. For instance, it is the difference in sound between a guitar and a piano playing the same note at the same volume. Both instruments can sound equally tuned in relation to each other as they play the same note, and while playing at the same amplitude level each instrument will still sound distinctively with its own unique tone color. Experienced musicians are able to distinguish between different instruments of the same type based on their varied timbres, even if those instruments are playing notes at the same fundamental pitch and loudness.
The physical characteristics of sound that determine the perception of timbre include frequency spectrum and envelope. Singers and instrumental musicians can change the timbre of the music they are singing/playing by using different singing or playing techniques. For example, a violinist can use different bowing styles or play on different parts of the string to obtain different timbres (e.g., playing sul tasto produces a light, airy timbre, whereas playing sul ponticello produces a harsh, even aggressive tone). On electric guitar and electric piano, performers can change the timbre using effects units and graphic equalizers.
Tone quality and tone color are synonyms for timbre, as well as the "texture attributed to a single instrument". However, the word texture can also refer to the type of music, such as multiple, interweaving melody lines versus a singable melody accompanied by subordinate chords. Hermann von Helmholtz used the German Klangfarbe (tone color), and John Tyndall proposed an English translation, clangtint, but both terms were disapproved of by Alexander Ellis, who also rejected register and color because of their pre-existing English meanings.

Determined by its frequency composition, the sound of a musical instrument may be described with words such as bright, dark, warm, and harsh. There are also colors of noise, such as pink and white. In visual representations of sound, timbre corresponds to the shape of the image, loudness corresponds to brightness, and pitch corresponds to the vertical position on the spectrogram.
The Acoustical Society of America (ASA) Acoustical Terminology definition 12.09 of timbre describes it as "that attribute of auditory sensation which enables a listener to judge that two nonidentical sounds, similarly presented and having the same loudness and pitch, are dissimilar", adding, "Timbre depends primarily upon the frequency spectrum, although it also depends upon the sound pressure and the temporal characteristics of the sound".
Many commentators have attempted to decompose timbre into component attributes. For example, J. F. Schouten (1968, 42) describes the "elusive attributes of timbre" as "determined by at least five major acoustic parameters", which Robert Erickson finds, "scaled to the concerns of much contemporary music":
An example of a tonal sound is a musical sound that has a definite pitch, such as pressing a key on a piano; a sound with a noiselike character would be white noise, the sound similar to that produced when a radio is not tuned to a station.
Erickson gives a table of subjective experiences and related physical phenomena based on Schouten's five attributes:
| Subjective experience | Physical phenomenon |
| --- | --- |
| Tonal character, usually pitched | Periodic sound |
| Noisy, with or without some tonal character, including rustle noise | Noise, including random pulses characterized by the rustle time (the mean interval between pulses) |
| Beginning/ending | Physical rise and decay time |
| Coloration glide or formant glide | Change of spectral envelope |
| Microintonation | Small change (one up and down) in frequency |
See also Psychoacoustic evidence below.
The richness of a sound or note a musical instrument produces is sometimes described in terms of a sum of a number of distinct frequencies. The lowest frequency is called the fundamental frequency, and the pitch it produces is used to name the note, but the fundamental frequency is not always the dominant frequency. The dominant frequency is the frequency that is most heard, and it is always a multiple of the fundamental frequency. For example, the dominant frequency for the transverse flute is double the fundamental frequency. Other significant frequencies are called overtones of the fundamental frequency, which may include harmonics and partials. Harmonics are whole-number multiples of the fundamental frequency, such as ×2, ×3, ×4, and so on. Partials are other overtones. There are also sometimes subharmonics at whole-number divisions of the fundamental frequency. Most instruments produce harmonic sounds, but some, such as cymbals and other indefinite-pitched instruments, produce inharmonic partials and tones.
When the tuning note in an orchestra or concert band is played, the sound is a combination of 440 Hz, 880 Hz, 1320 Hz, 1760 Hz and so on. Each instrument in the orchestra or concert band produces a different combination of these frequencies, as well as harmonics and overtones. The sound waves of the different frequencies overlap and combine, and the balance of these amplitudes is a major factor in the characteristic sound of each instrument.
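The balance of harmonic amplitudes described above can be sketched in code. The following is a minimal additive-style example in plain Python, with illustrative (arbitrarily chosen) amplitude weights: two tones share the same 440 Hz fundamental, but different weightings of the harmonics give them different timbres.

```python
import math

def additive_tone(f0, weights, sample_rate=8000, duration=0.01):
    """Sum sine partials at integer multiples of f0, one weight per harmonic."""
    n_samples = int(sample_rate * duration)
    return [
        sum(w * math.sin(2 * math.pi * (k + 1) * f0 * t / sample_rate)
            for k, w in enumerate(weights))
        for t in range(n_samples)
    ]

# Same fundamental (440 Hz), different harmonic balance -> different timbre.
bright = additive_tone(440, [1.0, 0.8, 0.6, 0.4])   # strong upper harmonics
dark = additive_tone(440, [1.0, 0.2, 0.05, 0.01])   # energy mostly in the fundamental
```

Both lists contain partials at 440, 880, 1320, and 1760 Hz; only their relative amplitudes differ, which is exactly the "balance" factor described above.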
William Sethares wrote that just intonation and the western equal tempered scale are related to the harmonic spectra/timbre of many western instruments in an analogous way that the inharmonic timbre of the Thai renat (a xylophone-like instrument) is related to the seven-tone near-equal tempered pelog scale in which they are tuned. Similarly, the inharmonic spectra of Balinese metallophones combined with harmonic instruments such as the stringed rebab or the voice, are related to the five-note near-equal tempered slendro scale commonly found in Indonesian gamelan music.
The timbre of a sound is also greatly affected by the following aspects of its envelope: attack time and characteristics, decay, sustain, release (ADSR envelope) and transients. Thus these are all common controls on professional synthesizers. For instance, if one takes away the attack from the sound of a piano or trumpet, it becomes more difficult to identify the sound correctly, since the sound of the hammer hitting the strings or the first blast of the player's lips on the trumpet mouthpiece are highly characteristic of those instruments. The envelope is the overall amplitude structure of a sound.
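The envelope stages named above can be sketched as a simple piecewise-linear generator. This is an illustrative sketch only; real synthesizers typically use exponential rather than linear segments, and the stage durations here are arbitrary.

```python
def adsr(attack, decay, sustain_level, sustain_time, release, rate=1000):
    """Piecewise-linear ADSR amplitude envelope, sampled at `rate` Hz."""
    env = []
    a = int(attack * rate)
    env += [i / a for i in range(a)]                          # attack: 0 -> 1
    d = int(decay * rate)
    env += [1 - (1 - sustain_level) * i / d for i in range(d)]  # decay: 1 -> sustain
    env += [sustain_level] * int(sustain_time * rate)         # sustain: flat
    r = int(release * rate)
    env += [sustain_level * (1 - i / r) for i in range(r)]    # release: sustain -> 0
    return env

env = adsr(attack=0.01, decay=0.05, sustain_level=0.6, sustain_time=0.2, release=0.1)
```

Multiplying a raw waveform sample-by-sample with such an envelope imposes the attack and decay characteristics that, as noted above, are often decisive for identifying an instrument.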
Instrumental timbre played an increasing role in the practice of orchestration during the eighteenth and nineteenth centuries. Berlioz and Wagner made significant contributions to its development during the nineteenth century. For example, Wagner's "Sleep motif" from Act 3 of his opera Die Walküre features a descending chromatic scale that passes through a gamut of orchestral timbres: first the woodwind (flute, followed by oboe), then the massed sound of strings with the violins carrying the melody, and finally the brass (French horns).
Debussy, who composed during the last decades of the nineteenth and the first decades of the twentieth centuries, has been credited with elevating further the role of timbre: "To a marked degree the music of Debussy elevates timbre to an unprecedented structural status; already in Prélude à l'après-midi d'un faune the color of flute and harp functions referentially".

Mahler’s approach to orchestration illustrates the increasing role of differentiated timbres in music of the early twentieth century. Norman Del Mar describes the following passage from the Scherzo movement of his Sixth Symphony as "a seven-bar link to the trio consisting of an extension in diminuendo of the repeated As… though now rising in a succession of piled octaves which moreover leap-frog with Cs added to the As. The lower octaves then drop away and only the Cs remain so as to dovetail with the first oboe phrase of the trio." During these bars, Mahler passes the repeated notes through a gamut of instrumental colors, mixed and single: starting with horns and pizzicato strings, progressing through trumpet, clarinet, flute, piccolo, and finally oboe.
See also Klangfarbenmelodie.
In rock music from the late 1960s to the 2000s, the timbre of specific sounds is important to a song. For example, in heavy metal music, the sonic impact of the heavily amplified, heavily distorted power chord played on electric guitar through very loud guitar amplifiers and rows of speaker cabinets is an essential part of the style's musical identity.
Often, listeners can identify an instrument, even at different pitches and loudness, in different environments, and with different players. In the case of the clarinet, acoustic analysis shows waveforms irregular enough to suggest three instruments rather than one. David Luce suggests that this implies that "[C]ertain strong regularities in the acoustic waveform of the above instruments must exist which are invariant with respect to the above variables". However, Robert Erickson argues that there are few regularities and they do not explain our "...powers of recognition and identification." He suggests borrowing the concept of subjective constancy from studies of vision and visual perception.
Psychoacoustic experiments from the 1960s onwards tried to elucidate the nature of timbre. One method involves playing pairs of sounds to listeners, then using a multidimensional scaling algorithm to aggregate their dissimilarity judgments into a timbre space. The most consistent outcomes from such experiments are that brightness (the spectral energy distribution) and the bite of the attack (its rate, synchronicity, and rise time) are important factors.
The concept of tristimulus originates in the world of color, describing the way three primary colors can be mixed together to create a given color. By analogy, the musical tristimulus measures the mixture of harmonics in a given sound, grouped into three sections. It is essentially a proposal to reduce the potentially huge number of sound partials, which can amount to dozens or hundreds in some cases, to only three values. The first tristimulus measures the relative weight of the first harmonic; the second measures the relative weight of the second, third, and fourth harmonics taken together; and the third measures the relative weight of all the remaining harmonics.
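The three-band grouping can be written directly. A sketch, assuming the harmonic amplitudes have already been measured (the example amplitudes below are hypothetical):

```python
def tristimulus(harmonic_amps):
    """Split harmonic weight into three bands: H1 | H2-H4 | H5 and above."""
    total = sum(harmonic_amps)
    t1 = harmonic_amps[0] / total          # first harmonic
    t2 = sum(harmonic_amps[1:4]) / total   # harmonics 2-4
    t3 = sum(harmonic_amps[4:]) / total    # all remaining harmonics
    return t1, t2, t3

# Hypothetical amplitudes for a sustained tone, fundamental first:
t1, t2, t3 = tristimulus([1.0, 0.4, 0.3, 0.2, 0.1, 0.05])
```

The three values always sum to one, so a tone's tristimulus can be plotted as a single point in a triangle, much like a color in a chromaticity diagram.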
However, more evidence, studies, and applications are needed to validate this type of representation.
The term "brightness" is also used in discussions of sound timbres, in a rough analogy with visual brightness. Timbre researchers consider brightness to be one of the perceptually strongest distinctions between sounds, and formalize it acoustically as an indication of the amount of high-frequency content in a sound, using a measure such as the spectral centroid.
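The spectral centroid mentioned above is simply the amplitude-weighted mean frequency of a spectrum. A minimal sketch with hypothetical harmonic magnitudes:

```python
def spectral_centroid(freqs, mags):
    """Amplitude-weighted mean frequency; a higher value reads as a 'brighter' sound."""
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

harmonics = [440 * k for k in range(1, 6)]   # 440, 880, ..., 2200 Hz

# Hypothetical magnitudes: energy concentrated low vs. spread high.
dark = spectral_centroid(harmonics, [1.0, 0.3, 0.1, 0.05, 0.02])
bright = spectral_centroid(harmonics, [1.0, 0.9, 0.8, 0.7, 0.6])
```

Both tones have the same fundamental, but the second's centroid sits higher in frequency, matching the perceptual report that it sounds brighter.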
Additive synthesis is a sound synthesis technique that creates timbre by adding sine waves together.
A harmonic series is the sequence of harmonics, musical tones, or pure tones whose frequencies are integer multiples of a fundamental frequency.
In music, there are two common meanings for tuning: tuning practice, the act of tuning an instrument or voice, and tuning systems, the various systems of pitches used to tune an instrument.
A harmonic is a sinusoidal wave with a frequency that is a positive integer multiple of the fundamental frequency of a periodic signal. The fundamental frequency is also called the 1st harmonic; the other harmonics are known as higher harmonics. As all harmonics are periodic at the fundamental frequency, the sum of harmonics is also periodic at that frequency. The set of harmonics forms a harmonic series.
An overtone is any resonant frequency above the fundamental frequency of a sound. In other words, overtones are all pitches higher than the lowest pitch within an individual sound; the fundamental is the lowest pitch. While the fundamental is usually heard most prominently, overtones are actually present in any sound except a pure sine wave. The relative volume or amplitude of the various overtone partials is one of the key identifying features of timbre, or the individual characteristic of a sound.
Pitch is a perceptual property of sounds that allows their ordering on a frequency-related scale, or more commonly, pitch is the quality that makes it possible to judge sounds as "higher" and "lower" in the sense associated with musical melodies. Pitch is a major auditory attribute of musical tones, along with duration, loudness, and timbre.
In music, inharmonicity is the degree to which the frequencies of overtones depart from whole multiples of the fundamental frequency.
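For a stiff string such as a piano string, a standard model places the nth partial at f_n = n·f0·√(1 + B·n²), where B is a small inharmonicity coefficient. A sketch (the B value below is illustrative, not measured from any particular piano):

```python
import math

def stiff_string_partials(f0, b, n_partials=8):
    """Partial frequencies of a stiff string: f_n = n * f0 * sqrt(1 + b * n**2)."""
    return [n * f0 * math.sqrt(1 + b * n * n) for n in range(1, n_partials + 1)]

ideal = stiff_string_partials(220, 0.0)      # b = 0: exact whole-number harmonics
piano = stiff_string_partials(220, 0.0004)   # small b: partials run progressively sharp
```

The higher the partial, the further it departs sharp of the exact whole-number multiple, which is the degree of inharmonicity the definition above describes.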
A harmonic sound is said to have a missing fundamental, suppressed fundamental, or phantom fundamental when its overtones suggest a fundamental frequency but the sound lacks a component at the fundamental frequency itself. The brain perceives the pitch of a tone not only by its fundamental frequency, but also by the periodicity implied by the relationship between the higher harmonics; we may perceive the same pitch even if the fundamental frequency is missing from a tone.
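The effect can be checked numerically: a sum of harmonics 2 through 5 of 100 Hz contains no 100 Hz component at all, yet the summed waveform still repeats with the 10 ms period of the missing fundamental. A minimal sketch:

```python
import math

f0, sr = 100, 8000   # fundamental (Hz) and sample rate (Hz)

# Sum harmonics 2-5 of 100 Hz: no energy at 100 Hz itself.
signal = [
    sum(math.sin(2 * math.pi * n * f0 * t / sr) for n in (2, 3, 4, 5))
    for t in range(400)
]

period = sr // f0   # 80 samples = 10 ms, the period of the *missing* fundamental
```

The waveform repeats every 80 samples but not every 40, so the implied periodicity corresponds to 100 Hz, not to the lowest component actually present (200 Hz).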
Piano acoustics is the set of physical properties of the piano that affect its sound. It is an area of study within musical acoustics.
Piano tuning is the act of adjusting the tension of the strings of an acoustic piano so that the musical intervals between strings are in tune. The meaning of the term 'in tune', in the context of piano tuning, is not simply a particular fixed set of pitches. Fine piano tuning requires an assessment of the vibration interaction among notes, which is different for every piano, thus in practice requiring slightly different pitches from any theoretical standard. Pianos are usually tuned to a modified version of the system called equal temperament.
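The equal-tempered reference pitches that such tuning starts from are easy to compute. A sketch of 12-tone equal temperament relative to A4 = 440 Hz (before any of the piano-specific modification described above is applied):

```python
def equal_tempered_freq(semitones_from_a4, a4=440.0):
    """12-tone equal temperament: each semitone is a factor of 2 ** (1 / 12)."""
    return a4 * 2 ** (semitones_from_a4 / 12)

middle_c = equal_tempered_freq(-9)   # C4, about 261.63 Hz
a5 = equal_tempered_freq(12)         # one octave above A4: exactly 880 Hz
```

In practice a fine piano tuning deviates slightly from these theoretical values to accommodate each instrument's inharmonicity.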
Musical acoustics or music acoustics is a multidisciplinary field that combines knowledge from physics, psychophysics, organology, physiology, music theory, ethnomusicology, signal processing and instrument building, among other disciplines. As a branch of acoustics, it is concerned with researching and describing the physics of music – how sounds are employed to make music. Examples of areas of study are the function of musical instruments, the human voice, computer analysis of melody, and the clinical use of music in music therapy.
In music, consonance and dissonance are categorizations of simultaneous or successive sounds. Within the Western tradition, some listeners associate consonance with sweetness, pleasantness, and acceptability, and dissonance with harshness, unpleasantness, or unacceptability, although there is broad acknowledgement that this depends also on familiarity and musical expertise. The terms form a structural dichotomy in which they define each other by mutual exclusion: a consonance is what is not dissonant, and a dissonance is what is not consonant. However, a finer consideration shows that the distinction forms a gradation, from the most consonant to the most dissonant. In casual discourse, as German composer and music theorist Paul Hindemith stressed, "The two concepts have never been completely explained, and for a thousand years the definitions have varied". The term sonance has been proposed to encompass or refer indistinctly to the terms consonance and dissonance.
Stretched tuning is a detail of musical tuning, applied to wire-stringed musical instruments, older, non-digital electric pianos, and some sample-based synthesizers based on these instruments, to accommodate the natural inharmonicity of their vibrating elements. In stretched tuning, two notes an octave apart, whose fundamental frequencies theoretically have an exact 2:1 ratio, are tuned slightly farther apart. "For a stretched tuning the octave is greater than a factor of 2; for a compressed tuning the octave is smaller than a factor of 2."
In perception and psychophysics, auditory scene analysis (ASA) is a proposed model for the basis of auditory perception. This is understood as the process by which the human auditory system organizes sound into perceptually meaningful elements. The term was coined by psychologist Albert Bregman. The related concept in machine perception is computational auditory scene analysis (CASA), which is closely related to source separation and blind signal separation.
Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, talks about these systems — "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
Harshness, in music information retrieval, is a non-contextual low-level audio descriptor (NLD) that represents one dimension of the multi-dimensional psychoacoustic feature called musical timbre.
Virtual pitch is the pitch of a complex tone. Virtual pitch corresponds approximately to the fundamental of a harmonic series that is recognized among the audible partials. A virtual pitch may be perceived even if the perceived pattern is incomplete or mistuned. In that respect, virtual pitch perception is similar to other forms of pattern recognition. It corresponds to the phenomenon whereby one's brain extracts tones from everyday signals and music, even if parts of the signal are masked by other sounds. Virtual pitch is contrasted to spectral pitch, which is the pitch of a pure tone or spectral component. Virtual pitch is called "virtual" because there is no acoustical correlate at the frequency corresponding to the pitch: even when a virtual pitch corresponds to a physically present fundamental, as it often does in everyday harmonic complex tones, the exact virtual pitch depends on the exact frequencies of higher harmonics and is almost independent of the exact frequency of the fundamental.
In physics, sound is a vibration that propagates as an acoustic wave, through a transmission medium such as a gas, liquid or solid. In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Only acoustic waves that have frequencies lying between about 20 Hz and 20 kHz, the audio frequency range, elicit an auditory percept in humans. In air at atmospheric pressure, these represent sound waves with wavelengths of 17 meters (56 ft) to 1.7 centimeters (0.67 in). Sound waves above 20 kHz are known as ultrasound and are not audible to humans. Sound waves below 20 Hz are known as infrasound. Different animal species have varying hearing ranges.
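The wavelength figures above follow from λ = v/f. A quick check, assuming a sound speed of 343 m/s (air at roughly 20 °C):

```python
def wavelength_m(frequency_hz, speed_of_sound=343.0):
    """Wavelength in metres: lambda = v / f."""
    return speed_of_sound / frequency_hz

longest = wavelength_m(20)        # lowest audible frequency -> about 17 m
shortest = wavelength_m(20_000)   # highest audible frequency -> about 1.7 cm
```

Because the audible range spans three decades of frequency, audible wavelengths span a factor of a thousand.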
Psychoacoustics is the branch of psychophysics involving the scientific study of sound perception and audiology: how the human auditory system perceives various sounds. More specifically, it is the branch of science studying the psychological responses associated with sound. Psychoacoustics is an interdisciplinary field spanning psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.
Traditionally in Western music, a musical tone is a steady periodic sound. A musical tone is characterized by its duration, pitch, intensity, and timbre. The notes used in music can be more complex than musical tones, as they may include aperiodic aspects, such as attack transients, vibrato, and envelope modulation.