Sound is a phenomenon in which pressure disturbances propagate through a transmission medium. In physics, sound is characterised as a mechanical wave of pressure or related quantities (e.g. displacement), whereas in physiological and psychological contexts it refers to the reception of such waves and their perception by the brain. [1] Sensitivity to sound varies among organisms; the human ear is sensitive to frequencies ranging from 20 Hz to 20 kHz. Examples of the significance and application of sound include music, medical imaging techniques, spoken language and parts of science.
According to the technical standard established by ANSI/ASA S1.1-2013, the American National Standard for Acoustical Terminology, sound is defined as:

> (a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a). [2]
This two-part definition states that sound can be taken as a wave motion in an elastic medium, making sound a stimulus, or as an excitation of the hearing mechanism that results in the perception of sound, making sound a sensation.
Acoustics is the interdisciplinary scientific study of mechanical waves, vibrations, sound, ultrasound, and infrasound in gaseous, liquid, or solid media. A scientist who works in the field of acoustics is called an acoustician, while an individual specialising in acoustical engineering may be referred to as an acoustical engineer. [3] An audio engineer, by contrast, is concerned with the recording, manipulation, mixing, and reproduction of sound.
Applications of acoustics are found in many areas of modern society. Subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electroacoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration. [4]
Sound travels as a mechanical wave through a medium (e.g. water, crystals, air). Sound waves are generated by a sound source, such as a vibrating diaphragm of a loudspeaker. As the sound source vibrates the surrounding medium, mechanical disturbances propagate away from the source at the local speed of sound, thus resulting in a sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium's particles vary in time. At an instant in time, the pressure, velocity, and displacement vary spatially. The particles of the medium do not travel with the sound wave; instead, the disturbance and its mechanical energy propagate through the medium. Though intuitively obvious for solids, this also applies for liquids and gases. During propagation, waves can be reflected, refracted, or attenuated by the medium. [5]
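To make the distinction between the travelling disturbance and the stationary medium concrete, the following sketch simulates the one-dimensional linear wave equation with finite differences; the grid size, wave speed, and pulse shape are arbitrary illustrative choices, not physical values.

```python
import numpy as np

# Minimal 1-D wave-equation simulation (finite differences).
# All numbers are arbitrary illustrative choices, not physical values.
nx, nt = 200, 150          # grid points, time steps
c, dx = 1.0, 1.0           # wave speed, grid spacing
dt = 0.5 * dx / c          # time step satisfying the CFL stability condition

x = np.arange(nx)
p = np.exp(-0.05 * (x - 50.0) ** 2)   # initial pressure pulse centred at x = 50
p_prev = p.copy()                     # zero initial velocity

for _ in range(nt):
    p_next = np.empty_like(p)
    # Standard second-order update for p_tt = c^2 * p_xx
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + (c * dt / dx) ** 2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
    p_next[0] = p_next[-1] = 0.0      # fixed (reflecting) boundaries
    p_prev, p = p, p_next

# The disturbance has moved away from x = 50; the grid points themselves never move.
print("pulse peak now near x =", int(np.argmax(p)))
```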
The matter that supports the transmission of a sound is named the transmission medium. Media may be any form of matter, whether solids, liquids, gases or plasmas. However, sound cannot propagate through a vacuum because there is no medium to support mechanical disturbances. [6] [7]
The propagation of sound in a medium is influenced primarily by:

- A complex relationship between the density and pressure of the medium. This relationship, affected by temperature, determines the speed of sound within the medium.
- The motion of the medium itself. If the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement.
- The viscosity of the medium, which determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible.
When sound moves through a medium that does not have uniform physical properties, it may be refracted (either dispersed or focused). [5]
Some theoretical work suggests that sound waves may carry an extremely small effective mass and be associated with a weak gravitational field. [8]
Sound is transmitted through fluids (e.g. gases, plasmas, and liquids) as longitudinal waves, also called compression waves. Through solids, however, sound can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress perpendicular to the direction of propagation. Unlike longitudinal sound waves, transverse sound waves have the property of polarisation. [9]
Sound waves may be viewed using parabolic mirrors and objects that produce sound. [10]
The energy carried by a periodic sound wave alternates between the potential energy of the extra compression (in the case of longitudinal waves) or lateral displacement strain (in the case of transverse waves) of the matter, and the kinetic energy of the particles' displacement velocity in the medium.
Although sound transmission involves many physical processes, the signal received at a point (such as a microphone or the ear) can be fully described as a time‑varying pressure. This pressure‑versus‑time waveform provides a complete representation of any sound or audio signal detected at that location.
Sound waves are often simplified as sinusoidal plane waves, which are characterized by these generic properties:

- Frequency, or its inverse, wavelength
- Amplitude, sound pressure or intensity
- Speed of sound
- Direction
Sometimes speed and direction are combined as a velocity vector; wave number and direction are combined as a wave vector.
To analyse audio, a complicated waveform can be represented as a linear combination of sinusoidal components of different frequencies, amplitudes, and phases. [11] [12] [13]
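As a minimal illustration of this decomposition (the sample rate and the three components below are arbitrary choices, not values from the text), a waveform assembled from sinusoids can be taken apart again with the discrete Fourier transform:

```python
import numpy as np

fs = 1000                      # sample rate in Hz (an arbitrary choice)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples

# A "complicated" waveform built from three sinusoidal components
# with different frequencies, amplitudes, and phases.
signal = (1.00 * np.sin(2 * np.pi * 50 * t)
          + 0.50 * np.sin(2 * np.pi * 120 * t + np.pi / 4)
          + 0.25 * np.sin(2 * np.pi * 300 * t + np.pi / 2))

# The discrete Fourier transform recovers the individual components.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
amplitudes = 2 * np.abs(spectrum) / len(signal)

for f, a in zip(freqs, amplitudes):
    if a > 0.05:               # ignore numerical noise
        print(f"{f:6.1f} Hz  amplitude {a:.2f}")
# -> 50.0 Hz 1.00, 120.0 Hz 0.50, 300.0 Hz 0.25
```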
The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. The first significant effort towards measurement of the speed of sound was made by Isaac Newton. He believed the speed of sound in a particular substance was equal to the square root of the pressure acting on it divided by its density:

$$c = \sqrt{\frac{p}{\rho}}.$$

This was later proven wrong, and the French mathematician Laplace corrected the formula by deducing that the propagation of sound is not isothermal, as believed by Newton, but adiabatic. He added another factor to the equation, the adiabatic index $\gamma$, and multiplied $p$ by $\gamma$, thus coming up with the equation $c = \sqrt{\frac{\gamma p}{\rho}}$. Since $K = \gamma p$, the final equation came out as $c = \sqrt{\frac{K}{\rho}}$, which is also known as the Newton–Laplace equation. In this equation, $K$ is the elastic bulk modulus, $c$ is the velocity of sound, and $\rho$ is the density. Thus, the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density.
Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In air at 20 °C (68 °F) at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph), using the linear approximation $v~[\text{m/s}] \approx 331 + 0.6\,T~[^\circ\text{C}]$. The speed of sound is also slightly sensitive to the sound amplitude (a second-order anharmonic effect), which means there are non-linear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array). If relativistic effects are important, the speed of sound is calculated from the relativistic Euler equations.
In fresh water the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph). Sound moves the fastest in solid atomic hydrogen at about 36,000 m/s (129,600 km/h; 80,530 mph). [15] [16]
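As a quick numerical check of the Newton–Laplace equation and the temperature formula above (the bulk modulus and density of fresh water below are textbook values assumed for illustration, not quantities given in this article):

```python
import math

# Newton-Laplace equation: c = sqrt(K / rho), with K the bulk modulus (Pa)
# and rho the density (kg/m^3). Textbook values for fresh water, assumed here:
K_water = 2.2e9        # Pa
rho_water = 1000.0     # kg/m^3
print(f"water: {math.sqrt(K_water / rho_water):.0f} m/s")   # ~1483 m/s, close to the quoted 1,482 m/s

# Empirical linear approximation for air near room temperature:
# v [m/s] ~= 331 + 0.6 * T [deg C]
for T in (0, 20):
    print(f"air at {T:>2} degC: {331 + 0.6 * T:.0f} m/s")   # 331, 343 m/s
```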
**Sound measurements**

| Characteristic | Symbols |
|---|---|
| Sound pressure | p, SPL, L_PA |
| Particle velocity | v, SVL |
| Particle displacement | δ |
| Sound intensity | I, SIL |
| Sound power | P, SWL, L_WA |
| Sound energy | W |
| Sound energy density | w |
| Sound exposure | E, SEL |
| Acoustic impedance | Z |
| Audio frequency | AF |
| Transmission loss | TL |
Sound pressure is the difference, in a given medium, between the average local pressure and the pressure in the sound wave. The square of this difference (i.e. the square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and the square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dB SPL) in atmospheric air implies that the actual pressure in the sound wave oscillates between $(1~\text{atm} - \sqrt{2}~\text{Pa})$ and $(1~\text{atm} + \sqrt{2}~\text{Pa})$, that is, between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL) or $L_p$ is defined as

$$L_p = 20 \log_{10}\!\left(\frac{p}{p_{\text{ref}}}\right)\,\text{dB},$$

where $p$ is the RMS sound pressure and $p_{\text{ref}}$ is a reference sound pressure (commonly 20 µPa for airborne sound).
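A minimal sketch of this definition in code, assuming the conventional airborne reference pressure of 20 µPa:

```python
import math

P_REF = 20e-6   # reference RMS pressure in air, 20 micropascals

def spl_db(p_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(p_rms / P_REF)

print(f"{spl_db(1.0):.1f} dB SPL")     # 1 Pa RMS -> ~94.0 dB, as in the example above
print(f"{spl_db(20e-6):.1f} dB SPL")   # the reference pressure itself -> 0 dB
```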
Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.
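For illustration, the A-weighting gain can be evaluated from the analytic form standardised in IEC 61672; this is a sketch of that curve, not a complete sound-level-meter implementation:

```python
import math

def a_weighting_db(f):
    """IEC 61672 A-weighting gain in dB at frequency f (Hz)."""
    f2 = f * f
    # Pole frequencies 20.6, 107.7, 737.9 and 12194 Hz from the standard.
    ra = (12194**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00   # normalised to ~0 dB at 1 kHz

for f in (100, 1000, 10000):
    print(f"{f:6d} Hz: {a_weighting_db(f):+6.1f} dB")
# -> roughly -19.1 dB at 100 Hz, 0.0 dB at 1 kHz, -2.5 dB at 10 kHz
```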
The term sound is used differently in physiology and psychology than in physics: there it refers to the perception of sound waves by the brain. The field of psychoacoustics is dedicated to such studies. Webster's dictionary defined sound as: "1. The sensation of hearing, that which is heard; specif.: a. Psychophysics. Sensation due to stimulation of the auditory nerves and auditory centers of the brain, usually by vibrations transmitted in a material medium, commonly air, affecting the organ of hearing. b. Physics. Vibrational energy which occasions such a sensation. Sound is propagated by progressive longitudinal vibratory disturbances (sound waves)." [17] This means that the correct answer to the question "if a tree falls in a forest and no one is around to hear it, does it make a sound?" is both "yes" and "no", depending on whether the question is answered using the physical or the psychophysical definition, respectively.
The physical reception of sound in any hearing organism is limited to a range of frequencies. Humans normally hear sound as pitch for frequencies between approximately 20 Hz and 20,000 Hz (20 kHz); [18]: 382 the upper limit decreases with age. [18]: 249 Below 20 Hz, sound waves are heard as discrete stuttering sounds (for discrete pulses) or fast 'wow-wow-wow' sounds (for continuous sounds like sine waves). Sometimes sound refers only to those vibrations with frequencies within the hearing range for humans, [19] and sometimes it relates to the range of a particular animal. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz.
As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produce (and are characterized by) their unique sounds. Many species, such as frogs, birds, marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound.
Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal. However, in sound perception it can often be used to identify the source of a sound and is an important component of timbre perception (see below).
Soundscape is the component of the acoustic environment that can be perceived by humans. The acoustic environment is the combination of all sounds (whether audible to humans or not) within a given area as modified by the environment and understood by people, in the context of the surrounding environment.
There are, historically, six experimentally separable ways in which sound waves are analysed. They are: pitch, duration, loudness, timbre, sonic texture and spatial location. [20] Some of these terms have a standardised definition (for instance in the ANSI Acoustical Terminology ANSI/ASA S1.1-2013). More recent approaches have also considered temporal envelope and temporal fine structure as perceptually relevant analyses. [21] [22] [23]
Pitch is perceived as how "low" or "high" a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound. For simple sounds, pitch relates to the frequency of the slowest vibration in the sound (called the fundamental harmonic). In the case of complex sounds, pitch perception can vary. Sometimes individuals identify different pitches for the same sound, based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them. Specific attention is given to recognising potential harmonics. [24] [25] Every sound is placed on a pitch continuum from low to high.
For example: white noise (random noise spread evenly across all frequencies) sounds higher in pitch than pink noise (random noise spread evenly across octaves) as white noise has more high frequency content.
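To make this concrete, the sketch below generates white noise and an FFT-shaped approximation of pink noise (one of several common constructions, assumed here) and compares their spectral centroids, a rough correlate of how "high" a noise sounds:

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 2**16, 44100
freqs = np.fft.rfftfreq(n, d=1 / fs)

# White noise: equal power per hertz across the band.
white = rng.standard_normal(n)

# Pink noise, approximated by spectral shaping: scale a white spectrum
# by 1/sqrt(f) so that power falls off as 1/f (equal power per octave).
spectrum = np.fft.rfft(white)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])   # leave the DC bin untouched
pink = np.fft.irfft(spectrum * scale, n)

def spectral_centroid(x):
    """Amplitude-weighted mean frequency, a rough 'brightness' measure."""
    mag = np.abs(np.fft.rfft(x))
    return np.sum(freqs * mag) / np.sum(mag)

print(f"white noise centroid: {spectral_centroid(white):6.0f} Hz")  # ~11 kHz
print(f"pink noise centroid:  {spectral_centroid(pink):6.0f} Hz")   # noticeably lower
```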
Duration is perceived as how "long" or "short" a sound is and relates to onset and offset signals created by nerve responses to sounds. The duration of a sound usually lasts from the time the sound is first noticed until the sound is identified as having changed or ceased. [26] Sometimes this is not directly related to the physical duration of a sound. For example, in a noisy environment, gapped sounds (sounds that stop and start) can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth. [27] This can be of great benefit in understanding distorted messages such as radio signals that suffer from interference, as (owing to this effect) the message is heard as if it were continuous.
Loudness is perceived as how “loud” or “soft” a sound is, and reflects the overall pattern of auditory‑nerve activity produced by a sound. In general, louder sounds create greater displacement of the basilar membrane, which stimulates more auditory‑nerve fibres and results in a stronger neural representation of loudness. [28]
Perceived loudness also depends on how sound energy is distributed over time. When a sound is very brief, the auditory system does not fully integrate its energy, so it is heard as softer than a longer sound presented at the same physical intensity. This process, known as temporal summation, operates over a window of roughly 200 ms. [29] Beyond this duration, increasing the length of the sound no longer increases its perceived loudness.
The spectral complexity of a sound can also influence loudness perception. Complex tones, which activate a broader range of auditory‑nerve fibres, are often judged as louder than simple tones (such as sine waves) even when matched for physical amplitude. [30]
Timbre is perceived as the quality of different sounds (e.g. the thud of a fallen rock, the whir of a drill, the tone of a musical instrument or the quality of a voice) and represents the pre-conscious allocation of a sonic identity to a sound (e.g. "it's an oboe!"). This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch and the spread and intensity of overtones in the sound over an extended time frame. [11] [12] [13] The way a sound changes over time provides most of the information for timbre identification. For example, even though a short section of the waveform of a clarinet and of a piano may look very similar, the two instruments differ audibly in how their loudness and harmonic content change over time. Less noticeable are the different noises heard, such as air hisses for the clarinet and hammer strikes for the piano.
Sonic texture relates to the number of sound sources and the interaction between them. [31] [32] The word texture, in this context, relates to the cognitive separation of auditory objects. [33] In music, texture is often referred to as the difference between unison, polyphony and homophony, but it can also relate (for example) to a busy cafe, a sound which might be referred to as cacophony.
Spatial location represents the cognitive placement of a sound in an environmental context; including the placement of a sound on both the horizontal and vertical plane, the distance from the sound source and the characteristics of the sonic environment. [33] [34] In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification.
Ultrasound is sound with frequencies higher than 20,000 Hz (20 kHz). Ultrasound is no different from audible sound in its physical properties; it simply cannot be heard by humans. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz.
Medical ultrasound is commonly used for diagnostics and treatment.
Infrasound is sound with frequencies lower than 20 Hz. Although sounds of such low frequency are too low for humans to hear as a pitch, these sounds are perceived as discrete pulses (like the 'popping' sound of an idling motorcycle). Whales, elephants and other animals can detect infrasound and use it to communicate. It can be used to detect volcanic eruptions and is used in some types of music. [35]