Microphone array

A gunfire locator using a microphone array

A microphone array is any number of microphones operating in tandem. Applications include extracting voice input from ambient noise (notably for telephones, speech recognition systems, and hearing aids), surround sound and binaural recording, locating objects by sound (such as gunfire or aircraft), environmental noise monitoring, [1] and acoustic SLAM. [2]


Typically, an array is made up of omnidirectional microphones, directional microphones, or a mix of the two, distributed about the perimeter of a space and linked to a computer that records and interprets the results into a coherent form. Arrays may also be formed from a number of very closely spaced microphones. Given a fixed physical relationship in space between the individual transducer elements, simultaneous digital signal processing (DSP) of the signals from each element can create one or more "virtual" microphones. Different algorithms permit the creation of virtual microphones with extremely complex virtual polar patterns, and even make it possible to steer the individual lobes of the virtual microphone patterns so as to home in on, or reject, particular sources of sound. These algorithms can produce varying levels of accuracy when calculating source level and location, so care should be taken in deciding how the individual lobes of the virtual microphones are derived. [3]
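As an illustration of how such a virtual microphone can be derived, the following sketch implements a basic delay-and-sum beamformer; the array geometry, sampling rate, and signals are illustrative assumptions rather than any particular system.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, steering_dir, fs, c=343.0):
    """Steer a virtual microphone toward `steering_dir` (a unit vector).

    signals       : (n_mics, n_samples) time-domain signals
    mic_positions : (n_mics, 3) capsule coordinates in metres
    fs            : sampling rate in Hz; c : speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # Per-capsule arrival-time differences for a plane wave from steering_dir.
    delays = mic_positions @ steering_dir / c          # seconds
    delays -= delays.min()                             # make all delays non-negative
    # Apply the delays as phase shifts in the frequency domain, then average.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    aligned = np.fft.irfft(spectra * phase, n=n_samples, axis=1)
    return aligned.mean(axis=0)                        # the virtual microphone signal

# Illustrative use (assumed geometry): four capsules on a 10 cm line.
fs = 16000
mics = np.array([[x, 0.0, 0.0] for x in (0.0, 0.033, 0.066, 0.1)])
rng = np.random.default_rng(0)
x = rng.standard_normal((4, fs))                       # stand-in for recorded signals
y = delay_and_sum(x, mics, np.array([0.6, 0.8, 0.0]), fs)
```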

When the array consists of omnidirectional microphones, each microphone accepts sound from all directions, so its electrical signal contains information about sounds arriving from every direction. Joint processing of these signals makes it possible to extract the sound coming from a given direction. [4]
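The converse also holds: because each omnidirectional capsule hears the same sound at a slightly different time, jointly processing the channels can recover the direction of arrival. A minimal sketch, assuming two microphones, a single far-field source, and an illustrative spacing and sampling rate:

```python
import numpy as np

def doa_from_two_mics(x1, x2, spacing, fs, c=343.0):
    """Estimate the arrival angle (radians from broadside) of one far-field source
    from the time difference of arrival between two omnidirectional microphones."""
    n = len(x1)
    corr = np.correlate(x1, x2, mode="full")            # cross-correlate the two channels
    lag = np.argmax(corr) - (n - 1)                     # lag (samples) of x1 relative to x2
    tdoa = lag / fs                                     # time difference of arrival (s)
    sin_angle = np.clip(tdoa * c / spacing, -1.0, 1.0)  # far field: tdoa = spacing*sin(angle)/c
    return np.arcsin(sin_angle)

# Illustrative use: a broadband source 30 degrees off broadside, microphones 20 cm apart.
fs, spacing, c = 48000, 0.20, 343.0
true_angle = np.deg2rad(30.0)
delay_samples = int(round(spacing * np.sin(true_angle) / c * fs))
rng = np.random.default_rng(1)
s = rng.standard_normal(8000)                           # stand-in for the source signal
x1, x2 = np.roll(s, delay_samples), s                   # the farther microphone hears it later
print(np.rad2deg(doa_from_two_mics(x1, x2, spacing, fs)))   # approximately 30
```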

An array of 1020 microphones, [5] the largest in the world until August 21, 2014, was built by researchers at the MIT Computer Science and Artificial Intelligence Laboratory.

As of August 2014, the largest microphone array in the world is a 4096-microphone array constructed by Sorama, a Netherlands-based sound engineering firm. [6]

Soundfield microphone

The soundfield microphone system is a well-established example of the use of a microphone array in professional sound recording.


Notes

  1. Environmental Noise Compass
  2. Evers, Christine; Naylor, Patrick A. (September 2018). "Acoustic SLAM" (PDF). IEEE/ACM Transactions on Audio, Speech, and Language Processing. 26 (9): 1484–1498. doi:10.1109/TASLP.2018.2828321. ISSN 2329-9290. Archived (PDF) from the original on 2020-05-05.
  3. Tribby, Jesse (9 November 2016). Assessing the accuracy of directional real-time noise monitoring systems (PDF). Archived (PDF) from the original on 2017-03-16. Retrieved 2020-05-09.
  4. Stolbov, M. B. (2015). "Application of microphone arrays for distant speech capture". Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 15 (4): 661–675.
  5. LOUD: Large acOUstic Data Array Project
  6. Largest microphone array

Related Research Articles

A hydrophone is a microphone designed to be used underwater for recording or listening to underwater sound. Most hydrophones are based on a piezoelectric transducer that generates an electric potential when subjected to a pressure change, such as a sound wave. Some piezoelectric transducers can also serve as a sound projector, but not all have this capability, and some may be destroyed if used in such a manner.

Microphone

A microphone, colloquially called a mic or mike, is a transducer that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production, live and recorded audio engineering, sound recording, two-way radios, megaphones, and radio and television broadcasting. They are also used in computers for recording voice, speech recognition, VoIP, and for other purposes such as ultrasonic sensors or knock sensors.

Head-related transfer function

A head-related transfer function (HRTF), also known as an anatomical transfer function (ATF) or head shadow, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, and ear canal, the density of the head, and the size and shape of the nasal and oral cavities all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz, with a primary resonance of +17 dB at 2,700 Hz. But the response curve is more complex than a single bump, affects a broad frequency spectrum, and varies significantly from person to person.
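In practice, an HRTF is applied by convolving a mono signal with a measured left/right head-related impulse response (HRIR) pair, which places the sound at the corresponding point in space for headphone listening. A minimal sketch follows; the HRIRs below are crude placeholders, not a real measured HRTF.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with an HRIR pair to produce a stereo (binaural) signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Placeholder HRIRs (an interaural delay plus a crude level difference), used only
# to keep the example self-contained; real HRIRs come from measured datasets.
hrir_left = np.zeros(64);  hrir_left[0] = 1.0
hrir_right = np.zeros(64); hrir_right[20] = 0.6        # later and quieter at the far ear
rng = np.random.default_rng(2)
stereo = render_binaural(rng.standard_normal(1024), hrir_left, hrir_right)
```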

Ambisonics

Ambisonics is a full-sphere surround sound format: in addition to the horizontal plane, it covers sound sources above and below the listener.

Surround sound

Surround sound is a technique for enriching the fidelity and depth of sound reproduction by using multiple audio channels from speakers that surround the listener. Its first application was in movie theaters. Prior to surround sound, theater sound systems commonly had three screen channels of sound that played from three loudspeakers located in front of the audience. Surround sound adds one or more channels from loudspeakers to the side or behind the listener that are able to create the sensation of sound coming from any horizontal direction around the listener.

Hearing aid

A hearing aid is a device designed to improve hearing by making sound audible to a person with hearing loss. Hearing aids are classified as medical devices in most countries and regulated accordingly. Small audio amplifiers such as personal sound amplification products (PSAPs) and other plain sound-reinforcing systems cannot be sold as "hearing aids".

Simultaneous localization and mapping

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken or the egg problem, there are several algorithms known to solve it in, at least approximately, tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.

Sound reinforcement system

A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.

Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance.

Array processing

Array processing is a wide area of research in the field of signal processing that extends from the simplest form of one-dimensional line arrays to two- and three-dimensional array geometries. An array structure can be defined as a set of sensors that are spatially separated, e.g., radio antenna and seismic arrays. The sensors used for a specific problem may vary widely, for example microphones, accelerometers, and telescopes. However, many similarities exist, the most fundamental of which may be an assumption of wave propagation. Wave propagation means there is a systematic relationship between the signals received on spatially separated sensors. By creating a physical model of the wave propagation, or, in machine learning applications, a training data set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.

Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.
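A short sketch of this interference effect: the far-field response of a uniform linear array of omnidirectional sensors, summed with equal weights, as a function of arrival angle. The element count, spacing, and frequency are illustrative assumptions.

```python
import numpy as np

def ula_beam_pattern(n_elements, spacing, freq, angles_rad, c=343.0):
    """Return |array response| versus angle (0 = broadside) for a uniform linear array."""
    k = 2 * np.pi * freq / c                                  # wavenumber
    element_positions = spacing * np.arange(n_elements)
    # Phase of each element for a plane wave arriving from each candidate angle.
    phase = k * np.outer(np.sin(angles_rad), element_positions)   # (n_angles, n_elements)
    response = np.exp(1j * phase).sum(axis=1) / n_elements
    return np.abs(response)

angles = np.deg2rad(np.linspace(-90, 90, 361))
pattern = ula_beam_pattern(n_elements=8, spacing=0.05, freq=2000.0, angles_rad=angles)
# The pattern peaks at broadside (0 degrees) and falls off toward the sides,
# illustrating the directivity produced by constructive and destructive interference.
```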

The Soundfield microphone is an audio microphone composed of four closely spaced subcardioid or cardioid (unidirectional) microphone capsules arranged in a tetrahedron. It was invented by Michael Gerzon and Peter Craven, and is a part of, but not exclusive to, Ambisonics, a surround sound technology. It can function as a mono, stereo or surround sound microphone, optionally including height information.
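The first processing step for such a microphone is combining the four capsule signals (A-format) into an omnidirectional component and three figure-of-eight components (B-format). The sketch below uses the commonly cited sum-and-difference matrix for the usual capsule arrangement (left-front-up, right-front-down, left-back-down, right-back-up); real implementations add calibration filters and scaling that are omitted here.

```python
import numpy as np

def a_to_b_format(lfu, rfd, lbd, rbu):
    """Convert four A-format capsule signals to B-format (W, X, Y, Z)."""
    w = lfu + rfd + lbd + rbu        # omnidirectional pressure component
    x = lfu + rfd - lbd - rbu        # front-back figure of eight
    y = lfu - rfd + lbd - rbu        # left-right figure of eight
    z = lfu - rfd - lbd + rbu        # up-down figure of eight
    return w, x, y, z

# Illustrative use with stand-in capsule signals.
rng = np.random.default_rng(3)
lfu, rfd, lbd, rbu = rng.standard_normal((4, 1024))
w, x, y, z = a_to_b_format(lfu, rfd, lbd, rbu)
```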

Acoustic location

Acoustic location is the use of sound to determine the distance and direction of its source or reflector. Location can be done actively or passively, and can take place in gases, liquids, and in solids.

Automixer

An automixer, or automatic microphone mixer, is a live sound mixing device that automatically reduces the strength of a microphone's audio signal when it is not being used. Automixers reduce extraneous noise picked up when several microphones operate simultaneously.
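A minimal sketch of that gating behaviour, assuming a simple fixed threshold and attenuation; commercial automixers use more refined gain-sharing or gating logic.

```python
import numpy as np

def gated_mix(channels, frame=256, threshold=0.02, attenuation=0.1):
    """channels: (n_mics, n_samples). Return a mono mix in which microphones whose
    short-term level falls below `threshold` are attenuated for that frame."""
    n_mics, n_samples = channels.shape
    out = np.zeros(n_samples)
    for start in range(0, n_samples, frame):
        block = channels[:, start:start + frame]
        rms = np.sqrt((block ** 2).mean(axis=1))            # per-microphone level this frame
        gains = np.where(rms >= threshold, 1.0, attenuation) # duck idle microphones
        out[start:start + frame] = (gains[:, None] * block).sum(axis=0)
    return out
```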

An acoustic camera is an imaging device used to locate sound sources and to characterize them. It consists of a group of microphones, also called a microphone array, from which signals are simultaneously collected and processed to form a representation of the location of the sound sources.
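A sketch of how such an image can be formed: the array is steered in software toward each direction of a scan grid, and the output power of the steered beam is recorded, with peaks in the resulting map indicating sound sources. The geometry and scan grid are illustrative assumptions, and the steering reuses the delay-and-sum idea sketched earlier.

```python
import numpy as np

def steered_power_map(signals, mic_positions, azimuths_rad, fs, c=343.0):
    """Return steered-response power for each candidate azimuth (horizontal scan).

    signals       : (n_mics, n_samples) time-domain signals
    mic_positions : (n_mics, 3) microphone coordinates in metres
    """
    n_mics, n_samples = signals.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for az in azimuths_rad:
        direction = np.array([np.cos(az), np.sin(az), 0.0])   # candidate look direction
        delays = mic_positions @ direction / c
        phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
        beam = (spectra * phase).sum(axis=0) / n_mics          # steered (aligned) sum
        powers.append(np.sum(np.abs(beam) ** 2))
    return np.array(powers)                                    # peak marks the source direction
```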

3D sound localization refers to an acoustic technology that is used to locate the source of a sound in a three-dimensional space. The source location is usually determined by the direction of the incoming sound waves and the distance between the source and sensors. It involves the structure arrangement design of the sensors and signal processing techniques.

Perceptual-based 3D sound localization is the application of knowledge of the human auditory system to develop 3D sound localization technology.

3D sound reconstruction is the application of reconstruction techniques to 3D sound localization technology. These methods of reconstructing three-dimensional sound are used to recreate sounds that match natural environments and provide spatial cues about the sound source. They are also used to create 3D visualizations of a sound field that capture physical aspects of sound waves, including direction, pressure, and intensity. This technology is used in entertainment to reproduce a live performance through computer speakers, and in military applications to determine the location of sound sources. Reconstructing sound fields also has applications in medical imaging with ultrasound.

3D sound is most commonly defined as the everyday human experience of sound: sounds arrive at the ears from every direction and from varying distances, which together produce the three-dimensional aural image humans hear. Scientists and engineers who work with 3D sound aim to accurately synthesize the complexity of real-world sounds.