Sound from ultrasound is the name given here to the generation of audible sound from modulated ultrasound without using an active receiver. This happens when the modulated ultrasound passes through a nonlinear medium which acts, intentionally or unintentionally, as a demodulator.
Since the early 1960s, researchers have been experimenting with creating directive low-frequency sound from nonlinear interaction of an aimed beam of ultrasound waves produced by a parametric array using heterodyning. Ultrasound has much shorter wavelengths than audible sound, so that it propagates in a much narrower beam than any normal loudspeaker system using audio frequencies. Most of the work was performed in liquids (for underwater sound use).
The first modern device for use in air was created in 1998, [1] and is now known by the trademark name "Audio Spotlight", a term coined in 1983 by Japanese researchers [2] who abandoned the technology as infeasible in the mid-1980s.
A transducer can be made to project a narrow beam of modulated ultrasound that is powerful enough, at 100 to 110 dB SPL, to substantially change the speed of sound in the air that it passes through. The air within the beam behaves nonlinearly and extracts the modulation signal from the ultrasound, resulting in sound that can be heard only along the path of the beam, or that appears to radiate from any surface that the beam strikes. This technology allows a beam of sound to be projected over a long distance and heard only in a small, well-defined area; [3] for a listener outside the beam the sound pressure decreases substantially. This effect cannot be achieved with conventional loudspeakers, because sound at audible frequencies cannot be focused into such a narrow beam. [3]
There are some limitations with this approach. Anything that interrupts the beam will prevent the ultrasound from propagating, like interrupting a spotlight's beam. For this reason, most systems are mounted overhead, like lighting.
A sound signal can be aimed so that only a particular passer-by, or somebody very close, can hear it. In commercial applications, it can target sound to a single person without the peripheral sound and related noise of a loudspeaker.
It can be used for personal audio, either to make sounds audible to only one person, or to deliver a sound that a particular group wants to hear. In a car, for example, navigation instructions are relevant only to the driver, not to the passengers. Another possibility is future applications for true stereo sound, where one ear does not hear what the other is hearing. [4]
Directional audio train signaling may be accomplished with an ultrasonic beam that warns of the approach of a train while sparing surrounding homes and businesses the nuisance of loud train signals. [5]
This technology was originally developed by the US Navy and Soviet Navy for underwater sonar in the mid-1960s, and was briefly investigated by Japanese researchers in the early 1980s, but these efforts were abandoned due to extremely poor sound quality (high distortion) and substantial system cost. These problems went unsolved until a paper published by Dr. F. Joseph Pompei of the Massachusetts Institute of Technology in 1998 [1] fully described a working device that reduced audible distortion essentially to that of a traditional loudspeaker.
As of 2014, five devices were known to have been marketed that use ultrasound to create an audible beam of sound.
F. Joseph Pompei of MIT developed technology he calls the "Audio Spotlight", [6] and made it commercially available in 2000 through his company Holosonics, which according to its website has sold "thousands" of its "Audio Spotlight" systems. Disney was among the first major corporations to adopt it, for use at the Epcot Center, and many other application examples are shown on the Holosonics website. [7]
Audio Spotlight is a narrow beam of sound that can be controlled with similar precision to light from a spotlight. It uses a beam of ultrasound as a "virtual acoustic source", enabling control of sound distribution. The ultrasound has wavelengths only a few millimeters long, much smaller than the source, and therefore naturally travels in an extremely narrow beam. The ultrasound, which consists of frequencies far outside the range of human hearing, is completely inaudible. But as the ultrasonic beam travels through the air, the inherent properties of the air cause the ultrasound to change shape in a predictable way. This gives rise to frequency components in the audible band, which can be predicted and controlled.
Elwood "Woody" Norris, founder and Chairman of American Technology Corporation (ATC), announced he had successfully created a device which achieved ultrasound transmission of sound in 1996. [8] This device used piezoelectric transducers to send two ultrasonic waves of differing frequencies toward a point, giving the illusion that the audible sound from their interference pattern was originating at that point. [9] ATC named and trademarked their device as "HyperSonic Sound" (HSS). In December 1997, HSS was one of the items in the Best of What's New issue of Popular Science. [10] In December 2002, Popular Science named HyperSonic Sound the best invention of 2002.[ citation needed ] Norris received the 2005 Lemelson–MIT Prize for his invention of a "hypersonic sound". [11] ATC (now named LRAD Corporation) spun off the technology to Parametric Sound Corporation in September 2010 to focus on their long-range acoustic device (LRAD) products, according to their quarterly reports, press releases, and executive statements. [12] [13]
Mitsubishi apparently offers a sound-from-ultrasound product named the "MSP-50E", [14] commercially available from the Mitsubishi Electric Engineering Company. [15]
German audio company Sennheiser Electronic once listed their "AudioBeam" product for about $4,500. [16] There is no indication that the product has been used in any public applications. The product has since been discontinued. [17]
The first experimental systems were built over 30 years ago, although these first versions played only simple tones. It was not until much later (see above) that systems were built for practical listening use.
This section presents a chronological summary of the experimental approaches taken to Audio Spotlight systems. By the turn of the millennium, working versions of an Audio Spotlight capable of reproducing speech and music could be bought from Holosonics, a company founded on Dr. Pompei's work in the MIT Media Lab. [18]
Related topics were researched almost 40 years earlier in the context of underwater acoustics.
These early studies were supported by the U.S. Office of Naval Research, specifically for the use of the phenomenon for underwater sonar pulses. The goal of these systems was not high directivity per se, but rather higher usable bandwidth from a typically band-limited transducer.
The 1970s saw further experimental activity, both in air [21] and underwater. [22] Again supported by the U.S. Office of Naval Research, the underwater experiments primarily aimed to determine the range limitations of sonar pulse propagation due to nonlinear distortion. The airborne experiments aimed to record quantitative data about the directivity and propagation loss of both the ultrasonic carrier and the demodulated waves, rather than to develop the capability to reproduce an audio signal.
In 1983 the idea was revisited experimentally, [2] this time with the firm intent of analyzing the use of the system in air to project a more complex baseband signal in a highly directional manner. The signal processing used was simple DSB-AM with no precompensation; as a result, the THD (total harmonic distortion) of this system would probably have been satisfactory for speech reproduction but prohibitive for the reproduction of music. A notable feature of the experimental setup [2] was the use of 547 ultrasonic transducers to produce a 40 kHz source of over 130 dB SPL at 4 m, a level that demands significant safety considerations. [23] [24] Even though this experiment clearly demonstrated the potential to reproduce audio signals using an ultrasonic system, it also showed that the system suffered from heavy distortion, especially when no precompensation was used.
The equations that govern nonlinear acoustics are quite complex [25] [26] and unfortunately do not have general analytical solutions; they usually require computer simulation. [27] However, as early as 1965, Berktay performed an analysis [28] under some simplifying assumptions that allowed the demodulated SPL to be written in terms of the amplitude-modulated ultrasonic carrier wave pressure Pc and various physical parameters. Note that the demodulation process is extremely lossy, with a minimum loss on the order of 60 dB from the ultrasonic SPL to the audible SPL. A precompensation scheme can be based on Berktay's expression, shown in Equation 1: twice integrating the baseband signal and then taking its square root inverts the squaring and the double partial-time derivative in the expression. (In analogue electronics, the square root can be approximated by an op-amp with nonlinear feedback and the integration by an integrator stage, but these implementation details lie outside the scope of this article.)
$$p_2(z,\tau) \approx \frac{\beta\, P_c^2\, a^2}{16\, \rho_0\, c_0^4\, \alpha\, z}\, \frac{\partial^2}{\partial \tau^2} E^2(\tau) \qquad \text{(Equation 1)}$$

where $p_2$ is the demodulated audible pressure, $P_c$ the pressure amplitude of the ultrasonic carrier, $E(\tau)$ the modulation envelope evaluated at the retarded time $\tau = t - z/c_0$, $\beta$ the coefficient of nonlinearity of air, $a$ the radius of the source, $\alpha$ the absorption coefficient of the carrier, $\rho_0$ the ambient density, $c_0$ the small-signal speed of sound, and $z$ the distance from the source.
This equation says that the audible demodulated pressure wave (output signal) is proportional to the twice-differentiated square of the envelope function (input signal). Precompensation refers to anticipating these transforms and applying their inverses to the input, so that the output is closer to the untransformed input.
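As a concrete illustration, the precompensation chain implied by Berktay's expression can be sketched in a few lines of NumPy. This is a minimal sketch under idealized assumptions, not any vendor's actual processing; the sample rate, carrier frequency, modulation depth m, and the 1 kHz test tone are all illustrative choices.

```python
import numpy as np

fs = 192_000                       # sample rate (Hz), high enough to carry a 40 kHz carrier
f_c = 40_000                       # ultrasonic carrier frequency (Hz), illustrative
m = 0.9                            # modulation depth, illustrative
dt = 1 / fs
t = np.arange(0, 0.05, dt)
g = np.sin(2 * np.pi * 1000 * t)   # desired audio: a 1 kHz tone, |g| <= 1

# Invert the double time derivative in Equation 1: integrate the audio twice.
v = np.cumsum(g) * dt
v -= v.mean()                      # drop the integration constant to avoid a ramp
g_ii = np.cumsum(v) * dt
g_ii /= np.abs(g_ii).max()         # normalize so the offset keeps the sqrt argument positive

# Invert the squaring: envelope is the square root of (1 + m * double-integrated audio).
E = np.sqrt(1 + m * g_ii)

# Transmit the precompensated envelope on the ultrasonic carrier.
s = E * np.sin(2 * np.pi * f_c * t)
```

In Berktay's model the air then squares this envelope and differentiates it twice, undoing the square root and the double integration and leaving a scaled copy of g.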
By the 1990s, it was well known that the Audio Spotlight could work but suffered from heavy distortion. It was also known that precompensation schemes placed an added demand on the ultrasonic transducers: they had to keep up with what the digital precompensation demanded of them, namely a broader frequency response. In 1998 the negative effect on THD of an insufficiently broad transducer frequency response was quantified [29] with computer simulations using a precompensation scheme based on Berktay's expression. In 1999 Pompei's article [18] discussed how a new prototype transducer met the increased frequency-response demands placed on it by the precompensation scheme, again based on Berktay's expression, and graphed impressive reductions in output THD with precompensation compared with the uncompensated case.
In summary, the technology that originated with underwater sonar 40 years earlier was made practical for reproduction of audible sound in air by Pompei's paper and device, which, according to his 1998 AES paper, [1] reduced distortion to levels comparable to traditional loudspeaker systems.
The nonlinear interaction mixes ultrasonic tones in air to produce sum and difference frequencies. One way to generate a signal that encodes the desired baseband audio spectrum is a DSB (double-sideband) amplitude-modulation scheme with an appropriately large baseband DC offset, which produces the demodulating tone superimposed on the modulated audio spectrum. This technique suffers from extremely heavy distortion, because not only does the demodulating tone interact with the audio spectrum, but every frequency component present interacts with every other: the modulated spectrum is convolved with itself, doubling its bandwidth by the length property of convolution. The baseband distortion within the bandwidth of the original audio spectrum is inversely proportional to the magnitude of the DC offset (the demodulation tone) superimposed on the signal: a larger tone results in less distortion.
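In signal terms, this scheme amounts to s(t) = [1 + m·g(t)]·sin(2πf_c·t). A minimal sketch (the carrier frequency, modulation depth, and test tone are illustrative assumptions):

```python
import numpy as np

fs, f_c, m = 192_000, 40_000, 0.5      # sample rate, carrier (Hz), modulation depth -- illustrative
t = np.arange(0, 0.05, 1 / fs)
g = np.sin(2 * np.pi * 1000 * t)       # audio to transmit, |g| <= 1

# DSB-AM with a DC offset of 1: the offset becomes the demodulating tone in air.
s = (1 + m * g) * np.sin(2 * np.pi * f_c * t)

# The nonlinear channel squares the envelope (among other effects).  Expanding
# (1 + m*g)**2 = 1 + 2*m*g + m**2 * g**2 shows the wanted term scaling with m
# and a self-intermodulation (distortion) term scaling with m**2.
baseband = (1 + m * g) ** 2
```

Expanding the squared envelope makes the offset trade-off explicit: the wanted audio term scales with m while the distortion term scales with m², which is why a relatively larger offset (smaller m) means less distortion.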
Further distortion is introduced by the second-order differentiation inherent in the demodulation process, which multiplies the desired signal by -ω² in the frequency domain, a +12 dB/octave tilt toward high frequencies. This may be equalized out by pre-filtering the input with the inverse response, i.e., double integration, which boosts the low-frequency components relative to the high.
By the time-convolution property of the Fourier transform, multiplication in the time domain is a convolution in the frequency domain. Multiplying a baseband signal by a unity-gain pure carrier shifts the baseband spectrum in frequency and halves its magnitude: one half-scale copy resides on each half of the frequency axis. Since each copy carries a quarter of the original energy, the modulated signal carries half the energy of the baseband signal, consistent with Parseval's theorem (and with the fact that cos² averages to ½).
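This is easy to check numerically; a small NumPy sketch (the tone and carrier frequencies are arbitrary choices):

```python
import numpy as np

fs = 48_000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t)        # baseband tone at 1 kHz
s = x * np.cos(2 * np.pi * 10_000 * t)   # multiplied by a unity-gain 10 kHz carrier

freqs = np.fft.rfftfreq(len(t), 1 / fs)
X = np.abs(np.fft.rfft(x)) / len(t)      # normalized magnitude spectrum
S = np.abs(np.fft.rfft(s)) / len(t)

print(freqs[X.argmax()], X.max())        # ~1000 Hz at magnitude ~0.5
print(freqs[S.argmax()], S.max())        # peaks at 9000 and 11000 Hz, each ~0.25 (half scale)
print(np.mean(x**2), np.mean(s**2))      # 0.5 vs 0.25: the modulated signal has half the power
```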
The modulation depth m is a convenient experimental parameter when assessing the total harmonic distortion of the demodulated signal; it is inversely proportional to the magnitude of the DC offset. THD increases in proportion to m².
These distorting effects can be better mitigated by a modulation scheme that exploits the way the nonlinear acoustic effect acts as a differentiating squaring device: modulating the carrier with the square root of the second integral of the desired baseband audio signal, without adding a DC offset. The square-root spectrum, roughly half the bandwidth of the original signal, is convolved with itself by the nonlinear channel effects; this convolution in frequency is a multiplication of the signal by itself in time, i.e., a squaring, which doubles the bandwidth again and reproduces the second time integral of the input audio spectrum. The double integration corrects for the -ω² filtering characteristic of the nonlinear acoustic effect, recovering a scaled copy of the original spectrum at baseband.
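A short simulation contrasts the two schemes through the idealized squaring-and-differentiating channel of Berktay's model (a sketch only; the test tone, depth m, and the small offset that keeps the square root real are illustrative assumptions):

```python
import numpy as np

fs = 192_000
dt = 1 / fs
t = np.arange(0, 0.05, dt)
g = np.sin(2 * np.pi * 1000 * t)               # desired audio
m = 0.5

# Scheme 1 -- plain DSB: the squared envelope contains a g**2 distortion term.
dsb_audible = (1 + m * g) ** 2                 # 1 + 2*m*g + m**2 * g**2 (2nd harmonic residue)

# Scheme 2 -- square-root precompensation: double-integrate, offset, square root.
v = np.cumsum(g) * dt
v -= v.mean()
g_ii = np.cumsum(v) * dt
g_ii /= np.abs(g_ii).max()
env = np.sqrt(1 + m * g_ii)                    # offset keeps the argument non-negative

# Idealized channel: square the envelope, then differentiate twice in time.
sqrt_audible = np.gradient(np.gradient(env ** 2, dt), dt)
# env**2 is exactly 1 + m*g_ii, so the double derivative returns a scaled copy of g
# (up to numerical edge effects), while the DSB branch keeps its m**2 harmonic residue.
```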
The harmonic distortion process involves the high-frequency replicas associated with each squaring demodulation, for either modulation scheme. These iteratively demodulate and self-modulate, adding a spectrally smeared-out and time-exponentiated copy of the original signal at baseband and at twice the original center frequency each time, with one iteration corresponding to one traversal of the space between the emitter and the target. Only sound with parallel, collinear phase velocity vectors interferes to produce this nonlinear effect. Even-numbered iterations produce their modulation products, baseband and high frequency, as reflected emissions from the target; odd-numbered iterations produce theirs as reflected emissions off the emitter.
This effect still holds when the emitter and the reflector are not parallel, though due to diffraction effects the baseband products of each iteration will originate from a different location each time, with the originating location corresponding to the path of the reflected high frequency self-modulation products.
These harmonic copies are largely attenuated by the natural losses at those higher frequencies when propagating through air.
The graph in [30] gives an estimate of the attenuation that ultrasound suffers as it propagates through air. The figures in this graph correspond to completely linear propagation; the exact effect of the nonlinear demodulation phenomena on the attenuation of the ultrasonic carrier in air was not considered. The attenuation shows an interesting dependence on humidity. Nevertheless, a 50 kHz wave suffers attenuation on the order of 1 dB per meter at one atmosphere of pressure.
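Taken at face value, that linear figure makes carrier level versus distance a one-line computation (a sketch; the source level is an illustrative assumption):

```python
alpha_db_per_m = 1.0      # approximate attenuation of a 50 kHz carrier in air at 1 atm (from the text)
source_spl = 130.0        # dB SPL at the transducer face, illustrative

for z in (1, 5, 10, 20):  # distance in meters
    print(f"{z:2d} m: {source_spl - alpha_db_per_m * z:.0f} dB SPL")   # e.g. 20 m -> 110 dB SPL
```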
For the nonlinear effect to occur, relatively high-intensity ultrasound is required. The SPL involved is typically greater than 100 dB of ultrasound at a nominal distance of 1 m from the face of the ultrasonic transducer.[citation needed] Exposure to more intense ultrasound, over 140 dB[citation needed] near the audible range (20–40 kHz), can lead to a syndrome involving nausea, headache, tinnitus, pain, dizziness, and fatigue, [24] but that is around 100 times the pressure amplitude of the 100 dB level cited above and is generally not a concern. Dr. Pompei of Holosonics has published data showing that the Audio Spotlight generates ultrasonic sound pressure levels around 130 dB (at 60 kHz) measured at 3 meters. [31]
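For scale, dB SPL converts to pressure via p = p_ref·10^(SPL/20) with the standard airborne reference p_ref = 20 µPa; a quick check of the levels quoted above:

```python
P_REF = 20e-6  # standard reference pressure in air, pascals (20 micropascals)

def pressure_pa(spl_db: float) -> float:
    """Convert a sound pressure level in dB SPL to RMS pressure in pascals."""
    return P_REF * 10 ** (spl_db / 20)

print(pressure_pa(100))                      # 2.0 Pa: the ~100 dB carrier level at 1 m
print(pressure_pa(130))                      # ~63 Pa: the Audio Spotlight figure at 3 m
print(pressure_pa(140) / pressure_pa(100))   # 100.0: a 40 dB gap is a 100x pressure ratio
```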
The UK's independent Advisory Group on Non-ionising Radiation (AGNIR) produced a 180-page report on the health effects of human exposure to ultrasound and infrasound in 2010. The UK Health Protection Agency (HPA) published the report, which recommended an exposure limit for the general public to airborne ultrasound sound pressure levels of 100 dB (at 25 kHz and above). [32]
OSHA specifies a safe ceiling value for ultrasound of 145 dB SPL at the frequency range used by commercial systems in air, as long as there is no possibility of contact with the transducer surface or coupling medium (i.e. submerged use). [33] This is well above the highest levels used by commercial Audio Spotlight systems, leaving a significant margin for safety.[citation needed] In a review of internationally acceptable exposure limits, Howard et al. (2005) [34] noted the general agreement among standards organizations, but expressed concern about the decision by the United States' Occupational Safety and Health Administration (OSHA) to increase the exposure limit by an additional 30 dB under some conditions (equivalent to a factor of 1000 in intensity [35]).
For ultrasound frequencies from 25 to 50 kHz, a guideline of 110 dB was recommended by Canada, Japan, the USSR, and the International Radiation Protection Association, and of 115 dB by Sweden, [24] in the late 1970s to early 1980s, but these were primarily based on subjective effects. The more recent OSHA guidelines above are based on ACGIH (American Conference of Governmental Industrial Hygienists) research from 1987.
Lawton (2001) [36] reviewed international guidelines for airborne ultrasound in a report published by the United Kingdom's Health and Safety Executive, which included a discussion of the guidelines issued by the American Conference of Governmental Industrial Hygienists (ACGIH) in 1988. Lawton states, "This reviewer believes that the ACGIH has pushed its acceptable exposure limits to the very edge of potentially injurious exposure". The ACGIH document also mentioned the possible need for hearing protection.
Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting messages with a radio wave. In amplitude modulation, the amplitude of the wave is varied in proportion to that of the message signal, such as an audio signal. This technique contrasts with angle modulation, in which either the frequency of the carrier wave is varied, as in frequency modulation, or its phase, as in phase modulation.
In electronics and telecommunications, modulation is the process of varying one or more properties of a periodic waveform, called the carrier signal, with a separate signal called the modulation signal that typically contains information to be transmitted. For example, the modulation signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits, a bitstream from a computer.
In radio communications, single-sideband modulation (SSB) or single-sideband suppressed-carrier modulation (SSB-SC) is a type of modulation used to transmit information, such as an audio signal, by radio waves. A refinement of amplitude modulation, it uses transmitter power and bandwidth more efficiently. Amplitude modulation produces an output signal the bandwidth of which is twice the maximum frequency of the original baseband signal. Single-sideband modulation avoids this bandwidth increase, and the power wasted on a carrier, at the cost of increased device complexity and more difficult tuning at the receiver.
Ultrasound is sound with frequencies greater than 20 kilohertz. This frequency is the approximate upper audible limit of human hearing in healthy young adults. The physical principles of acoustic waves apply to any frequency range, including ultrasound. Ultrasonic devices operate with frequencies from 20 kHz up to several gigahertz.
In telecommunications and signal processing, baseband is the range of frequencies occupied by a signal that has not been modulated to higher frequencies. Baseband signals typically originate from transducers, converting some other variable into an electrical signal. For example, the electronic output of a microphone is a baseband signal that is analogous to the applied voice audio. In conventional analog radio broadcasting, the baseband audio signal is used to modulate an RF carrier signal of a much higher frequency.
A microphone, colloquially called a mic, or mike, is a transducer that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production, live and recorded audio engineering, sound recording, two-way radios, megaphones, and radio and television broadcasting. They are also used in computers and other electronic devices, such as mobile phones, for recording sounds, speech recognition, VoIP, and other purposes, such as ultrasonic sensors or knock sensors.
Demodulation is extracting the original information-bearing signal from a carrier wave. A demodulator is an electronic circuit that is used to recover the information content from the modulated carrier wave. There are many types of modulation so there are many types of demodulators. The signal output from a demodulator may represent sound, images or binary data.
Medical ultrasound includes diagnostic techniques using ultrasound, as well as therapeutic applications of ultrasound. In diagnosis, it is used to create an image of internal body structures such as tendons, muscles, joints, blood vessels, and internal organs, to measure some characteristics or to generate an informative audible sound. The usage of ultrasound to produce visual images for medicine is called medical ultrasonography or simply sonography, or echography. The practice of examining pregnant women using ultrasound is called obstetric ultrasonography, and was an early development of clinical ultrasonography. The machine used is called an ultrasound machine, a sonograph or an echograph. The visual image formed using this technique is called an ultrasonogram, a sonogram or an echogram.
Sonic and ultrasonic weapons (USW) are weapons of various types that use sound to injure or incapacitate an opponent. Some sonic weapons make a focused beam of sound or of ultrasound; others produce an area field of sound. As of 2023 military and police forces make some limited use of sonic weapons.
Laser-ultrasonics uses lasers to generate and detect ultrasonic waves. It is a non-contact technique used to measure material thickness, detect flaws, and carry out materials characterization. The basic components of a laser-ultrasonic system are a generation laser, a detection laser and a detector.
Ultrasonic testing (UT) is a family of non-destructive testing techniques based on the propagation of ultrasonic waves in the object or material tested. In most common UT applications, very short ultrasonic pulse waves with centre frequencies ranging from 0.1 to 15 MHz, and occasionally up to 50 MHz, are transmitted into materials to detect internal flaws or to characterize materials. A common example is ultrasonic thickness measurement, which tests the thickness of the test object, for example to monitor pipework corrosion and erosion. Ultrasonic testing is extensively used to detect flaws in welds.
In radio, a detector is a device or circuit that extracts information from a modulated radio frequency current or voltage. The term dates from the first three decades of radio (1888–1918). Unlike modern radio stations which transmit sound on an uninterrupted carrier wave, early radio stations transmitted information by radiotelegraphy. The transmitter was switched on and off to produce long or short periods of radio waves, spelling out text messages in Morse code. Therefore, early radio receivers could reproduce the Morse code "dots" and "dashes" by simply distinguishing between the presence or absence of a radio signal. The device that performed this function in the receiver circuit was called a detector. A variety of different detector devices, such as the coherer, electrolytic detector, magnetic detector and the crystal detector, were used during the wireless telegraphy era until superseded by vacuum tube technology.
Ultrasonic hearing is a recognised auditory effect which allows humans to perceive sounds of a much higher frequency than would ordinarily be audible using the inner ear, usually by stimulation of the base of the cochlea through bone conduction. Normal human hearing is recognised as having an upper bound of 15–28 kHz, depending on the person.
The angular spectrum method is a technique for modeling the propagation of a wave field. This technique involves expanding a complex wave field into a summation of an infinite number of plane waves of the same frequency and different directions. Its mathematical origins lie in the field of Fourier optics, but it has been applied extensively in the field of ultrasound. The technique can predict an acoustic pressure field distribution over a plane, based upon knowledge of the pressure field distribution at a parallel plane. Predictions in both the forward and backward propagation directions are possible.
A parametric array, in the field of acoustics, is a nonlinear transduction mechanism that generates narrow, nearly side lobe-free beams of low frequency sound, through the mixing and interaction of high frequency sound waves, effectively overcoming the diffraction limit associated with linear acoustics. The main side lobe-free beam of low frequency sound is created as a result of nonlinear mixing of two high frequency sound beams at their difference frequency. Parametric arrays can be formed in water, air, and earth materials/rock.
Acousto-optics is a branch of physics that studies the interactions between sound waves and light waves, especially the diffraction of laser light by ultrasound through an ultrasonic grating.
Ultrasonic transducers and ultrasonic sensors are devices that generate or sense ultrasound energy. They can be divided into three broad categories: transmitters, receivers and transceivers. Transmitters convert electrical signals into ultrasound, receivers convert ultrasound into electrical signals, and transceivers can both transmit and receive ultrasound.
Ultrasound-modulated optical tomography (UOT), also known as acousto-optic tomography (AOT), is a hybrid imaging modality that combines light and sound; it is a form of tomography involving ultrasound. It is used in imaging of biological soft tissues and has potential applications for early cancer detection. As a hybrid modality, UOT provides some of the best features of both: the optical component provides strong contrast and sensitivity, while the ultrasound allows for high resolution and high imaging depth. However, the difficulty of tackling the two fundamental problems with UOT has caused the field to evolve relatively slowly; most work is limited to theoretical simulations or phantom/sample studies.
Phase conjugation is a physical transformation of a wave field where the resulting field has a reversed propagation direction but keeps its amplitudes and phases.
Acoustic phase conjugation is a set of techniques meant to perform phase conjugation on acoustic waves.