Sound from ultrasound

Sound from ultrasound is the name given here to the generation of audible sound from modulated ultrasound without using an active receiver. This happens when the modulated ultrasound passes through a nonlinear medium which acts, intentionally or unintentionally, as a demodulator.

Parametric array

Since the early 1960s, researchers have been experimenting with creating directive low-frequency sound from the nonlinear interaction of an aimed beam of ultrasonic waves produced by a parametric array using heterodyning. Ultrasound has much shorter wavelengths than audible sound, so it propagates in a much narrower beam than any normal loudspeaker system using audio frequencies. Most of the work was performed in liquids (for underwater sound use).

The first modern device for use in air was created in 1998, [1] and is now known by the trademark name "Audio Spotlight", a term coined in 1983 by the Japanese researchers [2] who abandoned the technology as infeasible in the mid-1980s.

A transducer can be made to project a narrow beam of modulated ultrasound that is powerful enough, at 100 to 110 dB SPL, to substantially change the speed of sound in the air that it passes through. The air within the beam behaves nonlinearly and extracts the modulation signal from the ultrasound, resulting in sound that can be heard only along the path of the beam, or that appears to radiate from any surface that the beam strikes. This technology allows a beam of sound to be projected over a long distance to be heard only in a small well-defined area; [3] for a listener outside the beam the sound pressure decreases substantially. This effect cannot be achieved with conventional loudspeakers, because sound at audible frequencies cannot be focused into such a narrow beam. [3]
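
To illustrate why an ultrasonic carrier forms a far narrower beam than audible sound radiated from a source of the same size, the half-angle to the first null of an idealized circular piston source can be estimated from the standard diffraction relation sin θ ≈ 1.22 λ/D. The short Python sketch below uses an assumed 0.4 m aperture purely for illustration; it is not a model of any particular product.

```python
import math

def first_null_angle_deg(frequency_hz, aperture_m, c=343.0):
    """Half-angle to the first null of a circular piston source (degrees).

    Uses the standard diffraction relation sin(theta) = 1.22 * wavelength / D.
    Returns None when the aperture is too small to form a null (the beam is
    essentially unfocused).
    """
    wavelength = c / frequency_hz
    s = 1.22 * wavelength / aperture_m
    if s >= 1.0:
        return None
    return math.degrees(math.asin(s))

aperture = 0.40  # assumed 40 cm source, for illustration only
for f in (1_000, 40_000):  # 1 kHz audible tone vs. 40 kHz ultrasonic carrier
    angle = first_null_angle_deg(f, aperture)
    label = f"{angle:.1f} deg" if angle is not None else "no null (unfocused)"
    print(f"{f/1000:5.0f} kHz: wavelength {343.0/f*100:5.1f} cm, first null at {label}")
```

With these assumed numbers, the 40 kHz carrier is confined to a beam only a few degrees wide, while a 1 kHz tone from the same aperture radiates with essentially no directivity.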

There are some limitations with this approach. Anything that interrupts the beam will prevent the ultrasound from propagating, like interrupting a spotlight's beam. For this reason, most systems are mounted overhead, like lighting.

Applications

Commercial advertising

A sound signal can be aimed so that only a particular passer-by, or somebody very close, can hear it. In commercial applications, it can target sound to a single person without the peripheral sound and related noise of a loudspeaker.

Personal audio

It can be used for personal audio, making sound audible only to a single listener or to a small group that wants to hear it. In a car, for example, navigation instructions are of interest only to the driver, not to the passengers. Another possibility is future applications for true stereo sound, where one ear does not hear what the other is hearing. [4]

Train signaling device

Directional audio train signaling may be accomplished with an ultrasonic beam that warns of the approach of a train while sparing surrounding homes and businesses the nuisance of loud train signals. [5]

History

This technology was originally developed by the US Navy and Soviet Navy for underwater sonar in the mid-1960s, and was briefly investigated by Japanese researchers in the early 1980s, but these efforts were abandoned due to extremely poor sound quality (high distortion) and substantial system cost. These problems went unsolved until a paper published by Dr. F. Joseph Pompei of the Massachusetts Institute of Technology in 1998 [1] fully described a working device that reduced audible distortion essentially to that of a traditional loudspeaker.

Products

As of 2014, five devices were known to have been marketed that use ultrasound to create an audible beam of sound.

Audio Spotlight

F. Joseph Pompei of MIT developed technology he calls the "Audio Spotlight", [6] and made it commercially available in 2000 through his company Holosonics, which, according to its website, has sold "thousands" of its "Audio Spotlight" systems. Disney was among the first major corporations to adopt it, for use at the Epcot Center, and many other application examples are shown on the Holosonics website. [7]

Audio Spotlight is a narrow beam of sound that can be controlled with similar precision to light from a spotlight. It uses a beam of ultrasound as a "virtual acoustic source", enabling control of sound distribution. The ultrasound has wavelengths of only a few millimeters, much smaller than the source, so it naturally travels in an extremely narrow beam. The ultrasound, which contains frequencies far outside the range of human hearing, is completely inaudible. But as the ultrasonic beam travels through the air, the inherent properties of the air cause the ultrasound to change shape in a predictable way. This gives rise to frequency components in the audible band, which can be predicted and controlled.

HyperSonic Sound

Elwood "Woody" Norris, founder and Chairman of American Technology Corporation (ATC), announced in 1996 that he had successfully created a device which achieved transmission of audible sound via ultrasound. [8] This device used piezoelectric transducers to send two ultrasonic waves of differing frequencies toward a point, giving the illusion that the audible sound from their interference pattern was originating at that point. [9] ATC named and trademarked the device as "HyperSonic Sound" (HSS). In December 1997, HSS was one of the items in the Best of What's New issue of Popular Science. [10] In December 2002, Popular Science named HyperSonic Sound the best invention of 2002.[citation needed] Norris received the 2005 Lemelson–MIT Prize for his invention of "hypersonic sound". [11] ATC (now named LRAD Corporation) spun off the technology to Parametric Sound Corporation in September 2010 in order to focus on its long-range acoustic device (LRAD) products, according to its quarterly reports, press releases, and executive statements. [12] [13]

Mitsubishi Electric Engineering Corporation

Mitsubishi Electric Engineering Corporation apparently offers a sound-from-ultrasound product named the "MSP-50E", [14] which is commercially available. [15]

AudioBeam

German audio company Sennheiser Electronic once listed their "AudioBeam" product for about $4,500. [16] There is no indication that the product has been used in any public applications. The product has since been discontinued. [17]

Literature survey

The first experimental systems were built over 30 years ago, although these early versions played only simple tones. It was not until much later (see above) that systems were built for practical listening use.

Experimental ultrasonic nonlinear acoustics

This section presents a chronological summary of the experimental approaches taken to examine Audio Spotlight systems. At the turn of the millennium, working versions of an Audio Spotlight capable of reproducing speech and music could be bought from Holosonics, a company founded on Dr. Pompei's work in the MIT Media Lab. [18]

Related topics were researched almost 40 years earlier in the context of underwater acoustics.

  1. The first article [19] consisted of a theoretical formulation of the half pressure angle of the demodulated signal.
  2. The second article [20] provided an experimental comparison to the theoretical predictions.

Both articles were supported by the U.S. Office of Naval Research, specifically for the use of the phenomenon for underwater sonar pulses. The goal of these systems was not high directivity per se, but rather higher usable bandwidth of a typically band-limited transducer.

The 1970s saw some activity in experimental parametric arrays, both in air [21] and underwater. [22] Again supported by the U.S. Office of Naval Research, the primary aim of the underwater experiments was to determine the range limitations of sonar pulse propagation due to nonlinear distortion. The airborne experiments were aimed at recording quantitative data about the directivity and propagation loss of both the ultrasonic carrier and demodulated waves, rather than developing the capability to reproduce an audio signal.

In 1983 the idea was revisited experimentally, [2] but this time with the firm intent to analyze the use of the system in air to reproduce a more complex baseband signal in a highly directional manner. The signal processing used was simple DSB-AM with no precompensation; without precompensation of the input signal, the THD (total harmonic distortion) of such a system would probably have been satisfactory for speech reproduction but prohibitive for the reproduction of music. An interesting feature of the experimental setup [2] was the use of 547 ultrasonic transducers to produce a 40 kHz ultrasonic sound source of over 130 dB at 4 m, a level that would demand significant safety considerations. [23] [24] Even though this experiment clearly demonstrated the potential to reproduce audio signals using an ultrasonic system, it also showed that the system suffered from heavy distortion when no precompensation was used.

Theoretical ultrasonic nonlinear acoustics

The equations that govern nonlinear acoustics are quite complex [25] [26] and unfortunately do not have general analytical solutions; they usually require computer simulation. [27] However, as early as 1965, Berktay performed an analysis [28] under some simplifying assumptions that allowed the demodulated SPL to be written in terms of the amplitude-modulated ultrasonic carrier wave pressure Pc and various physical parameters. Note that the demodulation process is extremely lossy, with a minimum loss on the order of 60 dB from the ultrasonic SPL to the audible wave SPL. A precompensation scheme can be based on Berktay's expression, shown in Equation 1, by taking the square root of the baseband signal envelope E and then integrating twice to invert the effect of the double partial-time derivative. The analogue electronic circuit equivalent of a square-root function is simply an op-amp with feedback, and an equalizer is analogous to an integration function. However, these topic areas lie outside the scope of this article.

In proportional form, Berktay's expression (Equation 1) can be written as

$$ p_2(t) \;\propto\; P_c^2 \, \frac{\partial^2}{\partial t^2} E^2(t), $$

where p2 is the demodulated (audible) sound pressure along the beam, Pc is the amplitude of the ultrasonic carrier pressure, E(t) is the envelope of the modulation applied to the carrier, and the constant of proportionality collects the physical parameters mentioned above (the nonlinearity, absorption and density of the medium, and the geometry of the source).

This equation says that the audible demodulated ultrasonic pressure wave (output signal) is proportional to the twice differentiated, squared version of the envelope function (input signal). Precompensation refers to the trick of anticipating these transforms and applying the inverse transforms on the input, hoping that the output is then closer to the untransformed input.
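
The following Python/NumPy sketch illustrates this precompensation chain numerically, assuming the proportional form of Berktay's expression above as an idealized model of the air's demodulation. All parameter values (carrier frequency, modulation depth, tone frequency) are arbitrary illustrations, not taken from any real system.

```python
import numpy as np

fs = 400_000                                  # sample rate, high enough for a 40 kHz carrier
t = np.arange(0, 0.02, 1 / fs)                # 20 ms of signal
dt = 1 / fs
audio = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone (the desired audible output)

# --- Precompensation based on Berktay's expression -------------------------
# Integrate the audio twice, add an offset so the square-root argument stays
# positive, and take the square root to form the envelope.
double_integral = np.cumsum(np.cumsum(audio)) * dt * dt
m = 0.5                                       # modulation depth (illustrative)
arg = 1.0 + m * double_integral / np.max(np.abs(double_integral))
envelope = np.sqrt(arg)

carrier = np.cos(2 * np.pi * 40_000 * t)
transmitted = envelope * carrier              # the ultrasonic signal that would be radiated

# --- Idealized demodulation in air: p2 proportional to d^2/dt^2 of E^2 -----
demodulated = np.gradient(np.gradient(envelope ** 2, dt), dt)

# The double differentiation undoes the double integration, so the demodulated
# waveform should be (a scaled copy of) the original audio tone.
trim = slice(200, -200)                       # ignore edge effects of np.gradient
corr = np.corrcoef(demodulated[trim], audio[trim])[0, 1]
print(f"correlation between demodulated output and original audio: {corr:.4f}")
```

Because the double time differentiation of the demodulation model exactly undoes the double integration applied before the square root, the printed correlation is very close to 1; dropping either the square root or the double integration reintroduces the distortion discussed below.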

By the 1990s, it was well known that the Audio Spotlight could work but suffered from heavy distortion. It was also known that precompensation schemes placed an added demand on the frequency response of the ultrasonic transducers; in effect, the transducers needed to keep up with what the digital precompensation demanded of them, namely a broader frequency response. In 1998 the negative effects on THD of an insufficiently broad transducer frequency response were quantified [29] with computer simulations, using a precompensation scheme based on Berktay's expression. In 1999 Pompei's article [18] discussed how a new prototype transducer met the increased frequency-response demands placed on it by the precompensation scheme, which was again based on Berktay's expression. The article also graphed impressive reductions in the THD of the output when the precompensation scheme was employed, compared with the case of no precompensation.

In summary, the technology that originated with underwater sonar 40 years ago has been made practical for reproduction of audible sound in air by Pompei's paper and device, which, according to his AES paper (1998), demonstrated that distortion had been reduced to levels comparable to traditional loudspeaker systems.

Modulation scheme

The nonlinear interaction mixes ultrasonic tones in air to produce sum and difference frequencies. One way to generate a signal that encodes the desired baseband audio spectrum is a DSB (double-sideband) amplitude-modulation scheme with an appropriately large baseband DC offset, which produces the demodulating tone superimposed on the modulated audio spectrum. This technique suffers from extremely heavy distortion, because not only does the demodulating tone interfere with the audio spectrum, but all of the frequencies present interfere with one another: the modulated spectrum is convolved with itself, doubling its bandwidth (by the length property of convolution). The baseband distortion within the bandwidth of the original audio spectrum is inversely proportional to the magnitude of the DC offset (demodulation tone) superimposed on the signal; a larger tone results in less distortion.
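
A minimal NumPy sketch of this mechanism is shown below. It stands in for the air channel with a plain squaring of the envelope (ignoring the additional −ω² weighting discussed next), which is enough to show that the recovered baseband contains intermodulation and harmonic products of the audio, and that their level relative to the wanted signal falls as the DC offset grows. The two-tone test signal and the offset values are arbitrary.

```python
import numpy as np

fs = 200_000
t = np.arange(0, 0.05, 1 / fs)

# Two-tone "audio" baseband signal (1 kHz + 1.4 kHz), unit peak each.
audio = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1400 * t)

def demodulated_baseband(dc_offset):
    """DSB-AM envelope (DC offset + audio), passed through a squaring
    nonlinearity standing in for the air's self-demodulation."""
    envelope = dc_offset + audio
    squared = envelope ** 2
    # Remove the DC term; what remains is the wanted audio term plus distortion.
    return squared - np.mean(squared)

def distortion_ratio(dc_offset):
    out = demodulated_baseband(dc_offset)
    desired = 2 * dc_offset * (audio - np.mean(audio))   # the linear (wanted) term
    residual = out - desired                             # everything else = distortion
    return np.sqrt(np.mean(residual ** 2)) / np.sqrt(np.mean(desired ** 2))

for dc in (2.0, 4.0, 8.0):
    print(f"DC offset {dc:4.1f}: distortion-to-signal ratio ~= {distortion_ratio(dc):.3f}")
```

With the values above, the printed ratio roughly halves each time the DC offset is doubled, matching the inverse-proportionality claim.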

Further distortion is introduced by the second-order differentiation property of the demodulation process. The result is a multiplication of the desired signal by the function −ω² in frequency. This distortion may be equalized out by filtering the input signal with the inverse characteristic (double integration, which boosts low-frequency content relative to high), as described below.

By the frequency-convolution (multiplication) property of the Fourier transform, multiplication in the time domain corresponds to convolution in the frequency domain. Convolving a baseband spectrum with a unity-gain pure carrier tone shifts the baseband spectrum in frequency and halves its magnitude; one half-scale copy of the baseband spectrum resides on each half of the frequency axis, so the time-domain and frequency-domain energies remain consistent with Parseval's theorem.
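
This halving is easy to check numerically. The snippet below multiplies a 1 kHz tone by a unit-amplitude 30 kHz carrier and reads off the FFT line magnitudes at the sum and difference frequencies; the specific frequencies are chosen only so that every component falls exactly on an FFT bin.

```python
import numpy as np

fs = 100_000
n = 100_000                       # 1 second of samples, so FFT bins are exactly 1 Hz apart
t = np.arange(n) / fs

baseband = np.cos(2 * np.pi * 1000 * t)             # 1 kHz tone
carrier = np.cos(2 * np.pi * 30_000 * t)            # 30 kHz unit-amplitude carrier
modulated = baseband * carrier

def line_magnitude(x, freq_hz):
    """Magnitude of the FFT line at freq_hz, normalized so a unit cosine reads 1.0."""
    spectrum = np.fft.rfft(x) / (len(x) / 2)
    return abs(spectrum[int(freq_hz)])               # 1 Hz bin spacing

print("baseband line at   1 kHz:", round(line_magnitude(baseband, 1_000), 3))    # ~1.0
print("modulated line at 29 kHz:", round(line_magnitude(modulated, 29_000), 3))  # ~0.5
print("modulated line at 31 kHz:", round(line_magnitude(modulated, 31_000), 3))  # ~0.5
```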

The modulation depth m is a convenient experimental parameter when assessing the total harmonic distortion in the demodulated signal; it is inversely proportional to the magnitude of the DC offset. THD increases proportionally with m².

These distorting effects may be better mitigated by using another modulation scheme that takes advantage of the differential squaring device nature of the nonlinear acoustic effect. Modulation of the second integral of the square root of the desired baseband audio signal, without adding a DC offset, results in convolution in frequency of the modulated square-root spectrum (half the bandwidth of the original signal) with itself, due to the nonlinear channel effects. This convolution in frequency is a multiplication in time of the signal by itself, or a squaring, which again doubles the bandwidth of the spectrum and reproduces the second time integral of the input audio signal. The double integration corrects for the −ω² filtering characteristic associated with the nonlinear acoustic effect, recovering the scaled original spectrum at baseband.

The harmonic distortion process has to do with the high-frequency replicas associated with each squaring demodulation, for either modulation scheme. These iteratively demodulate and self-modulate, adding a spectrally smeared-out and time-exponentiated copy of the original signal to baseband and twice the original center frequency each time, with one iteration corresponding to one traversal of the space between the emitter and target. Only sound with parallel, collinear phase velocity vectors interferes to produce this nonlinear effect. Even-numbered iterations will produce their modulation products, baseband and high frequency, as reflected emissions from the target. Odd-numbered iterations will produce their modulation products as reflected emissions off the emitter.

This effect still holds when the emitter and the reflector are not parallel, though due to diffraction effects the baseband products of each iteration will originate from a different location each time, with the originating location corresponding to the path of the reflected high frequency self-modulation products.

These harmonic copies are largely attenuated by the natural losses at those higher frequencies when propagating through air.

Attenuation of ultrasound in air

The figure in [30] provides an estimate of the attenuation that ultrasound suffers as it propagates through air. The values in that graph correspond to completely linear propagation; the exact effect of the nonlinear demodulation phenomena on the attenuation of the ultrasonic carrier waves in air was not considered. There is an interesting dependence on humidity. Nevertheless, a 50 kHz wave suffers attenuation on the order of 1 dB per meter at one atmosphere of pressure.
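
Since a constant attenuation coefficient in dB per metre accumulates linearly with distance, the figure above implies roughly 10 dB of extra absorption loss over a 10 m path (on top of spreading losses). The trivial sketch below simply applies an assumed 1 dB/m coefficient; real absorption varies strongly with frequency, humidity, and temperature.

```python
# Absorption loss from an assumed, constant attenuation coefficient.
# The ~1 dB/m value at 50 kHz comes from the text above and is only indicative.
ALPHA_DB_PER_M = 1.0

for distance_m in (1, 3, 10, 30):
    loss_db = ALPHA_DB_PER_M * distance_m
    remaining_fraction = 10 ** (-loss_db / 20)       # fraction of pressure amplitude left
    print(f"{distance_m:3d} m: absorption loss {loss_db:5.1f} dB "
          f"(pressure reduced to {remaining_fraction:.2%})")
```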

Safe use of high-intensity ultrasound

For the nonlinear effect to occur, relatively high-intensity ultrasound is required. The SPL involved is typically greater than 100 dB of ultrasound at a nominal distance of 1 m from the face of the ultrasonic transducer.[citation needed] Exposure to more intense ultrasound, over 140 dB,[citation needed] near the audible range (20–40 kHz) can lead to a syndrome involving nausea, headache, tinnitus, pain, dizziness, and fatigue, [24] but this is around 100 times the sound pressure of the 100 dB level cited above and is generally not a concern. Dr. Joseph Pompei of Audio Spotlight has published data showing that their product generates ultrasonic sound pressure levels around 130 dB (at 60 kHz) measured at 3 meters. [31]

The UK's independent Advisory Group on Non-ionising Radiation (AGNIR) produced a 180-page report on the health effects of human exposure to ultrasound and infrasound in 2010. The UK Health Protection Agency (HPA) published the report, which recommended an exposure limit for the general public to airborne ultrasound sound pressure levels (SPL) of 100 dB (at 25 kHz and above). [32]

OSHA specifies a safe ceiling value for ultrasound of 145 dB SPL exposure at the frequency range used by commercial systems in air, as long as there is no possibility of contact with the transducer surface or coupling medium (i.e., submersion). [33] This is several times the highest levels used by commercial Audio Spotlight systems, so there is a significant margin for safety.[citation needed] In a review of international acceptable exposure limits, Howard et al. (2005) [34] noted the general agreement among standards organizations, but expressed concern with the decision by the United States Occupational Safety and Health Administration (OSHA) to increase the exposure limit by an additional 30 dB under some conditions (equivalent to a factor of 1000 in intensity [35]).
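
The factor quoted there follows directly from the decibel definition: a level change of ΔL dB corresponds to an intensity ratio of 10^(ΔL/10), so +30 dB is a factor of 1000 in intensity (and about 31.6 in sound pressure). A one-line check:

```python
delta_db = 30.0
print("intensity ratio:", 10 ** (delta_db / 10))   # 1000.0
print("pressure ratio :", 10 ** (delta_db / 20))   # about 31.6
```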

For frequencies of ultrasound from 25 to 50 kHz, a guideline of 110 dB had been recommended by Canada, Japan, the USSR, and the International Radiation Protection Association, and of 115 dB by Sweden, [24] in the late 1970s to early 1980s, but these were primarily based on subjective effects. The more recent OSHA guidelines above are based on ACGIH (American Conference of Governmental Industrial Hygienists) research from 1987.

Lawton (2001) [36] reviewed international guidelines for airborne ultrasound in a report published by the United Kingdom's Health and Safety Executive; this included a discussion of the guidelines issued by the American Conference of Governmental Industrial Hygienists (ACGIH) in 1988. Lawton states, "This reviewer believes that the ACGIH has pushed its acceptable exposure limits to the very edge of potentially injurious exposure". The ACGIH document also mentioned the possible need for hearing protection.

References

  1. 105th AES Convention, Preprint 4853, 1998.
  2. Yoneyama, Masahide; Fujimoto, Jun Ichiroh (1983). "The audio spotlight: An application of nonlinear interaction of sound waves to a new type of loudspeaker design". Journal of the Acoustical Society of America. 73 (5): 1532–1536. Bibcode:1983ASAJ...73.1532Y. doi:10.1121/1.389414.
  3. Pompei, F. Joseph (June 2002). Sound From Ultrasound: The Parametric Array as an Audible Sound Source (PDF) (PhD). MIT. Retrieved 15 March 2020.
  4. Norris, Woody (26 January 2009). "Hypersonic sound and other inventions" . Retrieved 22 October 2017.
  5. "US Patent 7429935 B1". September 30, 2008. Retrieved February 1, 2015.
  6. "Audio Spotlight Directional Sound System by Holosonics".
  7. ABC news 21 August 2006
  8. "Parametric Sound Corporation – About Us – History and Background". ParametricSound.com. n.d. Archived from the original on March 22, 2012. Retrieved February 19, 2016.
  9. Eastwood, Gary (7 September 1996). "Perfect sound from thin air". New Scientist. p. 22.
  10. "Best of What's New: Sound Projectors". Popular Science. Vol. 251, no. 6. Bonnier Corporation. December 1997. p. 78. ISSN   0161-7370.
  11. "Inventor Wins $500,000 Lemelson–MIT Prize for Revolutionizing Acoustics" (Press release). Massachusetts Institute of Technology. 2004-04-18. Archived from the original on October 12, 2007. Retrieved 2007-11-14.
  12. "LRAD Corporation Press Releases". LRAD Corporation.
  13. "LRAD To Spin Off Parametric Sound, The Company Nobody Wanted – Stock Spinoffs". Stock Spinoffs. 2010-07-19.
  14. "超指向性音響システム「ここだけ」新製品 本格的に発売開始" (Press release). 2007-07-26. Retrieved 2008-11-23.
  15. "超指向性音響システム MSP-50E" (PDF) (Press release). 2012-01-01. Retrieved 2023-05-22.
  16. AudioBeam
  17. Audiobeam discontinued
  18. Pompei, F. Joseph (September 1999). "The Use of Airborne Ultrasonics for Generating Audible Sound Beams". Journal of the Audio Engineering Society. 47 (9): 726–731.
  19. Westervelt, P. J. (1963). "Parametric acoustic array". Journal of the Acoustical Society of America. 35 (4): 535–537. Bibcode:1963ASAJ...35..535W. doi:10.1121/1.1918525.
  20. Bellin, J. L. S.; Beyer, R. T. (1962). "Experimental investigation of an end-fire array". Journal of the Acoustical Society of America. 34 (8): 1051–1054. Bibcode:1962ASAJ...34.1051B. doi:10.1121/1.1918243.
  21. Bennett, Mary Beth; Blackstock, David T. (1975). "Parametric array in air". Journal of the Acoustical Society of America. 57 (3): 562–568. Bibcode:1975ASAJ...57..562B. doi:10.1121/1.380484.
  22. Muir, T. G.; Willette, J. G. (1972). "Parametric acoustic transmitting arrays". Journal of the Acoustical Society of America. 52 (5): 1481–1486. Bibcode:1972ASAJ...52.1481M. doi:10.1121/1.1913264.
  23. "Everyday Sound Pressure Levels". Archived from the original on 2007-12-11. Retrieved 2007-12-04.
  24. Guidelines for the Safe Use of Ultrasound: Part II – Industrial & Commercial Applications – Safety Code 24. Non-Ionizing Radiation Section, Bureau of Radiation and Medical Devices, Department of National Health and Welfare.
  25. Naze Tjøtta, Jacqueline; Tjøtta, Sigve (1980). "Nonlinear interaction of two collinear, spherically spreading sound beams". Journal of the Acoustical Society of America. 67 (2): 484–490. Bibcode:1980ASAJ...67..484T. doi:10.1121/1.383912.
  26. Naze Tjøtta, Jacqueline; Tjøtta, Sigve (1981). "Nonlinear equations of acoustics, with application to parametric acoustic arrays". Journal of the Acoustical Society of America. 69 (6): 1644–1652. Bibcode:1981ASAJ...69.1644T. doi:10.1121/1.385942.
  27. Kurganov, Alexander; Noelle, Sebastian; Petrova, Guergana (2001). "Semidiscrete central-upwind schemes for hyperbolic conservation laws and hamilton-jacobi equations". SIAM Journal on Scientific Computing. 23 (3): 707–740. Bibcode:2001SJSC...23..707K. CiteSeerX   10.1.1.588.4360 . doi:10.1137/S1064827500373413.
  28. Berktay, H. O. (1965). "Possible exploitation of nonlinear acoustics in underwater transmitting applications". Journal of Sound and Vibration. 2 (4): 435–461. Bibcode:1965JSV.....2..435B. doi:10.1016/0022-460X(65)90122-7.
  29. Kite, Thomas D.; Post, John T.; Hamilton, Mark F. (1998). "Parametric array in air: Distortion reduction by preprocessing". Journal of the Acoustical Society of America. 2 (5): 1091–1092. Bibcode:1998ASAJ..103.2871K. doi:10.1121/1.421645.
  30. Bass, H. E.; Sutherland, L. C.; Zuckerwar, A. J.; Blackstock, D. T.; Hester, D. M. (1995). "Atmospheric absorption of sound: Further developments". Journal of the Acoustical Society of America. 97 (1): 680–683. Bibcode:1995ASAJ...97..680B. doi:10.1121/1.412989. S2CID   123385958.
  31. Pompei, F Joseph (Sep 1999). "The Use of Airborne Ultrasonics for Generating Audible Sound Beams". Journal of the Audio Engineering Society. 47 (9): 728. Fig. 3. Retrieved 19 November 2011.
  32. AGNIR (2010). Health Effects of Exposure to Ultrasound and Infrasound. Health Protection Agency, UK. pp. 167–170.
  33. "OSHA Technical Manual (OTM) Section III: Chapter 5 (Occupational Noise): Appendix C--Ultrasound". osha.gov.
  34. Howard; et al. (2005). "A Review of Current Ultrasound Exposure Limits" (PDF). The J. Occupational Health and Safety of Australia and New Zealand. 21 (3): 253–257.
  35. Leighton, Tim (2007). "What is Ultrasound?". Progress in Biophysics and Molecular Biology. 93 (1–3): 3–83. doi: 10.1016/j.pbiomolbio.2006.07.026 . PMID   17045633.
  36. Lawton (2001). Damage to human hearing by airborne sound of very high frequency or ultrasonic frequency (PDF). Health & Safety Executive, UK. pp. 9–10. ISBN   0-7176-2019-0.