Audification is an auditory display technique for representing a sequence of data values as sound. It has been defined as the "direct translation of a data waveform to the audible domain." [1] Audification interprets a data sequence, usually a time series, as an audio waveform: input data are mapped directly to sound pressure levels. Various signal processing techniques are then used to bring out data features. The technique allows the listener to hear periodic components as frequencies. Audification typically requires large data sets with periodic components. [2]
Audification is most commonly applied when the most direct and simple representation of the data is needed: the data are played back as sound, and the structures a listener hears can then guide further analysis or visualization.
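In its simplest form, this direct mapping can be implemented in a few lines. The following is a minimal sketch, not a canonical implementation; it uses a synthetic NumPy array as a stand-in for measured data, and all file names and parameters are illustrative:

```python
# Minimal audification sketch: the data values themselves become the
# sample amplitudes of a WAV file played at audio rate.
import numpy as np
from scipy.io import wavfile

rate = 44100                          # audio sample rate in Hz
n = rate * 3                          # three seconds of audio

# Synthetic stand-in for a measured time series: a periodic component
# plus noise. The periodic part is heard as a steady tone.
t = np.arange(n)
data = np.sin(2 * np.pi * 440 * t / rate) + 0.1 * np.random.randn(n)

# Normalize to the 16-bit sample range; each datum becomes one sample.
scaled = data / np.max(np.abs(data))
wavfile.write("audified.wav", rate, (scaled * 32767).astype(np.int16))
```

Because every data point maps to exactly one audio sample, periodic structure in the data is heard directly as pitch, which is what makes the technique suited to large, oscillatory data sets.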
The idea of audification was introduced in 1992 by Greg Kramer, initially as a sonification technique. Because of this origin, audification is still widely considered a type of sonification.
The goal of audification is to allow the listener to audibly experience the results of scientific measurements or simulations.
A 2007 study by Sandra Pauletto and Andy Hunt at the University of York suggested that users were able to detect attributes such as noise, repetitive elements, regular oscillations, discontinuities, and signal power in audification of time-series data to a degree comparable with visual inspection of spectrograms. [3]
Applications include audification of seismic data [4] and of human neurophysiological signals. [5] An example is the esophageal stethoscope, which amplifies naturally occurring sound without conveying inherently noiseless variables such as the result of gas analysis. [6]
Converting ultrasound to audible sound is a form of audification that provides a form of echolocation. [7] [8] Other uses in the medical field include the stethoscope [9] and the audification of an EEG. [10]
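One common way to bring ultrasound into the hearing range is time expansion: replaying the recording at a fraction of its original sample rate, which divides every frequency by the same factor. This is a sketch under that assumption (the input file is hypothetical), not a description of any particular device:

```python
# Time-expansion sketch: replaying ultrasonic samples at a lower rate
# divides all frequencies by the expansion factor. A 45 kHz call
# recorded at 384 kHz and replayed at 38.4 kHz is heard at 4.5 kHz.
from scipy.io import wavfile

rec_rate, samples = wavfile.read("ultrasound.wav")  # hypothetical input
factor = 10                                         # expansion factor
wavfile.write("audible.wav", rec_rate // factor, samples)
```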
The development of electronic music can also be seen as part of the history of audification, since all electronic instruments audify an electrical process through a loudspeaker.
Audification is of interest for research into auditory seismology and has been used in earthquake prediction. [11] Applications include using seismic data to differentiate bomb blasts from earthquakes. [1]
The technique presents the sound of earthquake waves alongside a visual representation, so that both the eyes and the ears contribute to understanding the data.
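Because raw seismograms oscillate mostly below the hearing range, seismic audification typically time-compresses the record: playing the samples back at an audio rate multiplies every frequency by the compression factor. A minimal sketch, using a synthetic trace in place of real seismic data (the compression factor and all names are illustrative):

```python
# Seismic audification sketch: a trace sampled at 100 Hz is written
# at an audio rate, so sub-audible ground motion (roughly 0.1-10 Hz)
# is shifted into the hearing range.
import numpy as np
from scipy.io import wavfile

seis_rate = 100                         # seismogram samples per second

# Synthetic stand-in for a recorded trace: a decaying 2 Hz oscillation.
t = np.arange(0, 600, 1 / seis_rate)    # ten minutes of ground motion
trace = np.exp(-t / 120) * np.sin(2 * np.pi * 2 * t)

audio_rate = seis_rate * 441            # compression factor of 441
scaled = trace / np.max(np.abs(trace))
wavfile.write("quake.wav", audio_rate, (scaled * 32767).astype(np.int16))
# The 2 Hz oscillation is heard at 882 Hz; ten minutes of data play
# back in about 1.4 seconds.
```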
NASA has used audification to represent radio and plasma wave [12] measurements. [13]
Both sonification and audification are representational techniques in which data sets, or selected features of them, are mapped into audio signals. [14] Audification is a kind of sonification, the broader term encompassing all techniques for representing data in non-speech audio. The relationship is reflected in the naming convention: sonifications in which data values directly define the audio signal are called audifications. [15]
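The distinction can be made concrete with a small example. In the sketch below (entirely illustrative), the audification writes the data values straight out as audio samples, while the parameter-mapping sonification uses each value only to set the pitch of a synthesized tone:

```python
# Contrast sketch: the same data rendered two ways.
import numpy as np
from scipy.io import wavfile

rate = 44100
data = np.random.randn(44100)          # stand-in for a measured series

# Audification: the data values directly define the audio signal.
direct = data / np.max(np.abs(data))
wavfile.write("audification.wav", rate, (direct * 32767).astype(np.int16))

# Parameter-mapping sonification: each sampled value controls only
# the pitch of a short synthesized tone.
tones = []
for v in data[::4410]:                 # one tone per chunk of data
    freq = 440 * 2 ** v                # map value to pitch, in octaves
    t = np.arange(rate // 10) / rate   # 0.1-second tone
    tones.append(np.sin(2 * np.pi * freq * t))
mapped = np.concatenate(tones)
wavfile.write("sonification.wav", rate, (mapped * 32767).astype(np.int16))
```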
Seismology is the scientific study of earthquakes and the propagation of elastic waves through the Earth or other planetary bodies. It also includes studies of earthquake environmental effects such as tsunamis as well as diverse seismic sources such as volcanic, tectonic, glacial, fluvial, oceanic, atmospheric, and artificial processes such as explosions. A related field that uses geology to infer information regarding past earthquakes is paleoseismology. A recording of Earth motion as a function of time is called a seismogram. A seismologist is a scientist who does research in seismology.
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying, and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry data, and scientific measurements. Signal processing techniques are used to optimize transmission, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.
An evoked potential or evoked response is an electrical potential in a specific pattern recorded from a specific part of the nervous system, especially the brain, of a human or other animal following the presentation of a stimulus such as a light flash or a pure tone. Different types of potentials result from stimuli of different modalities and types. Evoked potentials are distinct from spontaneous potentials as detected by electroencephalography (EEG), electromyography (EMG), or other electrophysiologic recording methods. Such potentials are useful for electrodiagnosis and monitoring, including the detection of disease and drug-related sensory dysfunction and the intraoperative monitoring of sensory pathway integrity.
A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays.
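A spectrogram is conventionally computed with a short-time Fourier transform. The following minimal sketch uses SciPy's spectrogram routine on a synthetic rising tone; the window length and scaling are arbitrary choices, not fixed conventions:

```python
# Spectrogram sketch: short-time Fourier transform of a chirp,
# plotted as frequency versus time with intensity as color.
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

rate = 8000
t = np.arange(0, 2, 1 / rate)
x = signal.chirp(t, f0=100, f1=3000, t1=2)     # tone rising in pitch

f, times, Sxx = signal.spectrogram(x, fs=rate, nperseg=256)
plt.pcolormesh(times, f, 10 * np.log10(Sxx + 1e-12))  # decibel scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```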
Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage. The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.
Audio analysis refers to the extraction of information and meaning from audio signals for analysis, classification, storage, retrieval, synthesis, and other purposes. The observation media and interpretation methods vary: audio analysis can refer to the human ear and how people interpret the audible sound source, or to the use of technology such as an audio analyzer to evaluate qualities of a sound source such as amplitude, distortion, and frequency response. Once an audio source's information has been observed, it can be processed for logical, emotional, descriptive, or otherwise relevant interpretation by the user.
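On the technological side, simple descriptors can be extracted directly from the signal. A sketch computing two of the qualities mentioned above, amplitude (as RMS level) and dominant frequency (via the FFT), on a synthetic tone:

```python
# Audio-analysis sketch: extract RMS amplitude and the dominant
# frequency of a signal using the FFT.
import numpy as np

rate = 44100
t = np.arange(rate) / rate
x = 0.5 * np.sin(2 * np.pi * 440 * t)           # stand-in recording

rms = np.sqrt(np.mean(x ** 2))                  # amplitude descriptor
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / rate)
dominant = freqs[np.argmax(spectrum)]           # frequency descriptor

print(f"RMS: {rms:.3f}, dominant frequency: {dominant:.1f} Hz")
```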
Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.
Magnetotellurics (MT) is an electromagnetic geophysical method for inferring the earth's subsurface electrical conductivity from measurements of natural geomagnetic and geoelectric field variation at the Earth's surface.
Hearing range describes the range of frequencies that can be heard by humans or other animals, though it can also refer to the range of levels. The human range is commonly given as 20 to 20,000 Hz, although there is considerable variation between individuals, especially at high frequencies, and a gradual loss of sensitivity to higher frequencies with age is considered normal. Sensitivity also varies with frequency, as shown by equal-loudness contours. Routine investigation for hearing loss usually involves an audiogram, which shows threshold levels relative to normal hearing.
The baudline time-frequency browser is a signal analysis tool designed for scientific visualization. It runs on several Unix-like operating systems under the X Window System. Baudline is useful for real-time spectral monitoring, collected signals analysis, generating test signals, making distortion measurements, and playing back audio files.
Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, talks about these systems — "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
Auditory display is the use of sound to communicate information from a computer to the user. The primary forum for exploring these techniques is the International Community for Auditory Display (ICAD), which was founded by Gregory Kramer in 1992 as a forum for research in the field.
The receiver function technique is a way to image the structure of the Earth and its internal boundaries by using the information from teleseismic earthquakes recorded at a three-component seismograph.
Electroencephalography (EEG) is a method to record an electrogram of the spontaneous electrical activity of the brain. The biosignals detected by EEG have been shown to represent the postsynaptic potentials of pyramidal neurons in the neocortex and allocortex. It is typically non-invasive, with the EEG electrodes placed along the scalp using the International 10–20 system, or variations of it. Electrocorticography, involving surgical placement of electrodes, is sometimes called "intracranial EEG". Clinical interpretation of EEG recordings is most often performed by visual inspection of the tracing or quantitative EEG analysis.
David Worrall is an Australian composer and sound artist working in a range of genres, including data sonification, sound sculpture, and immersive polymedia, as well as traditional instrumental music composition.
Psychoacoustics is the branch of psychophysics involving the scientific study of sound perception and audiology, that is, how the human auditory system perceives various sounds. More specifically, it is the branch of science studying the psychological responses associated with sound. Psychoacoustics is an interdisciplinary field spanning psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.
Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. It sits at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, then in sonic interaction design sound mediates interaction either as a display of processes or as an input medium.
The International Community for Auditory Display (ICAD), founded in 1992, provides an annual conference for research in auditory display, the use of sound to display information. Research and implementation of sonification, audification, earcons and speech synthesis are central interests of the ICAD. ICAD is home to auditory display researchers, who come from different disciplines, through its conference and peer-reviewed proceedings. Auditory display researchers have various backgrounds in science, arts, and humanities, like computer science, cognitive science, human factors, systematic musicology and soundscape design. Most of the proceedings are freely available through the Georgia Tech SMARTech repository.
Data sonification is the presentation of data as sound using sonification. It is the auditory equivalent of the more established practice of data visualization.
Gregory Paul Kramer is an American composer, researcher, inventor, meditation teacher, and author. In 1975 he co-founded Electronic Musicmobile, a pioneering synthesizer ensemble later renamed Electronic Art Ensemble, in which Kramer was a musician and the principal composer. His pioneering work extended to developing synthesizers and related equipment. Kramer also co-founded the not-for-profit arts organization Harvestworks in New York City. He is recognized as the founding figure of the intensely cross-disciplinary field of data sonification. Since 1980, Kramer has taught Buddhist meditation. He is credited as co-founder of Insight Dialogue, an interpersonal meditation practice. Kramer is the author of several books in diverse fields, as well as (co-)author of scientific papers in the field of data sonification.