Data sonification

Data sonification is the presentation of data as sound by means of sonification techniques. It is the auditory equivalent of the more established practice of data visualization.

The usual process for data sonification is to route a digital representation of a dataset through a software synthesizer and then through a digital-to-analog converter, producing sound for humans to experience. [1]
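
In practice, such a pipeline can be sketched in a few lines of code. The following sketch is illustrative only and is not taken from the cited sources; it assumes NumPy is available, uses Python's standard wave module in place of a real-time synthesizer and converter, and all function names and parameter values are chosen for the example. Each data value is mapped to the pitch of a short sine tone, and the resulting samples are written to a WAV file for playback.

  # Illustrative parameter-mapping sonification: data values drive the pitch
  # of a simple software "synthesizer" (a sine oscillator); the samples are
  # written as 16-bit PCM that a sound card's digital-to-analog converter
  # can play back. Requires NumPy. Names and defaults are illustrative.
  import wave
  import numpy as np

  def sonify(values, out_path="sonification.wav", sample_rate=44100,
             note_seconds=0.25, low_hz=220.0, high_hz=880.0):
      """Map each data value to a pitch between low_hz and high_hz and
      render one short sine tone per value."""
      values = np.asarray(values, dtype=float)
      span = values.max() - values.min()
      # Normalize the data to 0..1 so it can drive the pitch mapping.
      norm = (values - values.min()) / span if span else np.zeros_like(values)
      freqs = low_hz + norm * (high_hz - low_hz)

      t = np.linspace(0.0, note_seconds, int(sample_rate * note_seconds),
                      endpoint=False)
      signal = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

      pcm = (signal * 0.8 * 32767).astype(np.int16)  # 16-bit mono PCM
      with wave.open(out_path, "wb") as wav:
          wav.setnchannels(1)
          wav.setsampwidth(2)
          wav.setframerate(sample_rate)
          wav.writeframes(pcm.tobytes())

  # Example: a rising dataset is heard as a rising sequence of tones.
  sonify([1, 2, 3, 5, 8, 13, 21])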

Applications of data sonification include astronomy studies of star creation, [2] interpreting cluster analysis, [3] and geoscience. [4]

Various projects describe the production of sonifications as a collaboration between scientists and musicians. [5]

A target demographic for data sonification is the blind community, for whom data visualizations are often inaccessible. [6]

Related Research Articles

<span class="mw-page-title-main">Acoustics</span> Branch of physics involving mechanical waves

Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids, including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society, with the most obvious being the audio and noise control industries.

<span class="mw-page-title-main">Bird vocalization</span> Sounds birds use to communicate

Bird vocalization includes both bird calls and bird songs. In non-technical use, bird songs are the bird sounds that are melodious to the human ear. In ornithology and birding, songs are distinguished by function from calls.

<span class="mw-page-title-main">Auditory system</span> Sensory system used for hearing

The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs and the auditory parts of the sensory system.

<span class="mw-page-title-main">Sonification</span>

Sonification is the use of non-speech audio to convey information or perceptualize data. Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.

<span class="mw-page-title-main">Auditory cortex</span> Part of the temporal lobe of the brain

The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as possible relations to language switching. It is located bilaterally, roughly at the upper sides of the temporal lobes – in humans, curving down and onto the medial surface, on the superior temporal plane, within the lateral sulcus and comprising parts of the transverse temporal gyri, and the superior temporal gyrus, including the planum polare and planum temporale.

<span class="mw-page-title-main">Language processing in the brain</span> How humans use words to communicate

In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.

Sensory substitution is a change of the characteristics of one sensory modality into stimuli of another sensory modality.

Music psychology, or the psychology of music, may be regarded as a branch of both psychology and musicology. It aims to explain and understand musical behaviour and experience, including the processes through which music is perceived, created, responded to, and incorporated into everyday life. Modern music psychology is primarily empirical; its knowledge tends to advance on the basis of interpretations of data collected by systematic observation of and interaction with human participants. Music psychology is a field of research with practical relevance for many areas, including music performance, composition, education, criticism, and therapy, as well as investigations of human attitude, skill, performance, intelligence, creativity, and social behavior.

<span class="mw-page-title-main">Immersion (virtual reality)</span> Perception of being physically present in a non-physical world

Immersion into virtual reality (VR) is a perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system in images, sound or other stimuli that provide an engrossing total environment.

Auditory display is the use of sound to communicate information from a computer to the user. The primary forum for exploring these techniques is the International Community for Auditory Display (ICAD), which was founded by Gregory Kramer in 1992 as a forum for research in the field.

<span class="mw-page-title-main">Information</span> Facts provided or learned about something or someone

Information is an abstract concept that refers to that which has the power to inform. At the most fundamental level, information pertains to the interpretation of that which may be sensed, or their abstractions. Any natural process that is not completely random and any observable pattern in any medium can be said to convey some amount of information. Whereas digital signals and other data use discrete signs to convey information, other phenomena and artifacts such as analogue signals, poems, pictures, music or other sounds, and currents convey information in a more continuous form. Information is not knowledge itself, but the meaning that may be derived from a representation through interpretation.

<span class="mw-page-title-main">Hearing</span> Sensory perception of sound by living organisms

Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as an ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. The academic field concerned with hearing is auditory science.

The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.

Psychoacoustics is the branch of psychophysics involving the scientific study of sound perception and audiology—how the human auditory system perceives various sounds. More specifically, it is the branch of science studying the psychological responses associated with sound. Psychoacoustics is an interdisciplinary field including psychology, acoustics, electronic engineering, physics, biology, physiology, and computer science.

Sonic interaction design is the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts. Sonic interaction design is at the intersection of interaction design and sound and music computing. If interaction design is about designing objects people interact with, and such interactions are facilitated by computational means, in sonic interaction design, sound is mediating interaction either as a display of processes or as an input medium.

Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches.

Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.

Audification is an auditory display technique for representing a sequence of data values as sound. It has been described as a "direct translation of a data waveform to the audible domain." Audification interprets a data sequence, usually a time series, as an audio waveform in which input data are mapped to sound pressure levels. Various signal processing techniques are used to assess data features. The technique allows the listener to hear periodic components as frequencies. Audification typically requires large data sets with periodic components.
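
A rough illustration of this direct mapping is sketched below. It is not drawn from the cited literature; it assumes NumPy, writes a mono WAV file with Python's standard wave module, and the function name, defaults, and example series are all hypothetical. The data series itself, after normalization, becomes the audio waveform.

  # Illustrative audification: the data sequence is treated as the audio
  # waveform, each value becoming one sample amplitude (sound pressure level).
  # Requires NumPy. Names and defaults are illustrative.
  import wave
  import numpy as np

  def audify(series, out_path="audification.wav", sample_rate=44100, repeat=1):
      """Write a data series directly as 16-bit mono audio samples; `repeat`
      loops the series so short datasets last long enough to hear."""
      x = np.asarray(series, dtype=float)
      x = x - x.mean()          # remove any constant offset
      peak = np.abs(x).max()
      if peak:
          x = x / peak          # scale the largest excursion to full amplitude
      pcm = np.tile((x * 0.8 * 32767).astype(np.int16), repeat)
      with wave.open(out_path, "wb") as wav:
          wav.setnchannels(1)
          wav.setsampwidth(2)
          wav.setframerate(sample_rate)
          wav.writeframes(pcm.tobytes())

  # Example: a 220 Hz periodic component buried in noise becomes an audible tone.
  t = np.arange(44100) / 44100.0
  audify(np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(t.size))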

Auditory feedback (AF) is an aid used by humans to control speech production and singing by helping the individual verify whether the current production of speech or singing is in accordance with their acoustic-auditory intention. This process is possible through what is known as the auditory feedback loop, a three-part cycle that allows individuals to first speak, then listen to what they have said, and lastly, correct it when necessary. From the viewpoint of movement sciences and neurosciences, the acoustic-auditory speech signal can be interpreted as the result of movements of speech articulators. Auditory feedback can hence be inferred as a feedback mechanism controlling skilled actions in the same way that visual feedback controls limb movements.

<span class="mw-page-title-main">Data science</span> Interdisciplinary field of study on deriving knowledge and insights from data

Data science is an interdisciplinary academic field that uses statistics, scientific computing, scientific methods, processes, algorithms and systems to extract or extrapolate knowledge and insights from potentially noisy, structured, or unstructured data.

References

  1. Kaper, H. G.; Wiebel, E.; Tipei, S. (1999). "Data sonification and sound visualization". Computing in Science & Engineering. 1 (4): 48–58. arXiv:cs/0007007. Bibcode:1999CSE.....1d..48K. doi:10.1109/5992.774840. S2CID 8087002.
  2. Guglielmi, Giorgia (21 July 2017). "Meet the scientist who turns data into music—and listen to the sound of a neutron star". Science.
  3. Hermann, T.; Ritter, H. (1999). "Listen to your Data: Model-Based Sonification for Data Analysis". Advances in Intelligent Computation and Multimedia Systems. International Institute for Advanced Studies in Systems Research and Cybernetics. ISBN 0-921836-80-5.
  4. Romans, Brian (11 April 2007). "Data Sonification". Wired.
  5. Beans, Carolyn (1 May 2017). "Science and Culture: Musicians join scientists to explore data through sound". Proceedings of the National Academy of Sciences. 114 (18): 4563–4565. Bibcode:2017PNAS..114.4563B. doi:10.1073/pnas.1705325114. PMC 5422826. PMID 28461386.
  6. Zhao, Haixia; Plaisant, Catherine; Shneiderman, Ben; Lazar, Jonathan (1 May 2008). "Data Sonification for Users with Visual Impairment". ACM Transactions on Computer-Human Interaction. 15 (1): 1–28. doi:10.1145/1352782.1352786. S2CID 17199537.