Sonification

[Video: air pollution data from Beijing conveyed as a piece of music]

Sonification is the use of non-speech audio to convey information or perceptualize data. [1] Auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution that open possibilities as an alternative or complement to visualization techniques.


For example, the rate of clicking of a Geiger counter conveys the level of radiation in the immediate vicinity of the device.
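The click-rate principle can be sketched in a few lines: detections arrive as a Poisson process, so the listener hears the radiation level as click density. The rates, duration, and function names below are arbitrary illustration values, not part of any real instrument's interface.

```python
import random

def click_times(rate_hz, duration_s, seed=0):
    """Simulate Geiger-style clicks as a Poisson process:
    independent exponential gaps with mean 1/rate between events."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            break
        times.append(t)
    return times

# A higher radiation level maps to a higher click rate, heard as denser clicking.
low = click_times(rate_hz=2.0, duration_s=5.0)    # quiet background
high = click_times(rate_hz=50.0, duration_s=5.0)  # strong source
print(len(low), len(high))
```

Feeding these event times to any audio back end as short noise bursts reproduces the counter's behaviour: the level is perceived as click density, with no display to watch.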

Though many experiments with data sonification have been explored in forums such as the International Community for Auditory Display (ICAD), sonification faces many challenges to widespread use for presenting and analyzing data. For example, studies show it is difficult, but essential, to provide adequate context for interpreting sonifications of data. [1] [2] Many sonification attempts are coded from scratch due to the lack of flexible tooling for sonification research and data exploration. [3]

History

The Geiger counter, invented in 1908, is one of the earliest and most successful applications of sonification. A Geiger counter has a tube of low-pressure gas; each particle detected produces a pulse of current when it ionizes the gas, producing an audio click. The original version was only capable of detecting alpha particles. In 1928, Geiger and Walther Müller (a PhD student of Geiger) improved the counter so that it could detect more types of ionizing radiation.

In 1913, Dr. Edmund Fournier d'Albe of the University of Birmingham invented the optophone, which used selenium photosensors to detect black print and convert it into an audible output. [4] A blind reader could hold a book up to the device and hold an apparatus to the area she wanted to read. The optophone played a set group of notes: g c' d' e' g' b' c e. Each note corresponded with a position on the optophone's reading area, and a note was silenced when black ink was sensed at its position. Thus, the missing notes indicated the positions of black ink on the page, and the pattern of silences could be read as text.

Pollack and Ficks published the first perceptual experiments on the transmission of information via auditory display in 1954. [5] They experimented with combining sound dimensions such as timing, frequency, loudness, duration, and spatialization and found that they could get subjects to register changes in multiple dimensions at once. These experiments did not get into much more detail than that, since each dimension had only two possible values.

In 1970, Nonesuch Records released a new electronic music composition by the American composer Charles Dodge, "The Earth's Magnetic Field." It was produced at the Columbia-Princeton Electronic Music Center. As the title suggests, the composition's electronic sounds were synthesized from data from the earth's magnetic field. As such, it may well be the first sonification of scientific data for artistic, rather than scientific, purposes. [6]

John M. Chambers, Max Mathews, and F. R. Moore at Bell Laboratories did the earliest work on auditory graphing in their "Auditory Data Inspection" technical memorandum in 1974. [7] They augmented a scatterplot with sounds that varied in frequency, spectral content, and amplitude modulation to aid classification. They did not formally assess the effectiveness of these experiments. [8]

In 1976, philosopher of technology, Don Ihde, wrote, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." [9] This appears to be one of the earliest references to sonification as a creative practice.

In early 1982, Sara Bly of the University of California, Davis, released two publications, with examples, of her work on the use of computer-generated sound to present data. At the time, the field of scientific visualization was gaining momentum. Among other things, her studies and the accompanying examples compared the properties of visual and aural presentation, demonstrating that "sound offers an enhancement and an alternative to graphic tools." Her work provides early experiment-based data on matching data representations to the type and purpose of the data. [10] [11]

Also in the 1980s, pulse oximeters came into widespread use. Pulse oximeters can sonify oxygen concentration of blood by emitting higher pitches for higher concentrations. However, in practice this particular feature of pulse oximeters may not be widely utilized by medical professionals because of the risk of too many audio stimuli in medical environments. [12]

In 1990, the National Center for Supercomputing Applications began generating scientific data sonifications and visualizations from the same source data and a paper describing this work was presented at the June 1991 SPIE Conference on Extracting Meaning from Complex Data. [13] Included in the supporting information for the paper was a video, winner of the 1991 Nicograph Multimedia Grand Prize, comprising several data visualizations paired with their corresponding data sonifications.

In 1992, the International Community for Auditory Display (ICAD) was founded by Gregory Kramer as a forum for research on auditory display which includes data sonification. ICAD has since become a home for researchers from many different disciplines interested in the use of sound to convey information through its conference and peer-reviewed proceedings. [14]

In May 2022, NASA reported the sonification (converting astronomical data associated with pressure waves into sound) of the black hole at the center of the Perseus galaxy cluster. [15] [16]

In 2024, Adhyâropa Records released The Volcano Listening Project by Leif Karlstrom, which merges geophysics research and computer music synthesis with acoustic instrumental and vocal performances by Billy Contreras, Todd Sickafoose, and other acoustic musicians. [17]

Sonification techniques

Many different components can be altered to change the user's perception of the sound, and in turn, their perception of the underlying information being portrayed. Often, an increase or decrease in some level of this information is indicated by an increase or decrease in pitch, amplitude or tempo, but it could also be indicated by varying other, less commonly used components. For example, a stock market price could be portrayed by rising pitch as the stock price rose and falling pitch as it fell. To let the user distinguish more than one stock, different timbres or brightnesses might be used for the different stocks, or they might be played from different points in space, for example through different sides of the listener's headphones.
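The stock example above is a form of parameter mapping, and can be sketched directly. The price series, frequency range, panning scheme, and function names below are invented for illustration; a real system would drive a synthesizer with these values rather than print them.

```python
def price_to_freq(price, p_min, p_max, f_min=220.0, f_max=880.0):
    """Linearly map a price into an audible frequency range (Hz)."""
    if p_max == p_min:                # flat series: park it mid-range
        return (f_min + f_max) / 2.0
    span = (price - p_min) / (p_max - p_min)
    return f_min + span * (f_max - f_min)

def sonify_stocks(series_by_name):
    """Return (name, pan, frequencies) triples; pan -1.0 = left, 1.0 = right.
    Each stock gets its own stereo position so listeners can track it."""
    events = []
    for i, (name, prices) in enumerate(sorted(series_by_name.items())):
        pan = -1.0 if i % 2 == 0 else 1.0   # alternate headphone sides
        lo, hi = min(prices), max(prices)
        freqs = [price_to_freq(p, lo, hi) for p in prices]
        events.append((name, pan, freqs))
    return events

events = sonify_stocks({"AAA": [10, 12, 11, 15], "BBB": [99, 98, 97, 100]})
for name, pan, freqs in events:
    print(name, pan, [round(f) for f in freqs])
```

A rising price produces a rising pitch contour; swapping the linear map for a logarithmic one, or pitch for tempo or brightness, changes only `price_to_freq`, which is what makes parameter mapping flexible.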

Many studies have been undertaken to find the best techniques for presenting various types of information, and as yet no conclusive set of techniques has been formulated. As sonification is still considered to be in its infancy, current studies work toward determining the best set of sound components to vary in different situations.

Several different techniques for auditory rendering of data can be distinguished, including audification (the direct translation of a data waveform into sound), parameter mapping (data values driving sound properties such as pitch, loudness, or tempo), and stream-based sonification.

An alternative approach to traditional sonification is "sonification by replacement", for example Pulsed Melodic Affective Processing (PMAP). [58] [59] [60] In PMAP, rather than sonifying a data stream, the computational protocol itself is musical data, for example MIDI. The data stream represents a non-musical state: in PMAP, an affective state. Calculations can then be performed directly on the musical data, and the results can be listened to with a minimum of translation.
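The "replacement" idea can be loosely illustrated as follows: the affective state is never held as plain numbers, it is encoded directly as MIDI-like note events, and can be read back off the music. The scale, tempo, and note-rate mappings below are my own simplification for illustration, not Kirke and Miranda's published protocol.

```python
# Semitone offsets from a root note; scale choice carries valence.
MINOR = [0, 2, 3, 5, 7, 8, 10]   # negative valence ("sad")
MAJOR = [0, 2, 4, 5, 7, 9, 11]   # positive valence ("happy")

def encode(valence, arousal, root=60, length=8):
    """Encode an affective state (valence, arousal in [0, 1]) as
    (midi_note, onset_seconds) events. Valence picks the scale;
    arousal sets the note rate (2 to 8 notes per second)."""
    scale = MAJOR if valence >= 0.5 else MINOR
    rate = 2.0 + 6.0 * arousal
    return [(root + scale[i % len(scale)], i / rate) for i in range(length)]

def decode_arousal(notes):
    """Recover arousal directly from the musical data's note rate."""
    rate = 1.0 / (notes[1][1] - notes[0][1])
    return (rate - 2.0) / 6.0

melody = encode(valence=0.9, arousal=0.5)
print(round(decode_arousal(melody), 3))   # the state survives the round trip
```

Because the state lives in the note stream itself, "computing on the data" and "listening to the data" are the same representation, which is the point of sonification by replacement.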


References

  1. Kramer, Gregory, ed. (1994). Auditory Display: Sonification, Audification, and Auditory Interfaces. Santa Fe Institute Studies in the Sciences of Complexity. Vol. Proceedings Volume XVIII. Reading, MA: Addison-Wesley. ISBN 978-0-201-62603-2.
  2. Smith, Daniel R.; Walker, Bruce N. (2005). "Effects of Auditory Context Cues and Training on Performance of a Point Estimation Sonification Task". Journal of Applied Cognitive Psychology. 19 (8): 1065–1087. doi:10.1002/acp.1146.
  3. Flowers, J. H. (2005), "Thirteen years of reflection on auditory graphing: Promises, pitfalls, and potential new directions" (PDF), in Brazil, Eoin (ed.), Proceedings of the 11th International Conference on Auditory Display, pp. 406–409
  4. Fournier d'Albe, E. E. (May 1914), "On a Type-Reading Optophone", Proceedings of the Royal Society of London
  5. Pollack, I. & Ficks, L. (1954), "Information of elementary multidimensional auditory displays", Journal of the Acoustical Society of America, 26 (1): 136, Bibcode:1954ASAJ...26Q.136P, doi: 10.1121/1.1917759
  6. Dodge, C. (1970), The Earth's Magnetic Field, Nonesuch Records H-71250
  7. Chambers, J. M.; Mathews, M. V.; Moore, F. R (1974), Auditory Data Inspection (Technical Memorandum), AT&T Bell Laboratories, 74-1214-20
  8. Frysinger, S. P. (2005), "A brief history of auditory data representation to the 1980s" (PDF), in Brazil, Eoin (ed.), Proceedings of the 11th International Conference on Auditory Display, pp. 410–413
  9. Ihde, Don (2007-10-04). Listening and Voice: Phenomenologies of Sound, Second Edition. SUNY Press. p. xvi. ISBN   978-0-7914-7256-9.
  10. Bly, S. (1982), Sound and Computer Information Presentation, Ph.D. thesis, University of California, Davis, pp. 1–127, doi: 10.2172/5221536
  11. Bly, S., "Presenting information in sound", Proceedings of the 1982 conference on Human factors in computing systems - CHI '82, pp. 371–375, doi: 10.1145/800049.801814
  12. Craven, R M; McIndoe, A K (1999), "Continuous auditory monitoring—how much information do we register?" (PDF), British Journal of Anaesthesia, 83 (5): 747–749, doi: 10.1093/bja/83.5.747 , PMID 10690137
  13. Scaletti, C; Craig, A B (1991), "Using sound to extract meaning from complex data", Proc. SPIE 1459, Extracting Meaning from Complex Data: Processing, Display, Interaction II, 1459, doi:10.1117/12.44397
  14. Kramer, G.; Walker, B.N. (2005), "Sound science: Marking ten international conferences on auditory display", ACM Transactions on Applied Perception, 2 (4): 383–388, CiteSeerX   10.1.1.88.7945 , doi:10.1145/1101530.1101531, S2CID   1187647
  15. Watzke, Megan; Porter, Molly; Mohon, Lee (4 May 2022). "New NASA Black Hole Sonifications with a Remix". NASA . Retrieved 11 May 2022.
  16. Overbye, Dennis (7 May 2022). "Hear the Weird Sounds of a Black Hole Singing - As part of an effort to "sonify" the cosmos, researchers have converted the pressure waves from a black hole into an audible … something". The New York Times . Retrieved 11 May 2022.
  17. "The Volcano Listening Project". volcanolisteningproject.org. Retrieved 16 September 2024.
  18. Quincke, G. (1897). "Ein akustisches Thermometer für hohe und niedrige Temperaturen". Annalen der Physik. 299 (13): 66–71. Bibcode:1897AnP...299...66Q. doi:10.1002/andp.18972991311. ISSN   0003-3804.
  19. Martin, Edward J.; Meagher, Thomas R.; Barker, Daniel. "Representing biodiversity decline data by manipulating familiar audio files to create emotional responses: A novel sonification method of soundwave-level deletion". Biological Conservation. 300: 110852. doi: 10.1016/j.biocon.2024.110852 .
  20. Ismailogullari, Abdullah; Ziemer, Tim (2019). "Soundscape clock: Soundscape compositions that display the time of day". International Conference on Auditory Display. Vol. 25. pp. 91–95. doi: 10.21785/icad2019.034 . hdl:1853/61510. ISBN   978-0-9670904-6-7.
  21. Yang, Jiajun; Hermann, Thomas (June 20–23, 2017). Parallel Computing of Particle Trajectory Sonification to Enable Real-Time Interactivity (PDF). The 23rd International Conference on Auditory Display.
  22. Mannone, Maria (2018). "Knots, Music and DNA". Journal of Creative Music Systems. 2 (2). arXiv: 2003.10884 . doi:10.5920/jcms.2018.02. S2CID   64956325.
  23. "PriceSquawk". pricesquawk.com. 15 January 2014.
  24. "Justin Joque". justinjoque.com. Retrieved 2019-05-21.
  25. LIGO Gravitational Wave Chirp, 11 February 2016, archived from the original on 2021-12-22, retrieved 2021-09-15
  26. Banf, Michael; Blanz, Volker (2013). "Sonification of images for the visually impaired using a multi-level approach". Proceedings of the 4th Augmented Human International Conference. New York, New York, USA: ACM Press. pp. 162–169. doi:10.1145/2459236.2459264. ISBN   978-1-4503-1904-1. S2CID   7505236.
  27. Banf, Michael; Mikalay, Ruben; Watzke, Baris; Blanz, Volker (June 2016). "PictureSensation – a mobile application to help the blind explore the visual world through touch and sound". Journal of Rehabilitation and Assistive Technologies Engineering. 3: 205566831667458. doi:10.1177/2055668316674582. ISSN   2055-6683. PMC   6453065 . PMID   31186914.
  28. Hunt, A.; Hermann, T.; Pauletto, S. (2004). "Interacting with sonification systems: closing the loop". Proceedings. Eighth International Conference on Information Visualisation, 2004. IV 2004. pp. 879–884. doi:10.1109/IV.2004.1320244. ISBN   0-7695-2177-0. S2CID   9492137.
  29. Thomas Hermann, and Andy Hunt. The Importance of Interaction in Sonification. Proceedings of ICAD Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9, 2004. Available: online
  30. Sandra Pauletto and Andy Hunt. A Toolkit for Interactive Sonification. Proceedings of ICAD Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6–9, 2004. Available: online.
  31. Kather, Jakob Nikolas; Hermann, Thomas; Bukschat, Yannick; Kramer, Tilmann; Schad, Lothar R.; Zöllner, Frank Gerrit (2017). "Polyphonic sonification of electrocardiography signals for diagnosis of cardiac pathologies". Scientific Reports. 7: Article-number 44549. Bibcode:2017NatSR...744549K. doi:10.1038/srep44549. PMC   5357951 . PMID   28317848.
  32. Edworthy, Judy (2013). "Medical audible alarms: a review". J Am Med Inform Assoc. 20 (3): 584–589. doi:10.1136/amiajnl-2012-001061. PMC   3628049 . PMID   23100127.
  33. Woerdeman, Peter A.; Willems, Peter W.A.; Noordsmans, Herke Jan; Berkelbach van der Sprenken, Jan Willem (2009). "Auditory feedback during frameless image-guided surgery in a phantom model and initial clinical experience". J Neurosurg. 110 (2): 257–262. doi:10.3171/2008.3.17431. PMID   18928352.
  34. Ziemer, Tim; Black, David (2017). "Psychoacoustically motivated sonification for surgeons". International Journal of Computer Assisted Radiology and Surgery. 12 ((Suppl 1):1): 265–266. arXiv: 1611.04138 . doi:10.1007/s11548-017-1588-3. PMID   28527024. S2CID   51971992.
  35. Ziemer, Tim; Black, David; Schultheis, Holger (2017). Psychoacoustic sonification design for navigation in surgical interventions. Proceedings of Meetings on Acoustics. Vol. 30. p. 050005. doi: 10.1121/2.0000557 .
  36. Ziemer, Tim; Black, David (2017). "Psychoacoustic sonification for tracked medical instrument guidance". The Journal of the Acoustical Society of America. 141 (5): 3694. Bibcode:2017ASAJ..141.3694Z. doi:10.1121/1.4988051.
  37. CURAT. "Games and Training for Minimally Invasive Surgery". CURAT Project. University of Bremen. Retrieved 15 July 2020.
  38. Nagel, F; Stöter, F R; Degara, N; Balke, S; Worrall, D (2014). "Fast and accurate guidance - response times to navigational sounds". International Conference on Auditory Display. hdl:1853/52058.
  39. Florez, L (1936). "True blind flight". J Aeronaut Sci. 3 (5): 168–170. doi:10.2514/8.176.
  40. Ziemer, Tim; Schultheis, Holger; Black, David; Kikinis, Ron (2018). "Psychoacoustical Interactive Sonification for Short-Range Navigation". Acta Acustica United with Acustica. 104 (6): 1075–1093. doi:10.3813/AAA.919273. S2CID 125466508.
  41. Ziemer, Tim; Schultheis, Holger (2018). "Psychoacoustic auditory display for navigation: an auditory assistance system for spatial orientation tasks". Journal on Multimodal User Interfaces. 2018 (Special Issue: Interactive Sonification): 205–218. doi:10.1007/s12193-018-0282-2. S2CID 53721138. Retrieved 24 January 2019.
  42. "Accessible Oceans: Exploring Ocean Data Through Sound" . Retrieved January 9, 2025.
  43. Scaletti C, Rickard MM, Hebel KJ, Pogorelov TV, Taylor SA, Gruebele M (Feb 2022). "Sonification-enhanced lattice model animations for teaching the protein folding reaction". Journal of Chemical Education. 99 (3): 1220–30. doi:10.1021/acs.jchemed.1c00857.
  44. Scaletti C, Samuel Russell PP, Hebel KJ, Rickard MM, Boob M, Danksagmüller F, Taylor SA, Pogorelov TV, Gruebele M (May 2024). "Hydrogen bonding heterogeneity correlates with protein folding transition state passage time as revealed by data sonification". Proceedings of the National Academy of Sciences of the United States of America. 121 (22): 1–8. doi:10.1073/pnas.2319094121. PMC   11145292 .
  45. Hinckfuss, Kelly; Sanderson, Penelope; Loeb, Robert G.; Liley, Helen G.; Liu, David (2016). "Novel Pulse Oximetry Sonifications for Neonatal Oxygen Saturation Monitoring". Human Factors. 58 (2): 344–359. doi:10.1177/0018720815617406. PMID   26715687. S2CID   23156157.
  46. Sanderson, Penelope M.; Watson, Marcus O.; Russell, John (2005). "Advanced Patient Monitoring Displays: Tools for Continuous Informing". Anesthesia & Analgesia. 101 (1): 161–168. doi: 10.1213/01.ANE.0000154080.67496.AE . PMID   15976225.
  47. Schwarz, Sebastian; Ziemer, Tim (2019). "A psychoacoustic sound design for pulse oximetry". International Conference on Auditory Display. Vol. 25. pp. 214–221. doi: 10.21785/icad2019.024 . hdl:1853/61504. ISBN   978-0-9670904-6-7.
  48. "SPDF - Sonification". jcms.org.uk/. 2005-11-13. Archived from the original on 2005-11-13. Retrieved 2021-09-15.
  49. Schuett, Jonathan H.; Winton, Riley J.; Batterman, Jared M.; Walker, Bruce N. (2014). "Auditory weather reports". Proceedings of the 9th Audio Mostly: A Conference on Interaction with Sound. AM '14. New York, NY, USA: ACM. pp. 17:1–17:7. doi:10.1145/2636879.2636898. ISBN   978-1-4503-3032-9. S2CID   5765787.
  50. Polli, Andrea (July 6–9, 2004). Atmospherics/Weather Works: A Multi-Channel Storm Sonification Project (PDF). ICAD 04-Tenth Meeting of the International Conference on Auditory Display. Archived from the original (PDF) on 2021-07-11.
  51. Silberman, S. (February 6, 2012). "Inside the Mind of a Synaesthete". PLOS ONE.
  52. Winkler, Helena; Schade, Eve Emely Sophie; Kruesilp, Jatawan; Ahmadi, Fida. "Tiltification – The Spirit Level Using Sound". Tiltification. University of Bremen. Retrieved 21 April 2021.
  53. Weidenfeld, J. September 28, 2013. "10 Cool Ways To Create Music With Technology". Listverse.
  54. Byrne, M. February 14, 2012. "With Images for Your Earholes, Sonified Wins Augmented Reality with Custom Synesthesia". Vice / Motherboard
  55. Barrass S. (2012) Digital Fabrication of Acoustic Sonifications, Journal of the Audio Engineering Society, September 2012. online
  56. Barrass, S. and Best, G. (2008). Stream-based Sonification Diagrams. Proceedings of the 14th International Conference on Auditory Display, IRCAM Paris, 24–27 June 2008. online
  57. Barrass S. (2009) Developing the Practice and Theory of Stream-based Sonification. Scan: Journal of Media Arts Culture , Macquarie University.
  58. Kirke, Alexis; Miranda, Eduardo (2014-05-06). "Pulsed Melodic Affective Processing: Musical structures for increasing transparency in emotional computation". Simulation. 90 (5): 606. doi:10.1177/0037549714531060. hdl: 10026.1/6621 . S2CID   15555997.
  59. "Towards Harmonic Extensions of Pulsed Melodic Affective Processing – Further Musical Structures for Increasing Transparency in Emotional Computation" (PDF). 2014-11-11. Retrieved 2017-06-05.
  60. "A Hybrid Computer Case Study for Unconventional Virtual Computing". 2015-06-01. Retrieved 2017-06-05.