Computational audiology

Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment. [1]

Overview

In contrast to traditional methods in audiology and hearing-science research, computational audiology emphasizes predictive modeling and large-scale ("big data") analytics rather than inferential statistics and small-cohort hypothesis testing. Its aim is to translate advances in hearing science, data science, information technology, and machine learning into clinical audiological care. Research to understand hearing function and auditory processing in humans and relevant animal species constitutes translatable work supporting this aim; research and development to implement more effective diagnostics and treatments constitutes translational work supporting it. [2]

For people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances may lead to more precise diagnoses, novel therapies, and advanced rehabilitation options, including smart prostheses and e-Health/mHealth apps. For care providers, the field can supply actionable knowledge and tools for automating parts of the clinical pathway. [3]

The field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, otology and vestibulology.

Applications

In computational audiology, models and algorithms are used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among other applications.

Computational models of hearing, speech and auditory perception

For decades, phenomenological and biophysical (computational) models have been developed to simulate characteristics of the human auditory system. Examples include models of the mechanical properties of the basilar membrane, [4] the electrically stimulated cochlea, [5] [6] middle ear mechanics, [7] bone conduction, [8] and the central auditory pathway. [9] Saremi et al. (2016) compared seven contemporary models, including parallel filterbanks, cascaded filterbanks, transmission lines, and biophysical models. [10] More recently, convolutional neural networks (CNNs) have been trained to replicate human auditory function [11] or complex cochlear mechanics with high accuracy. [12] Although inspired by the interconnectivity of biological neural networks, the architecture of CNNs is distinct from the organization of the natural auditory system.
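As an illustration of the filterbank family of phenomenological models, the sketch below implements a minimal parallel gammatone filterbank in Python with NumPy. The ERB bandwidths follow Glasberg and Moore's formula; the centre frequencies, filter order, and probe tone are arbitrary choices for demonstration, not parameters of any of the cited models.

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990), in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, dur=0.025, order=4):
    """Impulse response of a gammatone filter centred at fc, unit energy."""
    t = np.arange(int(dur * fs)) / fs
    g = (t ** (order - 1)
         * np.exp(-2 * np.pi * 1.019 * erb(fc) * t)
         * np.cos(2 * np.pi * fc * t))
    return g / np.sqrt(np.sum(g ** 2))

def filterbank_response(signal, fs, centre_freqs):
    """Output energy of `signal` in each channel of a parallel filterbank."""
    return np.array([np.sum(np.convolve(signal, gammatone_ir(fc, fs)) ** 2)
                     for fc in centre_freqs])

fs = 16000
t = np.arange(fs // 10) / fs
tone = np.sin(2 * np.pi * 1000 * t)           # 1 kHz probe tone, 100 ms
cfs = np.array([250, 500, 1000, 2000, 4000])  # channel centre frequencies, Hz
energies = filterbank_response(tone, fs, cfs)
print(cfs[np.argmax(energies)])
```

Passing the 1 kHz tone through the bank yields the largest output energy in the channel centred at 1 kHz, a toy analogue of the tonotopic place coding performed along the basilar membrane.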

e-Health / mHealth (connected hearing healthcare, wireless- and internet-based services)

Online pure-tone threshold audiometry (or screening) tests, objective physiological measures such as distortion-product otoacoustic emissions (DPOAEs), and speech-in-noise screening tests are becoming increasingly available as tools to promote awareness, enable accurate early identification of hearing loss across ages, monitor the effects of ototoxicity and/or noise, guide ear and hearing care decisions, and support clinicians. [13] [14] Smartphone-based tests have been proposed to detect middle ear fluid using acoustic reflectometry and machine learning. [15] Smartphone attachments have also been designed to perform tympanometry for acoustic evaluation of the eardrum and middle ear. [16] [17] Low-cost earphones attached to smartphones have also been prototyped to detect the faint otoacoustic emissions generated by the cochlea and perform neonatal hearing screening. [18] [19]
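Automated pure-tone screening apps typically implement an adaptive staircase. Below is a minimal sketch of the modified Hughson-Westlake ("down 10, up 5") procedure with a deterministic simulated listener; the starting level, trial cap, and stopping rule are illustrative assumptions rather than the procedure of any specific cited app.

```python
def hughson_westlake(respond, start_db=40, floor=-10, ceiling=100, max_trials=50):
    """Modified Hughson-Westlake staircase: drop 10 dB after a response,
    rise 5 dB after a miss; threshold = level heard on two ascending runs."""
    level = start_db
    heard_last = True
    ascending_hits = {}
    for _ in range(max_trials):
        heard = respond(level)
        if heard:
            if not heard_last:  # a response on an ascending run
                ascending_hits[level] = ascending_hits.get(level, 0) + 1
                if ascending_hits[level] >= 2:
                    return level
            level = max(floor, level - 10)
        else:
            level = min(ceiling, level + 5)
        heard_last = heard
    return level  # fallback if no stable threshold was found

# Simulated deterministic listener with a true threshold of 35 dB HL.
est = hughson_westlake(lambda db: db >= 35)
print(est)  # → 35
```

With the 5 dB ascending step, the estimate lands on the smallest probed level at or above the simulated listener's true threshold.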

Big data and AI in audiology and hearing healthcare

Collecting large numbers of audiograms, e.g. from databases of the National Institute for Occupational Safety and Health (NIOSH) [20] or the National Health and Nutrition Examination Survey (NHANES), provides researchers with opportunities to find patterns of hearing status in the population [21] [22] or to train AI systems that can classify audiograms. [23] Machine learning can be used to model relationships between multiple factors, e.g. to predict depression from self-reported hearing loss [24] or to relate genetic profiles to self-reported hearing loss. [25] Hearing aids and wearables make it possible to monitor the soundscape of the user or to log usage patterns, data which can be used to automatically recommend settings that are expected to benefit the user. [26]
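As a toy illustration of audiogram classification, the sketch below generates synthetic flat and high-frequency-sloping audiograms and assigns a new audiogram to the nearest class centroid. The configurations, noise level, and nearest-centroid rule are illustrative assumptions, far simpler than the deep-learning systems cited above.

```python
import numpy as np

FREQS = [250, 500, 1000, 2000, 4000, 8000]  # standard audiometric frequencies, Hz
rng = np.random.default_rng(0)

def synth(shape, n):
    """Synthetic audiograms (thresholds in dB HL) of a given configuration."""
    base = {"flat": np.full(6, 40.0),
            "sloping": np.array([15, 15, 20, 40, 60, 75.0])}[shape]
    return base + rng.normal(0, 5, size=(n, 6))

X = np.vstack([synth("flat", 50), synth("sloping", 50)])
y = np.array([0] * 50 + [1] * 50)  # 0 = flat, 1 = high-frequency sloping

# Nearest-centroid classifier: assign an audiogram to the closest class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(audiogram):
    return int(np.argmin(np.linalg.norm(centroids - audiogram, axis=1)))

print(classify(np.array([45, 40, 42, 38, 41, 44])))  # flat-shaped → 0
print(classify(np.array([10, 15, 25, 45, 65, 80])))  # sloping → 1
```

Real systems replace the synthetic data with clinical audiogram databases and the centroid rule with trained models, but the pipeline shape (feature vector per audiogram, learned decision rule) is the same.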

Computational approaches to improving hearing devices and auditory implants

Computational methods to improve rehabilitation with auditory implants include improving music perception, [27] models of the electrode–neuron interface, [28] and an AI-based cochlear implant fitting assistant. [29]

Data-based investigations into hearing loss and tinnitus

Online surveys processed with machine-learning-based classification have been used to diagnose somatosensory tinnitus. [30] Automated natural language processing (NLP) techniques, including unsupervised and supervised machine learning, have been used to analyze social media posts about tinnitus and to characterize the heterogeneity of its symptoms. [31] [32]
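In the unsupervised setting, posts can be grouped by lexical similarity alone. The sketch below is a bare-bones stand-in using bag-of-words vectors and cosine similarity on four invented posts; the cited studies use far richer NLP pipelines.

```python
import numpy as np
from collections import Counter

posts = [
    "ringing keeps me awake at night cannot sleep",
    "cannot sleep the ringing is worse at night",
    "loud concert left a high pitched ringing in my ears",
    "high pitched tone after the loud concert yesterday",
]

vocab = sorted({w for p in posts for w in p.split()})

def vec(text):
    """Unit-length bag-of-words vector over the corpus vocabulary."""
    counts = Counter(text.split())
    v = np.array([counts[w] for w in vocab], dtype=float)
    return v / np.linalg.norm(v)

V = np.stack([vec(p) for p in posts])
sim = V @ V.T  # cosine similarity between every pair of posts

# Group each post with whichever seed post (0: sleep complaint, 2: noise
# exposure) it resembles more -- a bare-bones stand-in for clustering.
labels = [int(sim[i, 2] > sim[i, 0]) for i in range(len(posts))]
print(labels)  # → [0, 0, 1, 1]
```

Even this crude similarity measure separates the sleep-related complaints from the noise-exposure reports, which is the kind of symptom heterogeneity the cited analyses explore at scale.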

Diagnostics for hearing problems, acoustics to facilitate hearing

Machine learning has been applied to audiometry to create flexible, efficient estimation tools that determine an individual's auditory profile without excessive testing time. [33] [34] Machine-learning-based versions of other auditory tests, such as those for detecting dead regions in the cochlea or estimating equal-loudness contours, have likewise been created. [35]
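Bayesian active-learning audiometry of the kind cited above can be sketched in a few lines: maintain a posterior over candidate thresholds, probe the level where the predicted response is most uncertain, and update by Bayes' rule. The grid prior, logistic psychometric function with fixed slope, and deterministic simulated listener below are simplifying assumptions for illustration.

```python
import numpy as np

levels = np.arange(0.0, 81.0)   # candidate probe levels (dB HL)
grid = np.arange(0.0, 81.0)     # candidate thresholds, uniform prior
posterior = np.full(grid.size, 1.0 / grid.size)

def p_hear(level, thresh, slope=1.0):
    """Logistic psychometric function: P(hear | level, threshold)."""
    z = np.clip(slope * (level - thresh), -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-z))

def respond(level, true_thresh=35.0):
    """Simulated listener (deterministic step function, for illustration)."""
    return level >= true_thresh

for _ in range(15):
    # Predictive P(hear) at each level, averaged over the posterior.
    pred = p_hear(levels[:, None], grid[None, :]) @ posterior
    # Active learning: probe where the predicted outcome is most uncertain.
    probe = levels[np.argmin(np.abs(pred - 0.5))]
    heard = respond(probe)
    likelihood = p_hear(probe, grid) if heard else 1.0 - p_hear(probe, grid)
    posterior *= likelihood
    posterior /= posterior.sum()

estimate = float(grid @ posterior)  # posterior-mean threshold estimate
print(round(estimate, 1))
```

Because every probe is placed where it is most informative, the posterior concentrates near the true threshold in far fewer trials than an exhaustive sweep over levels would need.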

e-Research (remote testing, online experiments, new tools and frameworks)

Examples of e-Research tools include the Remote Testing Wiki, [36] Portable Automated Rapid Testing (PART), Ecological Momentary Assessment (EMA), and the NIOSH sound level meter. A number of tools can be found online. [37]

Software and tools

Software and large datasets are important for the development and adoption of computational audiology. As in many fields of scientific computing, much of computational audiology depends critically on open source software and its continual maintenance, development, and advancement. [38]

Computational biology, computational medicine, and computational pathology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science.

See also

Tinnitus
Cochlea
Cochlear implant
Hearing test
Ototoxicity
Audiology
Sensorineural hearing loss
Otoacoustic emission
Audiometry
Hyperacusis
Auditory neuropathy
Presbycusis
Audiogram
Auditory brainstem response
Pure-tone audiometry
Auditory brainstem implant
Hearing
Temporal envelope and fine structure
Brian Moore (scientist)
Diagnosis of hearing loss

References

  1. Winslow, Raimond L.; Trayanova, Natalia; Geman, Donald; Miller, Michael I. (31 October 2012). "Computational Medicine: Translating Models to Clinical Care". Science Translational Medicine. 4 (158): 158rv11. doi:10.1126/scitranslmed.3003528. PMC   3618897 . PMID   23115356.
  2. Gannon, Frank (November 2014). "The steps from translatable to translational research". EMBO Reports. 15 (11): 1107–1108. doi:10.15252/embr.201439587. ISSN   1469-221X. PMC   4253482 . PMID   25296643.
  3. Wasmann, Jan-Willem A.; Lanting, Cris P.; Huinck, Wendy J.; Mylanus, Emmanuel A. M.; van der Laak, Jeroen W. M.; Govaerts, Paul J.; Swanepoel, De Wet; Moore, David R.; Barbour, Dennis L. (November–December 2021). "Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age". Ear and Hearing. 42 (6): 1499–1507. doi:10.1097/AUD.0000000000001041. PMC   8417156 . PMID   33675587.
  4. De Boer, Egbert (1996), Dallos, Peter; Popper, Arthur N.; Fay, Richard R. (eds.), "Mechanics of the Cochlea: Modeling Efforts", The Cochlea, Springer Handbook of Auditory Research, vol. 8, New York, NY: Springer New York, pp. 258–317, doi:10.1007/978-1-4612-0757-3_5, ISBN   978-1-4612-6891-8 , retrieved 2022-01-18
  5. Frijns, J. H. M.; de Snoo, S. L.; Schoonhoven, R. (1995-07-01). "Potential distributions and neural excitation patterns in a rotationally symmetric model of the electrically stimulated cochlea". Hearing Research. 87 (1): 170–186. doi:10.1016/0378-5955(95)00090-Q. ISSN   0378-5955. PMID   8567435. S2CID   4762235.
  6. Rubinstein, Jay T.; Hong, Robert (September 2003). "Signal Coding in Cochlear Implants: Exploiting Stochastic Effects of Electrical Stimulation". Annals of Otology, Rhinology & Laryngology. 112 (9_suppl): 14–19. doi:10.1177/00034894031120s904. ISSN   0003-4894. PMID   14533839. S2CID   32157848.
  7. Sun, Q.; Gan, R. Z.; Chang, K.-H.; Dormer, K. J. (2002-10-01). "Computer-integrated finite element modeling of human middle ear". Biomechanics and Modeling in Mechanobiology. 1 (2): 109–122. doi:10.1007/s10237-002-0014-z. ISSN   1617-7959. PMID   14595544. S2CID   8781577.
  8. Stenfelt, Stefan (2016-10-01). "Model predictions for bone conduction perception in the human". Hearing Research. MEMRO 2015 – Basic Science meets Clinical Otology. 340: 135–143. doi:10.1016/j.heares.2015.10.014. ISSN   0378-5955. PMID   26657096. S2CID   4862153.
  9. Meddis, Ray; Lopez-Poveda, Enrique A.; Fay, Richard R.; Popper, Arthur N., eds. (2010). "Computational Models of the Auditory System". Springer Handbook of Auditory Research. 35. doi:10.1007/978-1-4419-5934-8. ISBN   978-1-4419-1370-8. ISSN   0947-2657.
  10. Saremi, Amin; Beutelmann, Rainer; Dietz, Mathias; Ashida, Go; Kretzberg, Jutta; Verhulst, Sarah (September 2016). "A comparative study of seven human cochlear filter models". The Journal of the Acoustical Society of America. 140 (3): 1618–1634. Bibcode:2016ASAJ..140.1618S. doi:10.1121/1.4960486. ISSN   0001-4966. PMID   27914400.
  11. Kell, Alexander J. E.; Yamins, Daniel L. K.; Shook, Erica N.; Norman-Haignere, Sam V.; McDermott, Josh H. (2018-05-02). "A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy". Neuron. 98 (3): 630–644.e16. doi: 10.1016/j.neuron.2018.03.044 . ISSN   0896-6273. PMID   29681533. S2CID   5084719.
  12. Baby, Deepak; Van Den Broucke, Arthur; Verhulst, Sarah (February 2021). "A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications". Nature Machine Intelligence. 3 (2): 134–143. doi:10.1038/s42256-020-00286-8. ISSN   2522-5839. PMC   7116797 . PMID   33629031.
  13. Paglialonga, Alessia; Cleveland Nielsen, Annette; Ingo, Elisabeth; Barr, Caitlin; Laplante-Lévesque, Ariane (2018-07-31). "eHealth and the hearing aid adult patient journey: a state-of-the-art review". BioMedical Engineering OnLine. 17 (1): 101. doi: 10.1186/s12938-018-0531-3 . ISSN   1475-925X. PMC   6069792 . PMID   30064497.
  14. Frisby, Caitlin; Eikelboom, Robert; Mahomed-Asmail, Faheema; Kuper, Hannah; Swanepoel, De Wet (2021-12-30). "MHealth Applications for Hearing Loss: A Scoping Review". Telemedicine and e-Health. 28 (8): 1090–1099. doi:10.1089/tmj.2021.0460. hdl: 2263/84486 . ISSN   1530-5627. PMID   34967683. S2CID   245567480.
  15. Chan, Justin; Raju, Sharat; Nandakumar, Rajalakshmi; Bly, Randall; Gollakota, Shyamnath (2019-05-15). "Detecting middle ear fluid using smartphones". Science Translational Medicine. 11 (492): eaav1102. doi: 10.1126/scitranslmed.aav1102 . ISSN   1946-6234. PMID   31092691. S2CID   155102882.
  16. Chan, Justin; Najafi, Ali; Baker, Mallory; Kinsman, Julie; Mancl, Lisa R.; Norton, Susan; Bly, Randall; Gollakota, Shyamnath (2022-06-16). "Performing tympanometry using smartphones". Communications Medicine. 2 (1): 57. doi:10.1038/s43856-022-00120-9. ISSN   2730-664X. PMC   9203539 . PMID   35721828. S2CID   249811632.
  17. Community, Nature Portfolio Bioengineering (2022-06-15). "Computing for Audiology: Smartphone tympanometer for diagnosing middle ear disorders". Nature Portfolio Bioengineering Community. Retrieved 2022-06-21.
  18. Goodman, Shawn S. (2022-10-31). "Affordable hearing screening". Nature Biomedical Engineering. 6 (11): 1199–1200. doi:10.1038/s41551-022-00959-2. ISSN   2157-846X. PMID   36316370. S2CID   253246312.
  19. Chan, Justin; Ali, Nada; Najafi, Ali; Meehan, Anna; Mancl, Lisa R.; Gallagher, Emily; Bly, Randall; Gollakota, Shyamnath (2022-10-31). "An off-the-shelf otoacoustic-emission probe for hearing screening via a smartphone". Nature Biomedical Engineering. 6 (11): 1203–1213. doi:10.1038/s41551-022-00947-6. ISSN   2157-846X. PMC   9717525 . PMID   36316369.
  20. Masterson, Elizabeth A.; Tak, SangWoo; Themann, Christa L.; Wall, David K.; Groenewold, Matthew R.; Deddens, James A.; Calvert, Geoffrey M. (June 2013). "Prevalence of hearing loss in the United States by industry". American Journal of Industrial Medicine. 56 (6): 670–681. doi:10.1002/ajim.22082. PMID   22767358.
  21. Charih, François; Bromwich, Matthew; Mark, Amy E.; Lefrançois, Renée; Green, James R. (December 2020). "Data-Driven Audiogram Classification for Mobile Audiometry". Scientific Reports. 10 (1): 3962. Bibcode:2020NatSR..10.3962C. doi:10.1038/s41598-020-60898-3. ISSN   2045-2322. PMC   7054524 . PMID   32127604.
  22. Cox, Marco; de Vries, Bert (2021). "Bayesian Pure-Tone Audiometry Through Active Learning Under Informed Priors". Frontiers in Digital Health. 3: 723348. doi: 10.3389/fdgth.2021.723348 . ISSN   2673-253X. PMC   8521968 . PMID   34713188.
  23. Crowson, Matthew G.; Lee, Jong Wook; Hamour, Amr; Mahmood, Rafid; Babier, Aaron; Lin, Vincent; Tucci, Debara L.; Chan, Timothy C. Y. (2020-08-07). "AutoAudio: Deep Learning for Automatic Audiogram Interpretation". Journal of Medical Systems. 44 (9): 163. doi:10.1007/s10916-020-01627-1. ISSN   1573-689X. PMID   32770269. S2CID   221035573.
  24. Crowson, Matthew G.; Franck, Kevin H.; Rosella, Laura C.; Chan, Timothy C. Y. (July–August 2021). "Predicting Depression From Hearing Loss Using Machine Learning". Ear and Hearing. 42 (4): 982–989. doi:10.1097/AUD.0000000000000993. ISSN   1538-4667. PMID   33577219. S2CID   231901726.
  25. Wells, Helena Rr.; Freidin, Maxim B.; Zainul Abidin, Fatin N.; Payton, Antony; Dawes, Piers; Munro, Kevin J.; Morton, Cynthia C.; Moore, David R.; Dawson, Sally J; Williams, Frances Mk. (2019-02-14), Genome-wide association study identifies 44 independent genomic loci for self-reported adult hearing difficulty in the UK Biobank cohort, doi:10.1101/549071, S2CID   92606662 , retrieved 2022-01-20
  26. Christensen, Jeppe H.; Saunders, Gabrielle H.; Porsbo, Michael; Pontoppidan, Niels H. (2021). "The everyday acoustic environment and its association with human heart rate: evidence from real-world data logging with hearing aids and wearables". Royal Society Open Science. 8 (2): 201345. Bibcode:2021RSOS....801345C. doi:10.1098/rsos.201345. PMC   8074664 . PMID   33972852.
  27. Tahmasebi, Sina; Gajȩcki, Tom; Nogueira, Waldo (2020). "Design and Evaluation of a Real-Time Audio Source Separation Algorithm to Remix Music for Cochlear Implant Users". Frontiers in Neuroscience. 14: 434. doi: 10.3389/fnins.2020.00434 . ISSN   1662-453X. PMC   7248365 . PMID   32508564.
  28. Garcia, Charlotte; Goehring, Tobias; Cosentino, Stefano; Turner, Richard E.; Deeks, John M.; Brochier, Tim; Rughooputh, Taren; Bance, Manohar; Carlyon, Robert P. (2021-10-01). "The Panoramic ECAP Method: Estimating Patient-Specific Patterns of Current Spread and Neural Health in Cochlear Implant Users". Journal of the Association for Research in Otolaryngology. 22 (5): 567–589. doi:10.1007/s10162-021-00795-2. ISSN   1438-7573. PMC   8476702 . PMID   33891218.
  29. Battmer, Rolf-Dieter; Borel, Stephanie; Brendel, Martina; Buchner, Andreas; Cooper, Huw; Fielden, Claire; Gazibegovic, Dzemal; Goetze, Romy; Govaerts, Paul; Kelleher, Katherine; Lenartz, Thomas (2015-03-01). "Assessment of 'Fitting to Outcomes Expert' FOX™ with new cochlear implant users in a multi-centre study". Cochlear Implants International. 16 (2): 100–109. doi:10.1179/1754762814Y.0000000093. ISSN   1467-0100. PMID   25118042. S2CID   4674778.
  30. Michiels, Sarah; Cardon, Emilie; Gilles, Annick; Goedhart, Hazel; Vesala, Markku; Schlee, Winfried (2021-07-14). "Somatosensory Tinnitus Diagnosis: Diagnostic Value of Existing Criteria". Ear & Hearing. 43 (1): 143–149. doi:10.1097/aud.0000000000001105. hdl: 1942/34681 . ISSN   1538-4667. PMID   34261856. S2CID   235907109.
  31. Palacios, Guillaume; Noreña, Arnaud; Londero, Alain (2020). "Assessing the Heterogeneity of Complaints Related to Tinnitus and Hyperacusis from an Unsupervised Machine Learning Approach: An Exploratory Study". Audiology and Neurotology. 25 (4): 174–189. doi: 10.1159/000504741 . ISSN   1420-3030. PMID   32062654. S2CID   211135952.
  32. "What can we learn about tinnitus from social media posts?". Computational Audiology. 2021-06-07. Retrieved 2022-01-20.
  33. Barbour, Dennis L.; Howard, Rebecca T.; Song, Xinyu D.; Metzger, Nikki; Sukesan, Kiron A.; DiLorenzo, James C.; Snyder, Braham R. D.; Chen, Jeff Y.; Degen, Eleanor A.; Buchbinder, Jenna M.; Heisey, Katherine L. (July 2019). "Online Machine Learning Audiometry". Ear & Hearing. 40 (4): 918–926. doi:10.1097/AUD.0000000000000669. ISSN   0196-0202. PMC   6476703 . PMID   30358656.
  34. Schlittenlacher, Josef; Turner, Richard E.; Moore, Brian C. J. (2018-07-01). "Audiogram estimation using Bayesian active learning". The Journal of the Acoustical Society of America. 144 (1): 421–430. Bibcode:2018ASAJ..144..421S. doi:10.1121/1.5047436. ISSN   0001-4966. PMID   30075695. S2CID   51910371.
  35. Schlittenlacher, Josef; Moore, Brian C. J. (2020). "Fast estimation of equal-loudness contours using Bayesian active learning and direct scaling". Acoustical Science and Technology. 41 (1): 358–360. doi: 10.1250/ast.41.358 . S2CID   214270892.
  36. "PP Remote Testing Wiki | Main / RemoteTesting". www.spatialhearing.org. Retrieved 2022-01-20.
  37. "Resources". Computational Audiology. Retrieved 2022-01-20.
  38. Fortunato, Laura; Galassi, Mark (2021-05-17). "The case for free and open source software in research and scholarship". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 379 (2197): 20200079. Bibcode:2021RSPTA.37900079F. doi:10.1098/rsta.2020.0079. PMID   33775148. S2CID   232387092.