Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment. [1]
In contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large-scale analytics ("big data") rather than inferential statistics and small-cohort hypothesis testing. The aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. Research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. Research and development to implement more effective diagnostics and treatments represent translational work that supports this aim. [2]
For people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e-Health/mHealth apps. For care providers, these advances can supply actionable knowledge and tools for automating parts of the clinical pathway. [3]
The field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, otology and vestibulology.
In computational audiology, models and algorithms are used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among others.
For decades, phenomenological and biophysical (computational) models have been developed to simulate characteristics of the human auditory system. Examples include models of the mechanical properties of the basilar membrane, [4] the electrically stimulated cochlea, [5] [6] middle ear mechanics, [7] bone conduction, [8] and the central auditory pathway. [9] Saremi et al. (2016) compared seven contemporary models, including parallel filterbanks, cascaded filterbanks, transmission lines, and biophysical models. [10] More recently, convolutional neural networks (CNNs) have been constructed and trained to replicate human auditory function [11] or complex cochlear mechanics with high accuracy. [12] Although inspired by the interconnectivity of biological neural networks, the architecture of CNNs is distinct from the organization of the natural auditory system.
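The parallel-filterbank approach can be illustrated with a minimal sketch: a gammatone filterbank with centre frequencies spaced on the ERB (equivalent rectangular bandwidth) scale, using the well-known Glasberg and Moore formulas. The sampling rate, filter order, and impulse-response duration below are illustrative choices, not parameters taken from the models cited above.

```python
import math

def erb(f_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990), in Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.025, order=4):
    """Impulse response of one gammatone filter centred at fc (Hz)."""
    b = 1.019 * erb(fc)  # bandwidth parameter (Patterson-style gammatone)
    ir = []
    for i in range(int(duration * fs)):
        t = i / fs
        ir.append(t ** (order - 1) * math.exp(-2 * math.pi * b * t)
                  * math.cos(2 * math.pi * fc * t))
    return ir

def erb_space(f_low, f_high, num):
    """Centre frequencies equally spaced on the ERB-number scale."""
    def hz_to_erb_number(f):
        return 21.4 * math.log10(4.37 * f / 1000.0 + 1.0)
    def erb_number_to_hz(e):
        return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37
    lo, hi = hz_to_erb_number(f_low), hz_to_erb_number(f_high)
    return [erb_number_to_hz(lo + k * (hi - lo) / (num - 1))
            for k in range(num)]

# A 16-channel filterbank covering roughly the audiometric range:
bank = [gammatone_ir(fc, fs=16000) for fc in erb_space(100, 8000, 16)]
```

Convolving a sound with each impulse response yields a crude simulation of the frequency decomposition performed along the basilar membrane.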
Online pure-tone threshold audiometry (or screening) tests, electrophysiological measures such as distortion-product otoacoustic emissions (DPOAEs), and speech-in-noise screening tests are becoming increasingly available as tools to promote awareness, enable accurate early identification of hearing loss across ages, monitor the effects of ototoxicity and/or noise, guide ear and hearing care decisions, and support clinicians. [13] [14] Smartphone-based tests have been proposed to detect middle ear fluid using acoustic reflectometry and machine learning. [15] Smartphone attachments have also been designed to perform tympanometry for acoustic evaluation of the eardrum and middle ear. [16] [17] Low-cost earphones attached to smartphones have also been prototyped to help detect the faint otoacoustic emissions from the cochlea and perform neonatal hearing screening. [18] [19]
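Automated pure-tone screening tools typically implement a clinical threshold-search procedure in software. A minimal sketch of the classic "down 10 dB after a response, up 5 dB after no response" (Hughson–Westlake) staircase is shown below; `responds` stands in for the listener's yes/no response at a given presentation level, and the starting level and stopping rule follow the textbook convention of requiring responses on two ascending presentations at the same level.

```python
def hughson_westlake(responds, start=40, floor=-10, ceiling=100):
    """Modified Hughson-Westlake staircase for one frequency.
    responds(level_db_hl) -> True if the tone was heard.
    Returns the estimated threshold in dB HL, or None if no
    response is obtained even at the ceiling level."""
    level = start
    # Familiarization descent: down 10 dB while the tone is heard.
    while level > floor and responds(level):
        level -= 10
    # Ascending runs: up 5 dB after a miss, down 10 dB after a hit;
    # threshold = lowest level heard on two ascending presentations.
    hits = {}
    for _ in range(100):  # safety bound on the number of trials
        if responds(level):
            hits[level] = hits.get(level, 0) + 1
            if hits[level] >= 2:
                return level
            level -= 10
        else:
            level += 5
            if level > ceiling:
                return None
    return level
```

With a deterministic simulated listener who hears every tone at or above a true threshold, the procedure converges to that threshold rounded up to the 5 dB grid.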
Collecting large numbers of audiograms (e.g. from databases from the National Institute for Occupational Safety and Health or NIOSH [20] or National Health and Nutrition Examination Survey or NHANES) provides researchers with opportunities to find patterns of hearing status in the population [21] [22] or to train AI systems that can classify audiograms. [23] Machine learning can be used to predict the relationship between multiple factors, e.g. to predict depression based on self-reported hearing loss [24] or the relationship between genetic profile and self-reported hearing loss. [25] Hearing aids and wearables provide the option to monitor the soundscape of the user or to log usage patterns, which can be used to automatically recommend settings that are expected to benefit the user. [26]
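As a toy illustration of audiogram classification, the sketch below computes a four-frequency pure-tone average (PTA) and maps it to a severity grade. The cut-offs follow the WHO 2021 grading scheme; other schemes use different boundaries, and a trained classifier would learn such boundaries (and audiogram shapes) from data rather than hard-code them.

```python
def pure_tone_average(audiogram):
    """Four-frequency PTA (0.5, 1, 2 and 4 kHz), in dB HL.
    audiogram: dict mapping frequency in Hz to threshold in dB HL."""
    return sum(audiogram[f] for f in (500, 1000, 2000, 4000)) / 4.0

def grade(pta):
    """Map a PTA to a hearing-loss grade. Cut-offs follow the WHO 2021
    grading; other grading schemes use slightly different boundaries."""
    for cutoff, label in [(20, "normal"), (35, "mild"), (50, "moderate"),
                          (65, "moderately severe"), (80, "severe"),
                          (95, "profound")]:
        if pta < cutoff:
            return label
    return "complete"

# Example (hypothetical sloping audiogram):
example = {500: 30, 1000: 40, 2000: 50, 4000: 60}
```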
Methods to improve rehabilitation with auditory implants include improving music perception, [27] models of the electrode-neuron interface, [28] and an AI-based cochlear implant fitting assistant. [29]
Online surveys processed with ML-based classification have been used to diagnose somatosensory tinnitus. [30] Automated natural language processing (NLP) techniques, including unsupervised and supervised machine learning, have been used to analyze social media posts about tinnitus and the heterogeneity of its symptoms. [31] [32]
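To give a flavour of the supervised text-classification approaches mentioned above, here is a minimal multinomial naive Bayes classifier with add-one smoothing. The example posts and the "somatic"/"noise" labels are invented for illustration and are far simpler than the models used in the cited studies.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train multinomial naive Bayes. docs: list of (text, label) pairs."""
    counts = {}          # label -> Counter of word frequencies
    priors = Counter()   # label -> number of training documents
    for text, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def classify(text, model):
    """Return the label with the highest log-posterior for the text."""
    counts, priors, vocab = model
    total_docs = sum(priors.values())
    def score(label):
        c, n = counts[label], sum(counts[label].values())
        s = math.log(priors[label] / total_docs)
        for w in text.lower().split():
            if w in vocab:  # ignore out-of-vocabulary words
                s += math.log((c[w] + 1) / (n + len(vocab)))
        return s
    return max(priors, key=score)

# Hypothetical training posts (labels invented for this sketch):
posts = [("neck movement changes my ringing", "somatic"),
         ("jaw clenching modulates the tone", "somatic"),
         ("constant ringing after loud concert", "noise"),
         ("hearing loss after noise exposure ringing", "noise")]
model = train_nb(posts)
```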
Machine learning has been applied to audiometry to create flexible, efficient estimation tools that determine an individual's auditory profile without requiring excessive testing time. [33] [34] Similarly, machine learning based versions of other auditory tests, such as determining dead regions in the cochlea or equal-loudness contours, [35] have been created.
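Such estimation tools typically fit a psychometric function to sparse responses rather than repeating many trials at every level. A much-simplified sketch: maximum-likelihood estimation of a detection threshold from (level, heard) pairs, assuming a logistic psychometric function with a fixed slope. Published machine-learning audiometry methods (e.g. Gaussian-process approaches) are considerably more sophisticated and also choose the next test level actively.

```python
import math

def sigmoid(level, threshold, slope=1.0):
    """Logistic psychometric function: P(detect | level, threshold)."""
    return 1.0 / (1.0 + math.exp(-slope * (level - threshold)))

def fit_threshold(trials, candidates=range(-10, 101)):
    """Maximum-likelihood threshold (dB) from (level, heard) trial pairs,
    by grid search over integer candidate thresholds."""
    def log_lik(t):
        ll = 0.0
        for level, heard in trials:
            p = min(max(sigmoid(level, t), 1e-9), 1 - 1e-9)  # avoid log(0)
            ll += math.log(p) if heard else math.log(1 - p)
        return ll
    return max(candidates, key=log_lik)

# Simulated listener with a true threshold near 35 dB:
trials = [(level, level >= 35) for level in range(10, 61, 5)]
```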
Examples of e-Research tools include the Remote Testing Wiki, [36] Portable Automated Rapid Testing (PART), Ecological Momentary Assessment (EMA), and the NIOSH sound level meter. A number of tools can be found online. [37]
Software and large datasets are important for the development and adoption of computational audiology. As with many scientific computing fields, much of the field of computational audiology existentially depends on open source software and its continual maintenance, development, and advancement. [38]
Computational biology, computational medicine, and computational pathology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science.
Tinnitus is a condition in which a person hears a ringing or other sound when no corresponding external sound is present and other people cannot hear it. Nearly everyone experiences faint "normal tinnitus" in a completely quiet room; it is of concern only if it is bothersome, interferes with normal hearing, or is associated with other problems. The word tinnitus comes from the Latin tinnire, "to ring". In some people, it interferes with concentration, and can be associated with anxiety and depression.
The cochlea is the part of the inner ear involved in hearing. It is a spiral-shaped cavity in the bony labyrinth, in humans making 2.75 turns around its axis, the modiolus. A core component of the cochlea is the organ of Corti, the sensory organ of hearing, which is distributed along the partition separating the fluid chambers in the coiled tapered tube of the cochlea.
A cochlear implant (CI) is a surgically implanted neuroprosthesis that provides a person who has moderate-to-profound sensorineural hearing loss with sound perception. With the help of therapy, cochlear implants may allow for improved speech understanding in both quiet and noisy environments. A CI bypasses acoustic hearing by direct electrical stimulation of the auditory nerve. Through everyday listening and auditory training, cochlear implants allow both children and adults to learn to interpret those signals as speech and sound.
A hearing test provides an evaluation of the sensitivity of a person's sense of hearing and is most often performed by an audiologist using an audiometer. An audiometer is used to determine a person's hearing sensitivity at different frequencies. There are other hearing tests as well, e.g., Weber test and Rinne test.
Ototoxicity is the property of being toxic to the ear (oto-), specifically the cochlea or auditory nerve and sometimes the vestibular system, for example, as a side effect of a drug. The effects of ototoxicity can be reversible and temporary, or irreversible and permanent. It has been recognized since the 19th century. There are many well-known ototoxic drugs used in clinical situations, and they are prescribed, despite the risk of hearing disorders, for very serious health conditions. Ototoxic drugs include antibiotics, loop diuretics, and platinum-based chemotherapy agents. A number of nonsteroidal anti-inflammatory drugs (NSAIDS) have also been shown to be ototoxic. This can result in sensorineural hearing loss, dysequilibrium, or both. Some environmental and occupational chemicals have also been shown to affect the auditory system and interact with noise.
Audiology is a branch of science that studies hearing, balance, and related disorders. Audiologists treat those with hearing loss and proactively prevent related damage. By employing various testing strategies, audiologists aim to determine whether someone has normal sensitivity to sounds. If hearing loss is identified, audiologists determine which portions of hearing are affected, to what degree, and where the lesion causing the hearing loss is found. If an audiologist determines that a hearing loss or vestibular abnormality is present, they will provide recommendations for interventions or rehabilitation.
Sensorineural hearing loss (SNHL) is a type of hearing loss in which the root cause lies in the inner ear, sensory organ, or the vestibulocochlear nerve. SNHL accounts for about 90% of reported hearing loss. SNHL is usually permanent and can be mild, moderate, severe, profound, or total. Various other descriptors can be used depending on the shape of the audiogram, such as high frequency, low frequency, U-shaped, notched, peaked, or flat.
An otoacoustic emission (OAE) is a sound that is generated from within the inner ear. Having been predicted by Austrian astrophysicist Thomas Gold in 1948, its existence was first demonstrated experimentally by British physicist David Kemp in 1978, and otoacoustic emissions have since been shown to arise through a number of different cellular and mechanical causes within the inner ear. Studies have shown that OAEs disappear after the inner ear has been damaged, so OAEs are often used in the laboratory and the clinic as a measure of inner ear health.
Audiometry is a branch of audiology and the science of measuring hearing acuity for variations in sound intensity and pitch and for tonal purity, involving thresholds and differing frequencies. Typically, audiometric tests determine a subject's hearing levels with the help of an audiometer, but may also measure ability to discriminate between different sound intensities, recognize pitch, or distinguish speech from background noise. Acoustic reflex and otoacoustic emissions may also be measured. Results of audiometric tests are used to diagnose hearing loss or diseases of the ear, and often make use of an audiogram.
Hyperacusis is an increased sensitivity to sound and a low tolerance for environmental noise. Definitions of hyperacusis vary significantly; it is often attributed to damage to or dysfunction of the stapes bone, the stapedius muscle, or the tensor tympani muscle. It is often categorized into four subtypes: loudness, pain, annoyance, and fear. It can be a highly debilitating hearing disorder.
Auditory neuropathy (AN) is a hearing disorder in which the outer hair cells of the cochlea are present and functional, but sound information is not transmitted sufficiently by the auditory nerve to the brain. The cause may be dysfunction at the level of the inner hair cells of the cochlea or of the spiral ganglion neurons. Hearing sensitivity in AN can range from normal to profound hearing loss.
Presbycusis, or age-related hearing loss, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult.
An audiogram is a graph that shows the audible threshold for standardized frequencies as measured by an audiometer. The Y axis represents intensity measured in decibels (dB) and the X axis represents frequency measured in hertz (Hz). The threshold of hearing is plotted relative to a standardised curve that represents 'normal' hearing, in dB(HL). Audiograms are not the same as equal-loudness contours, which are a set of curves representing equal loudness at different levels, as well as at the threshold of hearing, in absolute terms measured in dB SPL.
The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA) or brainstem auditory evoked potentials (BAEPs) or brainstem auditory evoked responses (BAERs) is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex positive waves of which I through V are evaluated. These waves, labeled with Roman numerals in Jewett and Williston convention, occur in the first 10 milliseconds after onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.
Pure-tone audiometry is the main hearing test used to identify hearing threshold levels of an individual, enabling determination of the degree, type and configuration of a hearing loss and thus providing a basis for diagnosis and management. Pure-tone audiometry is a subjective, behavioural measurement of a hearing threshold, as it relies on patient responses to pure tone stimuli. Therefore, pure-tone audiometry is only used on adults and children old enough to cooperate with the test procedure. As with most clinical tests, standardized calibration of the test environment, the equipment and the stimuli is needed before testing proceeds. Pure-tone audiometry only measures audibility thresholds, rather than other aspects of hearing such as sound localization and speech recognition. However, there are benefits to using pure-tone audiometry over other forms of hearing test, such as click auditory brainstem response (ABR). Pure-tone audiometry provides ear specific thresholds, and uses frequency specific pure tones to give place specific responses, so that the configuration of a hearing loss can be identified. As pure-tone audiometry uses both air and bone conduction audiometry, the type of loss can also be identified via the air-bone gap. Although pure-tone audiometry has many clinical benefits, it is not perfect at identifying all losses, such as ‘dead regions’ of the cochlea and neuropathies such as auditory processing disorder (APD). This raises the question of whether or not audiograms accurately predict someone's perceived degree of disability.
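The air-bone-gap logic described above can be sketched as a simple classification rule. The 20 dB HL normal limit and 10 dB significant-gap cut-off used below are illustrative; clinics differ on the exact criteria.

```python
def loss_type(air_db, bone_db, normal_limit=20, gap_limit=10):
    """Classify hearing-loss type at one frequency from air- and
    bone-conduction thresholds (dB HL). Cut-offs are illustrative."""
    gap = air_db - bone_db  # the air-bone gap
    if air_db <= normal_limit:
        return "normal"
    if bone_db <= normal_limit and gap > gap_limit:
        # Bone conduction normal, air conduction elevated:
        # the problem lies in the outer/middle ear.
        return "conductive"
    if gap > gap_limit:
        # Both elevated, with a significant gap on top.
        return "mixed"
    # Both elevated, no significant gap: inner ear or nerve.
    return "sensorineural"
```

For example, an air-conduction threshold of 50 dB HL with a bone-conduction threshold of 10 dB HL yields a 40 dB air-bone gap, classified as conductive.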
An auditory brainstem implant (ABI) is a surgically implanted electronic device that provides a sense of sound to a person who is profoundly deaf, due to retrocochlear hearing impairment. In Europe, ABIs have been used in children and adults, and in patients with neurofibromatosis type II.
Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as an ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. The academic field concerned with hearing is auditory science.
Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.
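The ENV/TFS decomposition is commonly computed from the analytic signal obtained via the Hilbert transform: the envelope is the magnitude of the analytic signal, and the fine structure relates to its phase. A self-contained sketch (using a deliberately slow O(N²) DFT to avoid external dependencies) that extracts the envelope of an amplitude-modulated tone:

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def envelope(x):
    """Temporal envelope |analytic signal|: zero the negative-frequency
    half of the spectrum and double the positive half (discrete Hilbert)."""
    N = len(x)
    X = dft(x)
    H = [0.0] * N
    H[0] = 1.0
    if N % 2 == 0:
        H[N // 2] = 1.0        # Nyquist bin kept as-is
    for k in range(1, (N + 1) // 2):
        H[k] = 2.0             # double positive frequencies
    analytic = idft([X[k] * H[k] for k in range(N)])
    return [abs(v) for v in analytic]

# Amplitude-modulated tone: slow envelope (4 cycles) on a fast
# carrier (32 cycles) across N = 256 samples.
N = 256
x = [(1 + 0.5 * math.cos(2 * math.pi * 4 * n / N))
     * math.cos(2 * math.pi * 32 * n / N) for n in range(N)]
env = envelope(x)  # recovers 1 + 0.5*cos(...), the ENV; the carrier is the TFS
```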
Brian C.J. Moore FMedSci, FRS is an Emeritus Professor of Auditory Perception in the University of Cambridge and an Emeritus Fellow of Wolfson College, Cambridge. His research focuses on psychoacoustics, audiology, and the development and assessment of hearing aids.
Identification of a hearing loss is usually conducted by a general practitioner, otolaryngologist, certified and licensed audiologist, school or industrial audiometrist, or other audiometric technician. Diagnosis of the cause of a hearing loss is carried out by a specialist physician or otorhinolaryngologist.