| Robert V. Shannon | |
| --- | --- |
| Nationality | American |
| Education | PhD, University of California, San Diego, CA, USA |
| **Scientific career** | |
| Fields | Biomedical engineering, auditory science |
| Institutions | House Ear Institute, Los Angeles, CA, USA; University of Southern California, Los Angeles, CA, USA |
| Thesis | *Suppression of Forward Masking* (1975) |
| Doctoral advisor | David M. Green |
| Doctoral students | Deniz Başkent |
Robert V. Shannon is Research Professor of Otolaryngology-Head & Neck Surgery and Affiliated Research Professor of Biomedical Engineering at the University of Southern California, CA, USA. [1] Shannon investigates the basic mechanisms underlying auditory neural processing in users of cochlear implants, auditory brainstem implants, and midbrain implants.
Shannon received his B.A. degrees in Mathematics and Psychology from the University of Iowa, Iowa City, Iowa, in 1971. After obtaining his PhD in Psychology at the University of California, San Diego, CA, USA, in 1975, he completed two postdoctoral fellowships: one at the Institute for Perception, Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek (TNO; English: Netherlands Organisation for Applied Scientific Research), Soesterberg, Netherlands, and one at the University of California, Irvine, CA, USA. After faculty positions at the University of California, San Francisco, CA, USA, and Boys Town National Research Hospital (BTNRH), he served as director of the Department of Auditory Implant and Perception Research at the House Ear Institute, Los Angeles, CA, USA, with an affiliated research professor position in Biomedical Engineering at the University of Southern California, Los Angeles, CA, USA.
Shannon was a founding organizer of the Conference on Implantable Auditory Prostheses (CIAP). [2] In 1996, he was elected Fellow of the Acoustical Society of America "for contributions in the psychoelectric study of hearing." [3] In 2007, he served as President of the Association for Research in Otolaryngology (ARO), [4] and in 2011 he received the ARO Award of Merit. [5]
As of 2018, Shannon serves as a member of the Hearing4All Scientific Advisory Board [6] and of the Board of Directors of the Hearing Health Foundation. [7]
Shannon was one of the earliest and most prominent researchers to study the psychophysics of electrical stimulation in cochlear-implant users, [8] [9] laying the foundations for understanding the fundamental limitations and capabilities of sound perception with a cochlear implant. He later expanded his research to include auditory brainstem implants and auditory midbrain implants. [10]
A key early contribution was the development of a research interface that allowed researchers to achieve independent stimulus control in electric stimulation (now referred to as direct stimulation). [11] In 1995, Shannon and colleagues published a study on the perception of speech whose temporal envelope and fine structure had been manipulated. More specifically, using noiseband vocoding, the inherent degradations of cochlear-implant speech transmission were (loosely) mimicked by removing most temporal fine structure and limiting envelope information to a small number of spectral channels. [12] This paper demonstrated the importance of envelope information for speech perception and offered an initial explanation of why, in quiet listening environments, cochlear-implant users can perceive speech well despite those degradations. In a later study, Shannon and colleagues provided early evidence that one of the main factors limiting speech perception by cochlear-implant users is reduced spectral resolution. [13] The paper showed that this limitation was not caused by the limited number of electrodes, each of which delivers distinct spectral information. Instead, it appeared to be caused by an internal factor: channel interactions, a consequence of direct electrical stimulation of the auditory nerve.
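The noiseband-vocoding idea described above can be sketched in code: split the signal into a few frequency bands, discard each band's fine structure, keep only its slowly varying envelope, and use that envelope to modulate band-limited noise. The sketch below is an illustrative simplification, not the exact processing of Shannon et al. (1995); all parameter names and values (band edges, envelope cutoff, channel count) are assumptions chosen for demonstration.

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=4, env_cutoff=160.0,
                 f_lo=100.0, f_hi=5000.0):
    """Crude noise-band vocoder: keep each band's temporal envelope,
    replace its fine structure with band-limited noise."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    spec = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    # Envelope extraction: rectify, then smooth with a moving average
    # whose length approximates a lowpass at env_cutoff Hz.
    win = max(1, int(fs / env_cutoff))
    kernel = np.ones(win) / win
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)             # analysis band
        env = np.convolve(np.abs(band), kernel, mode="same")
        carrier = np.fft.irfft(noise_spec * mask, n)    # noise carrier
        out += env * carrier                            # envelope-modulated noise
    return out

# Demonstration on a two-tone test signal.
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 2000 * t)
voc = noise_vocode(sig, fs)
```

With only a handful of channels, the output preserves the coarse spectro-temporal envelope of the input while the original fine structure is gone, which is the degradation the 1995 study used to probe speech perception.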
Shannon has supervised a number of PhD students and postdocs, whose projects led to a comprehensive exploration of the effects of front-end processing and related parameters of cochlear-implant signal processing and electrode placement on sound and speech perception. [14] [15] [16] [17] [18] [19] [20]
A cochlear implant (CI) is a surgically implanted neuroprosthesis that provides a person who has moderate-to-profound sensorineural hearing loss with sound perception. With the help of therapy, cochlear implants may allow for improved speech understanding in both quiet and noisy environments. A CI bypasses acoustic hearing by direct electrical stimulation of the auditory nerve. Through everyday listening and auditory training, cochlear implants allow both children and adults to learn to interpret those signals as speech and sound.
Auditory neuropathy (AN) is a hearing disorder in which the outer hair cells of the cochlea are present and functional, but sound information is not transmitted sufficiently by the auditory nerve to the brain. The cause may lie in dysfunction at the level of the inner hair cells of the cochlea or of the spiral ganglion neurons. Hearing loss with AN can range from normal hearing sensitivity to profound hearing loss.
Unilateral hearing loss (UHL) is a type of hearing impairment where there is normal hearing in one ear and impaired hearing in the other ear.
The Greenwood function relates the position of the hair cells in the inner ear to the frequencies that stimulate their corresponding auditory neurons. Empirically derived in 1961 by Donald D. Greenwood, the relationship has been shown to hold across mammalian species when scaled to the appropriate cochlear spiral lengths and audible frequency ranges. Moreover, the Greenwood function provides the mathematical basis for the surgical placement of cochlear-implant electrode arrays within the cochlea.
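The Greenwood function has the form F = A(10^(ax) − k), where x is the fractional distance along the basilar membrane. The sketch below uses the commonly cited constants for the human cochlea (A = 165.4 Hz, a = 2.1, k = 0.88, with x measured from the apex); the function name is illustrative.

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at fractional cochlear position x,
    where x = 0 is the apex and x = 1 is the base (human constants)."""
    return A * (10 ** (a * x) - k)

# Apex maps to roughly 20 Hz, base to roughly 20 kHz,
# spanning the human audible range.
print(greenwood_frequency(0.0))
print(greenwood_frequency(1.0))
```

Inverting this mapping is what lets surgeons and audiologists estimate which characteristic frequencies an electrode array will reach at a given insertion depth.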
Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.
The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA) or brainstem auditory evoked potentials (BAEPs) or brainstem auditory evoked responses (BAERs) is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex positive waves of which I through V are evaluated. These waves, labeled with Roman numerals in Jewett and Williston convention, occur in the first 10 milliseconds after onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.
Binaural fusion or binaural integration is a cognitive process that involves the combination of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.
The ASA Silver Medal is an award presented by the Acoustical Society of America to individuals, without age limitation, for contributions to the advancement of science, engineering, or human welfare through the application of acoustic principles or through research accomplishments in acoustics. The medal is awarded in a number of categories depending on the technical committee responsible for making the nomination.
Electric acoustic stimulation (EAS) is the use of a hearing aid and a cochlear implant technology together in the same ear. EAS is intended for people with high-frequency hearing loss, who can hear low-pitched sounds but not high-pitched ones. The hearing aid acoustically amplifies low-frequency sounds, while the cochlear implant electrically stimulates the middle- and high-frequency sounds. The inner ear then processes the acoustic and electric stimuli simultaneously, to give the patient the perception of sound.
An auditory brainstem implant (ABI) is a surgically implanted electronic device that provides a sense of sound to a person who is profoundly deaf, due to retrocochlear hearing impairment. In Europe, ABIs have been used in children and adults, and in patients with neurofibromatosis type II.
Phonemic restoration effect is a perceptual phenomenon where under certain conditions, sounds actually missing from a speech signal can be restored by the brain and may appear to be heard. The effect occurs when missing phonemes in an auditory signal are replaced with a noise that would have the physical properties to mask those phonemes, creating an ambiguity. In such ambiguity, the brain tends towards filling in absent phonemes. The effect can be so strong that some listeners may not even notice that there are phonemes missing. This effect is commonly observed in a conversation with heavy background noise, making it difficult to properly hear every phoneme being spoken. Different factors can change the strength of the effect, including how rich the context or linguistic cues are in speech, as well as the listener's state, such as their hearing status or age.
Monita Chatterjee is an auditory scientist and the Director of the Auditory Prostheses & Perception Laboratory at Boys Town National Research Hospital. She investigates the basic mechanisms underlying auditory processing by cochlear implant listeners.
Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.
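One common way to operationalize the ENV/TFS distinction is via the analytic signal: the envelope is its magnitude and the fine structure is the cosine of its phase. The sketch below is an illustrative decomposition using an FFT-based Hilbert transform, applied to an amplitude-modulated tone; it is one standard formalization, not the only one used in the literature.

```python
import numpy as np

def envelope_and_tfs(x):
    """Split a real signal into temporal envelope (ENV) and temporal fine
    structure (TFS) via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                    # spectral weights for the analytic signal
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)             # slowly varying amplitude (ENV)
    tfs = np.cos(np.angle(analytic))   # unit-amplitude carrier (TFS)
    return env, tfs

# A 500 Hz tone amplitude-modulated at 4 Hz: ENV recovers the 4 Hz
# modulation, TFS recovers the 500 Hz carrier at constant amplitude.
fs = 8000
t = np.arange(fs) / fs
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
env, tfs = envelope_and_tfs(x)
```

Multiplying `env` by `tfs` reconstructs the original signal, which is why this decomposition is a convenient tool for experiments that swap or degrade one component while preserving the other.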
Brian C.J. Moore FMedSci, FRS is an Emeritus Professor of Auditory Perception in the University of Cambridge and an Emeritus Fellow of Wolfson College, Cambridge. His research focuses on psychoacoustics, audiology, and the development and assessment of hearing aids.
Auditory science or hearing science is a field of research and education concerning the perception of sounds by humans, animals, or machines. It is a heavily interdisciplinary field at the crossroad between acoustics, neuroscience, and psychology. It is often related to one or many of these other fields: psychophysics, psychoacoustics, audiology, physiology, otorhinolaryngology, speech science, automatic speech recognition, music psychology, linguistics, and psycholinguistics.
Christian Lorenzi is Professor of Experimental Psychology at École Normale Supérieure in Paris, France, where he has served as Director of the Department of Cognitive Studies and as Director of Scientific Studies. Lorenzi works on auditory perception.
Deniz Başkent is a Turkish-born Dutch auditory scientist who works on auditory perception. As of 2018, she is Professor of Audiology at the University Medical Center Groningen, Netherlands.
Quentin Summerfield is a British psychologist specialising in hearing. He joined the Medical Research Council Institute of Hearing Research in 1977 and served as its deputy director from 1993 to 2004, before moving to a chair in psychology at the University of York. He served as head of the Psychology department from 2011 to 2017 and retired in 2018, becoming an emeritus professor. From 2013 to 2018, he was a member of the University of York's Finance & Policy Committee, and from 2015 to 2018 a member of the university's governing body, the Council.
Richard Charles Dowell is an Australian audiologist, academic and researcher. He holds the Graeme Clark Chair in Audiology and Speech Science at University of Melbourne. He is a former director of Audiological Services at Royal Victorian Eye and Ear Hospital.
Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.