Robert V. Shannon

Nationality: American
Education: PhD, University of California, San Diego, CA, USA
Scientific career
Fields: Biomedical engineering, auditory sciences
Institutions: House Ear Institute, Los Angeles, CA, USA; University of Southern California, Los Angeles, CA, USA
Thesis: "Suppression of Forward Masking" (1975)
Doctoral advisor: David M. Green
Doctoral students: Deniz Başkent

Robert V. Shannon is Research Professor of Otolaryngology-Head & Neck Surgery and Affiliated Research Professor of Biomedical Engineering at the University of Southern California, Los Angeles, CA, USA. [1] Shannon investigates the basic mechanisms underlying auditory neural processing in users of cochlear implants, auditory brainstem implants, and auditory midbrain implants.

Biography

Shannon received his B.A. degrees in Mathematics and Psychology from the University of Iowa, Iowa City, Iowa, in 1971. After obtaining his PhD in Psychology at the University of California, San Diego, CA, USA, in 1975, he completed two postdocs: one at the Institute for Perception, Nederlandse Organisatie voor Toegepast Natuurwetenschappelijk Onderzoek (TNO; English: Netherlands Organisation for Applied Scientific Research), Soesterberg, Netherlands, and one at the University of California, Irvine, CA, USA. After faculty positions at the University of California, San Francisco, CA, USA, and Boys Town National Research Hospital (BTNRH), he served as director of the Department of Auditory Implant and Perception Research at the House Ear Institute, Los Angeles, CA, USA, with an affiliated research professor position in Biomedical Engineering at the University of Southern California, Los Angeles, CA, USA.

Shannon was a founding organizer of the Conference on Implantable Auditory Prostheses (CIAP). [2] In 1996, he was elected Fellow of the Acoustical Society of America "for contributions in the psychoelectric study of hearing." [3] In 2007, he served as President of the Association for Research in Otolaryngology (ARO), [4] and in 2011 he received the ARO Award of Merit. [5]

As of 2018, Shannon serves as a member of the Hearing4All Scientific Advisory Board [6] and of the Board of Directors of the Hearing Health Foundation. [7]

Research

Shannon has been one of the earliest and most prominent researchers studying the psychophysics of electrical stimulation in cochlear-implant users, [8] [9] laying the foundations for understanding the fundamental limitations and capabilities of sound perception with a cochlear implant. He later expanded his research to include auditory brainstem implants and auditory midbrain implants. [10]

A key early contribution was the development of a research interface that allowed researchers to achieve independent stimulus control in electric stimulation (now referred to as direct stimulation). [11] In 1995, Shannon and colleagues published a study on the perception of speech whose temporal envelope and fine structure had been manipulated. More specifically, using noiseband vocoding, the inherent degradations of cochlear-implant speech transmission were (loosely) mimicked by removing most temporal fine structure and limiting envelope information to a small number of spectral channels. [12] This paper demonstrated the importance of envelope information for speech perception and provided an initial explanation of why, in quiet listening environments, cochlear-implant users can perceive speech well despite those degradations. In a later study, Shannon and colleagues provided early evidence that one of the main factors limiting speech perception by cochlear-implant users is reduced spectral resolution. [13] This paper showed that the limitation in spectral resolution was not caused by the limited number of electrodes, each of which delivers distinct spectral information of speech. Instead, it appeared to be caused by an internal factor, namely channel interactions, a consequence of direct electric stimulation of the auditory nerve.
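
The noiseband vocoding approach of the 1995 study can be illustrated with a minimal sketch (the function name, filter orders, and parameter values below are illustrative assumptions, not the original study's implementation): the signal is divided into a few band-pass channels, each channel's slowly varying temporal envelope is extracted, and the envelopes modulate band-limited noise carriers, discarding the temporal fine structure.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=4000.0, env_cut=160.0):
    """Noise-band vocoder sketch in the spirit of Shannon et al. (1995):
    split into log-spaced bands, extract each band's envelope, and use it
    to modulate noise restricted to the same band."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        # Envelope: rectify, then low-pass (removes temporal fine structure)
        env_sos = butter(2, env_cut, btype="lowpass", fs=fs, output="sos")
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        # Noise carrier limited to the same band, modulated by the envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier
    return out
```

Speech processed this way remains largely intelligible in quiet with as few as four channels, which was the study's central finding.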

Shannon has supervised a number of PhD students and postdocs, whose projects led to a comprehensive exploration of the effects of front-end processing, cochlear-implant signal-processing parameters, and electrode placement on sound and speech perception. [14] [15] [16] [17] [18] [19] [20]

Related Research Articles

A cochlear implant (CI) is a surgically implanted neuroprosthesis that provides a person who has moderate-to-profound sensorineural hearing loss with sound perception. With the help of therapy, cochlear implants may allow for improved speech understanding in both quiet and noisy environments. A CI bypasses acoustic hearing by direct electrical stimulation of the auditory nerve. Through everyday listening and auditory training, cochlear implants allow both children and adults to learn to interpret those signals as speech and sound.

Auditory neuropathy (AN) is a hearing disorder in which the outer hair cells of the cochlea are present and functional, but sound information is not transmitted sufficiently by the auditory nerve to the brain. The cause may be dysfunction at the level of the inner hair cells of the cochlea or of the spiral ganglion neurons. Hearing thresholds with AN can range from normal sensitivity to profound hearing loss.

Unilateral hearing loss (UHL) is a type of hearing impairment where there is normal hearing in one ear and impaired hearing in the other ear.

The Greenwood function correlates the position of the hair cells in the inner ear to the frequencies that stimulate their corresponding auditory neurons. Empirically derived in 1961 by Donald D. Greenwood, the relationship has been shown to hold across mammalian species when scaled to the appropriate cochlear spiral lengths and audible frequency ranges. Moreover, the Greenwood function provides the mathematical basis for cochlear implant surgical electrode array placement within the cochlea.
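
The human Greenwood map is commonly written as F = A(10^(ax) − k). A minimal sketch using the constants commonly cited for Greenwood's 1990 human fit (A = 165.4, a = 2.1, k = 0.88, with x the relative distance from the apex; the function name is illustrative):

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative basilar-membrane
    position x (0 = apex, 1 = base), per Greenwood's human fit."""
    return A * (10 ** (a * x) - k)
```

With these constants the map spans roughly 20 Hz at the apex to about 20 kHz at the base, matching the human audible range.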

Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.

The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA), brainstem auditory evoked potentials (BAEPs), or brainstem auditory evoked responses (BAERs), is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex-positive waves, of which waves I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.

Binaural fusion or binaural integration is a cognitive process that involves the combination of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.

The ASA Silver Medal is an award presented by the Acoustical Society of America to individuals, without age limitation, for contributions to the advancement of science, engineering, or human welfare through the application of acoustic principles or through research accomplishments in acoustics. The medal is awarded in a number of categories depending on the technical committee responsible for making the nomination.

Electric acoustic stimulation (EAS) is the use of a hearing aid and a cochlear implant technology together in the same ear. EAS is intended for people with high-frequency hearing loss, who can hear low-pitched sounds but not high-pitched ones. The hearing aid acoustically amplifies low-frequency sounds, while the cochlear implant electrically stimulates the middle- and high-frequency sounds. The inner ear then processes the acoustic and electric stimuli simultaneously, to give the patient the perception of sound.
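
The acoustic/electric division of labor in EAS can be sketched as a simple crossover split (the function name, crossover frequency, and filter order are illustrative assumptions; clinical fittings are individualized):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def eas_split(signal, fs, crossover=500.0):
    """Split a signal at a crossover frequency, as in an EAS fitting:
    the low band is amplified acoustically by the hearing aid, while the
    high band is routed to the cochlear implant's envelope processing."""
    low_sos = butter(4, crossover, btype="lowpass", fs=fs, output="sos")
    high_sos = butter(4, crossover, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(low_sos, signal), sosfiltfilt(high_sos, signal)
```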

An auditory brainstem implant (ABI) is a surgically implanted electronic device that provides a sense of sound to a person who is profoundly deaf due to retrocochlear hearing impairment. In Europe, ABIs have been used in children and adults, and in patients with neurofibromatosis type II.

The phonemic restoration effect is a perceptual phenomenon in which, under certain conditions, sounds actually missing from a speech signal can be restored by the brain and may appear to be heard. The effect occurs when missing phonemes in an auditory signal are replaced with a noise that would have the physical properties to mask those phonemes, creating an ambiguity. Given this ambiguity, the brain tends to fill in the absent phonemes. The effect can be so strong that some listeners may not even notice that phonemes are missing. It is commonly observed in conversation with heavy background noise, which makes it difficult to hear every phoneme being spoken. Different factors can change the strength of the effect, including how rich the context or linguistic cues in the speech are, as well as the listener's state, such as their hearing status or age.

Monita Chatterjee is an auditory scientist and the Director of the Auditory Prostheses & Perception Laboratory at Boys Town National Research Hospital. She investigates the basic mechanisms underlying auditory processing by cochlear implant listeners.

Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.
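
For a narrow-band signal, ENV and TFS can be separated via the analytic signal; a minimal sketch (the Hilbert-based decomposition shown here is one standard definition, not the only one, and the function name is illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def env_tfs(band_signal):
    """Decompose a narrow-band signal into temporal envelope (ENV)
    and temporal fine structure (TFS) via the analytic signal."""
    analytic = hilbert(band_signal)
    env = np.abs(analytic)            # slow amplitude modulation
    tfs = np.cos(np.angle(analytic))  # rapid oscillation, unit amplitude
    return env, tfs
```

By construction the product of the two components recovers the original signal, since env × tfs equals the real part of the analytic signal.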

Brian C.J. Moore FMedSci, FRS is an Emeritus Professor of Auditory Perception in the University of Cambridge and an Emeritus Fellow of Wolfson College, Cambridge. His research focuses on psychoacoustics, audiology, and the development and assessment of hearing aids.

Auditory science or hearing science is a field of research and education concerning the perception of sounds by humans, animals, or machines. It is a heavily interdisciplinary field at the crossroad between acoustics, neuroscience, and psychology. It is often related to one or many of these other fields: psychophysics, psychoacoustics, audiology, physiology, otorhinolaryngology, speech science, automatic speech recognition, music psychology, linguistics, and psycholinguistics.

Christian Lorenzi is Professor of Experimental Psychology at the École Normale Supérieure in Paris, France, where he has served as Director of the Department of Cognitive Studies and Director of Scientific Studies. Lorenzi works on auditory perception.

Deniz Başkent is a Turkish-born Dutch auditory scientist who works on auditory perception. As of 2018, she is Professor of Audiology at the University Medical Center Groningen, Netherlands.

Quentin Summerfield is a British psychologist specialising in hearing. He joined the Medical Research Council Institute of Hearing Research in 1977 and served as its deputy director from 1993 to 2004, before moving to a chair in psychology at the University of York. He served as head of the Psychology department from 2011 to 2017 and retired in 2018, becoming an emeritus professor. From 2013 to 2018, he was a member of the University of York's Finance & Policy Committee, and from 2015 to 2018 a member of the university's governing body, the Council.

Richard Charles Dowell is an Australian audiologist, academic, and researcher. He holds the Graeme Clark Chair in Audiology and Speech Science at the University of Melbourne. He is a former director of Audiological Services at the Royal Victorian Eye and Ear Hospital.

Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.

References

  1. "Robert Shannon | Keck School of Medicine of USC". keck.usc.edu. Retrieved 17 June 2018.
  2. "Organization Conference on Implantable Auditory Prostheses". www.ciaphome.org. Retrieved 17 June 2018.
  3. "Acoustical News—USA". The Journal of the Acoustical Society of America. 100 (4): 1915–1922. October 1996. doi:10.1121/1.2336982.
  4. "Past Presidents - Association for Research in Otolaryngology". www.aro.org. Retrieved 17 June 2018.
  5. "Award of Merit Recipients - Association for Research in Otolaryngology". www.aro.org. Retrieved 17 June 2018.
  6. "Hearing4all - Scientific Advisory Board". hearing4all.eu. Retrieved 17 June 2018.
  7. "Leadership". Hearing Health Foundation. Retrieved 17 June 2018.
  8. Shannon, Robert V. (August 1983). "Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics". Hearing Research. 11 (2): 157–189. doi:10.1016/0378-5955(83)90077-1.
  9. Shannon, Robert V. (October 1983). "Multichannel electrical stimulation of the auditory nerve in man. II. Channel interaction". Hearing Research. 12 (1): 1–16. doi:10.1016/0378-5955(83)90115-6.
  10. "Robert V Shannon - Google Scholar Citations". scholar.google.com. Retrieved 17 June 2018.
  11. Shannon, Robert V.; Adams, Doug D.; Ferrel, Roger L.; Palumbo, Robert L.; Grandgenett, Michael (February 1990). "A computer interface for psychophysical and speech research with the Nucleus cochlear implant". The Journal of the Acoustical Society of America. 87 (2): 905–907. doi:10.1121/1.398902.
  12. Shannon, R. V.; Zeng, F.-G.; Kamath, V.; Wygonski, J.; Ekelid, M. (October 1995). "Speech Recognition with Primarily Temporal Cues". Science. 270 (5234): 303–304. doi:10.1126/science.270.5234.303. PMID 7569981.
  13. Friesen, Lendra M.; Shannon, Robert V.; Baskent, Deniz; Wang, Xiaosong (August 2001). "Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants". The Journal of the Acoustical Society of America. 110 (2): 1150–1163. doi:10.1121/1.1381538. PMID 11519582.
  14. Fu, Qian-Jie; Shannon, Robert V. (November 1998). "Effects of amplitude nonlinearity on phoneme recognition by cochlear implant users and normal-hearing listeners". The Journal of the Acoustical Society of America. 104 (5): 2570–2577. doi:10.1121/1.423912.
  15. Fu, Qian-Jie; Shannon, Robert V.; Wang, Xiaosong (December 1998). "Effects of noise and spectral resolution on vowel and consonant recognition: Acoustic and electric hearing". The Journal of the Acoustical Society of America. 104 (6): 3586–3596. doi:10.1121/1.423941.
  16. Fu, Qian-Jie; Shannon, Robert V. (January 2000). "Effect of stimulation rate on phoneme recognition by Nucleus-22 cochlear implant listeners". The Journal of the Acoustical Society of America. 107 (1): 589–597. doi:10.1121/1.428325.
  17. Chatterjee, Monita; Shannon, Robert V. (May 1998). "Forward masked excitation patterns in multielectrode electrical stimulation". The Journal of the Acoustical Society of America. 103 (5): 2565–2572. doi:10.1121/1.422777. PMID 9604350.
  18. Başkent, Deniz; Shannon, Robert V. (November 2004). "Frequency-place compression and expansion in cochlear implant listeners". The Journal of the Acoustical Society of America. 116 (5): 3130–3140. doi:10.1121/1.1804627.
  19. Başkent, Deniz; Shannon, Robert V. (March 2005). "Interactions between cochlear implant electrode insertion depth and frequency-place mapping". The Journal of the Acoustical Society of America. 117 (3): 1405–1416. doi:10.1121/1.1856273.
  20. Srinivasan, Arthi G.; Padilla, Monica; Shannon, Robert V.; Landsberger, David M. (May 2013). "Improving speech perception in noise with current focusing in cochlear implant users". Hearing Research. 299: 29–36. doi:10.1016/j.heares.2013.02.004. PMC 3639477. PMID 23467170.