Eric Knudsen is a professor of neurobiology at Stanford University. He is best known for his discovery, with Masakazu Konishi, of a two-dimensional brain map of sound location in the barn owl (Tyto alba). His work has contributed to the understanding of information processing in the barn owl's auditory system, the plasticity of the auditory space map in developing and adult owls, the influence of auditory and visual experience on the space map, and, more recently, mechanisms of attention and learning. He is a recipient of the Lashley Award,[1] the Gruber Prize in Neuroscience,[2] and the Newcomb Cleveland Prize,[3] and is a member of the National Academy of Sciences.
Knudsen attended the University of California, Santa Barbara, earning a B.A. in zoology followed by an M.A. in neuroscience. He earned a Ph.D. at the University of California, San Diego in 1976, working under Theodore H. Bullock. Knudsen was a postdoctoral fellow with Konishi at the California Institute of Technology from 1976 to 1979. He has been a professor at the Stanford University School of Medicine since 1988 and was chair of the School of Medicine's Department of Neuroscience from 2001 to 2005.
In 1978, Knudsen and Konishi reported the discovery of an auditory map of space in the midbrain of the barn owl. The discovery was groundbreaking because it revealed the first brain map of space that is computed by neural processing rather than projected point-for-point from a receptor surface such as the skin or retina. The map was found in the owl's midbrain, in the lateral and anterior mesencephalicus lateralis dorsalis (MLD), a structure now referred to as the inferior colliculus. The map was two-dimensional, with units arranged spatially to represent both the vertical and horizontal location of a sound. Knudsen and Konishi found that units in this structure respond preferentially to sounds originating from a particular region of space.[4]
In the 1978 paper, elevation and azimuth (location in the horizontal plane) were shown to be the two coordinates of the map. Using a speaker mounted on a rotatable hemispherical track, Knudsen and Konishi presented owls with auditory stimuli from various locations in space and recorded the resulting neuronal activity. They found that neurons in this part of the MLD were organized according to the location of their receptive fields, with azimuth varying along the horizontal dimension of the space map and elevation along the vertical dimension.
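The organizing principle of such a map, that position in the tissue mirrors position in space, can be illustrated with a toy model. The sketch below assumes a small grid of units, each tuned to one preferred direction; the grid extents and spacing are illustrative choices, not values from the 1978 study.

```python
import numpy as np

# Toy two-dimensional space map: each unit has a preferred (azimuth,
# elevation), and neighboring units prefer neighboring directions.
azimuths = np.linspace(-90, 90, 19)    # degrees, horizontal axis of the map
elevations = np.linspace(-45, 45, 10)  # degrees, vertical axis of the map
az_grid, el_grid = np.meshgrid(azimuths, elevations)

def responding_unit(source_az, source_el):
    """Return map coordinates of the unit tuned closest to the source."""
    dist = np.hypot(az_grid - source_az, el_grid - source_el)
    return np.unravel_index(np.argmin(dist), dist.shape)

# A source 30 degrees to the right and 10 degrees down excites one site;
# moving the speaker moves the active site smoothly across the map.
print(responding_unit(30.0, -10.0))
```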
Knudsen followed this discovery with research into specific sound-localization mechanisms. The two main auditory cues the barn owl uses to localize sound are interaural time difference (ITD) and interaural intensity difference (IID). The owl's ears are asymmetric, with the opening of the right ear directed higher than that of the left. This asymmetry allows the barn owl to determine the elevation of a sound by comparing sound levels between its two ears. Interaural time differences provide the owl with information about a sound's azimuth: sound reaches the ear closer to the source before reaching the farther ear, and this time difference can be detected and interpreted as an azimuthal direction.[5] At low frequencies, the wavelength of a sound is wider than the owl's facial ruff, and the ruff does not affect detection of azimuth. At high frequencies, the ruff reflects sound, heightening sensitivity to elevation. With wide-band noise containing both high and low frequencies, the owl can therefore use the interaural spectrum difference to obtain information about both azimuth and elevation. In 1979, Knudsen and Konishi showed that the barn owl uses this interaural spectrum information in sound localization. They presented owls with both wide-band noise and pure tones. The birds were able to locate pure tones, since they could still draw on IID and ITD, but their error rate was much lower when localizing wide-band noise, indicating that they use interaural spectrum differences to improve their accuracy.[6]
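How these two binaural cues can be read out of a pair of ear signals is easy to sketch in code. This is an illustrative signal-processing example, not a reconstruction of Knudsen's analysis; the function names and the cross-correlation approach are assumptions of the sketch.

```python
import numpy as np

def itd_seconds(left, right, fs):
    """Interaural time difference via the cross-correlation peak (sketch).
    A positive value means the left signal lags, i.e., the sound reached
    the right ear first."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag of the peak, in samples
    return lag / fs

def iid_db(left, right):
    """Interaural intensity difference as an RMS level ratio in decibels."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left) / rms(right))
```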
Together with John Olsen and Steven Esterly, Knudsen studied the pattern of responses to IID and ITD in the space map. They presented owls with sound stimuli while recording from the optic tectum. Consistent with the earlier findings on the organization of the optic tectum in terms of elevation and azimuth, they found that ITD varied primarily along the horizontal axis and IID along the vertical axis. However, the elevation-azimuth map does not line up perfectly with the ITD/IID map: azimuth and ITD do not have a strictly linear relationship, and IID is used not only to determine the elevation of the sound source but also, to a lesser degree, its azimuth. Moreover, IID and ITD are not the only cues the owl uses to determine sound location, as Knudsen's research on the effect of bandwidth on localization accuracy had shown. Information from multiple types of cues is combined to create the map of elevation and azimuth in the optic tectum.[7]
At Stanford, Knudsen studied the plasticity of the auditory space map, discovering that the associations between the map's auditory cue values and the locations in space they represent can be altered by both auditory and visual experience.
Knudsen altered owls' auditory cues by plugging one ear or removing the ruff feathers and preaural flaps. Initially, this caused the birds to misjudge sound-source locations, since the cues normally associated with each location in space had been changed. Over time, however, the map shifted to restore a normal auditory space map, aligned with the visual space map, despite the abnormal cues. New associations formed between the abnormal cue values and the spatial locations they now represented, adjusting the map to translate the cues the bird was receiving into an accurate representation of its environment. This adjustment happens most rapidly and extensively in young birds. Yet the map never perfectly reflects abnormal experience, even when cues are altered so early that the bird never experiences normal cues, indicating that there is some innate "programming" of the map to reflect typical sensory experience.[8]
In 1994, Knudsen disproved the idea that the auditory space map is no longer plastic in the adult bird and that its plasticity is confined to a critical period. Earlier work had indicated that experience could alter the map only during a window in development, and that once this window of plasticity had passed, subsequent changes would not occur. In work with Steven Esterly and John Olsen, he showed that adult animals retain plasticity, although to a lesser degree than younger animals. The adult auditory space map is more readily altered if the bird was exposed to abnormal stimuli earlier in life, during a sensitive period. This indicates that the owl's brain forms functional connections during early abnormal experience that can be reactivated when the abnormal stimuli return.[9]
Knudsen's work has shown that vision is the dominant sense in calibrating the auditory space map. Binocular displacing prisms were used to shift owls' visual world, which produced a corresponding shift in the sound map. The disparity between the owls' visual and auditory experience was reconciled by reinterpreting auditory cues to match visual experience, even though the visual information was incorrect and the auditory information was not. Even when other sensory information indicates to the owl that its visual input is misleading, that input exerts an apparently innate dominance over the other senses. In owls raised with displacing prisms, this persistent reliance on inaccurate information is particularly apparent: "Even though interaction with the environment from the beginning of life has proven to owls that their visual perception of stimulus source location is inaccurate, they nevertheless use vision to calibrate sound localization, which in this case leads to a gross error in sound localization".[10] This dominance has limitations, however; in 1985, Eric Knudsen and Phyllis Knudsen conducted a study showing that vision can alter the magnitude but not the sign of an auditory error.[11]
While monaural occlusion and visual displacement both alter the associations between sensory cues and corresponding spatial locations, there are significant differences in the mechanisms at work: "The task under [the conditions of monaural occlusion] is to use vision […] to assign abnormal combinations of cue values to appropriate locations in space. In contrast, prisms cause a relatively coherent displacement of visual space while leaving auditory cues essentially unchanged. The task under these conditions is to assign normal ranges and combinations of cue values to abnormal locations in space".[12]
A head-related transfer function (HRTF), also known as an anatomical transfer function (ATF), is a response that characterizes how an ear receives a sound from a point in space. As sound reaches the listener, the size and shape of the head, ears, ear canal, and nasal and oral cavities, as well as the density of the head, all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz, with a primary resonance of +17 dB at 2,700 Hz. But the response curve is more complex than a single bump: it affects a broad frequency spectrum and varies significantly from person to person.
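In practice an HRTF is often handled as a measured head-related impulse response (HRIR). The sketch below inspects one such response's magnitude spectrum to find its main resonance; the file name is a hypothetical stand-in for real measurement data.

```python
import numpy as np

fs = 48000                       # sampling rate of the measurement
hrir = np.load("hrir_left.npy")  # hypothetical file: one measured HRIR

# Magnitude response of the HRTF in dB across frequency.
spectrum = np.fft.rfft(hrir)
freqs = np.fft.rfftfreq(len(hrir), d=1.0 / fs)
gain_db = 20.0 * np.log10(np.abs(spectrum) + 1e-12)

# Locate the primary resonance (per the text above, typically a boost
# of roughly +17 dB near 2,700 Hz).
peak = np.argmax(gain_db)
print(f"primary resonance near {freqs[peak]:.0f} Hz ({gain_db[peak]:+.1f} dB)")
```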
Neuroethology is the evolutionary and comparative approach to the study of animal behavior and its underlying mechanistic control by the nervous system. It is an interdisciplinary science that combines both neuroscience and ethology. A central theme of neuroethology, which differentiates it from other branches of neuroscience, is its focus on behaviors that have been favored by natural selection rather than on behaviors that are specific to a particular disease state or laboratory experiment.
Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance.
The superior colliculus is a structure lying on the roof of the mammalian midbrain. In non-mammalian vertebrates, the homologous structure is known as the optic tectum, or optic lobe. The adjective form tectal is commonly used for both structures.
Multisensory integration, also known as multimodal integration, is the study of how information from the different sensory modalities may be integrated by the nervous system. A coherent representation of objects combining modalities enables animals to have meaningful perceptual experiences. Indeed, multisensory integration is central to adaptive behavior because it allows animals to perceive a world of coherent perceptual entities. Multisensory integration also deals with how different sensory modalities interact with one another and alter each other's processing.
Virtual acoustic space (VAS), also known as virtual auditory space, is a technique in which sounds presented over headphones appear to originate from any desired direction in space. The illusion of a virtual sound source outside the listener's head is created.
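VAS is typically produced by convolving a mono sound with the pair of head-related impulse responses measured for the desired direction. A minimal sketch, assuming a measured HRIR pair is available (the function name is illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def render_virtual_source(mono, hrir_left, hrir_right):
    """Binaural rendering sketch: convolving with a direction's left/right
    HRIRs imposes that direction's ITD, IID, and spectral cues, so the
    headphone-presented sound appears to come from outside the head."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)])
```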
The superior olivary complex (SOC) or superior olive is a collection of brainstem nuclei that functions in multiple aspects of hearing and is an important component of the ascending and descending auditory pathways of the auditory system. The SOC is intimately related to the trapezoid body: most of the cell groups of the SOC are dorsal to this axon bundle while a number of cell groups are embedded in the trapezoid body. Overall, the SOC displays a significant interspecies variation, being largest in bats and rodents and smaller in primates.
The interaural time difference (ITD) is the difference in arrival time of a sound between the two ears. It is important in sound localization, as it provides a cue to the direction or angle of the sound source relative to the head. If a signal arrives from one side, it must travel farther to reach the far ear than the near ear. This path-length difference results in a time difference between the sound's arrivals at the two ears, which is detected and aids in identifying the direction of the sound source.
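A standard back-of-the-envelope model of this path-length difference is Woodworth's spherical-head approximation, sketched below; the formula and parameter values are textbook defaults, not figures from any study cited here.

```python
import numpy as np

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head ITD approximation for a distant source.
    Azimuth is measured from straight ahead; 0.0875 m is a typical adult
    head radius and 343 m/s the speed of sound in air."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# Maximum ITD occurs with the source directly to one side:
print(f"{itd_woodworth(90.0) * 1e6:.0f} microseconds")  # roughly 660 us
```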
A sensory cue is a statistic or signal, extractable from the sensory input by a perceiver, that indicates the state of some property of the world that the perceiver is interested in.
Binaural fusion or binaural integration is a cognitive process that involves the combination of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.
Coincidence detection in the context of neurobiology is a process by which a neuron or a neural circuit can encode information by detecting the occurrence of temporally close but spatially distributed input signals. Coincidence detectors influence neuronal information processing by reducing temporal jitter, reducing spontaneous activity, and forming associations between separate neural events. This concept has led to a greater understanding of neural processes and the formation of computational maps in the brain.
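The classic circuit-level picture of ITD coincidence detection is the Jeffress delay-line model: an array of detectors, each fed through a different internal delay, in which the detector whose delay cancels the external ITD fires most. Below is a minimal sketch with binary spike trains; the delays, spike statistics, and helper name are illustrative.

```python
import numpy as np

def coincidence_array(left_spikes, right_spikes, max_delay=10):
    """Jeffress-style sketch: count coincident spikes for each internal
    delay applied to the left input; the peak encodes the external ITD."""
    return np.array([np.sum(np.roll(left_spikes, d) & right_spikes)
                     for d in range(-max_delay, max_delay + 1)])

rng = np.random.default_rng(1)
right = (rng.random(200) < 0.1).astype(int)  # sparse binary spike train
left = np.roll(right, 3)  # sound reached the right ear 3 samples earlier

responses = coincidence_array(left, right)
print(f"detected lag: {np.argmax(responses) - 10} samples")  # prints -3
```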
Ambiophonics is a method in the public domain that employs digital signal processing (DSP) and two loudspeakers directly in front of the listener in order to improve reproduction of stereophonic and 5.1 surround sound for music, movies, and games in home theaters, gaming PCs, workstations, or studio monitoring applications. First implemented using mechanical means in 1986, today a number of hardware and VST plug-in makers offer Ambiophonic DSP. Ambiophonics eliminates crosstalk inherent in the conventional stereo triangle speaker placement, and thereby generates a speaker-binaural soundfield that emulates headphone-binaural sound, and creates for the listener improved perception of reality of recorded auditory scenes. A second speaker pair can be added in back in order to enable 360° surround sound reproduction. Additional surround speakers may be used for hall ambience, including height, if desired.
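The core crosstalk-elimination idea can be sketched as a recursive cross-feed: each output channel subtracts a delayed, attenuated copy of the opposite channel, cancelling the wave from each speaker that would otherwise reach the far ear. The delay and attenuation values below are illustrative placeholders, not a tuned Ambiophonic implementation.

```python
import numpy as np

def crosstalk_cancel(left, right, delay_samples=3, atten=0.85):
    """Recursive crosstalk cancellation sketch: each output sample
    subtracts an attenuated, delayed copy of the opposite *output*
    channel, which is what makes the cancellation recursive."""
    out_l = np.zeros(len(left))
    out_r = np.zeros(len(right))
    for i in range(len(left)):
        fb_r = out_r[i - delay_samples] if i >= delay_samples else 0.0
        fb_l = out_l[i - delay_samples] if i >= delay_samples else 0.0
        out_l[i] = left[i] - atten * fb_r
        out_r[i] = right[i] - atten * fb_l
    return out_l, out_r
```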
Sensory maps are areas of the brain which respond to sensory stimulation, and are spatially organized according to some feature of the sensory stimulation. In some cases the sensory map is simply a topographic representation of a sensory surface such as the skin, cochlea, or retina. In other cases it represents other stimulus properties resulting from neuronal computation and is generally ordered in a manner that reflects the periphery. An example is the somatosensory map which is a projection of the skin's surface in the brain that arranges the processing of tactile sensation. This type of somatotopic map is the most common, possibly because it allows for physically neighboring areas of the brain to react to physically similar stimuli in the periphery or because it allows for greater motor control.
Spatial hearing loss is a form of hearing impairment marked by an inability to use spatial cues to determine where a sound originates in space. This in turn affects the ability to understand speech in the presence of background noise.
Sensory maps and brain development is a concept in neuroethology that links the development of the brain over an animal's lifetime with the spatial organization and patterning of the animal's sensory processing. Sensory maps, the organized representations of the sense organs in the brain, are fundamental to how sensory processing is organized, although they are not always exact topographic projections of the senses. That the brain is organized into sensory maps has wide implications for processing; for example, lateral inhibition and coding for space are byproducts of mapping. The developmental process of an organism guides sensory map formation, though the details are not yet fully understood. The development of sensory maps requires learning, long-term potentiation, experience-dependent plasticity, and innate characteristics. There is significant evidence for experience-dependent development and maintenance of sensory maps, and growing evidence on the molecular, synaptic, and computational bases of experience-dependent development.
3D sound localization refers to acoustic technology used to locate the source of a sound in three-dimensional space. The source location is usually determined from the direction of the incoming sound waves and the distance between the source and the sensors. It involves the design and arrangement of the sensors as well as signal-processing techniques.
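A common building block of such systems is estimating the time difference of arrival (TDOA) between a pair of microphones and converting it to an angle. The far-field sketch below handles one pair; several pairs arranged in a 3D array would be combined to recover a full 3D direction. The function name is an illustrative assumption.

```python
import numpy as np

def doa_from_pair(sig_a, sig_b, fs, mic_spacing_m, c=343.0):
    """Direction-of-arrival sketch for one microphone pair: the
    cross-correlation peak gives the TDOA, and far-field geometry
    converts it to an angle off the pair's broadside axis."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    tdoa = (np.argmax(corr) - (len(sig_b) - 1)) / fs
    sin_theta = np.clip(c * tdoa / mic_spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```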
Perceptual-based 3D sound localization is the application of knowledge of the human auditory system to develop 3D sound localization technology.
Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not to its heat or smell; indeed, sound cues are both necessary and sufficient for owls to localize mice from a distant perch. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.
Binaural unmasking is a phenomenon of auditory perception discovered by Ira Hirsh. In binaural unmasking, the brain combines information from the two ears to improve signal detection and identification in noise. The phenomenon is most commonly observed when the interaural phase of the signal differs from the interaural phase of the noise. When such a difference is present, the masking threshold improves compared with a reference situation in which the interaural phases are the same or the stimulus is presented monaurally; those two cases usually give very similar thresholds. The size of the improvement is known as the "binaural masking level difference" (BMLD), or simply the "masking level difference".
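The BMLD itself is just the arithmetic difference between two measured thresholds. The numbers below are illustrative, roughly the size of effect reported for low-frequency tones in noise, not data from any specific study.

```python
# Reference condition: signal and noise interaurally identical (N0S0).
threshold_n0s0_db = 62.0   # illustrative measured detection threshold
# Test condition: signal phase inverted at one ear (N0Spi).
threshold_n0spi_db = 47.0  # illustrative measured detection threshold

bmld_db = threshold_n0s0_db - threshold_n0spi_db
print(f"binaural masking level difference: {bmld_db:.0f} dB")
```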
Ilana B. Witten is an American neuroscientist and professor of psychology and neuroscience at Princeton University. Witten studies the mesolimbic pathway, with a focus on the striatal neural circuit mechanisms driving reward learning and decision making.