| David Poeppel | |
| --- | --- |
| Born | 1964 (age 58–59), Freiburg, West Germany |
| Alma mater | Massachusetts Institute of Technology |
| Occupation | Professor of Neuroscience |
| Parent | Ernst Pöppel |
David Poeppel (born 1964 in Freiburg) [1] is Professor of Psychology and Neural Science at New York University (NYU). [2] From 2014 until the end of 2021, he was Director of the Department of Neuroscience at the Max Planck Institute for Empirical Aesthetics (MPIEA). [3] In 2019, he co-founded the Center for Language, Music and Emotion (CLaME), [4] an international joint research center co-sponsored by the Max Planck Society and New York University. Since 2021, he has been the managing director of the Ernst Strüngmann Institute. [5]
Poeppel grew up in Munich, Germany; Cambridge, Massachusetts, USA; and Caracas, Venezuela. He received his Abitur from the Maximiliansgymnasium in Munich and obtained his bachelor's degree (1990) and doctorate (1995) from the Massachusetts Institute of Technology (MIT). He received training in functional brain imaging as a postdoctoral fellow at the School of Medicine of the University of California, San Francisco. From 2000 to 2008, Poeppel directed the Cognitive Neuroscience of Language Laboratory at the University of Maryland, College Park, where he was a professor of linguistics and biology. [6] He joined New York University [7] in 2009.
He was a fellow at the Wissenschaftskolleg zu Berlin and has been a guest professor at several institutions. He has received the DaimlerChrysler Berlin Prize of the American Academy in Berlin [8] and other honors.
He is married to the novelist Amy Poeppel and they have three sons, Alex, Andrew and Luke. His parents are Christiane Blohm and Dr. Ernst Pöppel. [9]
David Poeppel employs behavioral and cognitive neuroscience approaches to study the brain basis of auditory processing, speech perception, language comprehension, and sometimes music. The research in Poeppel's laboratory addresses questions such as: What are the cognitive and neuronal "parts lists" that form the basis for language processing, i.e., what are the fundamental constituents used in speech and language? How is sensory information transformed into the abstract representations that underlie language processing? What are the neural circuits that enable language processing? The research covers the range of questions ‘from vibrations in the ear to abstractions in the head.’
The major contributions of the Poeppel laboratory include the functional anatomic model of speech and language processing developed with Greg Hickok, [10] [11] [12] known as the dual stream model; work on lateralization in auditory processing, [13] [14] specifically a model known as asymmetric sampling in time; and experimental work on the role of neuronal oscillations in audition and speech perception. [15] [16] He also writes and lectures about methodological questions at the interdisciplinary boundary between cognitive science research and brain research. [17]
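As a purely schematic illustration of the "asymmetric sampling in time" idea, the sketch below analyzes a single synthetic signal with two different temporal window lengths. The window sizes (about 25 ms and 200 ms), the toy signal, and the energy measure are illustrative assumptions, not parameters taken from the published model.

```python
# Toy illustration: analyzing one signal at two temporal granularities,
# loosely inspired by the "asymmetric sampling in time" idea.
# Window lengths and the synthetic signal are illustrative assumptions.
import numpy as np

fs = 16000                                 # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)              # 2 s of synthetic signal
# Synthetic "speech-like" signal: fast 200 Hz carrier with a slow 4 Hz envelope.
signal = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)

def windowed_energy(x, fs, win_s):
    """Mean energy in consecutive non-overlapping windows of win_s seconds."""
    n = int(win_s * fs)
    trimmed = x[: len(x) // n * n]
    return (trimmed.reshape(-1, n) ** 2).mean(axis=1)

fast = windowed_energy(signal, fs, 0.025)  # short windows: fine temporal detail
slow = windowed_energy(signal, fs, 0.200)  # long windows: coarse envelope scale

print(len(fast), "short-window frames;", len(slow), "long-window frames")
```

The short windows track fast, segment-scale detail while the long windows track only the slow envelope, which is roughly the kind of complementary two-timescale analysis the model attributes to the two hemispheres.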
Functional anatomy of speech and language: The dual stream model
Hickok G, Poeppel D (2007). The cortical organization of speech perception. Nature Reviews Neuroscience 8: 393–402.
Lau E, Phillips C, Poeppel D (2008). A cortical network for semantics: (de)constructing the N400. Nature Reviews Neuroscience 9: 920–933.
Fundamental mechanisms of speech perception and language comprehension
Poeppel D, Assaneo F (2020). Speech rhythms and their neural foundation. Nature Reviews Neuroscience 21: 322–334.
Assaneo F, Ripolles P, Orpella J, Lin W, de Diego Balaguer R, Poeppel D (2019). Spontaneous synchronization to speech reveals neural mechanisms facilitating language learning. Nature Neuroscience 22: 627–632.
Ding N, Melloni L, Zhang H, Tian X, Poeppel D (2016). Cortical entrainment reflects hierarchical structure building in speech comprehension. Nature Neuroscience 19: 158–164.
Overath T, McDermott JH, Zarate JM, Poeppel D (2015). The cortical analysis of speech-specific temporal structure revealed by responses to sound quilts. Nature Neuroscience 18: 903–911.
Neural oscillations and their role in perception
Luo H, Poeppel D (2007). Phase Patterns of Neuronal Responses Reliably Discriminate Speech in Human Auditory Cortex. Neuron 54: 1001–1010.
Giraud AL, Poeppel D (2012). Cortical oscillations and speech processing: emerging computational principles and operations. Nature Neuroscience 15: 511–517.
Lateralization and its computational consequences for audition
Poeppel D (2003). The analysis of speech in different temporal integration windows: cerebral lateralization as ‘asymmetric sampling in time’. Speech Communication 41: 245–255.
Boemio A, Fromm S, Braun A, Poeppel D (2005). Hierarchical and asymmetric temporal sensitivity in human auditory cortices. Nature Neuroscience 8: 389–395.
Conceptual foundations of cognitive neuroscience
Poeppel D (2012). The maps problem and the mapping problem: Two challenges for a cognitive neuroscience of speech and language. Cognitive Neuropsychology 29: 34–55.
Krakauer J, Ghazanfar A, MacIver M, Gomez-Marin A, Poeppel D (2017). Neuroscience needs behavior: Correcting a reductionist bias. Neuron 93: 480–490.
2003–2004: Fellow, Wissenschaftskolleg zu Berlin [18]
2004: DaimlerChrysler Berlin Prize, [19] American Academy in Berlin
2007: Fellow, American Association for the Advancement of Science (AAAS)
Neurolinguistics is the study of neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. As an interdisciplinary field, neurolinguistics draws methods and theories from fields such as neuroscience, linguistics, cognitive science, communication disorders and neuropsychology. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories, using aphasiology, brain imaging, electrophysiology, and computer modeling.
The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs and the auditory parts of the sensory system.
The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as a possible role in language switching. It is located bilaterally, roughly at the upper sides of the temporal lobes – in humans, curving down and onto the medial surface, on the superior temporal plane, within the lateral sulcus, and comprising parts of the transverse temporal gyri and the superior temporal gyrus, including the planum polare and planum temporale.
The superior temporal gyrus (STG) is one of three gyri in the temporal lobe of the human brain, which is located laterally to the head, situated somewhat above the external ear.
A gamma wave or gamma rhythm is a pattern of neural oscillation in humans with a frequency between 25 and 140 Hz, the 40 Hz point being of particular interest. Gamma rhythms are correlated with large-scale brain network activity and cognitive phenomena such as working memory, attention, and perceptual grouping, and can be increased in amplitude via meditation or neurostimulation. Altered gamma activity has been observed in many mood and cognitive disorders such as Alzheimer's disease, epilepsy, and schizophrenia.
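As a hedged sketch of how a frequency band such as gamma is typically isolated in practice, the example below band-pass filters a simulated trace to the 25–140 Hz range mentioned above. The simulated signal, filter order, and sampling rate are illustrative assumptions.

```python
# Minimal sketch: isolating gamma-band (roughly 25-140 Hz, per the text) activity
# from a simulated trace with a band-pass filter. Not real data.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                   # sampling rate (Hz), assumed
t = np.arange(0, 5.0, 1 / fs)
rng = np.random.default_rng(0)
# Simulated trace: 10 Hz alpha + weaker 40 Hz gamma component + noise.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 40 * t)
       + 0.2 * rng.standard_normal(t.size))

# 4th-order Butterworth band-pass covering the gamma range.
b, a = butter(4, [25, 140], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, eeg)                 # zero-phase filtering

print("gamma-band power:", float(np.mean(gamma ** 2)))
```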
In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.
The language module or language faculty is a hypothetical structure in the human brain which is thought to contain innate capacities for language, originally posited by Noam Chomsky. There is ongoing research into brain modularity in the fields of cognitive science and neuroscience, although the current idea is much weaker than what was proposed by Chomsky and Jerry Fodor in the 1980s. In today's terminology, 'modularity' refers to specialisation: language processing is specialised in the brain to the extent that it occurs partially in different areas than other types of information processing such as visual input. The current view is, then, that language is neither compartmentalised nor based on general principles of processing. It is modular to the extent that it constitutes a specific cognitive skill or area in cognition.
Brainwave entrainment, also referred to as brainwave synchronization or neural entrainment, refers to the observation that brainwaves will naturally synchronize to the rhythm of periodic external stimuli, such as flickering lights, speech, music, or tactile stimuli.
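One common way to quantify such entrainment is a phase-locking measure between the stimulus rhythm and the recorded response. The sketch below is a minimal illustration under stated assumptions: synthetic signals, an assumed 4 Hz rhythm, and no pre-filtering of the response, which a real analysis would normally include.

```python
# Hedged sketch: phase-locking value (PLV) between a periodic stimulus and a
# simulated "neural" response at the stimulus rate. All signals are synthetic.
import numpy as np
from scipy.signal import hilbert

fs, rate = 500, 4.0                         # sampling rate (Hz), stimulus rhythm (Hz), assumed
t = np.arange(0, 10.0, 1 / fs)
rng = np.random.default_rng(1)

stimulus = np.sin(2 * np.pi * rate * t)
# Simulated response: entrained component at the stimulus rate plus noise.
# In practice the response would be band-pass filtered around `rate` first.
response = np.sin(2 * np.pi * rate * t + 0.3) + 0.5 * rng.standard_normal(t.size)

# Instantaneous phases via the analytic (Hilbert) signal.
phase_stim = np.angle(hilbert(stimulus))
phase_resp = np.angle(hilbert(response))

# PLV near 1 indicates consistent phase alignment (entrainment); near 0, none.
plv = np.abs(np.mean(np.exp(1j * (phase_stim - phase_resp))))
print(f"phase-locking value: {plv:.2f}")
```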
Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.
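The emergence of a macroscopic rhythm from interacting units can be illustrated with a toy Kuramoto-style model, chosen here as a generic textbook illustration rather than the mechanism described in any particular study: weakly coupled oscillators with slightly different natural frequencies pull one another into synchrony, and their summed activity then oscillates at a shared frequency.

```python
# Toy Kuramoto-style sketch (an illustrative assumption, not a specific study):
# coupled phase oscillators synchronize, yielding a macroscopic oscillation.
import numpy as np

rng = np.random.default_rng(2)
n, k, dt, steps = 100, 5.0, 0.001, 5000     # oscillators, coupling, time step (s), steps
omega = 2 * np.pi * rng.normal(10, 0.2, n)  # natural frequencies near 10 Hz (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)        # random initial phases

for _ in range(steps):
    mean_field = np.mean(np.exp(1j * theta))                       # order parameter r*exp(i*psi)
    pull = k * np.abs(mean_field) * np.sin(np.angle(mean_field) - theta)
    theta = theta + (omega + pull) * dt                            # Euler update

# |order parameter| near 1: phases have locked, so the summed (population) signal
# oscillates at the shared frequency instead of averaging out.
print("synchronization index:", round(float(np.abs(np.mean(np.exp(1j * theta)))), 2))
```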
The two-streams hypothesis is a model of the neural processing of vision as well as hearing. The hypothesis, given its initial characterisation in a paper by David Milner and Melvyn A. Goodale in 1992, argues that humans possess two distinct visual systems. Recently there seems to be evidence of two distinct auditory systems as well. As visual information exits the occipital lobe, and as sound leaves the phonological network, it follows two main pathways, or "streams". The ventral stream leads to the temporal lobe, which is involved with object and visual identification and recognition. The dorsal stream leads to the parietal lobe, which is involved with processing the object's spatial location relative to the viewer and with speech repetition.
In human neuroanatomy, brain asymmetry can refer to at least two quite distinct findings: neuroanatomical differences between the left and right hemispheres, and lateralized functional differences between them (lateralization of brain function).
Howard C. Nusbaum is a professor at the University of Chicago, United States, in the Department of Psychology and its College, and a steering committee member of the Neuroscience Institute. Nusbaum is an internationally recognized expert in cognitive psychology, speech science, and the newer field of social neuroscience. Nusbaum investigates the cognitive and neural mechanisms that mediate spoken language use, as well as language learning and the role of attention in speech perception. In addition, he investigates how we understand the meaning of music, and how cognitive and social-emotional processes interact in decision-making.
Ernst Pöppel is a German psychologist and neuroscientist. He is the father of Dr. David Poeppel.
In the human brain, the superior temporal sulcus (STS) is the sulcus separating the superior temporal gyrus from the middle temporal gyrus in the temporal lobe of the brain. A sulcus is a deep groove that curves into the largest part of the brain, the cerebrum, and a gyrus is a ridge that curves outward of the cerebrum.
The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.
Neurocomputational speech processing is the computer simulation of speech production and speech perception with reference to the natural neuronal processes of speech production and perception as they occur in the human nervous system. The topic is grounded in neuroscience and computational neuroscience.
Andreas Karl Engel is a German neuroscientist. He is the director of the Department of Neurophysiology and Pathophysiology at the University Medical Center Hamburg-Eppendorf (UKE).
The bi-directional hypothesis of language and action proposes that the sensorimotor and language comprehension areas of the brain exert reciprocal influence over one another. This hypothesis argues that areas of the brain involved in movement and sensation, as well as movement itself, influence cognitive processes such as language comprehension. In addition, the reverse effect is argued, where it is proposed that language comprehension influences movement and sensation. Proponents of the bi-directional hypothesis of language and action conduct and interpret linguistic, cognitive, and movement studies within the framework of embodied cognition and embodied language processing. Embodied language developed from embodied cognition, and proposes that sensorimotor systems are not only involved in the comprehension of language, but that they are necessary for understanding the semantic meaning of words.
The auditosensory cortex is the part of the auditory system associated with the sense of hearing in humans. It occupies the bilateral primary auditory cortex in the temporal lobe of the mammalian brain. The term is used to describe Brodmann area 42 together with the transverse temporal gyri of Heschl. The auditosensory cortex takes part in the reception and processing of auditory nerve impulses, which pass sound information from the thalamus to the cortex. Abnormalities in this region are responsible for many disorders of auditory ability, such as congenital deafness, true cortical deafness, primary progressive aphasia and auditory hallucination.