April A. Benasich | |
---|---|
Nationality | American |
Alma mater | New York University |
Scientific career | |
Fields | Neuroscience |
Institutions | Rutgers University |
Doctoral advisor | Marc Bornstein |
April A. Benasich is an American neuroscientist. She is the Elizabeth H. Solomon Professor of Developmental Cognitive Neuroscience, Professor of Neuroscience, director of the Infancy Studies Laboratory at the Center for Molecular and Behavioral Neuroscience, and director of the Carter Center for Neurocognitive Research at Rutgers University. She is also a principal investigator within the National Science Foundation-funded Temporal Dynamics of Learning Center, headquartered at the University of California, San Diego's Institute for Neural Computation.
Benasich was the first to link early deficits in rapid auditory processing to later impairments in language and cognition, demonstrating that the ability to make fine non-speech acoustic discriminations in early infancy is critical to, and highly predictive of, language development both in typically developing children and in children at risk for language learning disorders. [1] [2] Her research also suggests that rapid auditory processing ability can be used to identify and remediate the infants at highest risk of language delay and impairment, regardless of risk status. [1] She has further demonstrated that infants who played a training game designed to encourage them to focus on small aural differences developed more accurate acoustic maps than infants who were not exposed to the intervention. [3]
Benasich received Ph.D.s in experimental/cognitive neuroscience and clinical psychology from New York University in 1987; she also holds a Bachelor of Science degree in nursing and has extensive medical experience in pediatrics. [4] She completed her initial postdoctoral work at Johns Hopkins University School of Medicine, where she was a member of the Research Steering Committee of the Infant Health and Development Program, funded by the Robert Wood Johnson Foundation, and a second postdoctoral fellowship under Paula Tallal at the Center for Molecular and Behavioral Neuroscience.
Benasich's work has centered on the early neural processes necessary for normal cognitive and language development and the impact of disordered processing in high-risk or neurologically impaired infants. At New York University, Benasich and Marc Bornstein studied the relationship of infant behaviors such as attention, habituation, and memory to later cognitive and linguistic activity. [5] During her postdoctoral work at Johns Hopkins, she served on the Research Steering Committee for the Infant Health and Development Program, a large national randomized clinical trial of an early intervention program for low-birth-weight, premature infants. [6]
As a research associate at the Center for Molecular and Behavioral Neuroscience, Benasich developed a behavioral and electrocortical battery that permitted the assessment of rapid auditory temporal processing in infancy and its relationship to subsequent language outcomes. The resulting studies demonstrated that differences in infants' discrimination of rapid auditory cues (a critical skill for decoding language) were related to differences in later language comprehension and production. [7] [8] At the Infancy Studies Laboratory, Benasich's research, involving more than 1,000 children over fifteen years, has continued to focus on the neural underpinnings of cognitive and language development, as well as on the development of temporally bounded sensory information processing (shown to be a predictor of language impairment and dyslexia in older children). [9] Her research has shown that the ability to perform fine-grained acoustic analyses on the scale of tens of milliseconds in early infancy is critical to decoding the speech stream and subsequently establishing the phonemic maps that support later language development. [2] [7] [10] Currently, the Benasich lab is studying how infant brain waves (neural oscillations) evolve as infants process the critical timing cues important for constructing the prelinguistic acoustic maps that support language acquisition. [11] [12] Failure to process these timing cues efficiently can produce difficulties as language is established, particularly in children with a family history of language learning problems. [13] Studies from the Benasich lab suggest that behavioral intervention in young infants can support and enhance language mapping and rapid auditory processing abilities, and that those changes endure. [14] [15] [16]
Benasich co-founded RAPT Ventures, a company whose goal is to facilitate technology transfer from the laboratory to the real world in order to optimize early brain development during the critical periods for early language. [17] [18]
Lip reading, also known as speechreading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available. It relies also on information provided by the context, knowledge of the language, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.
The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing someone speak changes the way they hear the sound. A person who receives poor-quality auditory information but good-quality visual information may be more likely to experience the effect. Integration abilities for auditory and visual information may also play a role: people who are better at sensory integration have been shown to be more susceptible. Susceptibility to the effect varies from person to person and is influenced by factors such as brain damage and other disorders.
The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
Mixed receptive-expressive language disorder is a communication disorder in which both the receptive and expressive areas of communication may be affected to any degree, from mild to severe. Children with this disorder have difficulty understanding words and sentences. The impairment is defined by deficiencies in expressive and receptive language development that are not attributable to sensory deficits, nonverbal intellectual deficits, a neurological condition, environmental deprivation, or psychiatric impairment. Research indicates that 2% to 4% of five-year-olds have mixed receptive-expressive language disorder. The diagnosis is made when children have difficulties both in expressive language skills (the production of language) and in receptive language skills (the understanding of language). Those with mixed receptive-expressive language disorder show a normal left-right anatomical asymmetry of the planum temporale and parietale, which is attributed to a reduced left-hemisphere functional specialization for language. In measures of cerebral blood flow (SPECT) during phonemic discrimination tasks, children with mixed receptive-expressive language disorder do not exhibit the expected predominant left-hemisphere activation. Mixed receptive-expressive language disorder is also known as receptive-expressive language impairment (RELI) or receptive language disorder.
The transverse temporal gyri, also called Heschl's gyri or Heschl's convolutions, are gyri found in the area of primary auditory cortex buried within the lateral sulcus of the human brain, occupying Brodmann areas 41 and 42. Transverse temporal gyri are superior to and separated from the planum temporale by Heschl's sulcus. Transverse temporal gyri are found in varying numbers in both the right and left hemispheres of the brain and one study found that this number is not related to the hemisphere or dominance of hemisphere studied in subjects. Transverse temporal gyri can be viewed in the sagittal plane as either an omega shape or a heart shape.
Amusia is a musical disorder that appears mainly as a defect in processing pitch but also encompasses musical memory and recognition. Two main classifications of amusia exist: acquired amusia, which occurs as a result of brain damage, and congenital amusia, which results from a music-processing anomaly present since birth.
Language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. It is considered a uniquely human ability: even humans' closest primate relatives do not produce language with the same grammatical understanding or systematicity.
In developmental psychology and developmental biology, a critical period is a maturational stage in the lifespan of an organism during which the nervous system is especially sensitive to certain environmental stimuli. If, for some reason, the organism does not receive the appropriate stimulus during this critical period to learn a given skill or trait, developing the associated functions later in life may be difficult, less successful, or even impossible. Functions that are indispensable to an organism's survival, such as vision, are particularly likely to develop during critical periods. The concept also applies to the acquisition of a first language: researchers have found that people who pass the critical period without language exposure do not acquire their first language fluently.
Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.
Developmental cognitive neuroscience is an interdisciplinary scientific field devoted to understanding psychological processes and their neurological bases in the developing organism. It examines how the mind changes as children grow up, interrelations between that and how the brain is changing, and environmental and biological influences on the developing mind and brain.
Auditory processing disorder (APD), rarely known as King-Kopetzky syndrome or auditory disability with normal hearing (ADN), is a neurodevelopmental disorder affecting the way the brain processes auditory information. Individuals with APD usually have normal structure and function of the outer, middle, and inner ear. However, they cannot process the information they hear in the same way as others do, which leads to difficulties in recognizing and interpreting sounds, especially the sounds composing speech. It is thought that these difficulties arise from dysfunction in the central nervous system. It is highly prevalent in individuals with other neurodevelopmental disorders, such as attention deficit hyperactivity disorder, autism spectrum disorder, dyslexia, and sensory processing disorder.
Auditory agnosia is a form of agnosia that manifests itself primarily in the inability to recognize or differentiate between sounds. It is not a defect of the ear or "hearing", but rather a neurological inability of the brain to process sound meaning. While auditory agnosia impairs the understanding of sounds, other abilities such as reading, writing, and speaking are not hindered. It is caused by bilateral damage to the anterior superior temporal gyrus, which is part of the auditory pathway responsible for sound recognition, the auditory "what" pathway.
Musical memory refers to the ability to remember music-related information, such as melodic content and other progressions of tones or pitches. The differences found between linguistic memory and musical memory have led researchers to theorize that musical memory is encoded differently from language and may constitute an independent part of the phonological loop. The use of the term "phonological loop" in this context is problematic, however, since it implies input from a verbal system, whereas music is in principle nonverbal.
The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.
Educational neuroscience is an emerging scientific field that brings together researchers in cognitive neuroscience, developmental cognitive neuroscience, educational psychology, educational technology, education theory and other related disciplines to explore the interactions between biological processes and education. Researchers in educational neuroscience investigate the neural mechanisms of reading, numerical cognition, attention and their attendant difficulties including dyslexia, dyscalculia and ADHD as they relate to education. Researchers in this area may link basic findings in cognitive neuroscience with educational technology to help in curriculum implementation for mathematics education and reading education. The aim of educational neuroscience is to generate basic and applied research that will provide a new transdisciplinary account of learning and teaching, which is capable of informing education. A major goal of educational neuroscience is to bridge the gap between the two fields through a direct dialogue between researchers and educators, avoiding the "middlemen of the brain-based learning industry". These middlemen have a vested commercial interest in the selling of "neuromyths" and their supposed remedies.
Memory is the faculty of the mind by which data or information is encoded, stored, and retrieved when needed. It is the retention of information over time for the purpose of influencing future action. If past events could not be remembered, it would be impossible for language, relationships, or personal identity to develop. Memory loss is usually described as forgetfulness or amnesia.
Attentional control, colloquially referred to as concentration, is an individual's capacity to choose what they pay attention to and what they ignore. It is also known as endogenous attention or executive attention. Primarily mediated by frontal areas of the brain, including the anterior cingulate cortex, attentional control is thought to be closely related to other executive functions such as working memory.
The temporal dynamics of music and language describes how the brain coordinates its different regions to process musical and vocal sounds. Both music and language feature rhythmic and melodic structure. Both employ a finite set of basic elements that are combined in ordered ways to create complete musical or lingual ideas.
Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.
Christian Lorenzi is Professor of Experimental Psychology at the École Normale Supérieure in Paris, France, where he has served as Director of the Department of Cognitive Studies and as Director of Scientific Studies. Lorenzi works on auditory perception.