Rochelle Newman

Rochelle Newman is an American psychologist. [1] She is chair of the University of Maryland Department of Hearing and Speech Sciences (HESP), as well as associate director of the Maryland Language Science Center. She previously served as the director of graduate studies for both HESP and the Program in Neuroscience and Cognitive Science and is also a member of the Center for the Comparative & Evolutionary Biology of Hearing. Newman helped found the University of Maryland Infant & Child Studies Consortium and the University of Maryland Autism Research Consortium. [2]

Her research focuses on speech perception and language acquisition. [2] More specifically, she is interested in how the brain recognizes words from fluent speech, especially in the context of noise, [3] and how this ability changes with development. [4]

Biography

Newman received her Bachelor of Science in speech from Northwestern University in 1991. She attended SUNY Buffalo for graduate study, receiving her master's degree from the department of psychology in 1995 and her Ph.D. in 1997. Her dissertation, “Individual differences and the link between speech perception and speech production,” examined individual differences in the relationship between speech perception and speech production. [5]

After working as an assistant professor in the department of psychology at the University of Iowa, Newman joined the University of Maryland Department of Hearing and Speech Sciences in 2001, where she now serves as professor and chair. [5] She is also associate director of the Maryland Language Science Center (LSC) [6] and serves on the executive board of the Maryland Cochlear Implant Center of Excellence (MCICE) [7] and on the Graduate Field Committee in Developmental Science.

Research

Newman's research focuses on listening in noise, particularly in infants and young children. [8] She has also examined other difficult listening conditions, such as listening through a cochlear implant, listening to fast speech, and listening to novel accents. Additional areas of research include bilingualism, sports-related concussions, and dog cognition. [9]

Related Research Articles

Language acquisition is the process by which humans acquire the capacity to perceive and comprehend language. In other words, it is how human beings gain the ability to be aware of language, to understand it, and to produce and use words and sentences to communicate.

A cochlear implant (CI) is a surgically implanted neuroprosthesis that provides a person who has moderate-to-profound sensorineural hearing loss with sound perception. With the help of therapy, cochlear implants may allow for improved speech understanding in both quiet and noisy environments. A CI bypasses acoustic hearing by direct electrical stimulation of the auditory nerve. Through everyday listening and auditory training, cochlear implants allow both children and adults to learn to interpret those signals as speech and sound.

Jerome Seymour Bruner was an American psychologist who made significant contributions to human cognitive psychology and cognitive learning theory in educational psychology. Bruner was a senior research fellow at the New York University School of Law. He received a BA in 1937 from Duke University and a PhD from Harvard University in 1941. He taught and did research at Harvard University, the University of Oxford, and New York University. A Review of General Psychology survey, published in 2002, ranked Bruner as the 28th most cited psychologist of the 20th century.

Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of the range of lip reading vary, with some figures as low as 30% because lip reading relies on context, language knowledge, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.

Diana Deutsch is a British-American psychologist from London, England. She is a professor of psychology at the University of California, San Diego, and a prominent researcher on the psychology of music. Deutsch is primarily known for her discoveries in music and speech illusions. She also studies the cognitive foundations of musical grammars, including the way people hold musical pitches in memory and how people relate the sounds of music and speech to each other. In addition, she is known for her work on absolute pitch, which she has shown is far more prevalent among speakers of tonal languages. Deutsch is the author of Musical Illusions and Phantom Words: How Music and Speech Unlock Mysteries of the Brain (2019) and the editor of Psychology of Music, and she created the compact discs Musical Illusions and Paradoxes (1995) and Phantom Words and Other Curiosities (2003).

Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.

Phonological development refers to how children learn to organize sounds into meaning or language (phonology) during their stages of growth.

Richard N. Aslin is an American psychologist. He is currently a Senior Scientist at Haskins Laboratories and a professor at Yale University. Until December 2016, Aslin was the William R. Kenan Professor of Brain & Cognitive Sciences and Center for Visual Sciences at the University of Rochester, where he also directed the Rochester Center for Brain Imaging and the Rochester Baby Lab. He had worked at the university for over thirty years before resigning in protest of the university's handling of a sexual harassment complaint about a junior member of his department.

Prelingual deafness refers to deafness that occurs before learning speech or language. Speech and language typically begin to develop very early with infants saying their first words by age one. Therefore, prelingual deafness is considered to occur before the age of one, where a baby is either born deaf or loses hearing before the age of one. This hearing loss may occur for a variety of reasons and impacts cognitive, social, and language development.

Selective auditory attention, or selective hearing, is a process of the auditory system where an individual selects or focuses on certain stimuli for auditory information processing while other stimuli are disregarded. This selection is very important as the processing and memory capabilities for humans have a limited capacity. When people use selective hearing, noise from the surrounding environment is heard by the auditory system but only certain parts of the auditory information are chosen to be processed by the brain.

The phonemic restoration effect is a perceptual phenomenon in which, under certain conditions, sounds actually missing from a speech signal can be restored by the brain and may appear to be heard. The effect occurs when missing phonemes in an auditory signal are replaced with a noise that would have the physical properties to mask those phonemes, creating an ambiguity. Given this ambiguity, the brain tends to fill in the absent phonemes. The effect can be so strong that some listeners may not even notice that phonemes are missing. It is commonly observed in conversation with heavy background noise, which makes it difficult to hear every phoneme being spoken. Several factors can change the strength of the effect, including how rich the contextual or linguistic cues in the speech are, as well as the listener's state, such as their hearing status or age.

Deafness has varying definitions in cultural and medical contexts. In medical contexts, the meaning of deafness is hearing loss that precludes a person from understanding spoken language, an audiological condition. In this context it is written with a lower case d. It later came to be used in a cultural context to refer to those who primarily communicate through sign language regardless of hearing ability, often capitalized as Deaf and referred to as "big D Deaf" in speech and sign. The two definitions overlap but are not identical, as hearing loss includes cases that are not severe enough to impact spoken language comprehension, while cultural Deafness includes hearing people who use sign language, such as children of deaf adults.

Neville Moray was a British-born Canadian psychologist. He was a professor in the Department of Psychology at the University of Surrey and is best known for his 1959 research on the cocktail party effect.

Janet F. Werker is a researcher in the field of developmental psychology. She researches the foundations of monolingual and bilingual infant language acquisition in infants at the University of British Columbia's Infant Studies Centre. Her research has pioneered what are now accepted baselines in the field, showing that language learning begins in early infancy and is shaped by experience across the first year of life.

Nan Bernstein Ratner is a professor in the Department of Hearing and Speech Sciences at the University of Maryland, College Park. Ratner is a board-recognized specialist in child language disorders.

Brian C.J. Moore FMedSci, FRS is an Emeritus Professor of Auditory Perception in the University of Cambridge and an Emeritus Fellow of Wolfson College, Cambridge. His research focuses on psychoacoustics, audiology, and the development and assessment of hearing aids.

Rachel Keen is a developmental psychologist known for her research on infant cognitive development, auditory development, and motor control. She is Professor Emeritus of Psychology at the University of Virginia.

Leslie Altman Rescorla was a developmental psychologist and expert on language delay in toddlers. Rescorla was Professor of Psychology on the Class of 1897 Professorship of Science and Director of the Child Study Institute at Bryn Mawr College. She was a licensed and school certified psychologist known for her longitudinal research on late talkers. In the 1980s, she created the Language Development Survey, a widely used tool for screening toddlers for possible language delays. Rescorla worked with Thomas M. Achenbach in developing the manual for the Achenbach System of Empirically Based Assessment (ASEBA) used to measure adaptive and maladaptive behavior in children.

Max Friedrich Meyer was the first psychology professor at the University of Missouri and worked on psychoacoustics. He was the founder of a theory of cochlear function and an advocate of behaviourism, as he argued in his book "The Psychology of the Other". During his time at the University of Missouri, he opened an experimental psychology laboratory and taught a variety of courses; his lab focused on studies of the nervous system and behaviour. Meyer eventually moved to Miami and lived there from 1932 until the late 1950s. Afterwards, he moved to Virginia to stay with his daughter until his death in 1967.

Janice Elizabeth Murray is a Canadian–New Zealand academic psychologist, and is professor emerita at the University of Otago. Her research focuses on object and face recognition, and age-related changes in perception.

References

  1. "Screening Tests To Identify Children With Reading Problems Are Being Misapplied, Study Shows". ScienceDaily. University of Maryland, College Park. November 25, 2007. Retrieved 15 April 2020.
  2. "Rochelle Newman, Professor and Chair". Department of Hearing and Speech Sciences. Retrieved 13 April 2020.
  3. WebMD (25 March 2015). "Noisy Places May Delay Kids' Speech". FoxNews. Retrieved 15 April 2020.
  4. Winerman, Lea. "Baby's First Heard Words". American Psychological Association. Retrieved 15 April 2020.
  5. Newman, Rochelle. "Rochelle Newman Curriculum Vitae".
  6. "Language at Maryland: Organization". Language Science Center. Retrieved 15 April 2020.
  7. Likowski, Alex. "UMB, UMCP Unveil New Collaborative Projects". UMB News. Retrieved 15 April 2020.
  8. "Benefits of word repetition to infants". University of Maryland, College Park. Science Daily. September 21, 2015. Retrieved 15 April 2020.
  9. Cimons, Marlene (January 26, 2020). "Babies are bad at listening in noisy places. Dogs aren't. My pets took part in a study to learn why". The Washington Post. Retrieved 15 April 2020.