| Founded | 1935 |
| --- | --- |
| Founders | Caryl Haskins, Franklin S. Cooper |
| Type | Non-profit organization |
| Tax ID | 13-1628174 |
| Focus | Speech, language, literacy, education |
| Location | New Haven, Connecticut, U.S. |
| Products | Research and analysis |
| Key people | Kenneth Pugh (president), Douglas Whalen (VP), Vincent Gracco (VP), Joseph Cardone (CFO), Carol Fowler (senior advisor), Philip Rubin (senior advisor) |
| Revenue | $4,955,859 (2019) [1] |
| Expenses | $5,814,864 (2019) [2] |
| Employees | 83 (2019) [3] |
| Website | haskinslabs.org |
Haskins Laboratories, Inc. is an independent 501(c) non-profit corporation, [4] [5] founded in 1935 and located in New Haven, Connecticut, since 1970. Haskins has formal affiliation agreements with both Yale University and the University of Connecticut but remains fully independent of both, administratively and financially. Haskins is a multidisciplinary, international community of researchers that conducts basic research on spoken and written language. A guiding perspective of this research is to view speech and language as emerging from biological processes, including adaptation, response to stimuli, and conspecific interaction. Haskins Laboratories has a long history of technological and theoretical innovation, from creating systems of rules for speech synthesis and developing an early working prototype of a reading machine for the blind, to formulating the landmark concept of phonemic awareness as the critical preparation for learning to read an alphabetic writing system.
Haskins Laboratories is equipped, in-house, with a comprehensive suite of tools and capabilities to advance its mission of research into language and literacy.
Many researchers have contributed to scientific breakthroughs at Haskins Laboratories since its founding. All of them are indebted to the pioneering work and leadership of Caryl Parker Haskins, Franklin S. Cooper, Alvin Liberman, Seymour Hutner and Luigi Provasoli. The history presented here focuses on the research program of the division of Haskins Laboratories that, since the 1940s, has been most well known for its work in the areas of speech, language, and reading. [6]
Caryl Haskins and Franklin S. Cooper established Haskins Laboratories in 1935. It was originally affiliated with Harvard University, MIT, and Union College in Schenectady, NY. Caryl Haskins conducted research in microbiology, radiation physics, and other fields in Cambridge, MA, and Schenectady. In 1939 Haskins Laboratories moved its center to New York City, where Seymour Hutner joined the staff to set up a research program in microbiology, genetics, and nutrition. A descendant of the division led by Hutner eventually became a department of Pace University in New York. [7] The two identically named organizations are no longer formally affiliated.
The U.S. Office of Scientific Research and Development, under Vannevar Bush, asked Haskins Laboratories to evaluate and develop technologies for assisting blinded World War II veterans. Experimental psychologist Alvin Liberman joined Haskins Laboratories to assist in developing a "sound alphabet" to represent the letters in a text for use in a reading machine for the blind. Luigi Provasoli joined Haskins Laboratories to set up a research program in marine biology. The program in marine biology moved to Yale University in 1970 and disbanded with Provasoli's retirement in 1978.
Franklin S. Cooper invented the pattern playback, [8] [9] a machine that converts pictures of the acoustic patterns of speech back into sound. With this device, Alvin Liberman, Cooper, and Pierre Delattre (and later joined by Katherine Safford Harris, Leigh Lisker, Arthur Abramson, and others), discovered the acoustic cues for the perception of phonetic segments (consonants and vowels). Liberman and colleagues proposed a motor theory of speech perception to resolve the acoustic complexity: they hypothesized that we perceive speech by tapping into a biological specialization, a speech module, that contains knowledge of the acoustic consequences of articulation. Liberman, aided by Frances Ingemann and others, organized the results of the work on speech cues into a groundbreaking set of rules for speech synthesis by the Pattern Playback. [10]
Franklin S. Cooper and Katherine Safford Harris, working with Peter MacNeilage, were the first researchers in the U.S. to use electromyographic techniques, pioneered at the University of Tokyo, to study the neuromuscular organization of speech. Leigh Lisker and Arthur Abramson looked for simplification at the level of articulatory action in the voicing of certain contrasting consonants. They showed that many acoustic properties of voicing contrasts arise from variations in voice onset time, the relative phasing of the onset of vocal cord vibration and the end of a consonant. Their work has been widely replicated and elaborated, in the United States and abroad, over the following decades. Donald Shankweiler and Michael Studdert-Kennedy used a dichotic listening technique (presenting different nonsense syllables simultaneously to opposite ears) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception by finding that phonetic structure devoid of meaning is an integral part of language, typically processed in the left cerebral hemisphere. Liberman, Cooper, Shankweiler, and Studdert-Kennedy summarized and interpreted fifteen years of research in "Perception of the Speech Code", still among the most cited papers in the speech literature. It set the agenda for many years of research at Haskins and elsewhere by describing speech as a code in which speakers overlap (or coarticulate) segments to form syllables. Researchers at Haskins connected their first computer to a speech synthesizer designed by Haskins Laboratories' engineers. Ignatius Mattingly, with British collaborators John N. Holmes [11] and J.N. Shearme, [12] adapted the Pattern Playback rules to write the first computer program for synthesizing continuous speech from a phonetically spelled input. A further step toward a reading machine for the blind combined Mattingly's program with an automatic look-up procedure for converting alphabetic text into strings of phonetic symbols.
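Voice onset time itself is a simple quantity: the lag between a stop consonant's release burst and the onset of vocal-fold vibration. A minimal sketch, using an illustrative 30 ms category boundary in the style of English short-lag vs. long-lag stops (the threshold value is a hypothetical textbook-style assumption, not a fixed constant):

```python
def voice_onset_time(release_ms, voicing_onset_ms):
    """VOT = time from the stop's release burst to the onset of
    vocal-fold vibration (positive values mean voicing lags the release)."""
    return voicing_onset_ms - release_ms

def classify_stop(vot_ms, threshold_ms=30.0):
    """Toy English-style category decision: short-lag VOT suggests a
    'voiced' stop (e.g. /b/), long-lag a 'voiceless' one (e.g. /p/).
    The ~30 ms boundary here is illustrative only."""
    return "voiced" if vot_ms < threshold_ms else "voiceless"
```

Real category boundaries vary by language and context, which is precisely what Lisker and Abramson's cross-language measurements demonstrated.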
In 1970, Haskins Laboratories moved to New Haven, Connecticut, and entered into affiliation agreements with Yale University and the University of Connecticut; Haskins remains fully independent of both Yale and UConn, administratively and financially. The lab's original location in New Haven, at 270 Crown Street (from 1970 to 2005), was leased from Yale University. Isabelle Liberman, Donald Shankweiler, and Alvin Liberman teamed up with Ignatius Mattingly to study the relationship between speech perception and reading, a topic implicit in Haskins Laboratories' research program since its inception. They developed the concept of phonemic awareness: the insight that would-be readers must be aware of the phonemic structure of their language in order to be able to read. Leonard Katz related the work to contemporary cognitive theory and provided expertise in experimental design and data analysis. Under the broad rubric of the "alphabetic principle", this is the core of the lab's present program of reading pedagogy. Patrick Nye [13] joined Haskins Laboratories to lead a team working on the reading machine for the blind. The project culminated when the addition of an optical character recognizer allowed investigators to assemble the first automatic text-to-speech reading machine. By the end of the decade this technology had advanced to the point where commercial concerns assumed the task of designing and manufacturing reading machines for the blind.
In 1973, Franklin S. Cooper was selected to form a panel of six experts [14] charged with investigating the famous 18-minute gap in the White House office tapes of President Richard Nixon related to the Watergate scandal. [15]
Building on earlier work, Philip Rubin developed the sinewave synthesis program, which was then used by Robert Remez, Rubin, and colleagues to show that listeners can perceive continuous speech without traditional speech cues from a pattern of sinewaves that track the changing resonances of the vocal tract. This paved the way for a view of speech as a dynamic pattern of trajectories through articulatory-acoustic space. Philip Rubin and colleagues developed Paul Mermelstein's anatomically simplified vocal tract model, [16] originally worked on at Bell Laboratories, into the first articulatory synthesizer [17] that can be controlled in a physically meaningful way and used for interactive experiments.
Studies of different writing systems supported the controversial hypothesis that all reading necessarily activates the phonological form of a word before, or at the same time as, its meaning. Work included experiments by Georgije Lukatela, [18] Michael Turvey, Leonard Katz, Ram Frost, Laurie Feldman, [19] and Shlomo Bentin, in a variety of languages. Cross-language work on reading, including investigations of the brain processes involved, remains a large part of Haskins Laboratories' program today.
Various researchers developed compatible theoretical accounts of speech production, [20] speech perception and phonological knowledge. Carol Fowler proposed a direct realism theory of speech perception: listeners perceive gestures not by means of a specialized decoder, as in the motor theory, but because information in the acoustic signal specifies the gestures that form it. J. A. Scott Kelso and colleagues demonstrated functional synergies in speech gestures experimentally. Elliot Saltzman [21] developed a dynamical systems theory of synergetic action and implemented the theory as a working model of speech production. Linguists Catherine Browman and Louis Goldstein developed the theory of articulatory phonology, [22] in which gestures are the basic units of both phonetic action and phonological knowledge. Articulatory phonology, the task dynamic model, and the articulatory synthesis model are combined into a gestural computational model of speech production. [23]
Katherine Safford Harris, [24] Frederica Bell-Berti [25] and colleagues studied the phasing and cohesion of articulatory speech gestures. Kenneth Pugh was among the first scientists to use functional magnetic resonance imaging (fMRI) to reveal brain activity associated with reading and reading disabilities. Pugh, Donald Shankweiler, Weija Ni, [26] Einar Mencl, [27] and colleagues developed novel applications of neuroimaging to measure brain activity associated with understanding sentences. Philip Rubin, Louis Goldstein and Mark Tiede [28] designed a radical revision of the articulatory synthesis model, known as CASY, [29] the configurable articulatory synthesizer. This three-dimensional model of the vocal tract permits researchers to replicate MRI images of actual speakers. Douglas Whalen, Goldstein, Rubin and colleagues extended this work to study the relation between speech production and perception. [30] Donald Shankweiler, Susan Brady, Anne Fowler, [31] and others explored whether weak memory and perception in poor readers are tied specifically to phonological deficits. The evidence argued against broader cognitive deficits underlying reading difficulties and raised questions about impaired phonological representations in disabled readers.
In 2000, Anne Fowler [32] and Susan Brady launched the Early Reading Success (ERS) program, [33] part of the Haskins Literacy Initiative, [34] which promotes the science of teaching reading. The ERS program was a demonstration project examining the efficacy of professional development in reading instruction for teachers of children in kindergarten through second grade. The Mastering Reading Instruction program, [35] which combines professional development with Haskins-trained mentors, was a continuation of ERS. David Ostry and colleagues explored the neurological underpinnings of motor control using a robot arm to influence jaw movement. Douglas Whalen and Khalil Iskarous [36] pioneered the pairing of ultrasound, used to monitor articulators that cannot be seen, with Optotrak, [37] an opto-electronic position-tracking device used to monitor visible articulators. Christine Shadle [38] joined Haskins in 2004 to lead a project investigating the speech production goals for fricatives. [39] Donald Shankweiler and David Braze [40] developed an eye movement laboratory that combines eye tracking data with brain activity measures for investigating reading processes in normal and disabled readers. Laura Koenig and Jorge C. Lucero [41] studied the development of laryngeal and aerodynamic control in children's speech. In March 2005, Haskins Laboratories moved to a new, state-of-the-art facility on the 9th floor of a commercial building at 300 George Street in New Haven, providing about 11,000 square feet of office and lab space. In 2008, Ken Pugh of Yale University was named President and Director of Research, succeeding Carol Fowler, who remains at Haskins as a Senior Advisor. In 2009, Haskins released a new Strategic Plan [42] featuring new Birth-to-Five and Bilingualism initiatives.
The Haskins Training Institute was established in 2011 to provide direct educational opportunities in Haskins Laboratories' core areas of research (language, speech perception, speech production, literacy). [43] The Training Institute serves to communicate this knowledge to the public through accessible seminars, small conferences, and intern and training positions.
In December 2015, Haskins Laboratories convened a Global Literacy Summit. [44] This was a three-day meeting of scientists and representatives from governmental and non-governmental organizations around the globe, who are working with programs in the developing world to support literacy and education in disadvantaged populations.
In 2016, Richard N. Aslin joined Haskins, [45] after leaving the University of Rochester. [46]
In 2019, David Lewkowicz joined Haskins after leaving Northeastern University. [47]
Phonological awareness is an individual's awareness of the phonological structure, or sound structure, of words. Phonological awareness is an important and reliable predictor of later reading ability and has, therefore, been the focus of much research.
A reading machine is a piece of assistive technology that allows blind people to access printed materials. It scans printed text, converts the scanned image into machine-readable text by means of optical character recognition, and uses a speech synthesizer to read the result aloud.
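The pipeline just described (scan, optical character recognition, speech synthesis) can be sketched as glue code. Here `ocr` and `tts` are caller-supplied placeholder callables, not specific library APIs:

```python
def read_aloud(page_image, ocr, tts):
    """Minimal reading-machine pipeline: recognize the text in a scanned
    page image, then synthesize speech from it.  `ocr` and `tts` are
    caller-supplied callables wrapping an OCR engine and a speech
    synthesizer (placeholders for illustration, not real library APIs)."""
    text = ocr(page_image)   # image -> recognized character string
    audio = tts(text)        # string -> synthesized speech samples
    return text, audio
```

The same division of labor (recognizer in front, synthesizer behind) is what allowed the Haskins reading-machine project to advance as each component improved independently.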
Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.
Philip E. Rubin is an American cognitive scientist, technologist, and science administrator known for raising the visibility of behavioral and cognitive science, neuroscience, and ethical issues related to science, technology, and medicine, at a national level. His research career is noted for his theoretical contributions and pioneering technological developments, starting in the 1970s, related to speech synthesis and speech production, including articulatory synthesis and sinewave synthesis, and their use in studying complex temporal events, particularly understanding the biological bases of speech and language.
Alvin Meyer Liberman was an American psychologist, born in St. Joseph, Missouri. His ideas set the agenda for fifty years of psychological research in speech perception.
Carol Ann Fowler is an American experimental psychologist. She was president and director of research at Haskins Laboratories in New Haven, Connecticut from 1992 to 2008. She is also a professor of psychology at the University of Connecticut and adjunct professor of linguistics and psychology at Yale University. She received her undergraduate degree from Brown University in 1971, her M.A. from the University of Connecticut in 1973, and her Ph.D. in psychology from the University of Connecticut in 1977.
Sinewave synthesis, or sine wave speech, is a technique for synthesizing speech by replacing the formants with pure tone whistles. The first sinewave synthesis program (SWS) for the automatic creation of stimuli for perceptual experiments was developed by Philip Rubin at Haskins Laboratories in the 1970s. This program was subsequently used by Robert Remez, Philip Rubin, David Pisoni, and other colleagues to show that listeners can perceive continuous speech without traditional speech cues, i.e., pitch, stress, and intonation. This work paved the way for a view of speech as a dynamic pattern of trajectories through articulatory-acoustic space.
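A minimal sketch of the idea in Python (an illustration of the technique, not Rubin's original SWS program): each formant track is replaced by a single sine oscillator whose instantaneous frequency follows the track, and the resulting tones are summed.

```python
import numpy as np

def sinewave_speech(formant_tracks, amplitudes, sr=16000):
    """Sum one pure tone per formant track.  formant_tracks: list of
    arrays of instantaneous frequencies (Hz), one value per audio
    sample; amplitudes: matching arrays of per-sample amplitudes."""
    n = len(formant_tracks[0])
    out = np.zeros(n)
    for freqs, amps in zip(formant_tracks, amplitudes):
        # phase is the running integral of instantaneous frequency
        phase = 2 * np.pi * np.cumsum(freqs) / sr
        out += amps * np.sin(phase)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out  # normalize to [-1, 1]
```

In practice the frequency and amplitude tracks come from formant analysis of a real utterance; three or four tracks are typically enough for listeners to recover the sentence, despite the absence of pitch and other traditional cues.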
The pattern playback is an early talking device that was built by Dr. Franklin S. Cooper and his colleagues, including John M. Borst and Caryl Haskins, at Haskins Laboratories in the late 1940s and completed in 1950. There were several different versions of this hardware device. Only one currently survives. The machine converts pictures of the acoustic patterns of speech in the form of a spectrogram back into sound. Using this device, Alvin Liberman, Frank Cooper, and Pierre Delattre were able to discover acoustic cues for the perception of phonetic segments. This research was fundamental to the development of modern techniques of speech synthesis, reading machines for the blind, the study of speech perception and speech recognition, and the development of the motor theory of speech perception.
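The conversion can be sketched as additive synthesis: each row of the painted spectrogram drives one harmonic of a fixed fundamental, with its amplitude following the row's brightness over time. The fundamental frequency and frame rate below are illustrative assumptions, not the original machine's specifications.

```python
import numpy as np

def playback(spectrogram, f0=120.0, sr=16000, frame_dur=0.01):
    """spectrogram: 2-D array, row k = harmonic (k+1)*f0, columns =
    time frames; each cell is the amplitude ('brightness') of that
    harmonic in that frame.  Returns normalized audio samples."""
    n_harm, n_frames = spectrogram.shape
    spf = int(sr * frame_dur)                  # samples per frame
    amps = np.repeat(spectrogram, spf, axis=1) # hold each frame's values
    t = np.arange(n_frames * spf) / sr
    out = np.zeros_like(t)
    for k in range(n_harm):
        out += amps[k] * np.sin(2 * np.pi * (k + 1) * f0 * t)
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

The hardware device realized this optically, with a tone wheel supplying the harmonics and the painted pattern modulating their amplitudes.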
Articulatory phonology is a linguistic theory originally proposed in 1986 by Catherine Browman of Haskins Laboratories and Louis Goldstein of University of Southern California and Haskins. The theory identifies theoretical discrepancies between phonetics and phonology and aims to unify the two by treating them as low- and high-dimensional descriptions of a single system.
Franklin Seaney Cooper was an American physicist and inventor who was a pioneer in speech research.
Ignatius G. Mattingly (1927–2004) was a prominent American linguist and speech scientist. Prior to his academic career, he was an analyst for the National Security Agency from 1955 to 1966. He was a Lecturer and then Professor of Linguistics at the University of Connecticut from 1966 to 1996 and a researcher at Haskins Laboratories from 1966 until his death in 2004. He is best known for his pioneering work on speech synthesis and reading and for his theoretical work on the motor theory of speech perception in conjunction with Alvin Liberman. He received his B.A. in English from Yale University in 1947, his M.A. in Linguistics from Harvard University in 1959, and his Ph.D. in English from Yale University in 1968.
Katherine Safford Harris is a noted psychologist and speech scientist. She is Distinguished Professor Emerita in Speech and Hearing at the CUNY Graduate Center and a member of the Board of Directors of Haskins Laboratories. She is also the former President of the Acoustical Society of America and Vice President of Haskins Laboratories.
Susan Brady is an American psychologist and literacy expert who is a professor of school psychology at the University of Rhode Island. For many years, she led the Haskins Literacy Initiative at Haskins Laboratories in New Haven, Connecticut which promotes the "science of teaching reading." She has been a leading researcher in the area of reading acquisition for over thirty years and has been involved with efforts to improve state and national policy on the teaching of reading including speaking before a U.S. Senate committee.
Louis M. Goldstein is an American linguist and cognitive scientist. He was previously a professor and chair of the Department of Linguistics and a professor of psychology at Yale University and is now a professor in the Department of Linguistics at the University of Southern California. He is a senior scientist at Haskins Laboratories in New Haven, Connecticut, and a founding member of the Association for Laboratory Phonology. Notable students of Goldstein include Douglas Whalen and Elizabeth Zsiga.
Catherine Phebe Browman was an American linguist and speech scientist. She received her Ph.D. in linguistics from the University of California, Los Angeles (UCLA) in 1978. Browman was a research scientist at Bell Laboratories in New Jersey (1967–1972), where she was known for her work on speech synthesis using demisyllables. She later worked as a researcher at Haskins Laboratories in New Haven, Connecticut (1982–1998). She was best known for developing, with Louis Goldstein, the theory of articulatory phonology, a gesture-based approach to phonological and phonetic structure. The theoretical approach is incorporated in a computational model that generates speech from a gesturally specified lexicon. Browman was made an honorary member of the Association for Laboratory Phonology.
Michael Studdert-Kennedy (1927–2017) was an American psychologist and speech scientist. He was well known for his contributions to studies of speech perception, the motor theory of speech perception, and the evolution of language, among other areas. He was a professor emeritus of psychology at the University of Connecticut and a professor emeritus of linguistics at Yale University. He was president of Haskins Laboratories in New Haven, Connecticut from 1986 to 1992, a member of the Haskins Laboratories Board of Directors, and chairman of the board from 1988 until 2001. He was the son of the priest and Christian socialist Geoffrey Studdert-Kennedy.
Donald P. Shankweiler is an eminent psychologist and cognitive scientist who has done pioneering work on the representation and processing of language in the brain. He is a Professor Emeritus of Psychology at the University of Connecticut, a Senior Scientist at Haskins Laboratories in New Haven, Connecticut, and a member of the Board of Directors at Haskins. He is married to well-known American philosopher of biology, psychology, and language Ruth Millikan.
Isabelle Yoffe Liberman (1918–1990) was an American psychologist, born in Latvia, who was an expert on reading disabilities, including dyslexia. Isabelle Liberman received her bachelor's degree from Vassar College and her doctorate from Yale University. She was a professor at the University of Connecticut from 1966 through 1987 and a research associate at the Haskins Laboratories.
Leonard Katz (1938–2017) was an American experimental psychologist, born in Boston, Massachusetts. He was a professor of psychology at the University of Connecticut (1965–2006) and then professor emeritus until 2017. He was a Fellow of the American Association for the Advancement of Science and the Association for Psychological Science.
The motor theory of speech perception is the hypothesis that people perceive spoken words by identifying the vocal tract gestures with which they are pronounced rather than by identifying the sound patterns that speech generates. It originally claimed that speech perception is done through a specialized module that is innate and human-specific. Though the idea of a module has been qualified in more recent versions of the theory, the idea remains that the role of the speech motor system is not only to produce speech articulations but also to detect them.