Statistical language acquisition, a branch of developmental psycholinguistics, studies the process by which humans develop the ability to perceive, produce, comprehend, and communicate with natural language in all of its aspects (phonological, syntactic, lexical, morphological, semantic) through the use of general learning mechanisms operating on statistical patterns in the linguistic input. The statistical learning account claims that infants' language learning is based on pattern perception rather than an innate biological grammar. Several statistical elements, such as word frequency, frequent frames, and phonotactic patterns, along with other regularities, provide information on language structure and meaning that facilitates language acquisition.
Fundamental to the study of statistical language acquisition is the centuries-old debate between rationalism (or its modern manifestation in the psycholinguistic community, nativism) and empiricism, with researchers in this field falling strongly in support of the latter category. Nativism is the position that humans are born with innate domain-specific knowledge, especially inborn capacities for language learning. Ranging from seventeenth century rationalist philosophers such as Descartes, Spinoza, and Leibniz to contemporary philosophers such as Richard Montague and linguists such as Noam Chomsky, nativists posit an innate learning mechanism with the specific function of language acquisition. [1]
In modern times, this debate has largely surrounded Chomsky's support of a universal grammar, properties that all natural languages must have, through the controversial postulation of a language acquisition device (LAD), an instinctive mental 'organ' responsible for language learning which searches all possible language alternatives and chooses the parameters that best match the learner's environmental linguistic input. Much of Chomsky's theory is founded on the poverty of the stimulus (POS) argument, the assertion that a child's linguistic data is so limited and corrupted that learning language from this data alone is impossible. As an example, many proponents of POS claim that because children are never exposed to negative evidence, that is, information about what phrases are ungrammatical, the language structure they learn would not resemble that of correct speech without a language-specific learning mechanism. [2] Chomsky's argument for an internal system responsible for language, biolinguistics, proposes a three-factor model. "Genetic endowment" allows the infant to extract linguistic information, detect rules, and have access to universal grammar. "External environment" illuminates the need to interact with others and the benefits of language exposure at an early age. The last factor encompasses the brain properties, learning principles, and computational efficiencies that enable children to pick up on language rapidly using patterns and strategies.
Standing in stark contrast to this position is empiricism, the epistemological theory that all knowledge comes from sensory experience. This school of thought often characterizes the nascent mind as a tabula rasa, or blank slate, and can in many ways be associated with the nurture perspective of the "nature vs. nurture debate". This viewpoint has a long historical tradition that parallels that of rationalism, beginning with seventeenth century empiricist philosophers such as Locke, Bacon, Hobbes, and, in the following century, Hume. The basic tenet of empiricism is that information in the environment is structured enough that its patterns are both detectable and extractable by domain-general learning mechanisms. [1] In terms of language acquisition, these patterns can be either linguistic or social in nature.
Chomsky is very critical of this empiricist theory of language acquisition. He has said, "It's true there's been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures." He claims the idea of using statistical methods to acquire language is simply a mimicry of the process, rather than a true understanding of how language is acquired. [3]
One of the most used experimental paradigms in investigations of infants' capacities for statistical language acquisition is the Headturn Preference Procedure (HPP), developed by Stanford psychologist Anne Fernald in 1985 to study infants' preferences for prototypical child-directed speech over normal adult speech. [4] In the classic HPP paradigm, infants are allowed to freely turn their heads and are seated between two speakers with mounted lights. The light of either the right or left speaker then flashes as that speaker provides some type of auditory or linguistic input stimulus to the infant. Reliable orientation to a given side is taken to be an indication of a preference for the input associated with that side's speaker. This paradigm has since become increasingly important in the study of infant speech perception, especially for input at levels higher than syllable chunks, though with some modifications, including using listening times instead of side preference as the relevant dependent measure. [5]
Similar to HPP, the Conditioned Headturn Procedure also makes use of an infant's differential preference for a given side as an indication of a preference for, or more often a familiarity with, the input or speech associated with that side. Used in studies of prosodic boundary markers by Gout et al. (2004) [5] and by Werker in her classic studies of categorical perception of native-language phonemes, [6] infants are conditioned by some attractive image or display to look in one of two directions every time a certain input is heard, a whole word in Gout's case and a single phonemic syllable in Werker's. After the conditioning, new or more complex input is then presented to the infant, and their ability to detect the earlier target word or distinguish the input of the two trials is observed by whether they turn their head in expectation of the conditioned display or not.
While HPP and the Conditioned Headturn Procedure allow for observations of behavioral responses to stimuli and after the fact inferences about what the subject's expectations must have been to motivate this behavior, the Anticipatory Eye Movement paradigm allows researchers to directly observe a subject's expectations before the event occurs. By tracking subjects' eye movements researchers have been able to investigate infant decision-making and the ways in which infants encode and act on probabilistic knowledge to make predictions about their environments. [7] This paradigm also offers the advantage of comparing differences in eye movement behavior across a wider range of ages than others.
Artificial languages, that is, small-scale languages that typically have an extremely limited vocabulary and simplified grammar rules, are a commonly used paradigm for psycholinguistic researchers. Artificial languages allow researchers to isolate variables of interest and wield a greater degree of control over the input the subject will receive. Unfortunately, the overly simplified nature of these languages and the absence of a number of phenomena common to all human natural languages such as rhythm, pitch changes, and sequential regularities raise questions of external validity for any findings obtained using this paradigm, even after attempts have been made to increase the complexity and richness of the languages used. [8] The reduced complexity of artificial languages also fails to account for a child's need to recognize a given syllable in natural language despite the sound variability inherent to natural speech, though "it is possible that the complexity of natural language actually facilitates learning." [9]
As such, artificial language experiments are typically conducted to explore what the relevant linguistic variables are, what sources of information infants are able to use and when, and how researchers can go about modeling the learning and acquisition process. [5] Aslin and Newport, for example, have used artificial languages to explore what features of linguistic input make certain patterns salient and easily detectable by infants, allowing them to easily contrast the detection of syllable repetition with that of word-final syllables and make conclusions about the conditions under which either feature is recognized as important. [10]
Statistical learning has been shown to play a large role in language acquisition, but social interaction appears to be a necessary component of learning as well. In one study, infants presented with audio or audiovisual recordings of Mandarin speakers failed to distinguish the phonemes of the language. [11] [12] This implies that simply hearing the sounds is not sufficient for language learning; social interaction cues the infant to track the relevant statistics. Speech directed specifically at infants, known as "child-directed" speech, is more repetitive and associative, which makes it easier to learn. These child-directed interactions could also be the reason why it is easier to learn a language as a child than as an adult.
Studies of bilingual infants, such as a study by Bijeljac-Babic et al. on French-learning infants, have offered insight into the role of prosody in language acquisition. [13] The Bijeljac-Babic study found that language dominance influences "sensitivity to prosodic contrasts." Although this was not a study of statistical learning, its findings on prosodic pattern recognition may have implications for statistical learning.
It is possible that the kinds of language experience and knowledge gained through the statistical learning of a first language influence one's acquisition of a second language. Some research points to the possibility that the difficulty of learning a second language may derive from the structural patterns and language cues that one has already picked up during first-language acquisition. In that sense, the knowledge of and skills for processing the first language gained from statistical acquisition may act as a complicating factor when one tries to learn a new language with different sentence structures, grammatical rules, and speech patterns.[ citation needed ]
The first step in developing knowledge of a system as complex as natural language is learning to distinguish the important language-specific classes of sounds, called phonemes, that distinguish meaning between words. UBC psychologist Janet Werker, since her influential series of experiments in the 1980s, has been one of the most prominent figures in the effort to understand the process by which human babies develop these phonological distinctions. While adults who speak different languages are unable to distinguish meaningful sound differences in other languages that do not delineate different meanings in their own, babies are born with the ability to universally distinguish all speech sounds. Werker's work has shown that while infants at six to eight months are still able to perceive the difference between certain Hindi and English consonants, they have completely lost this ability by 11 to 13 months. [6]
It is now commonly accepted that children use some form of perceptual distributional learning, by which categories are discovered by clumping similar instances of an input stimulus, to form phonetic categories early in life. [5] Developing children have been found to be effective judges of linguistic authority, screening the input they model their language on by shifting their attention less to speakers who mispronounce words. [5] Infants also use statistical tracking to calculate the likelihood that particular phonemes will follow each other. [14]
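The distributional "clumping" described above can be sketched as a toy one-dimensional clustering procedure. The voice-onset-time values, the two-category assumption, and the update rule below are illustrative choices, not drawn from any cited study:

```python
def two_category_clumping(values, iterations=20):
    """Minimal 1-D two-cluster k-means: 'clump' tokens around two means,
    a toy stand-in for distributional phonetic category learning."""
    lo, hi = min(values), max(values)
    for _ in range(iterations):
        # Assign each token to whichever current category center is closer.
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        # Move each center to the mean of its clump.
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

# Hypothetical voice-onset times (ms): a short-lag /b/-like clump and a
# long-lag /p/-like clump, with no category labels given to the learner.
vot_tokens = [5, 8, 10, 12, 15, 55, 60, 62, 65, 70]
print(two_category_clumping(vot_tokens))  # two category centers emerge
```

The two returned centers fall near the middle of each clump, mirroring how a learner could discover a two-way voicing contrast from the distribution of tokens alone.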
Parsing is the process by which a continuous speech stream is segmented into its discrete meaningful units, e.g. sentences, words, and syllables. Saffran (1996) is a seminal study in this line of research. Infants were presented with two minutes of continuous speech in an artificial language from a computerized voice to remove any interference from extraneous variables such as prosody or intonation. After this presentation, infants were able to distinguish words from nonwords, as measured by longer looking times for the nonwords. [15]
An important concept in understanding these results is that of transitional probability, the likelihood of an element, in this case a syllable, following or preceding another element. In this experiment, syllables that went together in words had a much higher transitional probability than did syllables at word boundaries that just happened to be adjacent. [5] [8] [15] Remarkably, infants were able, after a short two-minute presentation, to keep track of these statistics and recognize high-probability words. Further research has since replicated these results with natural languages unfamiliar to infants, indicating that learning infants also keep track of the direction (forward or backward) of the transitional probabilities. [8] Though the neural processes behind this phenomenon remain largely unknown, recent research reports increased activity in the left inferior frontal gyrus and the middle frontal gyrus during the detection of word boundaries. [16]
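The transitional-probability computation can be sketched directly. Below is a minimal example using a hypothetical three-word syllable stream in the style of Saffran's stimuli; the syllables, stream length, and segmentation threshold are illustrative assumptions:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP: P(next | current) = count(pair) / count(current as predecessor)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    pred_counts = Counter(syllables[:-1])
    return {pair: n / pred_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever the forward TP dips below threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:          # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A stream built from three hypothetical "words" (tupiro, golabu, bidaku),
# concatenated without pauses, as in Saffran-style stimuli.
stream = ("tu pi ro go la bu bi da ku tu pi ro bi da ku go la bu "
          "tu pi ro go la bu bi da ku").split()
print(segment(stream))  # recovers the three embedded words in order
```

Within-word transitions here have TP 1.0 while transitions across word boundaries have TP of at most 2/3, so a threshold between those values recovers the word boundaries, which is the core statistical cue in the infant studies.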
The development of syllable-ordering biases is an important step along the way to full language development. The ability to categorize syllables and group together frequently co-occurring sequences may be critical in the development of a protolexicon, a set of common language-specific word templates based on characteristic patterns in the words an infant hears. The development of this protolexicon may in turn allow for the recognition of new types of patterns, e.g. the high frequency of word-initially stressed consonants in English, which would allow infants to further parse words by recognizing common prosodic phrasings as autonomous linguistic units, restarting the dynamic cycle of word and language learning. [5]
The question of how novice language-users are capable of associating learned labels with the appropriate referent, the person or object in the environment which the label names, has been at the heart of philosophical considerations of language and meaning from Plato to Quine to Hofstadter. [17] This problem, that of finding some solid relationship between word and object, of finding a word's meaning without succumbing to an infinite recursion of dictionary look-up, is known as the symbol grounding problem. [18]
Researchers have shown that this problem is intimately linked with the ability to parse language, and that those words that are easy to segment due to their high transitional probabilities are also easier to map to an appropriate referent. [8] This serves as further evidence of the developmental progression of language acquisition, with children requiring an understanding of the sound distributions of natural languages to form phonetic categories, parse words based on these categories, and then use these parses to map them to objects as labels.
The developmentally earliest understanding of word-to-referent associations has been reported at six months of age, with infants comprehending the words 'mommy' and 'daddy' or their familial or cultural equivalents. Further studies have shown that infants quickly develop in this capacity and by seven months are capable of learning associations between moving images and nonsense words and syllables. [5]
It is important to note that there is a distinction, often confounded in acquisition research, between mapping a label to a specific instance or individual and mapping a label to an entire class of objects. This latter process is sometimes referred to as generalization or rule learning. Research has shown that if input is encoded in terms of perceptually salient dimensions rather than specific details and if patterns in the input indicate that a number of objects are named interchangeably in the same context, a language learner will be much more likely to generalize that name to every instance with the relevant features. This tendency is heavily dependent on the consistency of context clues and the degree to which word contexts overlap in the input. [10] These differences are furthermore linked to the well-known patterns of under- and overgeneralization in infant word learning. Research has also shown that the frequency of co-occurrence of referents is tracked as well, which helps create associations and dispel ambiguities in object-referent models. [19]
The ability to appropriately generalize to whole classes of yet unseen words, coupled with the abilities to parse continuous speech and keep track of word-ordering regularities, may be the critical skills necessary to develop proficiency with and knowledge of syntax and grammar. [5]
According to recent research, there is no neural evidence of statistical language learning in children with autism spectrum disorders. When exposed to a continuous stream of artificial speech, neurotypical children displayed less cortical activity in the dorsolateral frontal cortices (specifically the middle frontal gyrus) as cues for word boundaries increased. However, activity in these networks remained unchanged in autistic children, regardless of the verbal cues provided. This evidence, highlighting the importance of proper frontal lobe function, supports the "executive functions" theory, which is used to explain some of the biologically related causes of autistic language deficits. With impaired working memory, decision making, planning, and goal setting, which are vital functions of the frontal lobe, autistic children are at a loss when it comes to socializing and communication (Ozonoff et al., 2004). Additionally, researchers have found that the level of communicative impairment in autistic children was inversely correlated with signal increases in these same regions during exposure to artificial languages. Based on this evidence, researchers have concluded that children with autism spectrum disorders lack the neural architecture to identify word boundaries in continuous speech. Early word segmentation skills have been shown to predict later language development, which could explain why language delay is a hallmark feature of autism spectrum disorders. [20]
Language learning takes place in different contexts, with both the infant and the caregiver engaging in social interactions. Recent research has investigated how infants and adults use cross-situational statistics to learn not only the meanings of words but also the constraints within a context. For example, Smith and his colleagues proposed that infants learn language by acquiring a bias to generalize labels to similar objects that come from well-defined categories. Important to this view is the idea that the constraints that assist word learning are not independent of the input itself or the infant's experience. Rather, constraints come about as infants learn about the ways words are used and begin to pay attention to certain characteristics of the objects those words have been used to represent.
An inductive learning problem can occur because words are often used in ambiguous situations in which more than one possible referent is available. This can confuse infants, as they may be unable to determine which objects a word should be extended to label. Smith and Yu proposed that one way to resolve such ambiguous situations is to track word-referent pairings over multiple scenes. For instance, an infant who hears a word in the presence of object A and object B will be unsure of whether the word refers to object A or object B. However, if the infant then hears the label again in the presence of object B and object C, the infant can conclude that object B is the referent of the label, because object B consistently pairs with the label across different situations.
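Smith and Yu's cross-situational strategy can be sketched as simple co-occurrence counting across scenes. The words, objects, and scene design below are hypothetical, loosely modeled on their two-word, two-object trials:

```python
from collections import defaultdict
from itertools import product

def cross_situational_counts(scenes):
    """Tally how often each word co-occurs with each candidate referent."""
    counts = defaultdict(int)
    for words, referents in scenes:
        for w, r in product(words, referents):
            counts[(w, r)] += 1
    return counts

def best_referent(counts, word):
    """Pick the referent that co-occurred with the word most often."""
    candidates = {r: n for (w, r), n in counts.items() if w == word}
    return max(candidates, key=candidates.get)

# Hypothetical ambiguous scenes: each trial pairs two words with two
# objects, so no single trial disambiguates any word on its own.
scenes = [
    (["bosa", "gasser"], ["A", "B"]),   # "bosa" could mean A or B...
    (["bosa", "manu"],   ["A", "C"]),   # ...but only A recurs with "bosa"
    (["gasser", "manu"], ["B", "C"]),
]
counts = cross_situational_counts(scenes)
print(best_referent(counts, "bosa"))   # "A" pairs with "bosa" in both scenes
```

No individual scene identifies a referent, yet aggregating counts across scenes resolves all three words, which is the logic of the infant's cross-situational inference.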
Computational models have long been used to explore the mechanisms by which language learners process and manipulate linguistic information. Models of this type allow researchers to systematically control important learning variables that are oftentimes difficult to manipulate at all in human participants. [21]
Associative neural network models of language acquisition are one of the oldest types of cognitive model, using distributed representations and changes in the weights of the connections between the nodes that make up these representations to simulate learning in a manner reminiscent of the plasticity-based neuronal reorganization that forms the basis of human learning and memory. [22] Associative models represent a break with classical cognitive models, characterized by discrete and context-free symbols, in favor of a dynamical systems approach to language better capable of handling temporal considerations. [23]
A precursor to this approach, and one of the first model types to account for the dimension of time in linguistic comprehension and production was Elman's simple recurrent network (SRN). By making use of a feedback network to represent the system's past states, SRNs were able in a word-prediction task to cluster input into self-organized grammatical categories based solely on statistical co-occurrence patterns. [23] [24]
Early successes such as these paved the way for dynamical systems research into linguistic acquisition, answering many questions about early linguistic development but leaving many others unanswered, such as how these statistically acquired lexemes are represented. [23] Of particular importance in recent research has been the effort to understand the dynamic interaction of learning (e.g. language-based) and learner (e.g. speaker-based) variables in lexical organization and competition in bilinguals. [21] In the ceaseless effort to move toward more psychologically realistic models, many researchers have turned to a subset of associative models, self-organizing maps (SOMs), as established, cognitively plausible models of language development. [25] [26]
SOMs have been helpful to researchers in identifying and investigating the constraints and variables of interest in a number of acquisition processes, and in exploring the consequences of these findings on linguistic and cognitive theories. By identifying working memory as an important constraint both for language learners and for current computational models, researchers have been able to show that manipulation of this variable allows for syntactic bootstrapping, drawing not just categorical but actual content meaning from words' positional co-occurrence in sentences. [27]
Some recent models of language acquisition have centered around methods of Bayesian Inference to account for infants' abilities to appropriately parse streams of speech and acquire word meanings. Models of this type rely heavily on the notion of conditional probability (the probability of A given B), in line with findings concerning infants' use of transitional probabilities of words and syllables to learn words. [15]
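A minimal sketch of this kind of inference applies Bayes' rule to competing hypotheses about a word's meaning; the hypotheses, priors, and likelihood numbers below are invented for illustration, not taken from any cited model:

```python
def posterior(priors, likelihoods, observations):
    """Bayes' rule: P(h | data) is proportional to P(h) * product of
    P(obs | h) over the observations, then normalized to sum to 1."""
    scores = {}
    for h, prior in priors.items():
        p = prior
        for obs in observations:
            p *= likelihoods[h].get(obs, 1e-6)  # tiny floor for unseen data
        scores[h] = p
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

# Hypotheses about what the novel word "dax" labels, with the likelihood of
# each observed scene under each hypothesis (illustrative numbers only).
priors = {"dog": 0.5, "animal": 0.5}
likelihoods = {
    "dog":    {"saw_dog": 0.9,  "saw_cat": 0.05},
    "animal": {"saw_dog": 0.45, "saw_cat": 0.45},
}
post = posterior(priors, likelihoods, ["saw_dog", "saw_dog", "saw_dog"])
print(post)
```

Because the narrow hypothesis "dog" assigns higher likelihood to each dog sighting, three dog-only observations shift the posterior strongly toward it, even with equal priors. Seeing only dogs would be a suspicious coincidence if "dax" meant any animal.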
Models that make use of these probabilistic methods have been able to merge the previously dichotomous language acquisition perspectives of social theories that emphasize the importance of learning speaker intentions and statistical and associative theories that rely on cross-situational contexts into a single joint-inference problem. This approach has led to important results in explaining acquisition phenomena such as mutual exclusivity, one-trial learning or fast mapping, and the use of social intentions. [28]
While these results seem to be robust, studies concerning these models' abilities to handle more complex situations such as multiple referent to single label mapping, multiple label to single referent mapping, and bilingual language acquisition in comparison to associative models' successes in these areas have yet to be explored. Hope remains, though, that these model types may be merged to provide a comprehensive account of language acquisition. [29]
Along the lines of probabilistic frequencies, the C/V hypothesis states that language hearers rely on consonant frequencies, rather than vowels, to distinguish between words (lexical distinctions) in continuous speech, while vowels are more pertinent to rhythmic identification. Several follow-up studies supported this finding, showing that vowels are processed independently of their local statistical distribution. [30] Other research has shown that the consonant-vowel ratio does not influence the sizes of lexicons when comparing distinct languages. In languages with a higher consonant ratio, children may depend more on consonant neighbors than on rhyme or vowel frequency. [31]
Some models of language acquisition have been based on adaptive parsing [32] and grammar induction algorithms. [33]
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.
Language acquisition is the process by which humans acquire the capacity to perceive and comprehend language. In other words, it is how human beings gain the ability to be aware of language, to understand it, and to produce and use words and sentences to communicate.
Psycholinguistics or psychology of language is the study of the interrelation between linguistic factors and psychological aspects. The discipline is mainly concerned with the mechanisms by which language is processed and represented in the mind and brain; that is, the psychological and neurobiological factors that enable humans to acquire, use, comprehend, and produce language.
Vocabulary development is a process by which people acquire words. Babbling shifts towards meaningful speech as infants grow and produce their first words around the age of one year. In early word learning, infants build their vocabulary slowly. By the age of 18 months, infants can typically produce about 50 words and begin to make word combinations.
In linguistics, linguistic competence is the system of unconscious knowledge that one knows when they know a language. It is distinguished from linguistic performance, which includes all other factors that allow one to use one's language in practice.
Language development in humans is a process which starts early in life. Infants start without knowing a language, yet by 10 months, babies can distinguish speech sounds and engage in babbling. Some research has shown that the earliest learning begins in utero when the fetus starts to recognize the sounds and speech patterns of its mother's voice and differentiate them from other sounds after birth.
Poverty of the stimulus (POS) is the controversial argument from linguistics that children are not exposed to rich enough data within their linguistic environments to acquire every feature of their language. This is considered evidence contrary to the empiricist idea that language is learned solely through experience. The claim is that the sentences children hear while learning a language do not contain the information needed to develop a thorough understanding of the grammar of the language.
Simultaneous bilingualism is a form of bilingualism that takes place when a child becomes bilingual by learning two languages from birth. According to Annick De Houwer, in an article in The Handbook of Child Language, simultaneous bilingualism takes place in "children who are regularly addressed in two spoken languages from before the age of two and who continue to be regularly addressed in those languages up until the final stages" of language development. Both languages are acquired as first languages. This is in contrast to sequential bilingualism, in which the second language is learned not as a native language but a foreign language.
In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs. This factor contributes to the ongoing nature versus nurture dispute, one borne from the current difficulty of reverse engineering the subconscious operations of the brain, especially the human brain.
Speech segmentation is the process of identifying the boundaries between words, syllables, or phonemes in spoken natural languages. The term applies both to the mental processes used by humans, and to artificial processes of natural language processing.
Bootstrapping is a term used in language acquisition in the field of linguistics. It refers to the idea that humans are born innately equipped with a mental faculty that forms the basis of language. It is this language faculty that allows children to effortlessly acquire language. As a process, bootstrapping can be divided into different domains, according to whether it involves semantic bootstrapping, syntactic bootstrapping, prosodic bootstrapping, or pragmatic bootstrapping.
Developmental linguistics is the study of the development of linguistic ability in an individual, particularly the acquisition of language in childhood. It involves research into the different stages in language acquisition, language retention, and language loss in both first and second languages, in addition to the area of bilingualism. Before infants can speak, the neural circuits in their brains are constantly being influenced by exposure to language. Developmental linguistics supports the idea that linguistic analysis is not timeless, as claimed in other approaches, but time-sensitive, and is not autonomous – social-communicative as well as bio-neurological aspects have to be taken into account in determining the causes of linguistic developments.
Phonological development refers to how children learn to organize sounds into meaning or language (phonology) during their stages of growth.
In linguistics, the innateness hypothesis, also known as the nativist hypothesis, holds that humans are born with at least some knowledge of linguistic structure. On this hypothesis, language acquisition involves filling in the details of an innate blueprint rather than being an entirely inductive process. The hypothesis is one of the cornerstones of generative grammar and related approaches in linguistics. Arguments in favour include the poverty of the stimulus, the universality of language acquisition, as well as experimental studies on learning and learnability. However, these arguments have been criticized, and the hypothesis is widely rejected in other traditions such as usage-based linguistics. The term was coined by Hilary Putnam in reference to the views of Noam Chomsky.
The Competition Model is a psycholinguistic theory of language acquisition and sentence processing, developed by Elizabeth Bates and Brian MacWhinney (1982). The claim in MacWhinney, Bates, and Kliegl (1984) is that "the forms of natural languages are created, governed, constrained, acquired, and used in the service of communicative functions." Furthermore, the model holds that processing is based on an online competition between these communicative functions or motives. The model focuses on competition during sentence processing, crosslinguistic competition in bilingualism, and the role of competition in language acquisition. It is an emergentist theory of language acquisition and processing, serving as an alternative to strict innatist and empiricist theories. According to the Competition Model, patterns in language arise from Darwinian competition and selection on a variety of time/process scales including phylogenetic, ontogenetic, social diffusion, and synchronic scales.
Elissa Lee Newport is a professor of neurology and director of the Center for Brain Plasticity and Recovery at Georgetown University. She specializes in language acquisition and developmental psycholinguistics, focusing on the relationship between language development and language structure, and most recently on the effects of pediatric stroke on the organization and recovery of language.
Richard N. Aslin is an American psychologist. He is currently a Senior Scientist at Haskins Laboratories and a professor at Yale University. Until December 2016, Aslin was William R. Kenan Professor of Brain & Cognitive Sciences and Center for Visual Sciences at the University of Rochester. During his time in Rochester, he was also Director of the Rochester Center for Brain Imaging and the Rochester Baby Lab. He had worked at the university for over thirty years, until he resigned in protest of the university's handling of a sexual harassment complaint about a junior member of his department.
The following outline is provided as an overview of and topical guide to natural-language processing:
Statistical learning is the ability for humans and other animals to extract statistical regularities from the world around them to learn about the environment. Although statistical learning is now thought to be a generalized learning mechanism, the phenomenon was first identified in human infant language acquisition.
Prosodic bootstrapping in linguistics refers to the hypothesis that learners of a primary language (L1) use prosodic features such as pitch, tempo, rhythm, amplitude, and other auditory aspects from the speech signal as a cue to identify other properties of grammar, such as syntactic structure. Acoustically signaled prosodic units in the stream of speech may provide critical perceptual cues by which infants initially discover syntactic phrases in their language. Although these features by themselves are not enough to help infants learn the entire syntax of their native language, they provide various cues about different grammatical properties of the language, such as identifying the ordering of heads and complements in the language using stress prominence, indicating the location of phrase boundaries, and word boundaries. It is argued that prosody of a language plays an initial role in the acquisition of the first language helping children to uncover the syntax of the language, mainly due to the fact that children are sensitive to prosodic cues at a very young age.