In linguistics, poverty of the stimulus (POS) arguments are arguments that children are not exposed to rich enough data within their linguistic environments to acquire every feature of their language. Poverty of the stimulus arguments are used as evidence for universal grammar, the notion that at least some aspects of linguistic competence are innate. The term "poverty of the stimulus" was coined by Noam Chomsky in 1980. The empirical and conceptual bases of these arguments remain a topic of continuing debate in linguistics.
Noam Chomsky coined the term "poverty of the stimulus" in 1980. The idea is closely related to what Chomsky calls "Plato's Problem", a philosophical approach he outlined in the first chapter of his 1986 book Knowledge of Language. [1] Plato's Problem traces back to Meno, a Socratic dialogue. In Meno, Socrates draws out knowledge of geometric concepts from a slave who was never explicitly taught them. [2] Plato's Problem directly parallels the innateness of language, universal grammar, and more specifically the poverty of the stimulus argument, because it suggests that people's knowledge is richer than the experience from which it could have come. Chomsky suggests that humans are not exposed to all structures of their language, yet they achieve full knowledge of these structures.
Linguistic nativism is the theory that humans are born with some knowledge of language; on this view, language is not acquired entirely through experience. According to Noam Chomsky, [3] "The speed and precision of vocabulary acquisition leaves no real alternative to the conclusion that the child somehow has the concepts available before experience with language and is basically learning labels for concepts that are already a part of his or her conceptual apparatus." One of the most significant arguments generative grammarians have for linguistic nativism is the poverty of the stimulus argument. [4] [5] [6]
Pullum and Scholz frame the poverty of the stimulus argument by examining the ways in which the input is insufficient for language acquisition. [7] First, children are exposed only to positive evidence: they do not receive explicit correction or instruction about what is not possible in the language. [7] [8] Second, the input that children receive is degenerate in scope and quality. [9] Degeneracy of scope means that the input does not contain information about the full extent of any grammatical rule. Degeneracy of quality means that children are exposed to speech errors, utterances by nonnative speakers, and false starts, potentially obscuring the grammatical structure of the language. Furthermore, the linguistic data each child is exposed to is different, so the basis for learning is idiosyncratic. Despite these insufficiencies, children eventually acquire the grammar of the language they are exposed to, while other organisms in the same environment do not. [10] From the nativists' point of view, the insufficiency of the input leads to the conclusion that humans are hard-wired with a universal grammar (UG), and thus supports the innateness hypothesis.
However, the argument that the poverty of the stimulus supports the innateness hypothesis remains controversial. [11] For example, Fiona Cowie claims that the poverty of the stimulus argument fails "on both empirical and conceptual grounds to support nativism". [12]
The literature contains poverty of the stimulus arguments concerning a wide range of linguistic phenomena.
In general, pronouns can refer to any prominent individual in the discourse context. However, a pronoun cannot find its antecedent in certain structural positions, as defined by Binding Theory. For example, the pronoun "he" can refer to the Ninja Turtle in (1) but not in (2). Given that speech to children does not indicate which interpretations are impossible, the input is equally consistent with a grammar that allows coreference between "he" and "the Ninja Turtle" in (2) and one that does not. But since all speakers of English recognize that (2) does not allow this coreference, this aspect of the grammar must come from some property internal to the learner. [9]
The sentences in (1) and (2) illustrate the active-passive alternation in English. The noun phrase after the verb in the active (1) is the subject in the passive (2). Data like (2) would be compatible with a passive rule stated in terms of linear order (move the first NP after the verb) or in terms of syntactic structure (move the highest NP after the verb). The data in (3–5) illustrate that the actual rule is formulated in terms of structure: if it were stated in terms of linear order, then (4) would be ungrammatical and (5) would be grammatical, but the opposite is true. However, children may not be exposed to sentences like (3–5) as evidence in favor of the correct grammar. Thus, the fact that all adult speakers agree that (4) is grammatical and (5) is not suggests that the linear rule was never even considered, and that children are predisposed to a structure-based grammatical system. [9]
The English word "one" can refer back to a previously mentioned property in the discourse. For example, in (1), "one" can mean "ball".
In (2), "one" is interpreted as "red ball". However, even if a speaker intends (2) in this way, it would be difficult to distinguish that interpretation from one in which "one" simply means "ball": whenever a speaker refers to a red ball, they also refer to a ball, since the set of red balls is a subset of the set of balls. Eighteen-month-olds, like adults, behave as though 'one' refers to 'red ball' and not to 'ball'. [13] The evidence available to children is thus systematically ambiguous between a grammar in which "one" refers back to nouns and one in which "one" refers back to noun phrases. Despite this ambiguity, children learn the narrower interpretation, suggesting that some property other than the input is responsible for their interpretations.
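The subset relation at the heart of this argument can be shown with a toy sketch (the object names and sets here are hypothetical illustrations, not materials from the cited study):

```python
# Toy illustration of the subset problem for "one"-anaphora: every occasion
# on which "one" picks out a red ball is consistent with BOTH hypotheses,
# because the set of red balls is a subset of the set of balls.
balls = {"red_ball_1", "red_ball_2", "blue_ball_1", "blue_ball_2"}
red_balls = {b for b in balls if b.startswith("red")}

observations = ["red_ball_1", "red_ball_2"]  # what the child actually sees

consistent_with_ball = all(o in balls for o in observations)
consistent_with_red_ball = all(o in red_balls for o in observations)

# Both hypotheses survive the data, so the input alone cannot decide
# between them.
print(consistent_with_ball, consistent_with_red_ball)  # True True
```

Because any "red ball" observation is automatically a "ball" observation, no amount of such positive evidence can rule out the broader hypothesis; that is the ambiguity the argument turns on.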
In Wh-questions, the Wh-word at the beginning of the sentence (the filler) is related to a position later in the sentence (the gap). This relation can hold over an unbounded distance, as in (1). However, there are restrictions on the gap positions that a filler can be related to. These restrictions are called syntactic islands (2). Because questions with islands are ungrammatical, they are not included in the speech that children hear—but neither are grammatical Wh-questions that span multiple clauses. Because the speech children are exposed to is consistent with grammars that have island constraints and grammars that do not, something internal to the child must contribute this knowledge.
Bergelson & Idsardi (2009) presented adults with words drawn from an artificial language. [14] The words contained 3 CV syllables. If the last vowel was long, then it bore stress; otherwise, stress fell on the first syllable. This pattern is consistent with two grammars. In one grammar, a long vowel bears stress if it is the last segment in the word. This is a rule based on absolute finality. In the other grammar, a long vowel bears stress if it is the last vowel in the word (i.e., even if it is not the last segment of the word). This is a rule based on relative finality. In natural languages, stress rules make reference to relative finality but not to absolute finality. After being exposed to these words, participants were tested to see whether they thought that a word with a long vowel in a closed syllable (CVVC) would bear stress. If it did, that would be consistent with the relative-final grammar but not with the absolute-final grammar. English-speaking adults (tested through computer software) were more likely to accept the words from the relative-final grammar than from the absolute-final grammar. Since the data they were exposed to was equally consistent with both grammars, and since neither rule is a rule of English, the source of this decision must have come from the participants, not from any aspect of their experience. In addition, eight-month-old infants (tested via the Headturn Preference Procedure) were found to have the same preference as adults. Given that this preference could not have come from their exposure to either the artificial language or to their native language, the researchers concluded that human language acquisition mechanisms are "hardwired" to lead infants towards certain generalizations, consistent with the argument for the poverty of the stimulus.
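The two candidate grammars can be sketched as simple functions (a minimal illustration under assumed notation, not the study's actual stimuli: words are tuples of syllable strings, with a long vowel written "VV"):

```python
# Hypothetical sketch of the two stress grammars. Both agree on the training
# words (open final syllables ending in a long vowel) but diverge on CVVC
# test items, where the long vowel is the last VOWEL but not the last SEGMENT.

def stress_absolute_final(word):
    """Stress the final syllable only if its long vowel is word-final."""
    if word[-1].endswith("VV"):   # long vowel is the word's last segment
        return len(word) - 1
    return 0                      # otherwise initial stress

def stress_relative_final(word):
    """Stress the final syllable if it contains the word's last vowel, long."""
    if "VV" in word[-1]:          # long vowel anywhere in the final syllable
        return len(word) - 1
    return 0                      # otherwise initial stress

training = ("CV", "CV", "CVV")    # the grammars agree: final stress
test = ("CV", "CV", "CVVC")       # closed final syllable: the grammars diverge

assert stress_absolute_final(training) == stress_relative_final(training) == 2
print(stress_absolute_final(test))   # 0 (initial stress)
print(stress_relative_final(test))   # 2 (final stress, the option participants preferred)
```

The training data cannot distinguish the two functions; only the withheld CVVC items do, which is why participants' preference for the relative-final pattern must reflect something they brought to the task.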
Halle (1978) [15] argues that the morphophonological rule governing the English plural produces forms that are consistent with two grammars. In one grammar, the plural is pronounced as [s] if it follows one of the sounds [p, t, k, f, θ]; otherwise it is pronounced as [z]. In the other grammar, the plural is pronounced as [s] if it follows a voiceless consonant. These rules are exactly equal in their coverage of English since the set of consonants that triggers the [s] pronunciation is identical in the two cases. However, Halle also observes that English speakers consistently pluralize the German name Bach (pronounced /bax/) as /baxs/, despite not having any experience with the /x/ sound, which is nonexistent in English. Since there is "no indication" that speakers could have acquired this knowledge, Halle argues that the tendency to build rules in terms of natural classes comes from a factor internal to the child and not from their experience. [15]
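Halle's two rules can be contrasted in a short sketch (segment symbols are a rough ASCII stand-in for IPA, with "T" for theta; sibilant-final words, which take the [ɪz] plural via a separate epenthesis rule, are set aside here as the article's simplification does):

```python
# Hypothetical sketch of Halle's two plural grammars. The "list" grammar
# memorizes the English-attested triggering sounds; the "natural class"
# grammar refers to voicelessness. They agree on every English final sound
# considered, but diverge on the un-English /x/ of "Bach".

LISTED = {"p", "t", "k", "f", "T"}                 # memorized trigger list
VOICELESS = {"p", "t", "k", "f", "T", "x", "h"}    # a voicelessness-based class

def plural_list_grammar(final_sound):
    return "s" if final_sound in LISTED else "z"

def plural_natural_class(final_sound):
    return "s" if final_sound in VOICELESS else "z"

# The two grammars make identical predictions for English final sounds...
for sound in ["p", "t", "k", "f", "T", "b", "d", "g", "m", "n", "l"]:
    assert plural_list_grammar(sound) == plural_natural_class(sound)

# ...but diverge on /x/, which English speakers never hear. Speakers say
# [baxs], matching the natural-class grammar.
print(plural_list_grammar("x"))     # z (not on the memorized list)
print(plural_natural_class("x"))    # s (/x/ is voiceless)
```

Since the learner's input never contains /x/, only an internal bias toward natural-class rules explains why speakers converge on [baxs] rather than [baxz].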
The poverty of the stimulus also applies in the domain of word learning. When learning a new word, children are exposed to examples of the word's referent, but not to the full extent of the category. For example, in learning the word "dog", a child might see a German Shepherd, a Great Dane and a Poodle. How do they know to extend this category to include Dachshunds and Bulldogs? The situations in which the word is used cannot provide the relevant information. Thus, something internal to learners must shape the way that they generalize. This problem is closely related to Quine's gavagai problem.
In other cases, words refer to aspects of the world that cannot be observed directly. For example, Lila Gleitman poses a POS argument with respect to verbs that label mental states. She observes that a learner cannot see inside another person's mind, and so an utterance of "Kim thinks that it is raining" is likely to occur in the same kinds of contexts as "Kim wonders if it is raining" or even "Kim wants it to rain". If no aspect of the context can determine whether a mental state verb refers to thinking, wanting, or wondering, then some aspect of children's minds must direct their attention to other cues. Thus, the ability to learn these word meanings must be shaped by factors internal to the child and not simply by the conditions of the words' use. [16]
The empirical basis of poverty of the stimulus arguments has been challenged by Geoffrey Pullum and others, leading to back-and-forth debate in the language acquisition literature. [17] [18]
Recent work has also suggested that some recurrent neural network architectures are able to learn hierarchical structure without an explicit constraint. This raises the possibility that poverty of the stimulus arguments regarding hierarchical structure may have been mistaken. [19]
Language acquisition is the process by which humans acquire the capacity to perceive and comprehend language. In other words, it is how human beings gain the ability to be aware of language, to understand it, and to produce and use words and sentences to communicate.
Universal grammar (UG), in modern linguistics, is the theory of the innate biological component of the language faculty, usually credited to Noam Chomsky. The basic postulate of UG is that there are innate constraints on what the grammar of a possible human language could be. When linguistic stimuli are received in the course of language acquisition, children then adopt specific syntactic rules that conform to UG. The advocates of this theory emphasize and partially rely on the poverty of the stimulus (POS) argument and the existence of some universal properties of natural human languages. However, the latter has not been firmly established, as some linguists have argued that languages are so diverse that such universality is rare, and the theory of universal grammar remains controversial among linguists.
In linguistics, transformational grammar (TG) or transformational-generative grammar (TGG) is part of the theory of generative grammar, especially of natural languages. It considers grammar to be a system of rules that generate exactly those combinations of words that form grammatical sentences in a given language and involves the use of defined operations to produce new sentences from existing ones.
Cognitive linguistics is an interdisciplinary branch of linguistics, combining knowledge and research from cognitive science, cognitive psychology, neuropsychology, and linguistics. Models and theoretical accounts of cognitive linguistics are considered psychologically real, and research in cognitive linguistics aims to help understand cognition in general; it is seen as a road into the human mind.
Generative grammar is a research tradition in linguistics that aims to explain the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative linguists, or generativists, tend to share certain working assumptions such as the competence–performance distinction and the notion that some domain-specific aspects of grammar are partly innate in humans. These assumptions are rejected in non-generative approaches such as usage-based models of language. Generative linguistics includes work in core areas such as syntax, semantics, phonology, psycholinguistics, and language acquisition, with additional extensions to topics including biolinguistics and music cognition.
Geoffrey Keith Pullum is a British and American linguist specialising in the study of English. Pullum has published over 300 articles and books on various topics in linguistics, including phonology, morphology, semantics, pragmatics, computational linguistics, and philosophy of language. He is Professor Emeritus of General Linguistics at the University of Edinburgh.
In the philosophy of mind, innatism is the view that the mind is born with already-formed ideas, knowledge, and beliefs. The opposing doctrine, that the mind is a tabula rasa at birth and all knowledge is gained from experience and the senses, is called empiricism.
The Language Acquisition Device (LAD) is a claim from language acquisition research proposed by Noam Chomsky in the 1960s. The LAD concept is a purported instinctive mental capacity which enables an infant to acquire and produce language. It is a component of the nativist theory of language. This theory asserts that humans are born with the instinct or "innate facility" for acquiring language. The main argument given in favor of the LAD was the argument from the poverty of the stimulus, which argues that unless children have significant innate knowledge of grammar, they would not be able to learn language as quickly as they do, given that they never have access to negative evidence and rarely receive direct instruction in their first language.
Principles and parameters is a framework within generative linguistics in which the syntax of a natural language is described in accordance with general principles and specific parameters that for particular languages are either turned on or off. For example, whether a language is head-initial or head-final is regarded as a parameter that is set one way or the other for a particular language. Principles and parameters was largely formulated by the linguists Noam Chomsky and Howard Lasnik. Many linguists have worked within this framework, and for a period of time it was considered the dominant form of mainstream generative linguistics.
Syntactic Structures is an important work in linguistics by American linguist Noam Chomsky, originally published in 1957. A short monograph of about a hundred pages, it is recognized as one of the most significant and influential linguistic studies of the 20th century. It contains the now-famous sentence "Colorless green ideas sleep furiously", which Chomsky offered as an example of a grammatically correct sentence that has no discernible meaning, thus arguing for the independence of syntax from semantics.
In the field of psychology, nativism is the view that certain skills or abilities are "native" or hard-wired into the brain at birth. This is in contrast to the "blank slate" or tabula rasa view, which states that the brain has inborn capabilities for learning from the environment but does not contain content such as innate beliefs. This factor contributes to the ongoing nature versus nurture dispute, one born of the current difficulty of reverse engineering the subconscious operations of the brain, especially the human brain.
Plato's problem is the term given by Noam Chomsky to "the problem of explaining how we can know so much" given our limited experience. Chomsky believes that Plato asked how we should account for the rich, intrinsic, common structure of human cognition, when it seems underdetermined by extrinsic evidence presented to a person during human development. In linguistics this is referred to as the "argument from poverty of the stimulus" (APS). Such arguments are common in the natural sciences, where a developing theory is always "underdetermined by evidence". Chomsky's approach to Plato's problem involves treating cognition as a normal research topic in the natural sciences, so cognition can be studied to elucidate intertwined genetic, developmental, and biophysical factors. Plato's problem is most clearly illustrated in the Meno dialogue, in which Socrates demonstrates that an uneducated boy nevertheless understands geometric principles.
The linguistics wars were extended disputes among American theoretical linguists that occurred mostly during the 1960s and 1970s, stemming from a disagreement between Noam Chomsky and several of his associates and students. The debates started in 1967 when the linguists Paul Postal, John R. Ross, George Lakoff, and James D. McCawley, self-dubbed the "Four Horsemen of the Apocalypse", proposed an alternative approach in which the relation between semantics and syntax is viewed differently, treating deep structures as meaning rather than as syntactic objects. While Chomsky and other generative grammarians argued that meaning is driven by an underlying syntax, generative semanticists posited that syntax is shaped by an underlying meaning. This divergence led to two competing frameworks: generative semantics and interpretive semantics.
In linguistics, grammaticality is determined by conformity to the rules of the grammar of a particular speech variety. The notion of grammaticality rose alongside the theory of generative grammar, the goal of which is to formulate rules that define well-formed, grammatical sentences. These rules of grammaticality also provide explanations of ill-formed, ungrammatical sentences.
In linguistics, the innateness hypothesis, also known as the nativist hypothesis, holds that humans are born with at least some knowledge of linguistic structure. On this hypothesis, language acquisition involves filling in the details of an innate blueprint rather than being an entirely inductive process. The hypothesis is one of the cornerstones of generative grammar and related approaches in linguistics. Arguments in favour include the poverty of the stimulus, the universality of language acquisition, as well as experimental studies on learning and learnability. However, these arguments have been criticized, and the hypothesis is widely rejected in other traditions such as usage-based linguistics. The term was coined by Hilary Putnam in reference to the views of Noam Chomsky.
Domain-specific learning theories of development hold that we have many independent, specialised knowledge structures (domains), rather than one cohesive knowledge structure. Thus, training in one domain may not impact another independent domain. Domain-general views instead suggest that children possess a "general developmental function" where skills are interrelated through a single cognitive system. Therefore, whereas domain-general theories would propose that acquisition of language and mathematical skill are developed by the same broad set of cognitive skills, domain-specific theories would propose that they are genetically, neurologically and computationally independent.
Aspects of the Theory of Syntax is a book on linguistics written by American linguist Noam Chomsky, first published in 1965. In Aspects, Chomsky presented a deeper, more extensive reformulation of transformational generative grammar (TGG), a new kind of syntactic theory that he had introduced in the 1950s with the publication of his first book, Syntactic Structures. Aspects is widely considered to be the foundational document and a proper book-length articulation of Chomskyan theoretical framework of linguistics. It presented Chomsky's epistemological assumptions with a view to establishing linguistic theory-making as a formal discipline comparable to physical sciences, i.e. a domain of inquiry well-defined in its nature and scope. From a philosophical perspective, it directed mainstream linguistic research away from behaviorism, constructivism, empiricism and structuralism and towards mentalism, nativism, rationalism and generativism, respectively, taking as its main object of study the abstract, inner workings of the human mind related to language acquisition and production.
Lectures on Government and Binding: The Pisa Lectures (LGB) is a book by the linguist Noam Chomsky, published in 1981. It is based on the lectures Chomsky gave at the GLOW conference and workshop held at the Scuola Normale Superiore in Pisa, Italy, in 1979. In this book, Chomsky presented his government and binding theory of syntax. It had great influence on syntactic research in the early 1980s, especially among linguists working within the transformational grammar framework.
The main purpose of theories of second-language acquisition (SLA) is to shed light on how people who already know one language learn a second language. The field of second-language acquisition draws on various disciplines, including linguistics, sociolinguistics, psychology, cognitive science, neuroscience, and education. Research in second-language acquisition can be grouped into four major strands: (a) linguistic dimensions of SLA, (b) cognitive dimensions of SLA, (c) socio-cultural dimensions of SLA, and (d) instructional dimensions of SLA. While the orientation of each research strand is distinct, they have in common that they can help identify conditions that facilitate successful language learning. Acknowledging the contributions of each perspective and the interdisciplinarity of the field, more and more second language researchers are adopting a broader lens when examining the complexities of second language acquisition.
In language acquisition, negative evidence is information concerning what is not possible in a language. Importantly, negative evidence does not show what is grammatical; that is positive evidence. In theory, negative evidence would help eliminate ungrammatical constructions by revealing what is not grammatical.