Bootstrapping (linguistics)

Bootstrapping is a term used in language acquisition in the field of linguistics. It refers to the idea that humans are born innately equipped with a mental faculty that forms the basis of language. It is this language faculty that allows children to effortlessly acquire language. [1] As a process, bootstrapping can be divided into different domains, according to whether it involves semantic bootstrapping, syntactic bootstrapping, prosodic bootstrapping, or pragmatic bootstrapping.

Background

Etymology

In literal terms, a bootstrap is the small strap on a boot that is used to help pull on the entire boot. Similarly, in computing, booting refers to starting up an operating system by first running a smaller program. Bootstrapping therefore refers to leveraging a small action into a more powerful and significant operation.

Bootstrapping in linguistics was first introduced by Steven Pinker as a metaphor for the idea that children are innately equipped with mental processes that help initiate language acquisition. Bootstrapping attempts to identify the language learning processes that enable children to learn about the structure of the target language. [2]

Connectionism

Bootstrapping has a strong link to connectionist theories, which model human cognition as a system of simple, interconnected networks. In this respect, connectionist approaches view human cognition as a computational algorithm: in terms of learning, humans have statistical learning capabilities that allow them to solve problems. [3] Proponents of statistical learning believe that it is the basis for higher-level learning, and that humans use statistical information to create a database which allows them to learn higher-order generalizations and concepts.

For a child acquiring language, the challenge is to parse discrete segments out of a continuous speech stream. Research demonstrates that, when exposed to streams of nonsense speech, children use statistical learning to determine word boundaries. [4] In every human language, certain sounds are more likely to occur together than others: for example, in English, the sequence [st] is attested word-initially (stop), whereas the sequence [gb] occurs only across a syllable break (as in rugby).

It appears that children can detect the statistical probability of certain sounds occurring with one another, and use this to parse out word boundaries. Using these statistical abilities, children appear able to form mental representations, or neural networks, of relevant pieces of information. [5] Such information includes word classes, which, in connectionist theory, are each seen as having an internal representation and transitional links to other concepts. [6] Neighbouring words provide concepts and links that let children bootstrap new representations on the basis of their previous knowledge.
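
The segmentation strategy described above can be illustrated with a toy computation (a sketch for illustration, not the procedure used in the cited studies): transitional probabilities between adjacent syllables are estimated from the stream itself, and word boundaries are posited wherever the probability of the next syllable dips. The made-up words, the pre-syllabified input, and the threshold are all assumptions of the example.

```python
# Minimal sketch of transitional-probability segmentation over a syllable stream.
# The nonsense words and the 0.7 threshold are illustrative assumptions.
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) for each adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.7):
    """Posit a word boundary wherever the transitional probability dips below the threshold."""
    probs = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if probs[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy "nonsense speech" stream built from the made-up words bida, kupa, dolu:
stream = "bi da ku pa do lu bi da do lu ku pa bi da ku pa do lu".split()
print(segment(stream))  # ['bida', 'kupa', 'dolu', 'bida', 'dolu', 'kupa', 'bida', 'kupa', 'dolu']
```

Syllable pairs that belong to the same nonsense word recur more often than pairs spanning a word edge, so the dips in transitional probability line up with the word boundaries.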

Innateness

The innateness hypothesis was proposed by Noam Chomsky as a means to explain the universal success of language acquisition. All typically developing children with adequate exposure to a language will learn to speak and comprehend it fluently. It is also proposed that, despite their apparent variation, languages all fall into a very restricted subset of the grammars that could in principle be conceived. [7] Chomsky argued that since all grammars deviate very little from the same general structure, and children seamlessly acquire language, humans must have some intrinsic language-learning capability. [7] This intrinsic capability was hypothesized to be embedded in the brain, earning the title of language acquisition device (LAD). According to this hypothesis, the child is equipped with knowledge of grammatical and ungrammatical types, which they then apply to the stream of speech they hear in order to determine which grammar that stream is compatible with. [7] The processes underlying the LAD relate to bootstrapping in that, once a child has identified which subset of possible grammars the target language belongs to, they can apply their knowledge of grammatical types to learn the language-specific aspects of that grammar. This relates to the principles and parameters theory of linguistics, in which languages universally share basic, invariant principles and vary only in specific parameters.

Semantic bootstrapping

Semantic bootstrapping is a linguistic theory of language acquisition which proposes that children can acquire the syntax of a language by first learning and recognizing semantic elements and building upon, or bootstrapping from, that knowledge. [8]

According to Pinker, [8] semantic bootstrapping requires two critical assumptions to hold true:

  1. A child must be able to perceive meaning from utterances. That is, the child must associate utterances with, for example, objects and actions in the real world.
  2. A child must also be able to realize that there are strong correspondences between semantic and syntactic categories. The child can then use knowledge of these correspondences to create, test, and internalize grammar rules iteratively as they gain more knowledge of their language (a toy sketch of this mapping follows the list).
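
As a rough illustration of assumption (2), the following toy sketch (not Pinker's actual model; the category inventory and function names are invented for the example) shows how a learner could use presumed links between semantic and syntactic categories to hypothesize the word class of a newly heard word.

```python
# Toy sketch of semantic-to-syntactic mapping; the inventory is deliberately
# simplified and is an assumption of this example, not taken from the source.
SEMANTIC_TO_SYNTACTIC = {
    "object": "noun",          # names of things are assumed to surface as nouns
    "action": "verb",          # perceived actions are assumed to surface as verbs
    "property": "adjective",   # perceived attributes are assumed to surface as adjectives
    "agent": "subject",        # the agent of an event is assumed to surface as the subject
}

def hypothesize_category(word, perceived_meaning):
    """Pair a word with the syntactic category suggested by the semantic category of its referent."""
    return word, SEMANTIC_TO_SYNTACTIC.get(perceived_meaning, "unknown")

# The child hears "The dog runs" while watching a dog run:
print(hypothesize_category("dog", "object"))   # ('dog', 'noun')
print(hypothesize_category("runs", "action"))  # ('runs', 'verb')
```

The guessed categories then serve as hypotheses that are tested and revised against further input, as described in assumption (2).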

Acquiring the state/event contrast

To acquire temporal contrasts, the child must first have a concept of time that is independent of language. In other words, the child must have some mental grasp of events, memory, and the general progression of time before attempting to encode time linguistically. [9] The relevant meanings, especially those concerning events and memory, appear to be largely language-general: the concepts are more universal than the individual linguistic forms used to represent them. [9] For this reason, acquiring these meanings depends more on cognition than on external stimuli, and relies heavily on the child's innate capacity for abstraction; the child must first have a mental representation of a concept before attempting to link a word to that meaning. In order to learn how temporal events are expressed, several processes must occur:

  1. The child must have a grasp on temporal concepts
  2. They must learn which concepts are represented in their own language
  3. They must learn how their experiences are representative of certain event types that are present in the language
  4. They must learn the different morphological and syntactic representations of these events

(Data in list cited from [9] )

Using these basic stepping stones, the child is able to map their internal concept of time onto explicit linguistic segments. This bootstrapping proceeds in hierarchical, incremental steps, in which the child builds on previous knowledge to aid future learning.

Tomasello argues that, in learning linguistic symbols, the child does not need explicit instruction about linguistic contrasts; rather, they learn these concepts through social context and their surroundings. [10] This is consistent with semantic bootstrapping: the child does not receive explicit information about the semantics of temporal events, but learns to apply their internal knowledge of time to the linguistic segments they are exposed to.

Acquiring the count/mass contrast

Mapping the semantic relationships involved in counting follows the same bootstrapping pattern. Since the contexts in which children are presented with number quantities usually include a visual aid, the child has a relatively easy way to map these number concepts. [11]

1. Look at the three boys!

Count nouns are nouns which denote discrete entities or individuals. [12] Provided that the child already has the mental concepts BOY and THREE in place, on hearing (1) they will see the set of animate, young, human males (i.e. boys) and confirm that the set has a cardinality of three.

Mass nouns, by contrast, denote non-discrete substances; in order for these to be counted, the substance must be related to countable units, or atoms. [11] Such units can vary in how sharply or narrowly they delimit a quantity: [11] for example, a grain of rice picks out a much narrower quantity than a bag of rice.

"Of" is a word that children are thought to learn the definition of as being something that transforms a substance into a set of atoms. [11] For example, when one says:

2. I have three gallons for sale.
3. I have three gallons of water.

The word of in (3) marks that the mass noun water is partitioned into gallons; the initial substance now denotes a countable set. The child again uses visual cues to grasp what this relationship is.
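
The contrast just described can be summarized in a toy sketch (purely illustrative; the noun lists and function names are assumptions of the example, not a claim about the child's actual representations): numerals combine with count nouns directly, while mass nouns must first be partitioned by a measure word such as gallon before counting applies.

```python
# Toy model of the count/mass contrast; the noun lists are illustrative only.
COUNT_NOUNS = {"boy", "dog", "gallon"}   # denote discrete, countable individuals
MASS_NOUNS = {"water", "rice"}           # denote unindividuated substances

def can_count_directly(noun):
    """A numeral combines with a count noun ('three boys') but not a bare mass noun ('*three waters')."""
    return noun in COUNT_NOUNS

def partition(measure, substance):
    """Model 'of' as mapping a substance onto a set of measure-sized units (atoms)."""
    assert measure in COUNT_NOUNS and substance in MASS_NOUNS
    return f"{measure}s of {substance}"   # the resulting phrase behaves like a count noun

print(can_count_directly("boy"))      # True  -> "three boys"
print(can_count_directly("water"))    # False -> "*three waters"
print(partition("gallon", "water"))   # 'gallons of water' -> "three gallons of water"
```

The partitioned phrase, like a plain count noun, supplies the discrete units whose cardinality a numeral such as three can then check.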

Syntactic bootstrapping

Syntactic bootstrapping is a theory about how children identify word meanings based on their syntactic categories. In other words, knowledge of grammatical structure, including how syntactic categories (adjectives, nouns, verbs, etc.) combine into phrases and constituents to form sentences, "bootstraps" the acquisition of word meaning. The main challenge this theory addresses is that extralinguistic context alone provides insufficiently specific information for mapping word meanings and making inferences. It accounts for this problem by suggesting that children do not need to rely solely on environmental context to understand meaning or have the words explained to them. Instead, children use their observations about syntax to infer word meanings and to comprehend future utterances they hear.

The main article on syntactic bootstrapping provides background on the research and evidence, describing how children acquire lexical and functional categories, challenges to the theory, and cross-linguistic applications.

Prosodic bootstrapping

Even before infants can comprehend word meaning, prosodic details assist them in discovering syntactic boundaries. [13] Prosodic bootstrapping, or phonological bootstrapping, concerns how prosodic information—which includes stress, rhythm, intonation, pitch, and pausing, as well as dialectal features—can assist a child in discovering the grammatical structure of the language they are acquiring.

In general, prosody introduces features that reflect either attributes of the speaker or the utterance type. Speaker attributes include emotional state, as well as the presence of irony or sarcasm. Utterance-level attributes are used to mark questions, statements and commands, and they can also be used to mark contrast.

Similarly, in sign language, prosody includes facial expression, mouthing, and the rhythm, length and tension of gestures and signs.

In language, words are not only grouped into phrases, clauses, and sentences; they are also organized into prosodic envelopes. The idea of a prosodic envelope is that words that go together syntactically also share an intonation pattern. This helps explain how children discover syllable and word boundaries through prosodic cues. Overall, prosodic bootstrapping is concerned with determining grammatical groupings in a speech stream rather than with learning word meaning. [14]

One of the key components of the prosodic bootstrapping hypothesis is that prosodic cues may aid infants in identifying lexical and syntactic properties. From this, three key elements of prosodic bootstrapping can be proposed: [15]

  1. The syntax of language is correlated with acoustic properties.
  2. Infants can detect and are sensitive to these acoustic properties.
  3. These acoustic properties can be used by infants when processing speech.

There is evidence that the acquisition of language-specific prosodic qualities starts even before an infant is born. This is seen in neonate crying patterns, which have qualities similar to the prosody of the language being acquired. [16] This suggests that the prosodic patterns of the target language are learned in utero. Further evidence of young infants using prosodic cues is their ability to discriminate the acoustic property of pitch change by 1–2 months old. [17]

Prosodic cues for syntactic structure

Infants and young children receive much of their language input in the form of infant-directed speech (IDS) and child-directed speech (CDS), which are characterized by exaggerated prosody and simplified words and grammatical structure. When interacting with infants and children, adults often raise their pitch, widen their pitch range, and reduce their speech rate. [18] However, these cues vary across cultures and across languages.

There are several ways in which infant- and child-directed speech can facilitate language acquisition. Recent studies show that IDS and CDS contain prosodic information that may help infants and children distinguish between paralinguistic expressions (e.g. gasps and laughs) and informative speech. [19] In Western cultures, mothers speak to their children using exaggerated intonation and pauses, which offer insight into syntactic groupings such as noun phrases, verb phrases, and prepositional phrases. [14] This means that the linguistic input infants and children receive includes some prosodic bracketing around syntactically relevant chunks.

(1)  Look the boy is patting the dog with his hand.
(2) *Look the boy ... is ... patting the ... dog with his ... hand.
(3)  Look ... [DP The boy] ... [VP is patting the dog] ... [PP with his hand].

A sentence like (1) will not typically be produced with the pauses indicated in (2), where the pauses "interrupt" syntactic constituents. For example, pausing between the and dog would interrupt the determiner phrase (DP) constituent, as would pausing between his and hand. Most often, pauses are placed so as to group the utterance into chunks that correspond to the beginnings and ends of constituents such as determiner phrases (DPs), verb phrases (VPs), and prepositional phrases (PPs). As a result, sentences like (3), where the pauses correspond to syntactic constituents, are much more natural. [14]
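
The bracketing idea can be illustrated with a minimal sketch (a hypothetical illustration, not a model from the cited work): a learner who simply splits the input at pauses recovers chunks that line up with the constituents in (3) rather than the unnatural groupings in (2).

```python
# Minimal sketch: treat pause-delimited chunks as candidate constituents.
# The "..." pause marker follows the notation of examples (2) and (3) above.
def prosodic_chunks(utterance, pause="..."):
    """Split an utterance at pause markers and return the chunks as candidate constituents."""
    return [chunk.strip() for chunk in utterance.split(pause) if chunk.strip()]

natural = "Look ... the boy ... is patting the dog ... with his hand"
print(prosodic_chunks(natural))
# ['Look', 'the boy', 'is patting the dog', 'with his hand']
# -- each chunk corresponds to a DP, VP, or PP, giving the learner a first-pass bracketing
```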

Moreover, within these phrases are distinct patterns of stress, which help to differentiate individual elements within the phrase, such as a noun from an article. Typically, articles and other free function morphemes are unstressed and relatively short in duration in contrast to the pronunciation of nouns. Furthermore, in verb phrases, auxiliary verbs are less stressed than main verbs. [14] This can be seen in (4).

   4. They are RUNning.

Prosodic bootstrapping holds that these naturally occurring intonation packages help infants and children to bracket linguistic input into syntactic groupings. Currently, there is not enough evidence to suggest that prosodic cues in IDS and CDS facilitate the acquisition of more complex syntax. However, IDS and CDS do provide infants and children with prosodically richer linguistic input.

Prosodic cues for clauses and phrases

There is continued research into whether infants use prosodic cues – in particular, pauses – when processing clauses and phrases. Clauses are among the largest constituent structures in a sentence and are often produced in isolation in conversation; for example, "Did you walk the dog?". [15] Phrases, in turn, are smaller components of clauses; for example, "the tall man" or "walks his dog". [15] Peter Jusczyk argued that infants use prosody to parse speech into smaller units for analysis. He and his colleagues reported that 4.5-month-old infants showed a preference for artificial pauses at clause boundaries over pauses at other places in a sentence; [20] preferring pauses at clause boundaries illustrates infants' ability to discriminate clauses in a passage. This reveals that while infants do not yet understand word meaning, they are in the process of learning about their native language and its grammatical structure. In a separate study, Jusczyk reported that 9-month-old infants preferred passages with pauses occurring between subject noun phrases and verb phrases. These results are further evidence of infant sensitivity to syntactic boundaries. [21] In a follow-up study by LouAnn Gerken et al., researchers compared sentences such as (5) and (6), in which the prosodic boundaries are indicated by parentheses. [22]

   5. (Joe)(kissed the dog).
   6. (He kissed)(the dog).

In (5), there is a pause before the verb kissed. This is also the location of the boundary between the subject and the verb phrase. Comparably, in (6), which contains a weak pronoun subject, speakers either do not produce a salient prosodic boundary or place the boundary after the verb kissed. When tested, 9-month-old infants showed a preference for pauses located before the verb, as in (5). However, when passages with pronoun subjects were used, as in (6), infants did not show a preference for where the pause occurred. [22] While these results again illustrate that infants are sensitive to prosodic cues in speech, they also provide evidence that infants prefer prosodic boundaries that occur naturally in speech. Although the use of prosody in infant speech processing is generally viewed as assisting infants in speech parsing, it has not yet been established how this speech segmentation enriches the acquisition of syntax. [15]

Criticism

Critics of prosodic bootstrapping have argued that the reliability of prosodic cues has been overestimated and that prosodic boundaries do not always match up with syntactic boundaries. It is argued instead that while prosody does provide infants and children with useful clues about a language, it explains neither how children learn to combine phrases, clauses, and sentences, nor how they learn word meaning. As a result, a comprehensive account of how children learn language must combine prosodic bootstrapping with other types of bootstrapping as well as more general learning mechanisms. [14]

Pragmatic bootstrapping

Pragmatic bootstrapping refers to how pragmatic cues and their use in social context assist language acquisition, and more specifically word learning. Pragmatic cues can be verbal or nonlinguistic; they include hand gestures, eye movements, a speaker's focus of attention, intentionality, and linguistic context. Similarly, the parsimonious model proposes that a child learns word meaning by relating language input to their immediate environment. [23] An example of pragmatic bootstrapping would be a teacher saying the word dog while gesturing toward a dog in the presence of a child.

Gaze following


Children are able to associate words with actions or objects by following the gaze of their communication partner. Often, this occurs when an adult labels an action or object while looking at it.

In one experiment, children heard a novel word in one of two conditions: [26]

Action Highlighted Condition: The experimenter prepared an object that the child would use to perform a specific action by orienting the object correctly, then held out the object and said, "Widget, Jason! Your turn!"

Object Highlighted Condition: The experimenter did not prepare the object, but simply held it out to the child and said, "Widget, Jason! Your turn!"

The results from the experiment illustrated that children in the Action Highlighted Condition associated the novel word with the novel action, whereas children in the Object Highlighted Condition assumed the novel word referred to the novel object. To understand that the novel word referred to the novel action, children had to learn from the experimenter's nonverbal behavior that an action with the object was being requested. This illustrates how non-linguistic context influences novel word learning.

Observing adult behavior

Children also look at adults' faces when learning new words, which often leads to a better understanding of what words mean. Speakers make frequent mistakes in everyday speech, so why don't children end up learning the wrong words for the intended referents? One reason may be that children can tell whether a word was right or wrong for the intended meaning by watching the adult's facial expressions and behaviour.

Verb: Plunk [26]    ..."I'm going to plunk Big Bird!"

The adult said this sentence without previously explaining what the verb "plunk" would mean. Afterwards, the adult would do one of two things.

Action 1 [26]    She then performed the target action intentionally, saying "There!", followed immediately by another action on the same apparatus performed "accidentally", in an awkward fashion, saying "Whoops!"
Action 2 [26]    The same as Action 1, but with the intentional and accidental actions performed in the reverse order.

Afterwards, the children were asked to do the same with another apparatus, to see whether they would perform the targeted action.

Verb: Plunk [26]    "Can you go plunk Mickey Mouse?"

The results showed that the children understood the intended action for the new word they had just heard, and performed that action when asked. By watching the adult's behavior and facial expressions, they were able to work out what the verb "plunk" meant and to distinguish the targeted action from the accidental one.

In a further experiment, adults who had been out of the room when a new object was introduced returned and exclaimed about it in one of two conditions:

Language condition [26]    "Look, I see a gazzer! A gazzer!"
No-Language condition [26]    "Look, I see a toy! A toy!"

Afterwards, the adult asked the child to bring the new object over. In the Language condition, the child correctly brought the targeted object over; in the No-Language condition, the child simply brought an object over at random.

These results point to two things:

  1. The child was aware of which object was new for the adults who had left the room.
  2. The child understood that the adult was excited because the object was new to them, and that this was why the adult used a term the child had never heard before.

The child was able to work this out from the adult's emotional behavior.

See also

Language
Language acquisition
Linguistics
Baby talk
Focus (linguistics)
Vocabulary development
Prosody (linguistics)
Construction grammar
Linguistic competence
Language development
Semantic bootstrapping
Syntactic bootstrapping
Prosodic bootstrapping
Statistical language acquisition
Statistical learning
Phonological development
Functional morpheme
Mental lexicon
Co-construction (linguistics)

References

  1. Hohle, Barbara (2009). "Bootstrapping Mechanisms in First Language Acquisition" (PDF). Linguistics. 47 (2): 359–382. doi:10.1515/LING.2009.013. S2CID   145004323. Archived from the original (PDF) on 2014-10-28. Retrieved 28 October 2014.
  2. Pinker, Steven (1984). Language Learnability & Language Development. Harvard University Press.
  3. Siklossy, L (1976). "Problem-solving approach to first language acquisition". Annals of the New York Academy of Sciences. 280: 257–261. doi:10.1111/j.1749-6632.1976.tb25491.x. S2CID   85091065.
  4. Saffran, Jenny (1996). "Word Segmentation: The Role of Distributional Cues". Journal of Memory and Language. 35 (4): 606–621. doi: 10.1006/jmla.1996.0032 .
  5. Siklossy, Laura (1976). "Problem-Solving Approach to first language acquisition". Annals of the New York Academy of Sciences. 280: 257–261. doi:10.1111/j.1749-6632.1976.tb25491.x. S2CID   85091065.
  6. Kiss, George (1973). Grammatical word classes: a learning process and its simulation. Vol. 7. pp. 1–39. doi:10.1016/s0079-7421(08)60064-x. ISBN 9780125433075.
  7. Putnam, Hilary (1985). "The 'Innateness Hypothesis' and Explanatory Models in Linguistics". In Cohen, Robert; Wartofsky, Marx (eds.). A Portrait of Twenty-Five Years: Boston Colloquium for the Philosophy of Science 1960–1985. Boston Studies in the Philosophy of Science. D. Reidel Publishing Company. pp. 41–51. doi:10.1007/978-94-009-5345-1_4. ISBN 978-90-277-1971-3.
  8. Pinker, Steven (1984). The Semantic Bootstrapping Hypothesis.
  9. Behrens, Heike (2001). "Cognitive Conceptual Development and the Acquisition of Grammatical Morphemes: The Development of Time Concepts and Verb Tense". In Bowerman, Melissa; Levinson, Steve (eds.). Language Acquisition and Conceptual Development. Cambridge: Cambridge University Press. pp. 450–474.
  10. Tomasello, Michael (2001). In Bowerman, Melissa; Levinson, Steve (eds.). Language Acquisition and Conceptual Development. Cambridge: Cambridge University Press. pp. 132–158.
  11. Chierchia, Gennaro (1994). In Lust, Barbara; Suner, Margarita; Whitman, John (eds.). Syntactic Theory and First Language Acquisition: A Cross-Linguistic Perspective. New Jersey: Lawrence Erlbaum Associates. pp. 301–350.
  12. Traxler, Matthew J. (2012). Introduction to Psycholinguistics: Understanding Language Science (1st ed.). Chichester, West Sussex: Wiley-Blackwell. p. 349. ISBN 9781405198622. OCLC 707263897.
  13. Gleitman, Lila; Wanner, Eric (1982). Language Acquisition: The State of the Art. Cambridge, MA: Cambridge University Press.
  14. Karmiloff-Smith, Annette; Karmiloff, Kyra (2002). Pathways to Language: From Fetus to Adolescent. USA: First Harvard University Press. pp. 112–114.
  15. Soderstrom, Melanie; Seidl, Amanda; Kemler Nelson, Deborah G.; Jusczyk, Peter W. (2003). "The Prosodic Bootstrapping of Phrases: Evidence from Prelinguistic Infants". Journal of Memory and Language. 49 (2): 249–267. doi:10.1016/S0749-596X(03)00024-X.
  16. Cross, Ian (2009). "Communicative Development: Neonate Crying Reflects Patterns of Native-Language Speech". Current Biology. 19 (23): R1078–R1079. doi: 10.1016/j.cub.2009.10.035 . PMID   20064408.
  17. Kuhl, P.H.; Miller, J.D. (1982). "Discrimination of Auditory Target Dimensions in the Presence or Absence of Variation in a Second Dimension by Infants". Perception & Psychophysics. 31 (3): 279–292. doi: 10.3758/bf03202536 . PMID   7088673.
  18. Kempe, Vera; Schaeffler, Sonja; Thoresen, John (2010). "Prosodic Disambiguation in Child-Directed Speech". Journal of Memory and Language. 62 (2): 204–225. doi:10.1016/j.jml.2009.11.006.
  19. Soderstrom, M.; Blossom, M.; Foygel, R.; Morgan, J.L. (2008). "Acoustical Cues and Grammatical Units in Speech to Two Proverbal Infants". Journal of Child Language. 35 (4): 869–902. CiteSeerX   10.1.1.624.6891 . doi:10.1017/s0305000908008763. PMID   18838016. S2CID   24031629.
  20. Jusczyk, P.W.; Hohne, E.; Mandel, D. (1995). "Picking Up Regularities in the Sound Structure of the Native Language". Speech Perception and Linguistic Experience: Theoretical and Methodological Issues in Cross-Language Speech Research: 91–119.
  21. Jusczyk, P.W.; Hirsh-Pasek, K.; Kemler Nelson, D.; Kennedy, L.; Woodward, A.; Piwoz, J. (1992). "Perception of Acoustic Correlates of Major Phrasal Units by Young Infants". Cognitive Psychology. 24 (2): 252–293. doi:10.1016/0010-0285(92)90009-q. PMID   1582173. S2CID   22670874.
  22. Gerken, L.-A.; Jusczyk, P.W.; Mandel, D.R. (1994). "When prosody fails to cue syntactic structure: Nine-month-olds' sensitivity to phonological versus syntactic phrases". Cognition. 51 (3): 237–265. doi:10.1016/0010-0277(94)90055-8. PMID 8194302. S2CID 36969856.
  23. Caza, Gregory A.; Knott, Alistair (2012). "Pragmatic Bootstrapping: A Neural Network Model of Vocabulary Acquisition". Language Learning and Development. 8 (2): 113–135. doi:10.1080/15475441.2011.581144. ISSN   1547-5441. S2CID   144630060.
  24. Baldwin, D.A. (1993). "Early Referential Understanding: Infants' Ability to Recognize Referential Acts for What They Are". Developmental Psychology. 29 (5): 832–843. doi:10.1037/0012-1649.29.5.832.
  25. Tomasello, Michael; Akhtar, Nameera (1995). "Two-year-olds use pragmatic cues to differentiate reference to objects and actions". Cognitive Development. 10 (2): 201–224. doi:10.1016/0885-2014(95)90009-8. ISSN   0885-2014.
  26. Tomasello, Michael (2000). "The Social-Pragmatic Theory of Word Learning". Pragmatics: Quarterly Publication of the International Pragmatics Association. 10: 59–74.
  27. Tomasello, Michael; Barton, Michelle E. (1994). "Learning words in nonostensive contexts". Developmental Psychology. 30 (5): 639–650. doi:10.1037/0012-1649.30.5.639. ISSN   0012-1649.
  28. Akhtar, Nameera; Carpenter, Malinda; Tomasello, Michael (1996). "The Role of Discourse Novelty in Early Word Learning". Child Development. 67 (2): 635–645. doi:10.1111/j.1467-8624.1996.tb01756.x. ISSN   0009-3920.