Phonological development

Phonological development refers to how children acquire the sound system (phonology) of their native language over the course of development.

Sound is at the beginning of language learning. Children have to learn to distinguish different sounds and to segment the speech stream they are exposed to into units – eventually meaningful units – in order to acquire words and sentences. One reason speech segmentation is challenging is that, unlike printed words, spoken words are not separated by spaces. Thus if an infant hears the sound sequence “thisisacup,” they have to learn to segment this stream into the distinct units “this,” “is,” “a,” and “cup.” Once the child can extract “cup” from the speech stream, they have to assign a meaning to this word. [1] Furthermore, the child has to be able to distinguish the sequence “cup” from “cub” in order to learn that these are two distinct words with different meanings. Finally, the child has to learn to produce these words. The acquisition of native language phonology begins in the womb [2] and is not fully adult-like until the teenage years. Perceptual abilities (such as segmenting “thisisacup” into four individual word units) usually precede production abilities and thus aid the development of speech production.
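The segmentation problem described above can be sketched computationally. The following toy example (an illustration only, not a model of infant learning) assumes the learner already knows a small set of word forms and tries to carve an unbroken sound stream into them:

```python
# Illustrative sketch: segmenting a pause-free sound stream into known words,
# assuming a small "lexicon" the learner has already acquired.
def segment(stream, lexicon):
    """Greedy longest-match segmentation; returns None if no parse exists."""
    if not stream:
        return []
    for end in range(len(stream), 0, -1):  # prefer longer matches first
        word = stream[:end]
        if word in lexicon:
            rest = segment(stream[end:], lexicon)
            if rest is not None:
                return [word] + rest
    return None

lexicon = {"this", "is", "a", "cup"}
print(segment("thisisacup", lexicon))  # ['this', 'is', 'a', 'cup']
```

The hard part for the infant, of course, is that the lexicon is not given in advance; the statistical and prosodic cues discussed in this article are what make bootstrapping it possible.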

Prelinguistic development (birth – 1 year)

Perception

Children do not utter their first words until they are about 1 year old, but already at birth they can tell some utterances in their native language from utterances in languages with different prosodic features. [3]

1 month

Categorical perception

Infants as young as 1 month perceive some speech sounds as belonging to speech categories (they display categorical perception of speech). For example, the sounds /b/ and /p/ differ in voice onset time (VOT) – the amount of breathy lag between the opening of the lips and the onset of voicing. Using a computer-generated VOT continuum between /b/ and /p/, Eimas et al. (1971) showed that English-learning infants paid more attention to differences near the boundary between /b/ and /p/ than to equal-sized differences within the /b/ category or within the /p/ category. [4] Their measure, monitoring infant sucking rate, became a major experimental method for studying infant speech perception.

Fig. 1. Sucking rate for a 20 ms VOT change across the category boundary (left), a 20 ms VOT change within a category (middle), and no VOT change (right). After Eimas et al. (1971).
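The boundary effect in Fig. 1 can be illustrated with a toy model (the parameter values below are hypothetical, not Eimas et al.'s data): if perceived category membership follows a steep sigmoid over VOT, equal 20 ms acoustic steps produce a large perceptual change across the boundary and almost none within a category:

```python
# Toy model of categorical perception: perceived /b/-vs-/p/ identity follows
# a steep sigmoid over voice onset time (VOT). Boundary and steepness values
# are invented for illustration.
import math

def p_voiceless(vot_ms, boundary=25.0, steepness=0.5):
    """Probability of perceiving /p/ at a given VOT (hypothetical parameters)."""
    return 1 / (1 + math.exp(-steepness * (vot_ms - boundary)))

def discriminability(vot1, vot2):
    """Difference in category response; a proxy for discrimination performance."""
    return abs(p_voiceless(vot1) - p_voiceless(vot2))

across = discriminability(15, 35)  # 20 ms step straddling the boundary
within = discriminability(40, 60)  # 20 ms step inside the /p/ category
print(across > within)  # True: same acoustic step, much larger perceived change
```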

Infants up to 10–12 months can distinguish not only native sounds but also nonnative contrasts. Older children and adults lose the ability to discriminate some nonnative contrasts. [5] Thus, it seems that exposure to one's native language causes the perceptual system to be restructured. The restructuring reflects the system of contrasts in the native language.

4 months

At 4 months, infants still prefer infant-directed speech to adult-directed speech. Whereas 1-month-olds only exhibit this preference if the full speech signal is played to them, 4-month-old infants prefer infant-directed speech even when only the pitch contours are played. [6] This shows that between 1 and 4 months of age, infants improve at tracking the suprasegmental information in the speech directed at them; by 4 months, they have learned which suprasegmental features to pay attention to.

5 months

Babies prefer to hear their own name to similar-sounding words. [7] It is possible that they have associated the meaning “me” with their name, although it is also possible that they simply recognize the form because of its high frequency.

6 months

With increasing exposure to the ambient language, infants learn not to pay attention to sound distinctions that are not meaningful in their native language, e.g., two acoustically different versions of the vowel /i/ that simply differ because of inter-speaker variability. By 6 months of age infants have learned to treat acoustically different sounds that are representations of the same sound category, such as an /i/ spoken by a male versus a female speaker, as members of the same phonological category /i/. [8]

Statistical learning

Infants are able to extract meaningful distinctions in the language they are exposed to from its statistical properties. For example, English-learning infants can be exposed to a continuum from prevoiced /d/ to voiceless unaspirated /t/ (similar to the /d/–/t/ distinction in Spanish). If the majority of the tokens occur near the endpoints of the continuum, i.e., show extreme prevoicing versus long voice onset times (a bimodal distribution), infants are better at discriminating these sounds than infants who are exposed primarily to tokens from the center of the continuum (a unimodal distribution). [9]

These results show that at the age of 6 months infants are sensitive to how often certain sounds occur in the language they are exposed to, and that they can learn from these differences in frequency of occurrence which cues are important to pay attention to. In natural language exposure this means that typical sounds in a language (such as prevoiced /d/ in Spanish) occur often, and infants can learn them from mere exposure to the speech they hear. All of this occurs before infants are aware of the meaning of any of the words they are exposed to; the phenomenon of statistical learning has therefore been used to argue that infants can learn sound contrasts without meaning being attached to them.
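The bimodal/unimodal manipulation can be sketched as follows (the token frequencies below are invented for illustration): a learner that simply tracks how often each continuum step occurs finds two frequency peaks in the bimodal input and one in the unimodal input, supporting two sound categories versus one:

```python
# Sketch of the distributional-learning idea with hypothetical frequencies:
# the same eight steps on a /d/-/t/ VOT continuum, presented either with
# most tokens near the endpoints (bimodal) or in the middle (unimodal).
continuum = [1, 2, 3, 4, 5, 6, 7, 8]   # steps from prevoiced /d/ to /t/
bimodal  = [4, 8, 4, 1, 1, 4, 8, 4]    # most tokens near the endpoints
unimodal = [1, 2, 4, 8, 8, 4, 2, 1]    # most tokens in the middle

def modes(freqs):
    """Count local frequency peaks, a crude stand-in for inferred categories."""
    padded = [0] + freqs + [0]
    return sum(1 for i in range(1, len(padded) - 1)
               if padded[i] > padded[i - 1] and padded[i] >= padded[i + 1])

print(modes(bimodal), modes(unimodal))  # 2 1  – two categories vs one
```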

At 6 months, infants are also able to make use of prosodic features of the ambient language to break the speech stream they are exposed to into meaningful units; e.g., they are better able to distinguish sounds that occur in stressed versus unstressed syllables. [10] This means that at 6 months infants have some knowledge of the stress patterns in the speech they are exposed to and have learned that these patterns are meaningful.

7 months

At 7.5 months, English-learning infants have been shown to segment words with a strong-weak (i.e., trochaic) stress pattern – the most common stress pattern in English – out of running speech, but not words that follow a weak-strong pattern. In the sequence ‘guitar is,’ these infants thus heard ‘taris’ as the word unit, because it follows a strong-weak pattern. [11] The process that allows infants to use prosodic cues in speech input to learn about language structure has been termed “prosodic bootstrapping.” [12]
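A minimal sketch of this trochaic strategy (hypothetical; real infant segmentation integrates many cues): given syllables marked for stress, cut the stream so that each unit begins at a stressed syllable, which reproduces the ‘taris’ mis-segmentation:

```python
# Toy trochaic segmenter: a stressed syllable opens a new word unit,
# so every unit follows a strong(-weak...) pattern.
def trochaic_segment(syllables):
    """syllables: list of (syllable, is_stressed) pairs; returns unit guesses."""
    units, current = [], []
    for syl, stressed in syllables:
        if stressed and current:   # stressed syllable starts a new unit
            units.append(current)
            current = []
        current.append(syl)
    if current:
        units.append(current)
    return ["".join(u) for u in units]

# "guitar is": gui (weak) TAR (strong) is (weak)
print(trochaic_segment([("gui", False), ("tar", True), ("is", False)]))
# ['gui', 'taris'] – the strong-weak chunk 'taris' comes out as one unit
```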

8 months

While children at this age generally do not yet understand the meaning of most single words, they do understand the meaning of certain phrases they hear often, such as “Stop it” or “Come here.” [13]

9 months

Infants can distinguish native from nonnative language input using phonetic and phonotactic patterns alone, i.e., without the help of prosodic cues. [14] They seem to have learned their native language's phonotactics, i.e., which combinations of sounds are possible in the language.

10-12 months

Infants can now no longer discriminate most nonnative sound contrasts that fall within the same sound category in their native language. [15] Their perceptual system has been tuned to the contrasts relevant to their native language. As for word comprehension, Fenson et al. (1994) tested 10- to 11-month-old children's comprehension vocabulary size and found a range from 11 to 154 words. [13] At this age, children normally have not yet begun to speak and thus have no production vocabulary; clearly, then, comprehension vocabulary develops before production vocabulary.

Production

Stages of pre-speech vocal development

Even though children do not produce their first words until they are approximately 12 months old, the ability to produce speech sounds starts to develop at a much younger age. Stark (1980) distinguishes five stages of early speech development: [16]

0-6 weeks: Reflexive vocalizations

These earliest vocalizations include crying and vegetative sounds such as breathing, sucking or sneezing. For these vegetative sounds, infants’ vocal cords vibrate and air passes through their vocal apparatus, thus familiarizing infants with processes involved in later speech production.

A 14-week-old infant cooing as she interacts with a caregiver (51 seconds)
6-16 weeks: Cooing and laughter

Infants produce cooing sounds when they are content. Cooing is often triggered by social interaction with caregivers and resembles the production of vowels.

16-30 weeks: Vocal play

Infants produce a variety of vowel- and consonant-like sounds that they combine into increasingly longer sequences. The production of vowel sounds (already in the first 2 months) precedes the production of consonants, with the first back consonants (e.g., [g], [k]) being produced around 2–3 months, and front consonants (e.g., [m], [n], [p]) starting to appear around 6 months of age. As for pitch contours in early infant utterances, infants between 3 and 9 months of age produce primarily flat, falling and rising-falling contours. Rising pitch contours would require the infants to raise subglottal pressure during the vocalization or to increase vocal fold length or tension at the end of the vocalization, or both. At 3 to 9 months infants don't seem to be able to control these movements yet. [17]

6-10 months: Reduplicated babbling (or canonical babbling [18] )

Reduplicated babbling contains consonant-vowel (CV) syllables that are repeated in series of the same consonant and vowel (e.g., [bababa]). At this stage, infants’ productions resemble speech much more closely in timing and vocal behavior than at earlier stages. Starting around 6 months, babies also show an influence of the ambient language in their babbling, i.e., babies’ babbling sounds different depending on which languages they hear. For example, French-learning 9- to 10-month-olds have been found to produce a greater proportion of prevoiced stops (which exist in French but not English) in their babbling than English-learning infants of the same age. [19] This phenomenon of babbling being influenced by the language being acquired has been called babbling drift. [20]

10-14 months: Nonreduplicated babbling (or variegated babbling [18] )

Infants now combine different vowels and consonants into syllable strings. At this stage, infants also produce various stress and intonation patterns. During this transitional period from babbling to the first word children also produce “protowords”, i.e., invented words that are used consistently to express specific meanings, but that are not real words in the children's target language. [21] Around 12–14 months of age children produce their first word. Infants close to one year of age are able to produce rising pitch contours in addition to flat, falling, and rising-falling pitch contours. [17]

Development once speech sets in (1 year and older)

At the age of 1, children only just begin to speak, and their utterances are not adult-like yet at all. Children's perceptual abilities are still developing, too. In fact, both production and perception abilities continue to develop well into the school years, with the perception of some prosodic features not being fully developed until about 12 years of age.

Perception

14 months

Children are able to distinguish newly learned ‘words’ associated with objects if the words are not similar-sounding, such as ‘lif’ and ‘neem.’ They cannot, however, distinguish similar-sounding newly learned words such as ‘bih’ and ‘dih.’ [22] So while children at this age can distinguish monosyllabic minimal pairs at a purely phonological level, pairing the discrimination task with word meaning adds cognitive load: the effort required to learn the word meanings leaves them unable to spend extra effort on distinguishing the similar phonology.

16 months

Children's comprehension vocabulary size ranges from about 92 to 321 words. [13] The production vocabulary size at this age is typically around 50 words. This shows that comprehension vocabulary grows faster than production vocabulary.

18-20 months

At 18–20 months, infants can distinguish newly learned ‘words’ even if they are phonologically similar, e.g., ‘bih’ and ‘dih.’ [22] While infants are able to distinguish syllables like these soon after birth, only now are they able to distinguish them when presented as meaningful words rather than mere sequences of sounds. Children are also able to detect mispronunciations such as ‘vaby’ for ‘baby’: recognition has been found to be poorer for mispronounced than for correctly pronounced words. This suggests that infants’ representations of familiar words are phonetically very precise. [23] This result has also been taken to suggest that infants move from a word-based to a segment-based phonological system around 18 months of age.

Fast-mapping

Children need to learn the sound distinctions of their language because they must also learn the meanings associated with those different sounds. Young children have a remarkable ability to learn meanings for the words they extract from the speech they are exposed to, i.e., to map meaning onto sounds. Often children associate a meaning with a new word after only one exposure; this is referred to as “fast mapping.” At 20 months of age, when presented with three familiar objects (e.g., a ball, a bottle, and a cup) and one unfamiliar object (e.g., an egg piercer), children are able to conclude that in the request “Can I have the zib,” zib must refer to the unfamiliar object, i.e., the egg piercer, even if they have never heard that pseudoword before. [24] [25] Children as young as 15 months can complete this task successfully if the experiment is conducted with fewer objects. [26] This task shows that children aged 15 to 20 months can assign meaning to a new word after only a single exposure. Fast mapping is a necessary ability for children to acquire the number of words they have to learn during the first few years of life: children acquire an average of nine words per day between 18 months and 6 years of age. [27]
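A back-of-the-envelope check of what this rate implies:

```python
# Rough arithmetic on the cited learning rate: nine words per day
# from 18 months (1.5 years) to 6 years of age.
days = (6 - 1.5) * 365      # about 4.5 years of learning
words = 9 * days
print(int(words))  # 14782 – roughly 15,000 words acquired over this period
```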

2–6 years

At 2 years, children show first signs of phonological awareness, i.e., they are interested in word play, rhyming, and alliteration. [1] Phonological awareness continues to develop into the first years of school. For example, only about half of the 4- and 5-year-olds tested by Liberman et al. (1974) were able to tap out the number of syllables in multisyllabic words, but 90% of the 6-year-olds were able to do so. [28] Most 3- to 4-year-olds are able to break simple consonant-vowel-consonant (CVC) syllables into their constituents (onset and rime). The onset of a syllable consists of all the consonants preceding the syllable's vowel, and the rime is made up of the vowel and all following consonants. For example, the onset in the word ‘dog’ is /d/ and the rime is /og/. Children at 3–4 years of age were able to tell that the nonwords /fol/ and /fir/ would be liked by a puppet whose favorite sound is /f/. [29] [30] 4-year-olds are less successful at this task if the onset of the syllable contains a consonant cluster, such as /fr/ or /fl/. Liberman et al. found that no 4-year-olds and only 17% of 5-year-olds were able to tap out the number of phonemes (individual sounds) in a word, while 70% of 6-year-olds were able to do so. [28] This might mean that children are aware of syllables as units of speech early on, while they do not show awareness of individual phonemes until school age. Another explanation is that individual sounds do not easily translate into beats, which makes tapping out individual phonemes a much more difficult task than tapping out syllables. One reason why phoneme awareness improves markedly once children start school is that learning to read provides a visual aid as to how words break up into their smaller constituents. [1]
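The onset/rime split described above can be sketched as a simple rule (a simplification that operates on spelling with a fixed vowel set, not on actual phonemes):

```python
# Split a simple syllable at its first vowel: onset = consonants before the
# vowel, rime = the vowel plus everything after it. Orthographic sketch only.
VOWELS = set("aeiou")  # simplified; real phonological vowel inventories differ

def onset_rime(word):
    """Return (onset, rime) for a simple CVC(C)-style string."""
    for i, ch in enumerate(word):
        if ch in VOWELS:
            return word[:i], word[i:]
    return word, ""        # no vowel found: treat the whole string as onset

print(onset_rime("dog"))   # ('d', 'og')
print(onset_rime("frog"))  # ('fr', 'og') – cluster onsets are harder at age 4
```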

12 years

Although children perceive rhythmic patterns in their native language at 7–8 months, they are not able to reliably distinguish compound words from phrases that differ only in stress placement, such as ‘HOT dog’ vs. ‘hot DOG,’ until around 12 years of age. Children in a study by Vogel and Raimy (2002) [31] were asked to show which of two pictures (i.e., a dog or a sausage) was being named. Children younger than 12 years generally preferred the compound reading (i.e., the sausage) to the phrasal reading (the dog). The authors concluded from this that children start out with a lexical bias, i.e., they prefer to interpret phrases like these as single words, and that the ability to override this bias does not fully develop until late in childhood.

Production

12-14 months

Infants usually produce their first word around 12–14 months of age. First words are simple in structure and contain the same sounds that were used in late babbling. [32] The lexical items they produce are probably stored as whole words rather than as individual segments that are put together online when uttering them. This is suggested by the fact that infants at this age may produce the same sounds differently in different words. [33]

16 months

Children's production vocabulary size at this age is typically around 50 words, although there is great variation in vocabulary size among children in the same age group, with a range between 0 and 160 words for the majority of children. [13]

18 months

Children's productions become more consistent around the age of 18 months. [32] When their words differ from adult forms, these differences are more systematic than before. These systematic transformations are referred to as “phonological processes,” and they often resemble processes that are common in the adult phonologies of the world's languages (cf. reduplication in adult Jamaican Creole: “yellow yellow” = “very yellow” [34] ). Some common phonological processes are listed below. [1]

Whole word processes (until age 3 or 4)

- Weak syllable deletion: omission of an unstressed syllable in the target word, e.g., [nænæ] for ‘banana’

- Final consonant deletion: omission of the final consonant in the target word, e.g., [pikʌ] for ‘because’

- Reduplication : production of two identical syllables based on one of the target word syllables, e.g., [baba] for ‘bottle’

- Consonant harmony : a target word consonant takes on features of another target word consonant, e.g., [ɡʌk] for ‘duck’

- Consonant cluster reduction: omission of a consonant in a target word cluster, e.g., [kæk] for ‘cracker’

Segment substitution processes (into the early school years)

- Velar fronting: a velar is replaced by a coronal sound, e.g., [ti] for ‘key’

- Stopping: a fricative is replaced by a stop, e.g., [ti] for ‘sea’

- Gliding: a liquid is replaced by a glide, e.g., [wæbɪt] for ‘rabbit’
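As a rough illustration, the segment substitution processes above can be modeled as rewrite rules (a deliberate simplification that operates on spelling rather than IPA; real child phonology is far less regular than these mappings suggest):

```python
# Toy model of segment substitution processes as character-rewrite rules.
# Mappings are illustrative approximations of the patterns listed above.
processes = [
    ("velar fronting", {"k": "t", "g": "d"}),  # velars become coronals
    ("gliding",        {"r": "w", "l": "w"}),  # liquids become glides
]

def apply_processes(word):
    """Apply each process's substitutions in order to an orthographic word."""
    for name, mapping in processes:
        word = "".join(mapping.get(ch, ch) for ch in word)
    return word

print(apply_processes("key"))     # 'tey'    – velar fronting
print(apply_processes("rabbit"))  # 'wabbit' – gliding
```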

2 years

The size of the production vocabulary ranges from about 50 to 550 words at the age of 2 years. [13] Influences on the rate of word learning, and thus on the wide range of vocabulary sizes among children of the same age, include the amount of speech children hear from their caregivers as well as the richness of the vocabulary in that speech. Children also seem to build up their vocabulary faster if the speech they hear is more often related to their focus of attention, [1] [35] as when a caregiver talks about a ball the child is currently looking at.

4 years

A study by Gathercole and Baddeley (1989) showed the importance of sound for early word meaning. [36] They tested the phonological memory of 4- and 5-year-old children, i.e., how well these children were able to remember a sequence of unfamiliar sounds. They found that children with better phonological memory also had larger vocabularies at both ages. Moreover, phonological memory at age 4 predicted the children's vocabulary at age 5, even with earlier vocabulary and nonverbal intelligence factored out.

7 years

Children produce mostly adult-like segments. [37] Their ability to produce complex sound sequences and multisyllabic words continues to improve throughout middle childhood. [32]

Biological foundations of infants’ speech development

The developmental changes in infants’ vocalizations over the first year of life are influenced by physical developments during that time. Physical growth of the vocal tract, brain development, and the development of neurological structures responsible for vocalization all shape infants’ vocal productions. [1]

Infants’ vocal tract

Infants’ vocal tracts are smaller than, and initially also shaped differently from, adults’ vocal tracts. The infant's tongue fills the entire mouth, reducing its range of movement. As the facial skeleton grows, the range of movement increases, which probably contributes to the increasing variety of sounds infants produce. The development of muscles and sensory receptors also gives infants more control over sound production. [1] The limited movement possible for the infant jaw and mouth might be responsible for the typical consonant-vowel (CV) alternation in babbling, and it has even been suggested that the predominance of CV syllables in the languages of the world might evolutionarily have been caused by this limited range of movement of the human vocal organs. [38]

The differences between the vocal tract of infants and adults can be seen in figure 3 (infants) and figure 4 (adults) below.

Fig. 3. Infant vocal tract: H = hard palate, S = soft palate, T = tongue, J = jaw, E = epiglottis, G = glottis. After Vihman (1996).
Fig. 4. Adult vocal tract: H = hard palate, S = soft palate, T = tongue, J = jaw, E = epiglottis, G = glottis. After Vihman (1996).

The nervous system

Crying and vegetative sounds are controlled by the brain stem, which matures earlier than the cortex. Neurological development of higher brain structures coincides with certain developments in infants’ vocalizations. For example, the onset of cooing at 6 to 8 weeks happens as some areas of the limbic system begin to function. The limbic system is known to be involved in the expression of emotion, and cooing in infants is associated with a feeling of contentment. Further development of the limbic system might be responsible for the onset of laughter around 16 weeks of age. The motor cortex, finally, which develops later than the abovementioned structures, may be necessary for canonical babbling, which starts around 6 to 9 months of age. [1]


References

  1. Erika Hoff (2009). Language development. Boston, MA: Wadsworth/Cengage Learning. ISBN 978-0-495-50171-8. OCLC 759925056.
  2. "Babies learn words before birth | Humans | Science News". www.sciencenews.org. Archived from the original on 2013-08-26.
  3. Mehler, J.; P. Jusczyk, G. Lambertz, N. Halsted, J. Bertoncini, C. Amiel-Tison (1988). "A precursor of language acquisition in young infants". Cognition. 29 (2): 143–178. doi:10.1016/0010-0277(88)90035-2. PMID 3168420. S2CID 43126875.
  4. Eimas, P. D.; E. R. Siqueland, P. Jusczyk, J. Vigorito (1971). "Speech perception in infants". Science. 171 (3968): 303–306. Bibcode:1971Sci...171..303E. doi:10.1126/science.171.3968.303. hdl:11858/00-001M-0000-002B-0DB3-1. PMID 5538846. S2CID 15554065.
  5. Werker, J. F.; R. C. Tees (1984). "Cross-language speech perception: Evidence for perceptual reorganization during the first year of life". Infant Behavior and Development. 7: 49–63. CiteSeerX 10.1.1.537.6695. doi:10.1016/S0163-6383(84)80022-3.
  6. Fernald, A.; P. K. Kuhl (1987). "Acoustic determinants of infant preference for motherese speech". Infant Behavior and Development. 10 (3): 279–293. doi:10.1016/0163-6383(87)90017-8.
  7. Mandel, D. R.; P. W. Jusczyk, D. B. Pisoni (1995). "Infants' recognition of the sound patterns of their own names". Psychological Science. 6 (5): 314–317. doi:10.1111/j.1467-9280.1995.tb00517.x. PMC 4140581. PMID 25152566.
  8. Kuhl, P. K. (1983). "Perception of auditory equivalence classes for speech in early infancy". Infant Behavior and Development. 6 (2–3): 263–285. doi:10.1016/S0163-6383(83)80036-8.
  9. Maye, J.; J. F. Werker, L. Gerken (2002). "Infant sensitivity to distributional information can affect phonetic discrimination". Cognition. 82 (3): 101–111. doi:10.1016/S0010-0277(01)00157-3. PMID 11747867. S2CID 319422.
  10. Karzon, R. G. (1985). "Discrimination of polysyllabic sequences by one- to four-month-old infants". Journal of Experimental Child Psychology. 39 (2): 326–342. doi:10.1016/0022-0965(85)90044-X. PMID 3989467.
  11. Jusczyk, P. W.; D. M. Houston, M. Newsome (1999). "The beginning of word segmentation in English-learning infants". Cognitive Psychology. 39 (3–4): 159–207. doi:10.1006/cogp.1999.0716. PMID 10631011. S2CID 12097435.
  12. Morgan, J. L.; K. Demuth (1996). Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition. Mahwah, NJ: Erlbaum.
  13. Fenson, L.; P. S. Dale; J. S. Reznick; E. Bates; D. J. Thal; S. J. Pethick (1994). "Variability in early communicative development". Monographs of the Society for Research in Child Development. 59 (5): 1–173, discussion 174–85. doi:10.2307/1166093. JSTOR 1166093. PMID 7845413.
  14. Jusczyk, P. W.; A. D. Friederici, J. M. I. Wessels, V. Y. Svenkerud (1993). "Infants' sensitivity to the sound patterns of native language words". Journal of Memory and Language. 32 (3): 402–420. doi:10.1006/jmla.1993.1022.
  15. Werker, J. F.; R. C. Tees (1984). "Cross-language speech perception: Evidence for perceptual reorganization during the first year of life". Infant Behavior and Development. 7: 49–63. CiteSeerX 10.1.1.537.6695. doi:10.1016/S0163-6383(84)80022-3.
  16. Stark, R. E. (1980). "Stages of speech development in the first year of life". In Yeni-Komshian, G. H.; J. F. Kavanagh, C. A. Ferguson (eds.). Child Phonology. Volume 1: Production. New York, NY: Academic Press. pp. 73–92.
  17. Kent, R. D.; A. D. Murray (1982). "Acoustic features of infant vocalic utterances at 3, 6, and 9 months". Journal of the Acoustical Society of America. 72 (2): 353–363. Bibcode:1982ASAJ...72..353K. doi:10.1121/1.388089. PMID 7119278. S2CID 12186661.
  18. Oller, D. K. (1986). "Metaphonology and infant vocalizations". In Lindblom, B.; R. Zetterstrom (eds.). Precursors of Early Speech. New York, NY: Stockton Press. pp. 21–35.
  19. Whalen, D. H.; A. G. Levitt, L. M. Goldstein (2007). "VOT in the babbling of French- and English-learning infants". Journal of Phonetics. 35 (3): 341–352. doi:10.1016/j.wocn.2006.10.001. PMC 2717044. PMID 19641636.
  20. Brown, R. (1958). "How shall a thing be called?". Psychological Review. 65 (1): 14–21. doi:10.1037/h0041727. PMID 13505978.
  21. Bates, E. (1976). Language and Context: The Acquisition of Pragmatics. New York, NY: Academic Press. ISBN 9780120815500.
  22. Werker, J. F.; C. T. Fennel, K. M. Corcoran, C. L. Stager (2002). "Infants' ability to learn phonetically similar words: Effects of age and vocabulary size". Infancy. 3: 1–30. doi:10.1207/s15327078in0301_1.
  23. Swingley, D.; R. N. Aslin (2000). "Spoken word recognition and lexical representation in very young children". Cognition. 76 (2): 147–166. doi:10.1016/S0010-0277(00)00081-0. hdl:11858/00-001M-0000-000E-E627-8. PMID 10856741. S2CID 6324150.
  24. Dollaghan, C. (1985). "Child meets word: "Fast mapping" in preschool children". Journal of Speech and Hearing Research. 28 (3): 449–454. doi:10.1044/jshr.2803.454. PMID 4046586.
  25. Mervis, C. B.; J. Bertrand (1994). "Acquisition of the novel name-nameless category principle". Child Development. 65 (6): 1646–1662. doi:10.2307/1131285. JSTOR 1131285. PMID 7859547.
  26. Markman, E. M.; J. L. Wasow, M. B. Hansen (2003). "Use of the mutual exclusivity assumption by young word learners". Cognitive Psychology. 47 (3): 241–275. doi:10.1016/S0010-0285(03)00034-3. PMID 14559217. S2CID 42489580.
  27. Carey, S. (1978). "The child as a word learner". In Halle, M.; J. Bresnan, G. A. Miller (eds.). Linguistic Theory and Psychological Reality. Cambridge, MA: MIT Press. pp. 264–293.
  28. Liberman, I. Y.; D. Shankweiler, F. W. Fischer, B. Carter (1974). "Explicit syllable and phoneme segmentation in the young child". Journal of Experimental Child Psychology. 18 (2): 201–212. CiteSeerX 10.1.1.602.5825. doi:10.1016/0022-0965(74)90101-5.
  29. Treiman, R. (1985). "Onsets and rimes as units of spoken syllables: Evidence from children". Journal of Experimental Child Psychology. 39 (1): 181–191. doi:10.1016/0022-0965(85)90034-7. PMID 3989458.
  30. Bryant, P. E.; L. Bradley, M. Maclean, J. Crossland (1989). "Nursery rhymes, phonological skills and reading". Journal of Child Language. 16 (2): 407–428. doi:10.1017/S0305000900010485. PMID 2760133. S2CID 28419790.
  31. Vogel, I.; E. Raimy (2002). "The acquisition of compound vs. phrasal stress: the role of prosodic constituents". Journal of Child Language. 29 (2): 225–250. doi:10.1017/S0305000902005020. PMID 12109370. S2CID 9588933.
  32. Vihman, M. M. (1996). Phonological Development. The Origins of Language in the Child. Oxford, UK: Blackwell.
  33. Walley, A. C. (1993). "The role of vocabulary development in children's spoken word recognition and segmentation ability". Developmental Review. 13 (3): 286–350. doi:10.1006/drev.1993.1015.
  34. Gooden, S. (2003). The Phonology and Phonetics of Jamaican Creole Reduplication. Columbus, OH: The Ohio State University, PhD dissertation.
  35. Hoff, E.; L. Naigles (2002). "How children use input to acquire a lexicon". Child Development. 73 (2): 418–433. doi:10.1111/1467-8624.00415. PMID 11949900.
  36. Gathercole, S. E.; A. D. Baddeley (1989). "Evaluation of the role of phonological STM in the development of vocabulary in children: A longitudinal study". Journal of Memory and Language. 28 (2): 200–213. doi:10.1016/0749-596X(89)90044-2.
  37. Sander, E. K. (1972). "When are speech sounds learned?". Journal of Speech and Hearing Disorders. 37 (1): 55–63. doi:10.1044/jshd.3701.55. PMID 5053945.
  38. MacNeilage, P. F.; B. L. Davis (2001). "Motor mechanisms in speech ontogeny: phylogenetic, neurobiological and linguistic implications". Current Opinion in Neurobiology. 11 (6): 696–700. doi:10.1016/S0959-4388(01)00271-9. PMID 11741020. S2CID 34697879.