Speech acquisition focuses on the development of vocal, acoustic, and oral language by a child. It covers motor planning and execution, pronunciation, and phonological and articulatory patterns (as opposed to content and grammar, which belong to language).
Speech consists of an organized set of sounds, or phonemes, used to convey meaning, while language is an arbitrary association of symbols used according to prescribed rules to convey meaning. [1] While grammatical and syntactic learning is part of language acquisition, speech acquisition covers the development of speech perception and speech production over the first years of a child's life. Several models have been proposed to describe the norms of speech sound, or phoneme, acquisition in children.
Sensory learning of acoustic speech signals begins during pregnancy. Hepper and Shahidullah (1992) described the progression of fetal response to different pure-tone frequencies, suggesting that fetuses respond to 500 hertz (Hz) at 19 weeks gestation, to 250 Hz and 500 Hz at 27 weeks gestation, and to 250, 500, 1,000, and 3,000 Hz between 33 and 35 weeks gestation. [2] Lasky and Williams (2005) [3] suggested that fetuses can respond to pure-tone stimuli of 500 Hz as early as 16 weeks gestation.
Newborns are already capable of discriminating many phonetic contrasts, a capability that may be innate. Speech perception becomes language-specific for vowels at around 6 months, for sound combinations at around 9 months, and for language-specific consonants at around 11 months. [4]
Infants detect typical word stress patterns and use stress to identify words by around 8 months of age. [4]
As infants grow into children, their ability to discriminate between speech sounds increases. Infants gradually gain the ability to distinguish differences between phonemes but lose this ability for languages to which they are not exposed early in life, implying a sensitive period for language acquisition and discrimination. Rvachew (2007) [5] described three developmental stages through which a child comes to recognize and produce adult-like phonological and articulatory representations of sounds. In the first stage, the child is generally unaware of a phonological contrast and produces the contrasting sounds in ways that are acoustically and perceptually similar. In the second stage, the child is aware of the contrast and produces acoustically different variants that remain imperceptible to adult listeners. In the third stage, the child is aware of the contrast and produces sounds that are both perceptually and acoustically distinct, matching adult productions.
A child's perceptual capabilities continue to develop for many years. Hazan and Barrett (2000) [6] suggest that this development can continue into late childhood: 6- to 12-year-old children showed increasing mastery in discriminating synthesized differences in the place, manner, and voicing of speech sounds, without yet reaching adult-like performance.
Studies of infant brain activity have found activation in the newborn temporal cortex in response to spoken language but not to whistled language, suggesting an innate sensitivity to speech over other vocalizations. When 3-month-old infants were exposed to normal and reversed speech, the left planum temporale was activated in both conditions, whereas the angular gyrus responded selectively to forward speech. This suggests a left-hemisphere dominance for speech processing at 3 months of age, although selective responses to phonological cues are still immature. Other studies point to a motor component in learning speech: when infants were given a teething toy that restricted tongue movement, they had more difficulty distinguishing phonemes, and researchers also found activity in the motor cortex in response to speech. Seven-month-old infants showed equal activation of sensory and motor brain regions in response to both native and non-native phonemes, but 11-month-old infants showed greater activation in auditory regions for native phonemes and in motor regions for non-native phonemes. [7]
Infants are born with the ability to vocalize, most notably through crying. As they grow and develop, they add more sounds to their inventory. Two primary typologies of infant vocalization are used. Typology 1, the Stark Assessment of Early Vocal Development, [8] consists of five phases.
Typology 2, Oller's typology of infant phonations, [9] consists of two primary phases, each with several substages: non-speech-like vocalizations and speech-like vocalizations. Non-speech-like vocalizations include (a) vegetative sounds, such as burping, and (b) fixed vocal signals, such as crying or laughing. Speech-like vocalizations progress through (a) quasi-vowels, (b) primitive articulation, (c) an expansion stage, and (d) canonical babbling.
Deaf infants do not babble vocally and lag behind their hearing counterparts until they receive hearing aids. However, deaf infants who are given access to a visual language such as American Sign Language (ASL) early in life babble manually, on the same developmental timetable as hearing infants babble vocally. [7]
Knowing when a speech sound should be accurately produced helps parents and professionals determine when a child may have an articulation disorder. Two traditional methods are used to compare a child's articulation of speech sounds to chronological age. The first compares the number of correct responses on a standardized articulation test with the normative data for a given age on the same test, showing how well a child produces sounds compared with same-age peers. The second compares an individual sound a child produces with developmental norms for that sound; this method can be difficult given differing normative data and other factors that affect typical speech development. Many norms are based on the age at which a majority of children (75% or 90%, depending on the study) accurately produce a sound. Drawing on Sander (1972), [10] Templin (1957), [11] and Wellman, Case, Mengert, and Bradbury (1931), [12] the American Speech-Language-Hearing Association suggests the following: sounds mastered by age 3 include /p, m, h, n, w, b/; by age 4, /k, g, d, f, j/; by age 6, /t, ŋ, r, l/; by age 7, /tʃ, ʃ, dʒ, θ/; and by age 8, /s, z, v, ð, ʒ/. [13]
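Read computationally, the second method amounts to a lookup from age to the set of sounds expected by that age. The following minimal sketch illustrates the comparison in Python using the ASHA summary above; the function names and the assumption that mastery accumulates across age bands are illustrative, not part of any standard assessment tool.

MASTERY_AGES = {
    3: {"p", "m", "h", "n", "w", "b"},
    4: {"k", "g", "d", "f", "j"},
    6: {"t", "ŋ", "r", "l"},
    7: {"tʃ", "ʃ", "dʒ", "θ"},
    8: {"s", "z", "v", "ð", "ʒ"},
}

def expected_sounds(age):
    """All sounds expected to be mastered by a given age, assuming
    mastery accumulates across the age bands above (an assumption)."""
    return set().union(*(sounds for band, sounds in MASTERY_AGES.items() if band <= age))

def flag_missing(age, produced):
    """Sounds expected by this age that the child does not yet produce."""
    return expected_sounds(age) - set(produced)

# Example: a 5-year-old not yet producing /f/ or /g/ would be flagged,
# while later-developing sounds such as /r/ and /s/ are not yet expected.
print(sorted(flag_missing(5, {"p", "m", "h", "n", "w", "b", "k", "d", "j"})))

In practice such norms are only a screening aid; as noted above, the differing normative data across studies mean the cutoff ages themselves vary.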
Shriberg (1993) [14] proposed a model of speech sound acquisition known as the Early, Middle, and Late 8, based on 64 children with speech delays, aged 3 to 6 years. Shriberg proposed three stages of phoneme development. Using a profile of "consonant mastery", he grouped the English consonants into the Early 8 (/m, b, j, n, w, d, p, h/), the Middle 8 (/t, ŋ, k, g, f, v, tʃ, dʒ/), and the Late 8 (/ʃ, θ, s, z, ð, l, r, ʒ/).