Speech repetition

Children copy with their own mouths the words spoken by the mouths of those around them. That enables them to learn the pronunciation of words not already in their vocabulary.

Speech repetition occurs when individuals speak the sounds that they have heard another person pronounce or say. In other words, it is one individual's reproduction of another individual's spoken vocalizations. Speech repetition requires the person repeating the utterance to be able to map the sounds they hear in the other person's oral pronunciation onto similar places and manners of articulation in their own vocal tract.
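This mapping can be made concrete with a small sketch. The following Python snippet is an illustrative toy model, not drawn from the literature; the phone symbols and feature values are simplified assumptions. It looks up each heard phone in a table of articulatory targets defined by place and manner of articulation:

```python
# Toy model of mapping heard phones onto articulatory targets.
# The inventory below is a simplified assumption for illustration only.
ARTICULATORY_TARGETS = {
    "p": {"place": "bilabial", "manner": "plosive",   "voiced": False},
    "b": {"place": "bilabial", "manner": "plosive",   "voiced": True},
    "m": {"place": "bilabial", "manner": "nasal",     "voiced": True},
    "s": {"place": "alveolar", "manner": "fricative", "voiced": False},
    "k": {"place": "velar",    "manner": "plosive",   "voiced": False},
}

def plan_repetition(heard_phones):
    """Map a heard phone sequence onto articulatory targets for repetition."""
    plan = []
    for phone in heard_phones:
        target = ARTICULATORY_TARGETS.get(phone)
        if target is None:
            raise ValueError(f"no stored articulatory target for {phone!r}")
        plan.append((phone, target))
    return plan

if __name__ == "__main__":
    for phone, target in plan_repetition(["b", "s", "k"]):
        print(phone, "->", target)
```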

Such speech imitation often occurs independently of speech comprehension, as in speech shadowing, in which people automatically repeat words heard through earphones, and in the pathological condition of echolalia, in which people reflexively repeat overheard words. This suggests that the speech repetition of words is separate in the brain from speech perception: speech repetition occurs in the dorsal speech processing stream, whereas speech perception occurs in the ventral speech processing stream. Repetitions are often incorporated unawares by this route into spontaneous novel sentences, either immediately or after storage in phonological memory.

In humans, the ability to map heard input vocalizations onto motor output is highly developed, because this copying ability plays a critical role in children's rapid expansion of their spoken vocabulary. In older children and adults, the ability remains important, as it enables the continued learning of novel words and names and of additional languages. Such repetition is also necessary for the propagation of language from generation to generation. It has further been suggested that the phonetic units out of which speech is made have been shaped by the processes of vocabulary expansion and vocabulary transmission, because children prefer to copy words in terms of more easily imitated elementary units.

Properties

Automatic

Vocal imitation happens quickly: words can be repeated within 250–300 milliseconds [1] both in typical speakers (during speech shadowing) [2] and during echolalia. The imitation of speech syllables may happen even more quickly: people begin imitating the second phone in the syllable [ao] before they can identify it (out of the set [ao], [aæ] and [ai]). [3] Indeed, "...simply executing a shift to [o] upon detection of a second vowel in [ao] takes very little longer than does interpreting and executing it as a shadowed response". [3] Neurobiologically this suggests "...that the early phases of speech analysis yield information which is directly convertible to information required for speech production". [3] Vocal repetition can be done immediately, as in speech shadowing and echolalia, or after the pattern of pronunciation has been stored in short-term or long-term memory. It automatically uses both auditory and, where available, visual information about how a word is produced. [4] [5]

The automatic nature of speech repetition was noted by the late nineteenth-century neurologist Carl Wernicke, who observed that "The primary speech movements, enacted before the development of consciousness, are reflexive and mimicking in nature...". [6]

Independent of speech

Vocal imitation arises in development before both speech comprehension and babbling: 18-week-old infants spontaneously copy vocal expressions, provided the accompanying voice matches. [7] Imitation of vowels has been found in infants as young as 12 weeks. [8] It is independent of native language, language skills, word comprehension and speaker intelligence. Many autistic and some mentally disabled people engage in echolalia of overheard words (often their only vocal interaction with others) without understanding what they echo. [9] [10] [11] [12] Reflexive, uncontrolled echoing of others' words and sentences occurs in roughly half of those with Gilles de la Tourette syndrome. [13] The ability to repeat words without comprehension also occurs in mixed transcortical aphasia, where it is linked to the sparing of the short-term phonological store. [14]

The ability to repeat and imitate speech sounds occurs separately from that of normal speech. Speech shadowing provides evidence of a 'privileged' input/output speech loop that is distinct from the other components of the speech system. [15] Neurocognitive research likewise finds evidence of a direct (nonlexical) link between phonological analysis input and motor programming output. [16] [17] [18]

Effector independent

Speech sounds can be imitatively mapped onto vocal articulations despite differences in vocal tract anatomy in size and shape due to gender, age and individual anatomical variability. Such variability is extensive, making the input-output mapping of speech more complex than a simple mapping of vocal tract movements. The shape of the mouth varies widely: dentists recognize three basic shapes of palate (trapezoid, ovoid, and triangular), six types of malocclusion between the two jaws, nine ways teeth can relate to the dental arch, and a wide range of maxillary and mandibular deformities. [19] Vocal sound can also vary due to dental injury and dental caries. Other factors that do not impede the sensorimotor mapping needed for vocal imitation are gross oral deformations such as cleft lips, cleft palates or amputations of the tongue tip, as well as pipe smoking, pencil biting and teeth clenching (as in ventriloquism). Paranasal sinuses vary between individuals 20-fold in volume, and differ in the presence and the degree of their asymmetry. [20] [21]

Diverse linguistic vocalizations

Vocal imitation occurs potentially in regard to a diverse range of phonetic units and types of vocalization. The world's languages use consonantal phones that differ across thirteen imitable places of articulation in the vocal tract (from the lips to the glottis). These phones can potentially be pronounced with eleven imitable manners of articulation (from nasal stops to lateral clicks). Speech can be copied in regard to its social accent, intonation, pitch and individuality (as with entertainment impersonators). Speech can be articulated in ways which diverge considerably in speed, timbre, pitch, loudness and emotion. It further exists in different forms such as song, verse, scream and whisper. Intelligible speech can be produced with pragmatic intonation and in regional dialects and foreign accents. These aspects are readily copied: people asked to repeat speech-like words imitate not only phones but also, accurately, other aspects of pronunciation such as fundamental frequency, [22] schwa-syllable expression, [22] voice spectra and lip kinematics, [23] voice onset times, [24] and regional accent. [25]

Language acquisition

Vocabulary expansion

In 1874 Carl Wernicke proposed [26] that the ability to imitate speech plays a key role in language acquisition. This is now a widely researched issue in child development. [27] [28] [29] [30] [31] A study of 17,000 one- and two-word utterances made by six children between 18 and 25 months found that, depending upon the particular infant, between 5% and 45% of their words might be mimicked. [27] These figures are minima, since they concern only immediately heard words. Many words that may seem spontaneous are in fact delayed imitations of words heard days or weeks previously. [28] At 13 months, children who imitate new words (but not ones they already know) show a greater increase in noun vocabulary four months later and in non-noun vocabulary eight months later. [29] A major predictor of vocabulary increase at 20 months, [32] at 24 months, [33] and in older children between 4 and 8 years is skill in repeating nonword phone sequences (a measure of mimicry and storage). [30] [31] This is also the case for children with Down's syndrome. [34] The effect is larger even than that of age: in a study of 222 two-year-old children with spoken vocabularies ranging from 3 to 601 words, the ability to repeat nonwords accounted for 24% of the variance, compared with 15% for age and 6% for gender (girls better than boys). [33]
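For readers unfamiliar with variance partitioning, the sketch below illustrates how figures such as "24% for nonword repetition, 15% for age, 6% for gender" are typically derived: predictors are added to a linear regression one at a time and the increase in R² is recorded at each step. The data here are synthetic stand-ins generated for illustration, not the study's data, and the effect sizes are arbitrary assumptions.

```python
# Hierarchical regression on synthetic data: incremental R^2 per predictor.
# All data and coefficients below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 222                                 # matching the study's sample size
age_months = rng.uniform(24, 36, n)
gender = rng.integers(0, 2, n)          # 0 = boy, 1 = girl (coding assumed)
nonword_rep = rng.normal(0.0, 1.0, n)   # nonword-repetition score
vocab = (20.0 * nonword_rep + 2.0 * age_months + 10.0 * gender
         + rng.normal(0, 25, n))

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

r2_rep = r_squared(np.column_stack([nonword_rep]), vocab)
r2_rep_age = r_squared(np.column_stack([nonword_rep, age_months]), vocab)
r2_all = r_squared(np.column_stack([nonword_rep, age_months, gender]), vocab)

print(f"nonword repetition alone: {r2_rep:.0%} of variance")
print(f"age adds:                 {r2_rep_age - r2_rep:.0%}")
print(f"gender adds:              {r2_all - r2_rep_age:.0%}")
```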

Nonvocabulary expansion uses of imitation

Imitation provides the basis for making longer sentences than children could otherwise spontaneously make on their own. [35] Children analyze the linguistic rules, pronunciation patterns, and conversational pragmatics of speech by making monologues (often in crib talk) in which they repeat and manipulate, in word play, phrases and sentences previously overheard. [36] Many proto-conversations involve children (and parents) repeating what the other has said in order to sustain social and linguistic interaction. It has been suggested that the conversion of speech sound into motor responses aids the vocal "alignment of interactions" by "coordinating the rhythm and melody of their speech". [37] Repetition enables immigrant monolingual children to learn a second language by allowing them to take part in 'conversations'. [38] Imitation-related processes aid the storage of overheard words by putting them into speech-based short- and long-term memory. [39]

Language learning

The ability to repeat nonwords predicts the ability to learn second-language vocabulary. [40] One study found that adult polyglots performed better than nonpolyglots in short-term memory tasks such as repeating nonword vocalizations, though the two groups were otherwise similar in general intelligence, visuo-spatial short-term memory and paired-associate learning ability. [41] Language delay, in contrast, is linked to impairments in vocal imitation. [42]

Speech repetition and phones

Electrical stimulation research on the human brain finds that 81% of the areas in which phone identification can be disrupted are also areas in which the imitation of oral movements can be disrupted, and vice versa. [43] Brain injuries in the speech areas show a 0.9 correlation between those impairing the copying of oral movements and those impairing phone production and perception. [44]
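As a minimal illustration of how such a lesion-deficit correlation is computed (with synthetic scores, not the study's data), each patient is scored on both deficits and the two scores are correlated across patients:

```python
# Correlating two deficit scores across patients; the data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_patients = 40
oral_movement_deficit = rng.uniform(0.0, 1.0, n_patients)
# Assumed relationship for illustration: phone deficits closely track
# oral-movement deficits, plus a little independent variation.
phone_deficit = (0.9 * oral_movement_deficit
                 + 0.1 * rng.uniform(0.0, 1.0, n_patients))

r = np.corrcoef(oral_movement_deficit, phone_deficit)[0, 1]
print(f"correlation between the two deficits: {r:.2f}")
```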

Mechanism

Spoken words are sequences of motor movements organized around vocal tract gesture motor targets. [45] Vocalization is therefore copied in terms of the motor goals that organize it rather than the exact movements with which it is produced. These vocal motor goals are auditory. According to James Abbs, [46] 'For speech motor actions, the individual articulatory movements would not appear to be controlled with regard to three-dimensional spatial targets, but rather with regard to their contribution to complex vocal tract goals such as resonance properties (e.g., shape, degree of constriction) and/or aerodynamically significant variables'. Speech sounds also have duplicable higher-order characteristics such as the rates and shapes of modulations and of frequency shifts. [47] Such complex auditory goals (which often, though not always, link to internal vocal gestures) are detectable from the speech sound which they create.
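The "resonance properties" Abbs refers to can be illustrated with the textbook quarter-wavelength approximation, which treats the vocal tract as a uniform tube closed at the glottis and open at the lips. The sketch below applies that standard formula; it is a classroom idealization, not a model used in the cited work:

```python
# Quarter-wavelength approximation of vocal tract resonances (formants)
# for a uniform tube closed at one end (glottis) and open at the other
# (lips): F_n = (2n - 1) * c / (4 * L).
SPEED_OF_SOUND = 350.0  # m/s, approximate value in warm, humid air

def tube_formants(length_m, n_formants=3):
    """Resonant frequencies (Hz) of a uniform closed-open tube."""
    return [(2 * n - 1) * SPEED_OF_SOUND / (4 * length_m)
            for n in range(1, n_formants + 1)]

# A 17.5 cm adult vocal tract gives roughly 500, 1500 and 2500 Hz,
# close to the formants of the neutral vowel (schwa).
print(tube_formants(0.175))

# A shorter (e.g. 12 cm) child vocal tract shifts every resonance upward,
# which is one reason imitation must target auditory goals rather than
# copying exact articulator movements.
print(tube_formants(0.12))
```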

Neurology

Dorsal speech processing stream function

Two cortical processing streams exist: a ventral one, which maps sound onto meaning, and a dorsal one, which maps sound onto motor representations. The dorsal stream projects from the posterior Sylvian fissure at the temporoparietal junction onto frontal motor areas, and is not normally involved in speech perception. [48] Carl Wernicke identified the left posterior superior temporal sulcus (a cerebral cortex region sometimes called Wernicke's area) as a centre for the sound "images" of speech and its syllables, connected through the arcuate fasciculus with the part of the inferior frontal gyrus (sometimes called Broca's area) responsible for their articulation. [6] This pathway is now broadly identified as the dorsal speech pathway, one of the two pathways (together with the ventral pathway) that process speech. [49] The posterior superior temporal gyrus is specialized for the transient representation of the phonetic sequences used for vocal repetition. [50] Part of the auditory cortex can also represent aspects of speech such as its consonantal features. [51]

Mirror neurons

Mirror neurons have been identified that process both the perception and the production of motor movements. They do so not in terms of exact motor performance but by inferring the intended motor goals around which a movement is organized. [52] Mirror neurons that both perceive and produce the motor movements of speech have been identified. [53] Speech is mirrored constantly into its articulations because speakers cannot know in advance that a word is unfamiliar and in need of repetition; that is learnt only after the opportunity to map it into articulations has gone. Thus, if speakers are to incorporate unfamiliar words into their spoken vocabulary, they must by default map all spoken input. [54]

Sign language

Words in sign languages, unlike those in spoken ones, are made not of sequential units but of spatial configurations of subword unit arrangements, the spatial analogue of the sonic-chronological morphemes of spoken language. [55] These words, like spoken ones, are learnt by imitation. Indeed, rare cases of compulsive sign-language echolalia exist in otherwise language-deficient deaf autistic individuals born into signing families. [55] At least some cortical areas neurobiologically active during both sign and vocal speech, such as the auditory cortex, are associated with the act of imitation. [56]

Nonhuman animals

Birds

Birds learn their songs from those made by other birds. In several examples, birds show highly developed repetition abilities: the Sri Lankan greater racket-tailed drongo (Dicrurus paradiseus) copies the calls of predators and the alarm signals of other birds, [57] and Albert's lyrebird (Menura alberti) can accurately imitate the satin bowerbird (Ptilonorhynchus violaceus). [58]

Research upon avian vocal motor neurons finds that birds perceive their song as a series of articulatory gestures, as in humans. [59] Birds that can imitate humans, such as the Indian hill myna (Gracula religiosa), imitate human speech by mimicking the various speech formants, created by changing the shape of the human vocal tract, with different vibration frequencies of their internal tympaniform membrane. [60] Indian hill mynas also imitate such phonetic characteristics as voicing, fundamental frequencies, formant transitions, nasalization, and timing, though their vocal movements are made in a different way from those of the human vocal apparatus. [60]
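Formants of the kind the myna reproduces are classically estimated from a speech signal by linear predictive coding (LPC), which fits an all-pole filter to the signal and reads resonance frequencies off the filter's pole angles. The sketch below is a self-contained illustration of that standard technique on a synthetic vowel-like signal; the sample rate, model order and resonance values are arbitrary assumptions:

```python
# Formant estimation by autocorrelation LPC on a synthetic "vowel".
import numpy as np

def lpc_coefficients(signal, order):
    """Solve the autocorrelation normal equations for LPC coefficients."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate([[1.0], -a])   # prediction polynomial A(z)

def formants(signal, sr, order=6):
    """Resonance frequencies (Hz) from sharp poles of the LPC polynomial."""
    roots = np.roots(lpc_coefficients(signal, order))
    roots = roots[(np.imag(roots) > 0) & (np.abs(roots) > 0.9)]
    return sorted(np.angle(roots) * sr / (2 * np.pi))

sr = 8000
t = np.arange(0, 0.05, 1 / sr)
# Two damped resonances near 500 Hz and 1500 Hz stand in for a vowel.
y = (np.exp(-60 * t) * np.sin(2 * np.pi * 500 * t)
     + 0.5 * np.exp(-80 * t) * np.sin(2 * np.pi * 1500 * t))
print(formants(y, sr))   # expect values near 500 and 1500 Hz
```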

Nonhuman mammals

Apes

Apes taught sign language show an ability to imitate language signs: the chimpanzee Washoe was able to learn with her arms a vocabulary of 250 American Sign Language gestures. However, such human-trained apes show no ability to imitate human speech vocalizations. [67]

Footnotes

  1. Indefrey, P.; Levelt, W. J. M. (2004). "The spatial and temporal signatures of word production components". Cognition. 92 (1–2): 101–144. CiteSeerX   10.1.1.475.251 . doi:10.1016/j.cognition.2002.06.001. PMID   15037128. S2CID   12662702.
  2. Marslen-Wilson, W. (1973). "Linguistic structure and speech shadowing at very short latencies". Nature. 244 (5417): 522–523. Bibcode:1973Natur.244..522M. doi:10.1038/244522a0. PMID   4621131. S2CID   4220775.
  3. Porter Jr, R. J.; Lubker, J. F. (1980). "Rapid reproduction of vowel-vowel sequences: Evidence for a fast and direct acoustic-motoric linkage in speech". Journal of Speech and Hearing Research. 23 (3): 593–602. doi:10.1044/jshr.2303.593. PMID 7421161.
  4. Gentilucci, M.; Cattaneo, L. (2005). "Automatic audiovisual integration in speech perception". Experimental Brain Research. 167 (1): 66–75. doi:10.1007/s00221-005-0008-z. PMID   16034571. S2CID   20166301.
  5. "Acute hepatitis B virus infection in children and teachers, England and Wales 1985-90". Communicable Disease Report. 1 (17): 75–76. 1991. PMID   1669805.
  6. Wernicke K. The aphasia symptom-complex. 1874. Breslau, Cohn and Weigert. Translated in: Eling P, editor. Reader in the history of aphasia. Vol. 4. Amsterdam: John Benjamins; 1994. p. 69–89. ISBN 978-90-272-1893-3
  7. Kuhl, P. K.; Meltzoff, A. N. (1982). "The bimodal perception of speech in infancy". Science. 218 (4577): 1138–1141. Bibcode:1982Sci...218.1138K. doi:10.1126/science.7146899. PMID   7146899.
  8. Kuhl, P. K.; Meltzoff, A. N. (1996). "Infant vocalizations in response to speech: Vocal imitation and developmental change". The Journal of the Acoustical Society of America. 100 (4 Pt 1): 2425–2438. Bibcode:1996ASAJ..100.2425K. doi:10.1121/1.417951. PMC   3651031 . PMID   8865648.
  9. Roberts, J. M. (1989). "Echolalia and comprehension in autistic children". Journal of Autism and Developmental Disorders. 19 (2): 271–281. doi:10.1007/BF02211846. PMID   2745392. S2CID   6925526.
  10. Schneider, DE (1938). "The clinical syndromes of echolalia, echopraxia, grasping and sucking". Journal of Nervous and Mental Disease. 88: 18–35, 200–216. doi:10.1097/00005053-193807000-00003. S2CID   143703500.
  11. Schuler, A. L. (1979). "Echolalia: Issues and clinical applications". The Journal of Speech and Hearing Disorders. 44 (4): 411–34. doi:10.1044/jshd.4404.411. PMID   390245.
  12. Stengel, E. (1947). "A Clinical and Psychological Study of Echo-Reactions". The British Journal of Psychiatry. 93 (392): 598–612. doi:10.1192/bjp.93.392.598. PMID   20273402.
  13. Lees, A. J.; Robertson, M.; Trimble, M. R.; Murray, N. M. (1984). "A clinical study of Gilles de la Tourette syndrome in the United Kingdom". Journal of Neurology, Neurosurgery, and Psychiatry. 47 (1): 1–8. doi:10.1136/jnnp.47.1.1. PMC   1027633 . PMID   6582230.
  14. Trojano, L.; Fragassi, N. A.; Postiglione, A.; Grossi, D. (1988). "Mixed transcortical aphasia. On relative sparing of phonological short-term store in a case". Neuropsychologia. 26 (4): 633–638. doi:10.1016/0028-3932(88)90120-0. PMID   2457182. S2CID   35115074.
  15. McLeod P. Posner MI. (1984). Privileged loops from percept to act. In H. Bouma D. Bouwhuis, (Eds), Attention and performance X (pp. 55-66). Hillsdale, NJ, Erlbaum. ISBN   978-0-86377-005-0
  16. Coslett, H. B.; Roeltgen, D. P.; Gonzalez Rothi, L.; Heilman, K. M. (1987). "Transcortical sensory aphasia: Evidence for subtypes". Brain and Language. 32 (2): 362–378. doi:10.1016/0093-934X(87)90133-7. PMID   3690258. S2CID   6079313.
  17. McCarthy, R.; Warrington, E. K. (1984). "A two-route model of speech production. Evidence from aphasia". Brain: A Journal of Neurology. 107 (2): 463–485. doi: 10.1093/brain/107.2.463 . PMID   6722512.
  18. McCarthy, R. A.; Warrington, E. K. (2001). "Repeating Without Semantics: Surface Dysphasia?". Neurocase. 7 (1): 77–87. doi:10.1093/neucas/7.1.77. PMID   11239078.
  19. Bloomer HH. (1971). Speech defects associated with dental malocclusions and related abnormalities. In L. E. (Eds), Handbook of speech pathology and audiology (pp. 715-766), New York, Appleton Century. ISBN   978-0-13-381764-5
  20. Williams RJ. (1967). You are extra-ordinary. New York, Random House. pp. 26-27. OCLC   156187572
  21. Vocal traits also vary when people get upper respiratory tract infections, as the shape and size of the sinus cavities are further changed by the swelling of mucous membranes.
  22. Kappes, J.; Baumgaertner, A.; Peschke, C.; Ziegler, W. (2009). "Unintended imitation in nonword repetition". Brain and Language. 111 (3): 140–151. doi:10.1016/j.bandl.2009.08.008. PMID 19811813. S2CID 2113790.
  23. Gentilucci, M; Bernardis, P (2007). "Imitation during phoneme production". Neuropsychologia. 45 (3): 608–15. doi:10.1016/j.neuropsychologia.2006.04.004. PMID   16698051. S2CID   40687020.
  24. Shockley, K.; Sabadini, L.; Fowler, C. A. (2004). "Imitation in shadowing words". Perception & Psychophysics. 66 (3): 422–429. doi: 10.3758/BF03194890 . PMID   15283067.
  25. Delvaux, V; Soquet, A (2007). "The influence of ambient speech on adult speech productions through unintentional imitation". Phonetica. 64 (2–3): 145–73. doi:10.1159/000107914. PMID   17914281. S2CID   22042824.
  26. Wernicke K. (1874). The aphasia symptom-complex. Breslau, Cohn and Weigert. Translated in: Eling P, editor. (1994). p. 69–89.Reader in the history of aphasia. Vol. 4. Amsterdam: John Benjamins: "The major tasks of the child in speech acquisition is mimicry of the spoken word". p76
  27. Bloom, L.; Hood, L.; Lightbown, P. (1974). "Imitation in language development: If, when, and why". Cognitive Psychology. 6 (3): 380–420. doi:10.1016/0010-0285(74)90018-8.
  28. Miller GA. (1977). Spontaneous apprentices: Children and language. New York, Seabury Press. ISBN 978-0-8164-9330-2
  29. Masur, EF (1995). "Infants' early verbal imitation and their later lexical development". Merrill-Palmer Quarterly. 41: 286–306. OCLC 89395784.
  30. Gathercole, SE; Baddeley, AD (1989). "Evaluation of the role of phonological STM in the development of vocabulary in children: A longitudinal study". Journal of Memory and Language. 28 (2): 200–213. doi:10.1016/0749-596x(89)90044-2.
  31. Gathercole, S. E. (2006). "Nonword repetition and word learning: The nature of the relationship". Applied Psycholinguistics. 27 (4): 513–543. doi:10.1017/S0142716406060383. S2CID 145633911.
  32. Hoff, E; Core, C; Bridges, K (2008). "Non-word repetition assesses phonological memory and is related to vocabulary development in 20- to 24-month-olds". Journal of Child Language. 35 (4): 903–16. doi:10.1017/S0305000908008751. PMID   18838017. S2CID   18566002.
  33. Stokes, S. F.; Klee, T (2009). "Factors that influence vocabulary development in two-year-old children". Journal of Child Psychology and Psychiatry. 50 (4): 498–505. doi:10.1111/j.1469-7610.2008.01991.x. PMID 19017366.
  34. Laws, G.; Gunn, D. (2004). "Phonological memory as a predictor of language comprehension in Down syndrome: A five-year follow-up study". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 45 (2): 326–337. doi:10.1111/j.1469-7610.2004.00224.x. PMID   14982246.
  35. Speidel GE. Herreshoff MJ. (1989). Imitation and the construction of long utterances. In G. E. Speidel & K. E. Nelson, (Eds), The many faces of imitation in language learning (pp. 181-197). New York, Springer-Verlag. ISBN   978-0-387-96885-8
  36. Kuczaj SA. (1983). Crib speech and language practice. New York, Springer-Verlag. ISBN   978-0-387-90860-1
  37. Scott, S. K.; McGettigan, C.; Eisner, F. (2009). "A little more conversation, a little less action — candidate roles for the motor cortex in speech perception". Nature Reviews Neuroscience. 10 (4): 295–302. doi:10.1038/nrn2603. hdl:11858/00-001M-0000-0013-2999-F. PMC   4238059 . PMID   19277052. p. 201
  38. Fillmore LW. (1979). Individual differences in second language acquisition. In C. J. Fillmore, D. Kempler & W. S-Y. Wang, (Eds), Individual differences in language ability and language behavior (pp. 203-228). New York, Academic Press. OCLC   4983571
  39. Gathercole, S. E. (1995). "Is nonword repetition a test of phonological memory or long-term knowledge? It all depends on the nonwords". Memory & Cognition. 23 (1): 83–94. doi:10.3758/BF03210559. PMID   7885268. S2CID   20774241.
  40. Cheng, H (1996). "Nonword span as a unique predictor of second-language vocabulary learning". Developmental Psychology. 32 (5): 867–873. doi:10.1037/0012-1649.32.5.867.
  41. Papagno, C.; Vallar, G. (1995). "Verbal short-term memory and vocabulary learning in polyglots". The Quarterly Journal of Experimental Psychology. A, Human Experimental Psychology. 48 (1): 98–107. doi:10.1080/14640749508401378. PMID   7754088. S2CID   19242688.
  42. Bishop, D. V.; North, T.; Donlan, C. (1996). "Nonword repetition as a behavioural marker for inherited language impairment: Evidence from a twin study". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 37 (4): 391–403. doi:10.1111/j.1469-7610.1996.tb01420.x. PMID   8735439.
  43. Ojemann, GA (1983). "Brain organization for language from the perspective of electrical stimulation mapping". Behavioral and Brain Sciences. 6 (2): 189–230. doi:10.1017/s0140525x00015491. S2CID   143189089.
  44. Kimura, D.; Watson, N. (1989). "The relation between oral movement control and speech". Brain and Language. 37 (4): 565–590. doi:10.1016/0093-934X(89)90112-0. PMID   2479446. S2CID   39913744.
  45. Shaffer LH. (1984). Motor programming in language production. In H. Bouma & D. G. Bouwhuis, (Eds), Attention and performance, X. pp. (17-41). London, Erlbaum. ISBN   978-0-86377-005-0
  46. Abbs JH. (1986). Invariance and variability in speech production, A distinction between linguistic intent and its neuromotor implementation. In J. S. Perkell, & D. H. Klatt, (Eds), Invariance and variability in speech processes (pp. 202-219). Hillsdale, NJ, Erlbaum. ISBN   978-0-89859-545-1
  47. Porter RJ. (1987). What is the relation between speech production and speech perception? In: Allport A, MacKay D G, Prinz W G, Scheerer E, eds. Language Perception and Production. London: Academic Press,: 85-106. ISBN   978-0-12-052750-2
  48. Hickok, G.; Poeppel, D. (2004). "Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language". Cognition. 92 (1–2): 67–99. doi:10.1016/j.cognition.2003.10.011. PMID   15037127. S2CID   635860.
  49. Okada, K.; Hickok, G. (2006). "Left posterior auditory-related cortices participate both in speech perception and speech production: Neural overlap revealed by fMRI". Brain and Language. 98 (1): 112–117. doi:10.1016/j.bandl.2006.04.006. PMID   16716388. S2CID   1056984.
  50. Wise, R. J.; Scott, S. K.; Blank, S. C.; Mummery, C. J.; Murphy, K.; Warburton, E. A. (2001). "Separate neural subsystems within 'Wernicke's area'". Brain: A Journal of Neurology. 124 (Pt 1): 83–95. doi: 10.1093/brain/124.1.83 . PMID   11133789.
  51. Obleser, J.; Scott, S. K.; Eulitz, C. (2005). "Now You Hear It, Now You Don't: Transient Traces of Consonants and their Nonspeech Analogues in the Human Brain". Cerebral Cortex. 16 (8): 1069–1076. doi: 10.1093/cercor/bhj047 . PMID   16207930.
  52. Umiltà, M. A.; Kohler, E.; Gallese, V.; Fogassi, L.; Fadiga, L.; Keysers, C.; Rizzolatti, G. (2001). "I know what you are doing. A neurophysiological study". Neuron. 31 (1): 155–165. doi: 10.1016/s0896-6273(01)00337-3 . PMID   11498058.
  53. Hickok, G. (2010). "The role of mirror neurons in speech and language processing". Brain and Language. 112 (1): 1–2. doi:10.1016/j.bandl.2009.10.006. PMC   2813993 . PMID   19948355.
  54. Skoyles, J. R. (2010). "Mapping of heard speech into articulation information and speech acquisition". Proceedings of the National Academy of Sciences. 107 (18): E73. Bibcode:2010PNAS..107E..73S. doi: 10.1073/pnas.1003007107 . PMC   2889576 . PMID   20427741.
  55. Poizner H, Klima ES, Bellugi U. (1987). What the hands reveal about the brain. MIT Press. ISBN 978-0-262-66066-2
  56. Nishimura, H.; Hashikawa, K.; Doi, K.; Iwaki, T.; Watanabe, Y.; Kusuoka, H.; Nishimura, T.; Kubo, T. (1999). "Sign language 'heard' in the auditory cortex". Nature. 397 (6715): 116. Bibcode:1999Natur.397..116N. doi: 10.1038/16376 . PMID   9923672. S2CID   4414422.
  57. Goodale, E.; Kotagama, S. W. (2006). "Context-dependent vocal mimicry in a passerine bird". Proceedings of the Royal Society B: Biological Sciences. 273 (1588): 875–880. doi:10.1098/rspb.2005.3392. PMC   1560225 . PMID   16618682.
  58. Putland, D. A.; Nicholls, J. A.; Noad, M. J.; Goldizen, A. W. (2006). "Imitating the neighbours: Vocal dialect matching in a mimic-model system". Biology Letters. 2 (3): 367–370. doi:10.1098/rsbl.2006.0502. PMC   1686190 . PMID   17148405.
  59. Williams, H.; Nottebohm, F. (1985). "Auditory responses in avian vocal motor neurons: A motor theory for song perception in birds". Science. 229 (4710): 279–282. Bibcode:1985Sci...229..279W. doi:10.1126/science.4012321. PMID   4012321. S2CID   19053313.
  60. Klatt, D. H.; Stefanski, R. A. (1974). "How does a mynah bird imitate human speech?". The Journal of the Acoustical Society of America. 55 (4): 822–832. Bibcode:1974ASAJ...55..822K. doi:10.1121/1.1914607. PMID 4833078.
  61. Reiss, D.; McCowan, B. (1993). "Spontaneous vocal mimicry and production by bottlenose dolphins (Tursiops truncatus): Evidence for vocal learning". Journal of Comparative Psychology. 107 (3): 301–312. doi:10.1037/0735-7036.107.3.301. PMID   8375147.
  62. Foote, A. D.; Griffin, R. M.; Howitt, D.; Larsson, L.; Miller, P. J. O.; Hoelzel, A. (2006). "Killer whales are capable of vocal learning". Biology Letters. 2 (4): 509–512. doi:10.1098/rsbl.2006.0525. PMC   1834009 . PMID   17148275.
  63. Ralls, K.; Fiorelli, P.; Gish, S. (1985). "Vocalizations and vocal mimicry in captive harbor seals, Phoca vitulina". Canadian Journal of Zoology. 63 (5): 1050–1056. Bibcode:1985CaJZ...63.1050R. doi:10.1139/z85-157.
  64. Poole, J. H.; Tyack, P. L.; Stoeger-Horwath, A. S.; Watwood, S. (2005). "Animal behaviour: Elephants are capable of vocal learning". Nature. 434 (7032): 455–456. Bibcode:2005Natur.434..455P. doi:10.1038/434455a. PMID   15791244. S2CID   4369863.
  65. Esser, K. H. (1994). "Audio-vocal learning in a non-human mammal: The lesser spear-nosed bat Phyllostomus discolor". NeuroReport. 5 (14): 1718–1720. doi:10.1097/00001756-199409080-00007. PMID   7827315.
  66. Wich, S. A.; Swartz, K. B.; Hardus, M. E.; Lameira, A. R.; Stromberg, E.; Shumaker, R. W. (2008). "A case of spontaneous acquisition of a human sound by an orangutan". Primates. 50 (1): 56–64. doi:10.1007/s10329-008-0117-y. PMID   19052691. S2CID   708682.
  67. Hayes C. (1951). The ape in our house, Harper, New York. OCLC   1579444
