Speech shadowing

Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech immediately as they hear it, at a short delay from the onset of the phrase. [1] The time between hearing the speech and responding reflects how long the brain takes to process and produce speech. The task instructs participants to shadow speech, which generates the intent to reproduce the phrase while motor regions in the brain unconsciously process the syntax and semantics of the words spoken. [2] Words repeated during the shadowing task also imitate the parlance of the shadowed speech. [3]

The reaction time between perceiving and then producing speech has been recorded at 250 ms in a standardised test. [2] In people with left-hemisphere-dominant brains, however, reaction times as short as 150 ms have been recorded. [4] Functional imaging shows that the shadowing of speech occurs through the dorsal stream. [5] This pathway links auditory and motor representations of speech, beginning in the superior temporal cortex, extending to the inferior parietal cortex and ending in the posterior inferior frontal cortex, specifically Broca's area. [6]

The speech shadowing technique was created as a research tool by the Leningrad Group, led by Ludmilla Chistovich and Valerij Kozhevnikov, in the late 1950s. [4] [7] In the same decade, the motor theory of speech perception was being developed by Alvin Liberman and Franklin S. Cooper. [8] Speech shadowing has been used in research on stuttering [9] and divided attention, [10] with a focus on the distraction caused by conversational audio while driving. [11] It also has applications in language learning, [12] in interpretation [13] and in singing. [14]

History

The Leningrad group was interested in the time difference between the articulation and perception of speech, and the speech shadowing technique was formulated to measure this difference. [15] To measure the initiation of speech, an artificial palate was placed in the speaker's mouth; the measurement of reaction time began when the tongue moved to begin pronunciation and touched the palate. [15] The experiment found that the reaction time for consonants was consistently shorter than the reaction time for vowels, and that the reaction time to a vowel depended on the consonant that preceded it. [15] This supported the phoneme, rather than the syllable, as the most basic unit of speech registered by the brain. The phoneme is the smallest distinguishable unit of sound, while the smallest unit that carries assigned meaning is a consonant-vowel syllable. [15]

Ludmilla Chistovich and Valerij Kozhevnikov focused their research on the mental processes that drive the perception and production of speech in communication. [16] Linguistics had treated speech perception as a chronological process that analyses steadily paced, similar-sounding words, but Chistovich and Kozhevnikov found speech perception to be a staggered integration of syllables, an instance of non-linear dynamics. [16] This refers to the diversity of tones and syllables in speech, which is perceived without any conscious detection of delay and is forgotten because of the limited capacity of working memory. [17] This observation steered research towards the speech shadowing technique as a tool for psycholinguistics. [1]

Shadowing was used to measure the reaction time taken to repeat consonant-vowel syllables. Alveolar consonants were measured from the moment the tongue first touched an artificial palate, and labial consonants were measured by the contact of metal pieces when the upper and lower lips pressed together. [15] Participants would begin to mimic a consonant as the speaker finished uttering it. This consistently rapid response shifted research focus towards close speech shadowing.

Close speech shadowing requires immediate repetition, at the fastest pace a person can achieve. [1] It does not allow people to hear an entire phrase beforehand or to understand the words vocalised until the end of a sentence. [16] Close speech shadowing has been found to occur at delays as short as 250 ms, [1] and with a minimum delay of about 150 ms in left-hemisphere-dominant brains. [18] The left hemisphere is associated with enhanced linguistic skill and information processing; [19] it engages in analytic patterns of thought and handles the speech shadowing task with ease. [19]

The short response delay arises because the motor regions of the brain register cues related to consonants and estimate the adjacent vowel before it is heard. When the vowel is registered through the auditory system, it confirms the action to produce speech based on that estimate. If the estimate proves wrong, a short delay in response occurs while the motor region configures an alternative vowel. [15]
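
Reaction times like the 250 ms and 150 ms figures above are, in modern setups, commonly estimated by comparing the onset of the stimulus recording with the onset of the shadower's recorded response. The following is a minimal sketch of such an onset-to-onset measurement using a simple energy threshold; the file names, frame size and threshold value are illustrative assumptions, not details from the studies cited.

```python
import numpy as np
import soundfile as sf  # assumed WAV-reading library; any audio reader works

def onset_time(path, frame_ms=10, threshold=0.02):
    """Return the time (s) of the first frame whose RMS energy exceeds
    `threshold` -- a crude stand-in for detecting speech onset."""
    signal, sr = sf.read(path)
    if signal.ndim > 1:                       # mix stereo down to mono
        signal = signal.mean(axis=1)
    frame_len = int(sr * frame_ms / 1000)
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len]
        if np.sqrt(np.mean(frame ** 2)) > threshold:
            return start / sr
    return None                               # no speech detected

# Shadowing latency = response onset minus stimulus onset.
stimulus_onset = onset_time("stimulus.wav")   # hypothetical file names
response_onset = onset_time("response.wav")
if stimulus_onset is not None and response_onset is not None:
    latency_ms = (response_onset - stimulus_onset) * 1000
    print(f"Shadowing latency: {latency_ms:.0f} ms")
```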

Biological functioning

[Image: Frequency detection by the basilar membrane]

Research has produced a biological model of how the meaning of speech can be perceived instantaneously even when the sentence has never been heard before. This requires a prior understanding of syntactic, lexical and phonemic characteristics. [20] Speech perception also requires the physical components of the auditory system to recognise similarities in sounds. Within the basilar membrane, energy is transferred and specific frequencies are detected, activating auditory hair cells. The hair cells are stimulated to sharpened activity when a tonal emission is held for 100 ms. [20] This time span indicates that speech shadowing performance can be enhanced by a moderately paced phrase. [20]

Shadowing involves more than the auditory system alone. A shadowed response can reduce the delay by analysing the temporal difference between the pronunciation of phonemes within a syllable. [21] During a shadowing task, the perception of speech and the subsequent production of a response do not occur separately; they partially overlap. The auditory system shifts between a translation stage, in which phonemes are perceived, and a choice stage, in which the following phonemes are anticipated, to create an immediate response. [22] This overlap lasts 20–90 ms, depending on the combination of vowels with consonants. [21]

The translation stage involves afferent codes, which use the auditory system and neural networks; the choice stage involves efferent codes, which use the muscle groups that contribute to a response. [22] These coding systems are functionally distinct but interact to create a positive feedback loop in auditory functioning. The link between perception and response in a speech shadowing task can be strengthened by the instructions given to participants: analysis of variations in task instructions shows that in each case the motor systems are primed to respond optimally and to reduce the delay in reaction time. [22] These points of interaction between the systems for speech perception and production occur without conscious awareness, so the feedback loop is experienced as a linear process in functional reality. [22] When participants are instructed to shadow speech, functional reality consists only of the intent to reproduce speech, active listening and the production of speech.

Speech perception is also linked to phonological processing skills, which include the recognition of all phonemes in a language and of how they combine to form common syllables. [23] A weak grasp of phonological norms can impair performance in a speech shadowing task, [23] as measured by including both real and nonsense words in the task. [24] Participants with high phonological processing skill produced shorter reaction times, while those with low skill experienced uncertainty and slower responses.

Motor theory of speech perception

The mechanisms of speech shadowing may also be explained by the motor theory of speech perception. It states that shadowed words are perceived by shifting attention towards the motions and gestures created during the pronunciation of speech, rather than towards the rhythmic and tonal characteristics of the sound. [8] The theory holds that the motor system is primary in both speech perception and production. Auditory and visual analysis has established that the vocal tract coarticulates consonants and vowels during shadowing. [25] This provides evidence that human speech is a form of communication built on efficient coding rather than on complex semantics and syntax. [25] The interaction between the coding of perception and production in the motor theory has gained further support from the discovery of mirror neurones. [25]

Experimental techniques

Stuttering

The speech shadowing technique is part of the research methods used to examine the mechanics of stuttering and to identify practical improvement strategies. [26] A primary characteristic of stuttering is repeated movement, namely the repetition of a syllable. In this activity, stutterers shadow a repeated movement that is internally or externally sourced. [27] This reduces the likelihood of stuttering, as the linguistic mental block is overturned and conditioned to provide an opening for fluid speech. [26] [28] Mirror neurones of the frontal lobe are active during the exercise and act to link speech perception and production. [27] This process, combined with cortical priming, produces the visible response. [29]

Another primary characteristic of stuttering is fixed posture, involving the prolongation of sounds. Speech shadowing involving fixed postures produces no benefit in improving speech flow. [26] [28] The elongation of words in this stuttering characteristic does not align with the auditory system, which functions most efficiently with moderately paced speech.

Speech shadowing has also been used in research into pseudo-stuttering, a voluntary speech impediment in which primary stuttering characteristics are identified and realistically shadowed. [30] It is used as an exercise in the study of fluency disorders, [31] allowing students to experience the psychological and social consequences of stuttering with strangers. [31] Participants in this activity reported feelings of anxiety, frustration and embarrassment that aligned with the emotional states reported by natural stutterers. [30] The participants also reported lowered expectations of stutterers in public situations. [32]

Dichotic listening test

The speech shadowing technique is used in dichotic listening tests, introduced by E. Colin Cherry in 1953. [33] In a dichotic listening test, subjects are presented with two different messages, one in the right ear and one in the left ear. The participants are instructed to focus on one of the two messages and to shadow the attended message out loud. Perceptual ability is measured as subjects attend to the instructed message while the alternate message acts as a distraction. [34] Various stimuli are presented to the unattended ear, and subjects are afterwards asked what they can recall from those messages despite the instruction to ignore them. [35] In this way speech shadowing has been used as an experimental technique to study and test divided attention. [12] [10] [36]
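
As an illustration of the procedure, the sketch below presents two recorded messages dichotically, one per ear, so that a participant can shadow the attended channel. The file names are hypothetical, and the `sounddevice` and `soundfile` libraries are assumptions of this sketch, not tools named in the original studies.

```python
import numpy as np
import sounddevice as sd   # assumed playback library
import soundfile as sf     # assumed WAV-reading library

# Load two spoken messages (hypothetical mono recordings) and present
# one per ear: the attended message left, the distractor right.
attended, sr = sf.read("attended_message.wav")
distractor, sr2 = sf.read("distractor_message.wav")
assert sr == sr2, "both messages must share one sample rate"

n = min(len(attended), len(distractor))        # trim to a common length
stereo = np.column_stack([attended[:n], distractor[:n]])

sd.play(stereo, sr)   # column 0 -> left ear, column 1 -> right ear
sd.wait()             # block until playback ends while the participant shadows
```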

Driving

[Image: Mobile phone use while driving]

Research into the effect of the audio stimuli produced by mobile phone use while driving has used the speech shadowing technique in its methodology. [37] Speech shadowing tasks that combine a conversational stimulus with a visual stimulus while driving are reported by participants as a distraction that directs focus away from the road and the visual periphery. [11] The research concludes that the combination of audio and visual stimuli has little effect on a driver's ability to manoeuvre a vehicle but does impair spatial and temporal judgement, without the driver detecting the impairment. [38] This includes a driver's judgement of their own speed, their distance from a parallel vehicle and a delayed reaction to sudden braking by a driver ahead.

The speech shadowing technique has also been used to investigate whether it is the act of producing speech or concentration on the semantics of speech that distracts drivers. Simple speech shadowing had no effect on driving ability, but combining it with a content-related follow-up activity impaired reaction time. [39] The high attentional demand of this additional task shifts concentration away from the primary task of driving. [39] The impairment is problematic because fast reaction times are required to respond to traffic signals and signage, as well as to unpredictable events, in order to maintain safety. [39]

Speech shadowing has also been used to imitate the concentration lost when people engage in mobile phone conversations while driving, depending on where the mobile phone is placed. [11] Shadowing a sound source located in front of the driver produces a shorter reaction delay and more accurate shadowed content than a sound source located beside the driver. The research concluded that concentration on a visual stimulus draws the attention of the auditory system in the same direction: conversational audio from a mobile phone placed in front of the driver is less distracting than from one placed to the side, because it is closest to the forward-facing visual stimulus of the road, the driver's primary focus. [11]

Applications

Language learning

The most basic form of speech shadowing occurs without the need for cognition. This is evidenced by the phonetic imitation of mentally impaired individuals, who require no prior knowledge to engage in a shadowing task yet do not understand the semantics of the shadowed speech. [40] The higher process of acquiring a language is also innate; it can develop spontaneously through speech shadowing as sounds are repeated and semantically related. [41] Research on developing the reading skills of children has used the speech shadowing technique and concludes that the pace at which children are verbally taught should be matched to the child's reading ability. [42] Poor readers show slower reaction times than good readers in speech shadowing activities with age-appropriately difficult content, and their shadowing responses slow further when sentences are partially ungrammatical. [42] Shadowing research has identified a weak grasp of grammatical structure and a limited vocabulary as characteristics of poor readers, and as target areas for developmental aid. [42]

When learning a foreign language, shadowing can be used as a technique to practise speech and acquire knowledge. [36] It follows an interactionist perspective of language development. [43] In a learning setting, the method involves setting shadowing tasks of incrementally greater semantic and pronunciation difficulty and rating the accuracy of the shadowed response. A standardised scoring system was previously difficult to create, because uncertain learners would slur and skip words to keep up with the pace of the phrases to be shadowed. [44] Automatic scoring based on alignment and clustering techniques has since been designed and implemented to improve the experience of learning a foreign language through speech shadowing. [36]
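
As a rough illustration of the alignment-based approach, the sketch below aligns a transcript of the learner's shadowed speech against the reference phrase using a standard edit-distance alignment and scores the proportion of reference words reproduced in order. The word-level granularity and example sentences are illustrative assumptions; published systems typically operate on recognised phone or word sequences.

```python
from difflib import SequenceMatcher

def shadowing_score(reference: str, shadowed: str) -> float:
    """Fraction of reference words reproduced, in order, by the shadower.
    A crude alignment-based score: skipped or slurred words lower it."""
    ref_words = reference.lower().split()
    shadow_words = shadowed.lower().split()
    matcher = SequenceMatcher(a=ref_words, b=shadow_words)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(ref_words) if ref_words else 0.0

# Hypothetical example: the learner skips one word to keep pace.
print(shadowing_score("the quick brown fox jumps",
                      "the quick fox jumps"))   # -> 0.8
```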

Remote language learning can occur without a real-time speaker, through text-to-speech applications built on the principle of speech shadowing. [41] As part of the process of perceiving sound, the auditory system distinguishes formant frequencies. The first formant perceived in the cochlea is the most prominent cue, as attention shifts towards this signal. [20] The formant characteristics of synthetically produced speech currently differ from those of speech produced by the human vocal tract, and this difference affects the pronunciation of speech produced in a shadowing activity. [20] Language-learning applications therefore focus on developing greater accuracy in pronunciation and pitch, since these features are also replicated when shadowing speech. [41]
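
Formant frequencies such as the first formant discussed above can be estimated from a recording by linear predictive coding (LPC), a standard technique sketched below. The recording name, sampling rate and LPC order are illustrative assumptions.

```python
import numpy as np
import librosa   # assumed audio library providing loading and LPC

y, sr = librosa.load("vowel.wav", sr=16000)   # hypothetical recording
a = librosa.lpc(y, order=12)                  # LPC polynomial coefficients

# Formant candidates are the angles of the complex roots of the LPC
# polynomial in the upper half-plane; real analyses also discard roots
# near 0 Hz or with very wide bandwidths.
roots = [r for r in np.roots(a) if np.imag(r) > 0]
formants = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)

print(f"Estimated F1: {formants[0]:.0f} Hz")  # lowest formant candidate
```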

Interpretation

Interpreters also use the speech shadowing technique, with modifications to the delivery and the expected result. [45] The first difference is that the shadowed response is delivered in a different language from the initial vocalisation of the phrase. The phrase is also not translated verbatim: languages do not always carry words of parallel meaning, so the interpreter's role is to place emphasis on semantics during translation. [45] Close speech shadowing is the primary mode for an interpreter, whose role requires a semantically accurate response delivered at a steady, conversation-like pace. The goal of interpretation is to create the effect of an absent third person while bringing brevity and clarity to the conversation. [13] Although the interpreter must keep up with the pace, the conversation cannot move too fast: mental load allows only partial overlap between perceiving, comprehending, translating and producing speech, and it is also subject to diminishing returns. [13] An interpreter commonly communicates in a non-dominant language. Shadowing speech during positron emission tomography shows greater stimulation of the temporal cortex and motor-function regions, [46] demonstrating that greater conscious effort is required to engage with a non-dominant language. [46]

Singing

Speech shadowing can be used in the alternate form of vocal shadowing. This also involves perception and production, but with an inverted energy distribution of low input and large output. [14] Vocal shadowing involves perceiving pure tones and focuses on manipulating the vocal tract to produce the shadowed response. [14] Compared with non-singers, singers produce shadowed responses that hit the target frequencies more accurately and move between frequencies more rapidly. [47] Research attributes this ability to greater control and awareness of vocal-fold breadth. The glottal stop is a technique singers manipulate during shadowing to enhance frequency change. [47]
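
Pitch-matching accuracy of the kind measured in this research can be quantified by comparing the fundamental-frequency (F0) contours of the target and the shadowed response. Below is a minimal sketch using a standard pitch tracker; the file names and frequency range are illustrative assumptions.

```python
import numpy as np
import librosa   # assumed audio library providing loading and pitch tracking

def f0_contour(path):
    """F0 contour (Hz) over voiced frames, via the pYIN pitch tracker."""
    y, sr = librosa.load(path, sr=22050)
    f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=800, sr=sr)
    return f0[voiced]                          # drop unvoiced frames

target = f0_contour("target_tone.wav")         # hypothetical recordings
shadow = f0_contour("shadowed_tone.wav")

# Compare median pitches in cents (100 cents = one semitone).
error_cents = 1200 * np.log2(np.median(shadow) / np.median(target))
print(f"Pitch-matching error: {error_cents:+.0f} cents")
```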

Footnotes

  1. Marslen-Wilson, William D. (1985). "Speech shadowing and speech comprehension". Speech Communication. 4 (1–3): 55–73. doi:10.1016/0167-6393(85)90036-6. ISSN 0167-6393.
  2. Marslen-Wilson, W. (1973). "Linguistic structure and speech shadowing at very short latencies". Nature. 244 (5417): 522–523. Bibcode:1973Natur.244..522M. doi:10.1038/244522a0. PMID 4621131. S2CID 4220775.
  3. Shockley, Kevin; Sabadini, Laura; Fowler, Carol (2004). "Imitation in shadowing words". Perception & Psychophysics. 66 (3): 422–429. doi:10.3758/BF03194890. PMID 15283067.
  4. Marslen-Wilson, W. D. (1985). "Speech shadowing and speech comprehension". Speech Communication. 4 (1–3): 55–73. doi:10.1016/0167-6393(85)90036-6.
  5. Peschke, C.; Ziegler, W.; Kappes, J.; Baumgaertner, A. (2009). "Auditory–motor integration during fast repetition: The neuronal correlates of shadowing". NeuroImage. 47 (1): 392–402. doi:10.1016/j.neuroimage.2009.03.061. PMID 19345269. S2CID 17943264.
  6. Hickok, G.; Poeppel, D. (2004). "Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language". Cognition. 92 (1–2): 67–99. doi:10.1016/j.cognition.2003.10.011. PMID 15037127. S2CID 635860.
  7. Chistovich, L. A.; Pickett, J. M.; Porter, R. J. (1998). "Speech research at the I. P. Pavlov Institute in Leningrad/St. Petersburg". The Journal of the Acoustical Society of America. 103 (5): 3024. Bibcode:1998ASAJ..103.3024C. doi:10.1121/1.422540.
  8. Lane, Harlan (1965). "The motor theory of speech perception: A critical review". Psychological Review. 72 (4): 275–309. doi:10.1037/h0021986. ISSN 0033-295X. PMID 14348425.
  9. Harbison Jr, D. C.; Porter Jr, R. J.; Tobey, E. A. (1989). "Shadowed and simple reaction times in stutterers and nonstutterers". The Journal of the Acoustical Society of America. 86 (4): 1277–1284. Bibcode:1989ASAJ...86.1277H. doi:10.1121/1.398742. PMID 2808903.
  10. Spence, Charles; Read, Liliana (2003). "Speech Shadowing While Driving". Psychological Science. 14 (3): 251–256. doi:10.1111/1467-9280.02439. ISSN 0956-7976. PMID 12741749. S2CID 37071908.
  11. Spence, Charles; Read, Liliana (2003). "Speech Shadowing While Driving". Psychological Science. 14 (3): 251–256. doi:10.1111/1467-9280.02439. ISSN 0956-7976. PMID 12741749. S2CID 37071908.
  12. Luo, Dean; Minematsu, Nobuaki; Yamauchi, Yutaka; Hirose, Keikichi (2008). "Automatic Assessment of Language Proficiency through Shadowing". 2008 6th International Symposium on Chinese Spoken Language Processing. IEEE. pp. 1–4. doi:10.1109/chinsl.2008.ecp.22. ISBN 978-1-4244-2942-4. S2CID 2640173.
  13. Sabatini, Elisabetta (2000). "Listening comprehension, shadowing and simultaneous interpretation of two 'non-standard' English speeches". Interpreting. International Journal of Research and Practice in Interpreting. 5 (1): 25–48. doi:10.1075/intp.5.1.03sab. ISSN 1384-6647.
  14. Leonard, Rebecca J.; Ringel, Robert L.; Daniloff, Raymond G.; Horii, Yoshiyuki (1987). "Voice frequency change in singers and nonsingers". Journal of Voice. 1 (3): 234–239. doi:10.1016/s0892-1997(87)80005-x. ISSN 0892-1997.
  15. Pickett, J.M. (1985). "Shadows, echoes and auditory analysis of speech". Speech Communication. 4 (1–3): 19–30. doi:10.1016/0167-6393(85)90033-0. ISSN 0167-6393.
  16. Tatham, Mark; Morton, Katherine (2006). "Speech Perception: Production for Perception". Speech Production and Perception. Palgrave Macmillan UK. pp. 218–234. doi:10.1057/9780230513969_8. ISBN 978-1-4039-1733-1.
  17. Tuller, Betty; Nguyen, Noël; Lancia, Leonardo; Vallabha, Gautam K. (2010). "Nonlinear Dynamics in Speech Perception". Nonlinear Dynamics in Human Behavior. Studies in Computational Intelligence. Vol. 328. Springer Berlin Heidelberg. pp. 135–150. doi:10.1007/978-3-642-16262-6_6. ISBN 978-3-642-16261-9.
  18. Rinne, Teemu; Alho, Kimmo; Alku, Paavo; Holi, Markus; Sinkkonen, Janne; Virtanen, Juha; Bertrand, Olivier; Näätänen, Risto (1999). "Analysis of speech sounds is left-hemisphere predominant at 100–150 ms after sound onset". NeuroReport. 10 (5): 1113–1117. doi:10.1097/00001756-199904060-00038. ISSN 0959-4965. PMID 10321493.
  19. Oflaz, Merve (2011). "The effect of right and left brain dominance in language learning". Procedia - Social and Behavioral Sciences. 15: 1507–1513. doi:10.1016/j.sbspro.2011.03.320. ISSN 1877-0428.
  20. Čistovič, L.; Golusina, A.; Lublinskaja, V.; Malinnikova, Τ.; Žukova, Μ. (1968). "Psychological Methods in Speech Perception Research". STUF - Language Typology and Universals. 21 (1–6). doi:10.1524/stuf.1968.21.16.33. ISSN 2196-7148. S2CID 147272707.
  21. Porter, Robert J.; Castellanos, F. Xavier (1980). "Speech-production measures of speech perception: Rapid shadowing of VCV syllables". The Journal of the Acoustical Society of America. 67 (4): 1349–1356. Bibcode:1980ASAJ...67.1349P. doi:10.1121/1.384187. ISSN 0001-4966. PMID 7372922.
  22. Prinz, W. (1990). "A Common Coding Approach to Perception and Action". Relationships Between Perception and Action. Springer Berlin Heidelberg. pp. 167–201. doi:10.1007/978-3-642-75348-0_7. ISBN 978-3-642-75350-3.
  23. McBride-Chang, Catherine (1996). "Models of Speech Perception and Phonological Processing in Reading". Child Development. 67 (4): 1836–1856. doi:10.2307/1131735. ISSN 0009-3920. JSTOR 1131735. PMID 8890511.
  24. McBride-Chang, Catherine (1995). "What is phonological awareness?". Journal of Educational Psychology. 87 (2): 179–192. doi:10.1037/0022-0663.87.2.179. ISSN 0022-0663.
  25. Galantucci, Bruno; Fowler, Carol A.; Turvey, M. T. (2006). "The motor theory of speech perception reviewed". Psychonomic Bulletin & Review. 13 (3): 361–377. doi:10.3758/bf03193857. ISSN 1069-9384. PMC 2746041. PMID 17048719.
  26. Saltuklaroglu, Tim; Kalinowski, Joseph; Dayalu, Vikram N.; Stuart, Andrew; Rastatter, Michael P. (2004). "Voluntary stuttering suppresses true stuttering: A window on the speech perception-production link". Perception & Psychophysics. 66 (2): 249–254. doi:10.3758/bf03194876. hdl:10342/7726. ISSN 0031-5117. PMID 15129746.
  27. Saltuklaroglu, Tim; Kalinowski, Joseph (2011). "The Inhibition of Stuttering Via the Perceptions and Production of Syllable Repetitions". International Journal of Neuroscience. 121 (1): 44–49. doi:10.3109/00207454.2011.536361. ISSN 0020-7454. PMID 21171820. S2CID 12642333.
  28. Zebrowski, Patricia M. (1994). "Duration of Sound Prolongation and Sound/Syllable Repetition in Children Who Stutter". Journal of Speech, Language, and Hearing Research. 37 (2): 254–263. doi:10.1044/jshr.3702.254. ISSN 1092-4388. PMID 8028307.
  29. Schacter, Daniel L.; Wig, Gagan S.; Stevens, W. Dale (2007). "Reductions in cortical activity during priming". Current Opinion in Neurobiology. 17 (2): 171–176. doi:10.1016/j.conb.2007.02.001. ISSN 0959-4388. PMID 17303410. S2CID 11940663.
  30. Lohman, Patricia (2008). "Students' Perceptions of Face-To-Face Pseudostuttering Experience". Perceptual and Motor Skills. 107 (7): 951–962. doi:10.2466/pms.107.7.951-962. ISSN 0031-5125. PMID 19235424.
  31. Hughes, Stephanie (2010). "Ethical and Clinical Implications of Pseudostuttering". Perspectives on Fluency and Fluency Disorders. 20 (3): 84–96. doi:10.1044/ffd20.3.84. ISSN 1940-7599.
  32. Rami, Manish; Kalinowski, Joseph; Stuart, Andrew; Rastatter, Michael (2003). "Self-Perceptions of speech language pathologists-in-training before and after pseudostuttering experiences on the telephone". Disability and Rehabilitation. 25 (9): 491–496. doi:10.1080/0963828031000090425. ISSN 0963-8288. PMID 12745945. S2CID 7721634.
  33. Cherry 1953, p. 976.
  34. Goldstein, B. (2011). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience, with CogLab Manual (3rd ed.). Belmont, CA: Wadsworth.
  35. Cherry 1953, pp. 977–979.
  36. Martinsen, Rob; Montgomery, Cherice; Willardson, Véronique (2017). "The Effectiveness of Video-Based Shadowing and Tracking Pronunciation Exercises for Foreign Language Learners". Foreign Language Annals. 50 (4): 661–680. doi:10.1111/flan.12306. ISSN 0015-718X.
  37. Kaplan, Sinan; Guvensan, Mehmet Amac; Yavuz, Ali Gokhan; Karalurt, Yasin (2015). "Driver Behavior Analysis for Safe Driving: A Survey". IEEE Transactions on Intelligent Transportation Systems. 16 (6): 3017–3032. doi:10.1109/tits.2015.2462084. ISSN 1524-9050. S2CID 637699.
  38. Young, Mark S. (2010). "Human Factors of Visual and Cognitive Performance in Driving". Ergonomics. 53 (3): 444–445. doi:10.1080/00140130903494785. ISSN 0014-0139. S2CID 110064355.
  39. Strayer, David L.; Johnston, William A. (2001). "Driven to Distraction: Dual-Task Studies of Simulated Driving and Conversing on a Cellular Telephone". Psychological Science. 12 (6): 462–466. doi:10.1111/1467-9280.00386. ISSN 0956-7976. PMID 11760132. S2CID 15730996.
  40. Yu, Alan C. L.; Abrego-Collier, Carissa; Sonderegger, Morgan (2013). "Phonetic Imitation from an Individual-Difference Perspective: Subjective Attitude, Personality and "Autistic" Traits". PLOS ONE. 8 (9): e74746. Bibcode:2013PLoSO...874746Y. doi:10.1371/journal.pone.0074746. ISSN 1932-6203. PMC 3786990. PMID 24098665.
  41. Bailly, G. (2003). "Close shadowing natural versus synthetic speech". International Journal of Speech Technology. 6 (1): 11–19. doi:10.1023/a:1021091720511. ISSN 1381-2416. S2CID 16968458.
  42. Jordan, N.C. (1988). "Language processing and reading ability in children: A study based on speech-shadowing techniques". Journal of Psycholinguistic Research. 17 (5): 357–377. doi:10.1007/BF01067224. S2CID 142190317.
  43. Berk, Laura E. (2018). Development through the lifespan (7th ed.). Hoboken, NJ. ISBN 978-0-13-441969-5. OCLC 946161390.
  44. Luo, Dean (2008). "Automatic pronunciation evaluation of language learners' utterances generated through shadowing". Interspeech 2008. 9.
  45. Lambert, Sylvie (2002). "Shadowing". Meta. 37 (2): 263–273. doi:10.7202/003378ar. ISSN 1492-1421.
  46. Tommola, Jorma; Laine, Matti; Sunnari, Marianna; Rinne, Juha O. (2000). "Images of shadowing and interpreting". Interpreting. International Journal of Research and Practice in Interpreting. 5 (2): 147–167. doi:10.1075/intp.5.2.06tom. ISSN 1384-6647.
  47. Murry, Thomas (1990). "Pitch-matching accuracy in singers and nonsingers". Journal of Voice. 4 (4): 317–321. doi:10.1016/s0892-1997(05)80048-7. ISSN 0892-1997.

Bibliography

Cherry, E. Colin (1953). "Some Experiments on the Recognition of Speech, with One and with Two Ears". The Journal of the Acoustical Society of America. 25 (5): 975–979. doi:10.1121/1.1907229.