Delayed auditory feedback

Delayed Auditory Feedback (DAF), also called delayed sidetone, is a type of altered auditory feedback that consists of extending the time between speech and auditory perception. [1] It can consist of a device that enables a user to speak into a microphone and then hear their voice in headphones a fraction of a second later. Some DAF devices are hardware; DAF computer software is also available. Most delays that produce a noticeable effect are between 50–200 milliseconds (ms). DAF usage (with a 175 ms delay) has been shown to induce mental stress. [2]
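The core of any DAF device is a simple delay line: each microphone sample is exchanged for the sample captured a fixed interval earlier. The sketch below (a hypothetical illustration, not any particular device's implementation; the function name and 16 kHz sample rate are assumptions) shows the idea for a 175 ms delay.

```python
from collections import deque

def delayed_feedback(samples, delay_ms, sample_rate=16000):
    """Return the input audio delayed by delay_ms milliseconds.

    A ring buffer (delay line) is primed with silence; each incoming
    sample is pushed in and the sample that entered delay_ms earlier
    is popped out, as in a basic DAF device.
    """
    delay_samples = int(sample_rate * delay_ms / 1000)
    buffer = deque([0.0] * delay_samples)  # delay line primed with silence
    out = []
    for s in samples:
        buffer.append(s)              # microphone input enters the line
        out.append(buffer.popleft())  # headphone output leaves delay_ms later
    return out

# A 175 ms delay at 16 kHz shifts the signal by 2800 samples.
speech = [float(i) for i in range(5000)]
daf = delayed_feedback(speech, delay_ms=175)
```

At 16 kHz, the 50–200 ms range of noticeable delays corresponds to buffers of 800–3200 samples; a real device would do this streaming, block by block, rather than over a whole recording.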

DAF is one of several forms of altered auditory feedback, along with frequency-altered feedback and white-noise masking, used to treat stuttering; studies with non-stuttering speakers have also yielded insights into the auditory feedback system. It is most effective when used in both ears. Delayed auditory feedback devices are used in speech perception experiments to demonstrate the importance of auditory feedback in speech perception as well as in speech production. [3]

Various mobile apps are now available that apply DAF during phone calls.

Effects in people who stutter

Electronic fluency devices use delayed auditory feedback and have been used as a technique to aid with stuttering. Stuttering is a speech disorder that interferes with the fluent production of speech. Among the disfluencies that characterize stuttering are repetitions, prolongations, and blocks. [4] Early investigators proposed that people who stutter have an abnormal speech–auditory feedback loop that is corrected or bypassed while speaking under DAF, a view that subsequent work has largely supported. [5] More specifically, neuroimaging studies of people who stutter have revealed abnormalities in several fronto-parietotemporal pathways that are thought to affect connectivity between speech (pre)motor regions and auditory regions. This is consistent with behavioral studies demonstrating that people who stutter show reduced compensatory motor responses to unexpected perturbations of auditory feedback. [6]

DAF acts by slowing speech: the longer the delay, the greater the reduction in speaking rate. It has been proposed that this reduction in speaking rate is itself what produces fluency under DAF; however, other studies have shown that a slow speaking rate is not a prerequisite for improved fluency under DAF. Furthermore, DAF is believed to sustain increased fluency over long periods, but reports of long-term effects are inconsistent: in some cases a small but continued benefit was obtained, while in others little benefit was found from the outset and use of DAF was discontinued. Clinical observations suggest that DAF may be less effective in people whose disfluencies are mostly blocks than in those who present mostly repetitions and prolongations. [7] In people who stutter with atypical auditory anatomy, DAF improves fluency, but not in those with typical anatomy. DAF is also used with people who clutter: it slows speech, which can increase both fluency and syllable awareness. [5]

Effects in people who do not stutter

More recent studies have examined the effects of DAF in people who do not stutter to learn what it reveals about the structure of the auditory and verbal pathways in the brain.

Indirect effects of delayed auditory feedback in people who do not stutter include a reduction in the rate of speech, an increase in intensity, and an increase in fundamental frequency that occur to overcome the effects of the feedback. [8] Direct effects include the repetition of syllables, mispronunciations, omissions, and omitted word endings. These direct effects are often referred to as "artificial stuttering". [9]

In an individual who does not stutter, auditory feedback of speech sounds reaches the inner ear with a delay of about 0.001 seconds. [10] Under delayed auditory feedback, this natural delay is artificially lengthened.

Studies have found that in children ages 4–6 there is less disturbance of speech than in children ages 7–9 under a delay of 200 ms. [11] Younger children are maximally disrupted around 500 ms while older children around 400 ms. A 200 ms delay produces maximum disruption for adults. As the data collected from these studies indicate, the delay required for maximum disruption decreases with age. [12] However, it increases again for older adults, to 400 ms. [13]

Studies of sex differences under DAF either find no difference or indicate that men are generally more affected than women, [1] suggesting that the feedback subsystems of the vocal monitoring process may differ between the sexes. [14]

In general, more rapid, fluent speakers are less affected by DAF than slower, less fluent speakers. Also, more rapid fluent speakers are maximally disrupted by a shorter delay time, while slower speakers are maximally disrupted under longer delay times.

Studies using computational modeling and functional magnetic resonance imaging (fMRI) have shown that the temporo-parietal regions function as a conscious self-monitoring system to support an automatic speech production system [15] and that projections from auditory error cells in the posterior superior temporal cortex that go to motor correction cells in right frontal cortex mediate auditory feedback control of speech. [16]

Effects in non-humans

Juvenile songbirds learn to sing through sensory learning. They memorize songs and then engage in sensorimotor learning through vocal practice. Songs produced during sensorimotor learning are more variable and more dependent on auditory feedback than adult songs. Adult zebra finches and Bengalese finches, for example, need feedback to keep their songs stable, and deafening in these species leads to song impairment. [17]

Continuous delayed auditory feedback caused zebra finches to change their song syllable timing, indicating that DAF can alter the motor program generating syllable timing over short periods in zebra finches, similar to the effects observed in humans. [18] Moreover, in experiments, DAF is used to selectively interrupt auditory feedback: when adult zebra finches are exposed, their songs degrade, and when DAF is discontinued, their songs recover. Because DAF is reversible and precise, it can be directed at specific syllables within a song; only the target syllable degrades, while the flanking syllables are unaffected. Furthermore, contingent DAF, applied based on pitch thresholds, triggers adaptive changes in pitch and minimizes feedback interference in adult finches. [17]


References

  1. Ball, MJ; Code, C (1997). Instrumental Clinical Phonetics. London: Whurr Publishers. ISBN 978-1-897635-18-6. Retrieved 7 December 2015.
  2. Badian, M.; et al. (1979). "Standardized mental stress in healthy volunteers induced by delayed auditory feedback (DAF)". European Journal of Clinical Pharmacology. 16 (3): 171–6. doi:10.1007/BF00562057. PMID   499316. S2CID   34214832.
  3. Perkell, J.; et al. (1997). "Speech Motor Control: Acoustic Goals, Saturation Effects, Auditory Feedback and Internal Models". Speech Communication . 22 (2–3): 227–250. doi:10.1016/S0167-6393(97)00026-5.
  4. Toyomura, Akira; Miyashiro, Daiki; Kuriki, Shinya; Sowman, Paul F. (2020). "Speech-Induced Suppression for Delayed Auditory Feedback in Adults Who Do and Do Not Stutter". Frontiers in Human Neuroscience. 14: 150. doi: 10.3389/fnhum.2020.00150 . ISSN   1662-5161. PMC   7193705 . PMID   32390816.
  5. Peter Ramig; Darrell Dodge (2009-10-07). The Child and Adolescent Stuttering Treatment & Activity Resource Guide. Cengage Learning. p. 60. ISBN 978-1-4354-8117-6.
  6. Daliri, Ayoub; Max, Ludo (2018-02-01). "Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback". Cortex. 99: 55–68. doi:10.1016/j.cortex.2017.10.019. ISSN   0010-9452. PMC   5801108 . PMID   29169049.
  7. Van Borsel, John; Drummond, Diana; de Britto Pereira, Mônica Medeiros (2010-09-01). "Delayed auditory feedback and acquired neurogenic stuttering". Journal of Neurolinguistics. The Multidimensional Nature Of Acquired Neurogenic Fluency Disorders. 23 (5): 479–487. doi:10.1016/j.jneuroling.2009.01.001. ISSN   0911-6044.
  8. Fairbanks, G. (1955). "Selective Vocal Effects of Delayed Auditory Feedback". J. Speech Hear. Disord. 20 (4): 333–346. doi:10.1044/jshd.2004.333. PMID   13272227.
  9. Lee, BS (1950). "Some effects of side-tone delay". J Acoust Soc Am. 22 (5): 639. Bibcode:1950ASAJ...22..639L. doi:10.1121/1.1906665.
  10. Yates, AJ (1963). "Delayed Auditory Feedback". Psychol Bull. 60 (3): 213–232. doi:10.1037/h0044155. PMC   2027608 . PMID   14002534.
  11. Chase, RA; Sutton, S; First, D; Zubin, J (1961). "A developmental study of changes in behavior under delayed auditory feedback". J Genet Psychol. 99: 101–12. doi:10.1080/00221325.1961.10534396. PMID   13692555.
  12. MacKay, D.G. (1968). "Metamorphosis of a critical interval: Age-linked changes in the delay in auditory linked changes in the delay in auditory feedback that produces maximal disruption of speech". The Journal of the Acoustical Society of America. 43 (4): 811–821. Bibcode:1968ASAJ...43..811M. doi:10.1121/1.1910900. PMID   5645830.
  13. Siegel, GM; Fehst, CA; Garber, SR; Pick, HL (1980). "Delayed Auditory Feedback with Children". Journal of Speech, Language, and Hearing Research. 23 (4): 802–813. doi:10.1044/jshr.2304.802. PMID   7442213.
  14. Stuart, A; Kalinowski, J (2015). "Effect of Delayed Auditory Feedback, Speech Rate, and Sex on Speech Production". Perceptual and Motor Skills. 120 (3): 747–765. doi:10.2466/23.25.PMS.120v17x2. PMID   26029968. S2CID   26867069.
  15. Tourville, JA; Reilly, KJ; Guenther, FH (2008). "Neural mechanisms underlying auditory feedback control of speech". NeuroImage. 39 (3): 1429–1443. doi:10.1016/j.neuroimage.2007.09.054. PMC   3658624 . PMID   18035557.
  16. Hashimoto, Y; Kuniyoshi, SL (2003). "Brain activations during conscious self-monitoring of speech production with delayed auditory feedback: An fMRI study". Human Brain Mapping. 20 (1): 22–28. doi:10.1002/hbm.10119. PMC   6871912 . PMID   12953303.
  17. Tschida, Katherine; Mooney, Richard (2012-04-01). "The role of auditory feedback in vocal learning and maintenance". Current Opinion in Neurobiology. Neuroethology. 22 (2): 320–327. doi:10.1016/j.conb.2011.11.006. ISSN 0959-4388. PMC 3297733. PMID 22137567.
  18. Fukushima, M; Margoliash, D (2015). "The effects of delayed auditory feedback revealed by bone conduction microphone in adult zebra finches". Scientific Reports. 5: 8800. Bibcode:2015NatSR...5E8800F. doi:10.1038/srep08800. PMC   4350079 . PMID   25739659.