Sign language in the brain

Sign language refers to any natural language that uses visual gestures produced by the hands and body to express meaning. The left hemisphere of the brain is dominant for producing and understanding sign language, just as it is for speech. [1] In 1861, Paul Broca studied patients who could understand spoken language but could not produce it. The damaged region was named Broca's area and is located in the left hemisphere's inferior frontal gyrus (Brodmann areas 44 and 45). Soon after, in 1874, Carl Wernicke studied patients with the reverse deficit: they could produce spoken language but could not comprehend it. The damaged region was named Wernicke's area and is located in the left hemisphere's posterior superior temporal gyrus (Brodmann area 22).

Signers with damage in Broca's area have problems producing signs, while those with damage in Wernicke's area, in the temporal lobe of the left hemisphere, have problems comprehending signed languages. Early on, it was noted that Broca's area lies near the part of the motor cortex that controls the face and mouth, and that Wernicke's area lies near the auditory cortex. These motor and auditory areas are important in spoken language production and processing, but their connection to signed languages had yet to be uncovered. For this reason, the left hemisphere was described as the verbal hemisphere, with the right hemisphere deemed responsible for spatial tasks. This classification was used to dismiss signed languages as unequal to spoken language until it was widely agreed that, given the similarities in cortical connectivity, they are linguistically and cognitively equivalent.

In the 1980s, research on deaf patients with left hemisphere strokes began to explore the brain's connection with signed languages. The left perisylvian region was found to be functionally critical for language, both spoken and signed. [1] [2] Its location near several key auditory processing regions had fostered the belief that language processing required auditory input, a belief used to discredit signed languages as "real" languages. [2] This research opened the door to linguistic analysis and further study of signed languages. Signed languages, like spoken languages, are highly structured linguistic systems; they have their own phonological, morphological, and syntactic characteristics. Despite some differences between spoken and signed languages, the associated brain areas share a great deal in common. [3]

Figure 1. Schematic of the ascending auditory pathway

How the brain processes auditory information

One of the main structures for hearing is the cochlea, a tiny coiled structure within the inner ear (shown in Figure 2). [4] It is one of several structures that, when damaged, can cause hearing loss. When sound waves enter the ear, they vibrate the eardrum. This vibration moves the ossicles of the middle ear, which press on the oval window. The resulting pressure waves in the fluid of the cochlea set the basilar membrane in motion. Different sections of the basilar membrane respond to different sound frequencies, with each frequency's travelling wave peaking at a characteristic location along the membrane. This process transforms sound into neural activity via hair cell receptors, whose stereocilia trigger the release of neurotransmitter onto the vestibulocochlear nerve when they are deflected.
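
This frequency-to-place mapping along the basilar membrane is commonly approximated by the Greenwood function. The formula is not drawn from the sources cited here; it is a standard textbook approximation, and the short sketch below, using the usually quoted human parameters, only illustrates how characteristic frequency rises from the apex to the base of the membrane.

```python
import math

def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Approximate characteristic frequency (Hz) at a point on the human basilar membrane.

    x is the fractional distance from the apex (0.0) to the base (1.0);
    A, a and k are the commonly quoted Greenwood parameters for humans.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie between 0 (apex) and 1 (base)")
    return A * (10 ** (a * x) - k)

if __name__ == "__main__":
    # Low frequencies peak near the apex, high frequencies near the base.
    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"x = {x:.2f} -> ~{greenwood_frequency(x):,.0f} Hz")
```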

The vestibulocochlear nerve first synapses on the cochlear nuclei in the upper medulla. [5] This is considered the beginning of the ascending auditory pathway (shown in Figure 1). The cochlear nuclei send information to the superior olivary nucleus, where the brain begins to combine and interpret input from both ears. The brain localizes sound by comparing the timing and intensity of sounds arriving at each ear. [5] This information continues to the inferior colliculus, which integrates most of the ascending auditory information, and then to the medial geniculate nucleus of the thalamus. The thalamus finally projects the information to the auditory cortex, housed in the temporal lobe.
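
The interaural timing cue compared at the superior olivary nucleus can be illustrated with the classic Woodworth spherical-head approximation. The snippet below is a rough sketch under textbook assumptions (an average head radius of about 8.75 cm and a speed of sound of 343 m/s); it is not a model taken from the cited sources.

```python
import math

def interaural_time_difference(azimuth_deg: float,
                               head_radius_m: float = 0.0875,
                               speed_of_sound_m_s: float = 343.0) -> float:
    """Approximate the interaural time difference (seconds) for a distant source.

    Uses the Woodworth spherical-head approximation:
    ITD ~ (r / c) * (theta + sin(theta)), where theta is the azimuth in radians
    (0 = straight ahead, 90 degrees = directly to one side).
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_m_s) * (theta + math.sin(theta))

if __name__ == "__main__":
    # The delay grows from zero for a source straight ahead to roughly 0.6-0.7 ms
    # for a source directly to the side -- the cue the superior olive compares.
    for az in (0, 30, 60, 90):
        print(f"azimuth {az:>2} deg -> ITD ~ {interaural_time_difference(az) * 1e6:.0f} microseconds")
```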

Figure 2. Schematic of the ear and internal structures

Hemispheric similarities and differences between spoken and signed languages

Both the left and right hemispheres contain structures associated with spoken and signed languages, and both types of language depend on the same cortical substrate. [2] This shows that the left hemisphere is responsible for processing all facets of language, not just speech. The neural organization underlying sign language, however, has more in common with that of spoken language than with the neural organization underlying visuospatial processing, which is dominated by the right hemisphere. [2] Patients with left hemisphere damage (LHD), in areas ranging from the frontal lobe to the occipital lobe, exhibited both Broca's and Wernicke's aphasia symptoms and performed poorly on many language-based tasks, such as comprehending signs and sentences and signing fluently. Analogous to hearing patients' "slips of the tongue" after LHD, deaf LHD patients experienced paraphasias, or "slips of the hand." These slips usually involve an incorrect handshape produced in the correct location and with the correct movement, similar to a hearing speaker substituting "bline" or "gine" for "fine." [6]

Some right hemisphere damage (RHD) does disrupt signed languages, however. The topographical use of signing space is often imprecise in patients with RHD: the relation between the location of the hands in signing space and the location of objects in physical space is often impaired. Rather than being misunderstood outright, the subjects and objects of a sentence may simply be placed incorrectly relative to one another, like saying "the pencil is in the book" rather than "the pencil is on top of the book." [6] Around the time of these experiments, theories began to circulate that the right hemisphere might be involved in signed languages in ways not seen in spoken languages. These theories were adopted by signed language linguists, and further imaging studies and neuropsychological testing confirmed the presence of right hemisphere activity. [7] Earlier right hemisphere research on spoken language had already produced prevailing theories about its role in discourse cohesion and prosody. The right hemisphere has also been proposed to assist in the detection, processing, and discrimination of visual movement, and to play a role in perceiving body movements and positions. [2] All of these right hemisphere functions are more prominent for signed languages than for spoken languages, hence the argument that signed languages engage the right hemisphere more than spoken languages do.

As brain imaging technology such as EEG became more developed and commonplace, it was eventually applied to sign language comprehension. Using EEG to record event-related potentials (ERPs) makes it possible to correlate specific brain activity with language processing in real time. Earlier ERP work with hearing participants showed left hemisphere activity in response to syntactic errors. [2] When deaf native signers were recorded, the event-related potentials associated with similar syntactic anomalies appeared over both the left and right hemispheres, showing that syntactic processing for American Sign Language (ASL) is not lateralized to the left hemisphere. [2]
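
An event-related potential is obtained by averaging many short EEG segments time-locked to stimulus onset, so that activity unrelated to the stimulus cancels out while the stimulus-locked response remains. The sketch below illustrates only that averaging step on synthetic data; it is not the recording or analysis pipeline used in the studies cited above, and all numbers in it are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 1000 trials from one electrode, sampled at 500 Hz,
# spanning -100 ms to +600 ms around stimulus onset.
fs = 500
times = np.arange(-0.1, 0.6, 1 / fs)
n_trials = 1000

# Each trial = a small stimulus-locked deflection (~2 microvolts near 400 ms)
# buried in much larger background noise (~20 microvolts).
signal = 2e-6 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
trials = signal + 20e-6 * rng.standard_normal((n_trials, times.size))

# Averaging across time-locked trials cancels the noise and leaves the ERP.
erp = trials.mean(axis=0)

idx_400ms = np.argmin(np.abs(times - 0.4))
print(f"single-trial std at 400 ms: {trials[:, idx_400ms].std() * 1e6:.1f} microvolts")
print(f"ERP amplitude at 400 ms:    {erp[idx_400ms] * 1e6:.1f} microvolts (true value 2.0)")
```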

When deaf and hearing subjects communicate in their respective languages, similar brain regions are activated, with a few exceptions. During the processing of auditory stimuli for spoken languages, there is detectable activity within Broca's area, Wernicke's area, the angular gyrus, the dorsolateral prefrontal cortex, and the superior temporal sulcus. [8] Right hemisphere activity was detectable in fewer than 50% of trials for hearing subjects reciting English sentences. When deaf subjects were tasked with reading English, none of the left hemisphere activation seen in hearing subjects was visible. [8] Deaf subjects instead displayed pronounced middle and posterior temporal-parietal activation within the right hemisphere. [8] When hearing subjects were presented with signs designed to evoke emotion in native signers, there were no clear changes in brain activity in traditional language processing centers. The brain activity of deaf native signers processing signs was similar to that of hearing subjects processing English; however, processing ASL extensively recruited right hemisphere structures, including significant activation of the entire superior temporal lobe, the angular region, and the inferior prefrontal cortex. Because native hearing signers also exhibited this right hemisphere activation when processing ASL, it has been proposed that the activation reflects the temporal and visuospatial decoding necessary to process signed languages. [8]

In a similar study published in 2017, deaf users of French Sign Language were studied while processing French Sign Language and written French. Processing each language produced bilateral activation in the occipital lobes, in the temporal lobes near the superior temporal sulcus, and in the frontal gyri. [9] Processing sign language showed stronger activation in both occipital lobes, in both posterior temporal lobes, and bilaterally in the thalamus, along with particularly strong activation in right hemisphere structures: the superior temporal sulcus, the fusiform gyrus, and the inferior frontal gyrus. [9] In contrast, when the individuals processed written French, strong activation appeared both bilaterally and in the left hemisphere. The areas showing bilateral activation included the inferior parietal lobes, the fusiform gyri, and Brodmann area 44, among others. The areas lateralized to the left hemisphere were the calcarine cortex and the fusiform gyrus, specifically at the site of the visual word form area. [9]
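
Hemispheric dominance in imaging studies like these is often summarized with a laterality index, LI = (L - R) / (L + R), where L and R are activation measures (for example, counts of above-threshold voxels) in homologous left and right regions; values near +1 indicate left lateralization, values near -1 right lateralization, and values near 0 bilateral activation. The snippet below is a generic illustration with hypothetical numbers, not data from the studies discussed here.

```python
def laterality_index(left_activation: float, right_activation: float) -> float:
    """Standard laterality index: +1 = fully left-lateralized, -1 = fully right-lateralized."""
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no activation in either hemisphere")
    return (left_activation - right_activation) / total

if __name__ == "__main__":
    # Hypothetical above-threshold voxel counts for one region of interest.
    print(f"written language: LI = {laterality_index(1200, 400):+.2f}")   # strongly left-lateralized
    print(f"signed language:  LI = {laterality_index(900, 800):+.2f}")    # closer to bilateral
```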

Neurological differences between deaf and hearing groups

It is thought that there are significant neuroanatomical differences between congenitally deaf humans and those who become deaf later in life. [10] For this reason, research into the differences in neural connections and projections in deaf humans is generally divided into two groups: the congenitally deaf and those who became deaf after birth. Structural brain imaging has commonly shown that white matter volume in the auditory cortices differs between deaf and hearing subjects, regardless of the first language learned. [10] Deaf humans are thought to have a larger ratio of gray matter to white matter in certain auditory cortices, such as the left and right Heschl's gyrus and the superior temporal gyrus. [11] This heightened ratio is thought to reflect less overall white matter in Heschl's gyrus and the superior temporal gyrus among deaf humans. Overall, the auditory cortices of deaf humans show an increased gray-to-white matter ratio as a result of the lack of auditory stimulation, which is commonly thought to lead to less myelination and fewer projections to and from the auditory cortices. [11]

Congenitally deaf people have been thought to offer insight into brain plasticity: the decreased auditory connectivity and reduced brain volume devoted to auditory processing create an opportunity for enhancement of the visual cortices, which are of greater importance to deaf humans. [12] The calcarine sulcus is the hub of the primary visual cortex in humans, and congenitally deaf humans have a measurably higher volume of calcarine cortex than hearing humans. [12] This increased volume of the visual cortices in deaf individuals can lead to heightened visual processing. Deaf humans have demonstrated, via event-related potentials, increased sensitivity and reactivity to new visual stimuli, evidence of brain plasticity leading to behavioral enhancement. [13]

Differences between signers and non-signers

In an experiment published in the early 1990s, visual mental imagery was studied in ASL signers, both deaf and hearing, and in hearing non-signers. The hearing signers had been born to deaf parents, and ASL was their first language. The study also examined the difference between native signers and those who learned sign language at a later age. In this experiment, native signers were deaf individuals born to deaf parents, who therefore began absorbing the language in infancy. The other deaf signers' primary language was sign language, but they did not learn it until somewhere between the ages of two and sixteen. [14]

In the task of generating simple and complex images, deaf signers were the quickest, followed by hearing signers and then hearing non-signers. This was expected; however, the results show that the hearing signers performed almost identically to the deaf signers on both simple and complex images, only more slowly. [14] The hearing non-signers followed the same pattern on the simple images, trailing behind, but their reaction times were vastly longer. [14] At least in this area, experience with a visual-spatial language appears to provide quicker reaction times.

The results are consistent with the abilities recruited for processing sign language being enhanced in signers' brains relative to non-signers. Among other tasks, subjects were tested on mental rotation and mirror reversals. Signers had an advantage in mirror reversals, but there was no difference between signers and non-signers in mental rotation. Given these results, it may not be accurate to say that signers are better at transforming (rotating) images; their advantage appears to lie in judging mirror reversals. The experiment also raised the question of whether these enhanced abilities stem from auditory deprivation or from the use of a visual-spatial language. Hearing signers who learned sign language as a first language may be the key to answering this question. [14]

Beneficial uses of sign language

While sign language is mostly used by people who are deaf or hard of hearing, or by those in close relationships with them, it can also be beneficial for other conditions that cause difficulty communicating with spoken language. Such disorders can involve problems with articulation, fluency, and voice. [15]

Apraxia of speech

Apraxia of speech is a disorder that affects the brain's ability to plan the movements involved in speech. [16] A person with the disorder knows what they want to say but is not able to produce that thought verbally. Apraxia of speech can be either acquired or present from birth. Acquired apraxia is due to damage in the parts of the brain used for speech production, while the causes of apraxia present from birth are not clear.

Some symptoms of apraxia of speech are distorted or substituted speech sounds, errors that are inconsistent from one attempt to the next, a slowed rate of speech, and visible groping for the correct mouth position.

Dysarthria

Dysarthria is a disorder that results from weakness of the muscles used for speech production or from a reduced ability to control those muscles. [17] Common causes of dysarthria are nervous system disorders and other conditions that cause paralysis or weakness of the face, tongue, or throat.

Some symptoms of dysarthria are slurred, slow, or overly quiet speech, a monotone or nasal-sounding voice, and difficulty moving the tongue, lips, or facial muscles.

Aphasia

Aphasia is a disorder that affects the way a person comprehends, speaks, and writes language. It usually results from a traumatic head injury or stroke, but it can have other causes such as tumors or progressive diseases. [18] There are several types of aphasia, the two best known being Broca's aphasia and Wernicke's aphasia, and the different types affect the comprehension and production of language in different ways.

Some symptoms of aphasia are speaking in short or incomplete sentences, substituting one word or sound for another, difficulty finding words, and difficulty understanding other people's speech or writing.

Dysphonia

Dysphonia is a disorder of the voice with two main types: hypofunctional dysphonia, caused by incomplete closure of the vocal cords (vocal folds), and hyperfunctional dysphonia, caused by overuse of the laryngeal muscles. [19]

Some symptoms of dysphonia are hoarseness, breathiness, vocal fatigue, and a strained or effortful voice.


References

  1. Campbell, Ruth (June 29, 2007). "Sign Language and the Brain". Journal of Deaf Studies and Deaf Education. 13 (1): 3–20. doi:10.1093/deafed/enm035. PMID 17602162.
  2. Campbell, Ruth; MacSweeney, Mairéad; Waters, Dafydd (2008). "Sign Language and the Brain: A Review". Journal of Deaf Studies and Deaf Education. 13 (1): 3–20. ISSN 1081-4159.
  3. Poizner, H.; Klima, E. S.; Bellugi, U. (1987). What the Hands Reveal About the Brain. Cambridge, MA: The MIT Press.
  4. "2-Minute Neuroscience: The Cochlea". Retrieved 2023-12-11.
  5. Peelle, J. E.; Engelhard, N. (2022). "Auditory Pathways to the Brain". Introduction to Sensation and Perception. University of Minnesota Libraries.
  6. Emmorey, K.; Damasio, H.; McCullough, S.; Grabowski, T.; Ponto, L. L. B.; Hichwa, R. D.; Bellugi, U. (2002). "Neural systems underlying spatial language in American Sign Language". NeuroImage. 17 (2): 812–824. doi:10.1016/s1053-8119(02)91187-0. PMID 12377156.
  7. Hickok, G. (1999). "Discourse deficits following right hemisphere damage in deaf signers". Brain and Language. 66 (2): 233–248. doi:10.1006/brln.1998.1995. PMID 10190988.
  8. Neville, Helen (February 3, 1998). "Cerebral organization for language in deaf and hearing subjects: Biological constraints and effects of experience". Proceedings of the National Academy of Sciences of the United States of America. Retrieved May 8, 2017.
  9. Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe (2018). "Brain correlates of constituent structure in sign language comprehension". NeuroImage. 167: 151–161. doi:10.1016/j.neuroimage.2017.11.040. ISSN 1053-8119. PMC 6044420. PMID 29175202.
  10. Olulade, Olumide (April 15, 2014). "Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned". Georgetown University Medical Center. Retrieved May 6, 2017.
  11. Emmorey, Karen; Allen, John S.; Bruss, Joel; Schenker, Natalie; Damasio, Hanna (2003). "A Morphometric Analysis of Auditory Brain Regions in Congenitally Deaf Adults". Proceedings of the National Academy of Sciences of the United States of America. 100 (17): 10049–10054. ISSN 0027-8424.
  12. Allen, J. S.; Emmorey, K.; Bruss, J.; Damasio, H. (2013). "Neuroanatomical differences in visual, motor, and language cortices between congenitally deaf signers, hearing signers, and hearing non-signers". Frontiers in Neuroanatomy. 7: 26. doi:10.3389/fnana.2013.00026.
  13. Bottari, D.; Caclin, A.; Giard, M.-H.; Pavani, F. (2011). "Changes in Early Cortical Visual Processing Predict Enhanced Reactivity in Deaf Individuals". PLoS ONE. 6 (9): e25607. doi:10.1371/journal.pone.0025607.
  14. Emmorey, Karen; Kosslyn, Stephen M.; Bellugi, Ursula (1993). "Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL signers". Cognition. 46 (2): 139–181. doi:10.1016/0010-0277(93)90017-P. ISSN 0010-0277. PMID 8432094.
  15. Zieve, D.; Conaway, B. "Speech and Language Disorders: Symptoms and Causes". www.pennmedicine.org. Retrieved 2023-12-11.
  16. "What Is Apraxia of Speech?". NIDCD. www.nidcd.nih.gov. 2017-10-31. Retrieved 2023-12-11.
  17. "Dysarthria: Symptoms and causes". Mayo Clinic. Retrieved 2023-12-11.
  18. "Aphasia: Symptoms and causes". Mayo Clinic. Retrieved 2023-12-11.
  19. "Functional Dysphonia". University of Michigan Health. www.uofmhealth.org. Retrieved 2023-12-11.
