Auditory agnosia is a form of agnosia that manifests primarily as an inability to recognize or differentiate between sounds. It is not a defect of the ear or of "hearing", but rather a neurological inability of the brain to process the meaning of sound. While auditory agnosia impairs the understanding of sounds, other abilities such as reading, writing, and speaking are not hindered.[1] It is caused by bilateral damage to the anterior superior temporal gyrus, which is part of the auditory pathway responsible for sound recognition, the auditory "what" pathway.[2]
Persons with auditory agnosia can physically hear sounds and describe them, albeit in unrelated terms, but cannot recognize them. They might describe an environmental sound, such as a motor starting, as resembling a lion roaring, but would not be able to associate the sound with "car" or "engine", nor would they say that a lion was making the noise.[3] All auditory agnosia patients read lips in order to enhance their speech comprehension.[4]
It is still unclear whether auditory agnosia (also called general auditory agnosia) is a combination of milder disorders, such as auditory verbal agnosia (pure word deafness), non-verbal auditory agnosia, amusia and word-meaning deafness, or a mild case of the more severe disorder, cerebral deafness. Typically, a person with auditory agnosia is incapable of comprehending both spoken language and environmental sounds. When only the comprehension of spoken language is impaired, the disorder is called auditory verbal agnosia or pure word deafness;[5] when only the understanding of environmental sounds is impaired, it is called nonverbal auditory agnosia.[6] Combined, these two deficits constitute auditory agnosia. The blurred boundaries between these disorders may lead to discrepancies in reporting. As of 2014, 203 patients with auditory perceptual deficits due to CNS damage had been reported in the medical literature, of which 183 were diagnosed with general auditory agnosia or word deafness, 34 with cerebral deafness, 51 with non-verbal auditory agnosia-amusia and 8 with word meaning deafness (for a list of patients see [7]).
A relationship between hearing and the brain was first documented by Ambroise Paré, a 16th-century battlefield doctor, who associated parietal lobe damage with acquired deafness (reported in Henschen, 1918[8]). Systematic research into the manner in which the brain processes sounds, however, only began toward the end of the 19th century. In 1874, Wernicke[9] was the first to ascribe a role in auditory perception to a specific brain region. Wernicke proposed that the impaired perception of language in his patients was due to a loss of the ability to register the sound frequencies that are specific to spoken words (he also suggested that other aphasic symptoms, such as errors in speaking, reading and writing, occur because these speech-specific frequencies are required for feedback). Wernicke localized the perception of spoken words to the posterior half of the left superior temporal gyrus (STG). Wernicke also distinguished between patients with auditory agnosia (which he labeled receptive aphasia) and patients who cannot detect sound at any frequency (which he labeled cortical deafness).[10]
In 1877, Kussmaul was the first to report auditory agnosia in a patient with intact hearing, speaking, and reading-writing abilities. This case study led Kussmaul to propose a distinction between this word perception deficit and Wernicke's sensory aphasia, coining the term "word deafness" for the former disorder. Kussmaul also localized this disorder to the left STG. Wernicke interpreted Kussmaul's case as an incomplete variant of his sensory aphasia.[10]
In 1885, Lichtheim[11] also reported an auditory agnosia patient. This patient, in addition to word deafness, was impaired at recognizing environmental sounds and melodies. Based on this case study, as well as other aphasic patients, Lichtheim proposed that the language reception center receives afferents from upstream auditory and visual word recognition centers, and that damage to these regions results in word deafness or word blindness (i.e., alexia), respectively. Because the lesion of Lichtheim's auditory agnosia patient was sub-cortical, deep to the posterior STG, Lichtheim renamed auditory agnosia "sub-cortical speech deafness".
The language model proposed by Wernicke and Lichtheim was not accepted at first. For example, in 1897 Bastian[12] argued that, because aphasic patients can repeat single words, their deficit lies in the extraction of meaning from words. He attributed both aphasia and auditory agnosia to damage in Lichtheim's auditory word center, hypothesizing that aphasia is the outcome of partial damage to the left auditory word center, whereas auditory agnosia is the result of complete damage to the same area. Bastian localized the auditory word center to the posterior middle temporal gyrus (MTG).
Other opponents of the Wernicke-Lichtheim model were Sigmund Freud and Carl Freund. Freud[13] (1891) suspected that the auditory deficits in aphasic patients were due to a secondary lesion to the cochlea. This assertion was supported by Freund[14] (1895), who reported two auditory agnosia patients with cochlear damage (although in a later autopsy, Freund also reported the presence of a tumor in the left STG in one of these patients). This argument, however, was refuted by Bonvicini[15] (1905), who measured the hearing of an auditory agnosia patient with tuning forks and confirmed intact pure tone perception. Similarly, Barrett's aphasic patient,[16] who was incapable of comprehending speech, had intact hearing thresholds when examined with tuning forks and with a Galton whistle. The staunchest opponent of the Wernicke-Lichtheim model was Marie[17] (1906), who argued that all aphasic symptoms manifest because of a single lesion to the language reception center, and that other symptoms, such as auditory disturbances or paraphasia, are expressed because the lesion also encompasses sub-cortical motor or sensory regions.
In the following years, an increasing number of clinical reports validated the view that the right and left auditory cortices project to a language reception center located in the posterior half of the left STG, and thus established the Wernicke-Lichtheim model. This view was further consolidated by Geschwind[18] (1965), who reported that, in humans, the planum temporale is larger in the left hemisphere than in the right. Geschwind interpreted this asymmetry as anatomical verification of the role of the left posterior STG in the perception of language.
The Wernicke-Lichtheim-Geschwind model persisted throughout the 20th century. However, with the advent of MRI and its use for lesion mapping, it was shown that this model is based on incorrect correlations between symptoms and lesions.[19][20][21] Although this model is considered outdated, it is still widely mentioned in psychology and medical textbooks, and consequently in medical reports of auditory agnosia patients. As discussed below, based on cumulative evidence the process of sound recognition has recently been attributed to the left and right anterior auditory cortices, rather than the left posterior auditory cortex.
After auditory agnosia was first described, subsequent patients were diagnosed with different types of hearing impairments. In some reports, the deficit was restricted to spoken words, environmental sounds or music. In one case study, each of the three sound types (music, environmental sounds, speech) was also shown to recover independently (Mendez and Geehan, 1988, case 2[22]). It is still unclear whether general auditory agnosia is a combination of milder auditory disorders, or whether the source of this disorder lies at an earlier auditory processing stage.
Cerebral deafness (also known as cortical deafness or central deafness) is a disorder characterized by complete deafness resulting from damage to the central nervous system. The primary distinction between auditory agnosia and cerebral deafness is the ability to detect pure tones, as measured with pure tone audiometry. Using this test, auditory agnosia patients were often reported[23][24] to detect pure tones almost as well as healthy individuals, whereas cerebral deafness patients found this task nearly impossible or required very loud presentations of sounds (above 100 dB). In all reported cases, cerebral deafness was associated with bilateral temporal lobe lesions. A study[24] that compared the lesions of two cerebral deafness patients to those of an auditory agnosia patient concluded that cerebral deafness is the result of complete de-afferentation of the auditory cortices, whereas in auditory agnosia some thalamo-cortical fibers are spared. In most cases the disorder is transient and the symptoms mitigate into auditory agnosia (although chronic cases have been reported[25]). Similarly, a study[26] in which both auditory cortices of monkeys were ablated reported deafness that lasted one week in all cases and then gradually resolved into auditory agnosia over a period of 3–7 weeks.
Since the early days of aphasia research, the relationship between auditory agnosia and speech perception has been debated. Lichtheim[11] (1885) proposed that auditory agnosia is the result of damage to a brain area dedicated to the perception of spoken words, and consequently renamed this disorder from 'word deafness' to 'pure word deafness'. The description of word deafness as being exclusive to words was adopted by the scientific community, even though the patient reported by Lichtheim also had more general auditory deficits. Some researchers who surveyed the literature, however, argued against labeling this disorder as pure word deafness on the grounds that all patients reported as impaired at perceiving spoken words were also noted to have other auditory deficits or aphasic symptoms.[27][4] In one review of the literature, Ulrich[28] (1978) presented evidence for separating word deafness from more general auditory agnosia, and suggested naming this disorder "linguistic auditory agnosia" (this name was later rephrased as "verbal auditory agnosia"[29]). To contrast this disorder with auditory agnosia in which speech repetition is intact (word meaning deafness), the names "word sound deafness"[30] and "phonemic deafness"[31] (Kleist, 1962) were also proposed. Although some researchers argued against the purity of word deafness, some anecdotal cases with exclusively impaired perception of speech were documented.[32][33][34][35][36][37][38][39][40] On several occasions, patients were reported to gradually transition from pure word deafness to general auditory agnosia/cerebral deafness,[41][42][43] or to recover from general auditory agnosia/cerebral deafness to pure word deafness.[44][45]
In a review of the auditory agnosia literature, Phillips and Farmer[46] showed that patients with word deafness are impaired in their ability to discriminate gaps between click sounds as long as 15–50 milliseconds, which is consistent with the duration of phonemes. They also showed that patients with general auditory agnosia are impaired in their ability to discriminate gaps between click sounds as long as 100–300 milliseconds. The authors further showed that word deafness patients liken their auditory experience to hearing a foreign language, whereas general auditory agnosia patients describe speech as incomprehensible noise. Based on these findings, and because both word deafness and general auditory agnosia patients were reported to have very similar neuroanatomical damage (bilateral damage to the auditory cortices), the authors concluded that word deafness and general auditory agnosia are the same disorder expressed at different degrees of severity.
Pinard et al.[43] also suggested that pure word deafness and general auditory agnosia represent different degrees of the same disorder. They suggested that environmental sounds are spared in the mild cases because they are easier to perceive than speech sounds, arguing that environmental sounds are more distinct than speech sounds because they are more varied in their duration and loudness. They also proposed that environmental sounds are easier to perceive because they are composed of a repetitive pattern (e.g., the bark of a dog or the siren of an ambulance).
Auerbach et al.[47] considered word deafness and general auditory agnosia to be two separate disorders, labelling general auditory agnosia pre-phonemic auditory agnosia and word deafness post-phonemic auditory agnosia. They suggested that pre-phonemic auditory agnosia manifests because of general damage to the auditory cortex of both hemispheres, and that post-phonemic auditory agnosia manifests because of damage to a spoken word recognition center in the left hemisphere. A recent study of an epileptic patient supported this hypothesis: the patient underwent electro-stimulation of the anterior superior temporal gyrus and demonstrated a transient loss of speech comprehension while retaining intact perception of environmental sounds and music.[48]
The term auditory agnosia was originally coined by Sigmund Freud[13] in 1891 to describe patients with selective impairment of environmental sound recognition. In a review of the auditory agnosia literature, Ulrich[28] renamed this disorder non-verbal auditory agnosia (although sound auditory agnosia and environmental sound auditory agnosia are also commonly used). This disorder is very rare, and only 18 cases have been documented.[7] In contrast to pure word deafness and general auditory agnosia, this disorder is likely under-diagnosed because patients are often not aware of their disorder and thus do not seek medical intervention.[49][50][51]
Throughout the 20th century, all reported non-verbal auditory agnosia patients had bilateral or right temporal lobe damage. For this reason, the perception of environmental sounds was traditionally attributed to the right hemisphere. However, Tanaka et al.[52] reported 8 patients with non-verbal auditory agnosia, 4 with right hemisphere lesions and 4 with left hemisphere lesions. Saygin et al.[49] also reported a patient with damage to the left auditory cortex.
The underlying deficit in non-verbal auditory agnosia appears to be varied. Several patients were characterized by impaired discrimination of pitch,[53][54][55] whereas others were reported with impaired discrimination of timbre and rhythm[56][57][58] (discrimination of pitch was relatively preserved in one of these cases[56]). In contrast to patients with pure word deafness and general auditory agnosia, patients with non-verbal auditory agnosia were reported impaired at discriminating long gaps between click sounds, but not short gaps.[59][60] A possible neuroanatomical structure that relays sounds of longer duration was suggested by Tanaka et al.[24] By comparing the lesions of two cortically deaf patients with the lesion of a word deafness patient, they proposed the existence of two thalamocortical pathways that inter-connect the MGN (medial geniculate nucleus) with the auditory cortex. They suggested that spoken words are relayed via a direct thalamocortical pathway that passes underneath the putamen, and that environmental sounds are relayed via a separate thalamocortical pathway that passes above the putamen, near the parietal white matter.
Auditory agnosia patients are often impaired in the discrimination of all sounds, including music. However, in two such patients music perception was spared,[59][61] and in one patient music perception was enhanced.[62] The medical literature reports 33 patients diagnosed with an exclusive deficit in the discrimination and recognition of musical segments[7] (i.e., amusia). The damage in all these cases was localized to the right hemisphere or was bilateral (with the exception of one case[63]), and tended to center on the temporal pole. Consistently, removal of the anterior temporal lobe was also associated with loss of music perception,[64] and recordings directly from the anterior auditory cortex revealed that, in both hemispheres, music is perceived medially to speech.[65] These findings therefore imply that the loss of music perception in auditory agnosia is due to damage to the medial anterior STG. In contrast to the association of amusia specific to the recognition of melodies (amelodia) with the temporal pole, posterior STG damage was associated with loss of rhythm perception (arrhythmia).[66][67][68] Conversely, in two patients rhythm perception was intact, while recognition/discrimination of musical segments was impaired.[69][70] Amusia also dissociates with regard to enjoyment of music. In two reports,[71][72] amusic patients who were not able to distinguish musical instruments reported that they still enjoy listening to music. On the other hand, a patient with left hemispheric damage to the amygdala was reported to perceive, but not enjoy, music.[73]
In 1928, Kleist[31] suggested that the etiology of word deafness could be either impaired perception of sound (apperceptive auditory agnosia) or impaired extraction of meaning from sound (associative auditory agnosia). This hypothesis was first tested by Vignolo et al.[74] (1969), who examined unilateral stroke patients. They reported that patients with left hemisphere damage were impaired in matching environmental sounds with their corresponding pictures, whereas patients with right hemisphere damage were impaired in the discrimination of meaningless noise segments. The researchers concluded that left hemispheric damage results in associative auditory agnosia and right hemisphere damage results in apperceptive auditory agnosia. Although the conclusion reached by this study could be considered over-reaching, associative auditory agnosia could correspond to the disorder word meaning deafness.
Patients with word meaning deafness are characterized by impaired speech recognition but intact repetition of speech, and by left hemisphere damage.[75][76][77][78][79][31][80][81] These patients often repeat words in an attempt to extract their meaning (e.g., "Jar....Jar....what is a jar?"[75]). In the first documented case,[76] Bramwell (1897 - translated by Ellis, 1984) reported a patient who, in order to comprehend speech, wrote what she heard and then read her own handwriting. Kohn and Friedman,[80] and Symonds,[81] also reported word meaning deafness patients who were able to write to dictation. In at least 12 cases, patients with symptoms that correspond with word meaning deafness were diagnosed with auditory agnosia.[7] Unlike most auditory agnosia patients, word meaning deafness patients are not impaired at discriminating gaps between click sounds.[82][83][84] It is still unclear whether word meaning deafness is synonymous with the disorder deep dysphasia, in which patients cannot repeat nonsense words and produce semantic paraphasia during repetition of real words.[85][86] Word meaning deafness is also often confused with transcortical sensory aphasia, but patients with word meaning deafness differ from the latter in their ability to express themselves appropriately orally or in writing.
Auditory agnosia (with the exception of non-verbal auditory agnosia and amusia) is strongly dependent on damage to both hemispheres.[7] The order of hemispheric damage is irrelevant to the manifestation of symptoms, and years may pass between damage to the first hemisphere and damage to the second (after which the symptoms emerge suddenly).[4][28] A study[87] that compared lesion locations reported that in all cases with bilateral hemispheric damage, the lesion on at least one side included Heschl's gyrus or its underlying white matter. A rare insight into the etiology of this disorder was provided by a study of an auditory agnosia patient with damage to the brainstem instead of the cortex.[2] fMRI scanning of the patient revealed weak activation of the anterior Heschl's gyrus (area R) and anterior superior temporal gyrus. These brain areas are part of the auditory 'what' pathway, and are known from both human and monkey research to participate in the recognition of sounds.[88]
Aphasia, also known as dysphasia, is an impairment in a person's ability to comprehend or formulate language because of damage to specific brain regions. The major causes are stroke and head trauma; prevalence is hard to determine, but aphasia due to stroke is estimated to affect 0.1–0.4% of the population in the Global North. Aphasia can also be the result of brain tumors, epilepsy, autoimmune neurological diseases, brain infections, or neurodegenerative diseases.
Wernicke's aphasia, also known as receptive aphasia, sensory aphasia, fluent aphasia, or posterior aphasia, is a type of aphasia in which individuals have difficulty understanding written and spoken language. Patients with Wernicke's aphasia demonstrate fluent speech, characterized by a typical speech rate, intact syntactic abilities and effortless speech output; as a result, they may produce a large amount of speech without much meaning. Writing often reflects speech in that it tends to lack content or meaning. In most cases, motor deficits do not occur in individuals with Wernicke's aphasia. Individuals with Wernicke's aphasia often suffer from anosognosia: they are unaware of their errors in speech and do not realize their speech may lack meaning. They typically remain unaware of even their most profound language deficits.
Agnosia is a neurological disorder characterized by an inability to process sensory information. Often there is a loss of the ability to recognize objects, persons, sounds, shapes, or smells even though the specific sense is not defective and there is no significant memory loss. It is usually associated with brain injury or neurological illness, particularly after damage to the occipitotemporal border, which is part of the ventral stream. Agnosia affects only a single modality, such as vision or hearing. More recently, top-down disruption has also been considered a cause of the disturbance in processing perceptual information.
Agraphia is an acquired neurological disorder causing a loss in the ability to communicate through writing, either due to some form of motor dysfunction or an inability to spell. The loss of writing ability may present with other language or neurological disorders; disorders appearing commonly with agraphia are alexia, aphasia, dysarthria, agnosia, acalculia and apraxia. The study of individuals with agraphia may provide more information about the pathways involved in writing, both language related and motoric. Agraphia cannot be directly treated, but individuals can learn techniques to help regain and rehabilitate some of their previous writing abilities. These techniques differ depending on the type of agraphia.
Anomic aphasia, also known as dysnomia, nominal aphasia, and amnesic aphasia, is a mild, fluent type of aphasia in which individuals have word retrieval failures and cannot express the words they want to say. Anomia itself is a deficit of expressive language and a symptom of all forms of aphasia, but patients whose primary deficit is word retrieval are diagnosed with anomic aphasia. Individuals with aphasia who display anomia can often describe an object in detail and may even use hand gestures to demonstrate how the object is used, but cannot find the appropriate word to name the object. Patients with anomic aphasia have relatively preserved speech fluency, repetition, comprehension, and grammatical speech.
The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
Wernicke's area, also called Wernicke's speech area, is one of the two parts of the cerebral cortex that are linked to speech, the other being Broca's area. It is involved in the comprehension of written and spoken language, in contrast to Broca's area, which is primarily involved in the production of language. It is traditionally thought to reside in Brodmann area 22, located in the superior temporal gyrus in the dominant cerebral hemisphere, which is the left hemisphere in about 95% of right-handed individuals and 70% of left-handed individuals.
Conduction aphasia, also called associative aphasia, is an uncommon form of aphasia caused by damage to the parietal lobe of the brain. An acquired language disorder, it is characterized by intact auditory comprehension and coherent speech production, but poor speech repetition. Affected people are fully capable of understanding what they are hearing, but fail to encode phonological information for production. This deficit is load-sensitive: affected people show significant difficulty repeating phrases, particularly as the phrases increase in length and complexity, and stumble over words they are attempting to pronounce. They make frequent errors during spontaneous speech, such as substituting or transposing sounds, are aware of their errors, and show significant difficulty correcting them.
Global aphasia is a severe form of nonfluent aphasia, caused by damage to the left side of the brain, that affects receptive and expressive language skills as well as auditory and visual comprehension. Acquired impairments of communicative abilities are present across all language modalities, impacting language production, comprehension, and repetition. Patients with global aphasia may be able to verbalize a few short utterances and use non-word neologisms, but their overall production ability is limited. Their ability to repeat words, utterances, or phrases is also affected. Due to the preservation of the right hemisphere, an individual with global aphasia may still be able to express themselves through facial expressions, gestures, and intonation. This type of aphasia often results from a large lesion of the left perisylvian cortex, caused by an occlusion of the left middle cerebral artery, and is associated with damage to Broca's area, Wernicke's area, and insular regions, which are associated with aspects of language.
Amusia is a musical disorder that appears mainly as a defect in processing pitch but also encompasses musical memory and recognition. Two main classifications of amusia exist: acquired amusia, which occurs as a result of brain damage, and congenital amusia, which results from a music-processing anomaly present since birth.
In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.
Brodmann area 22 is a cytoarchitecturally defined region located in the posterior superior temporal gyrus of the brain. In the left cerebral hemisphere, it is one portion of Wernicke's area. The left hemisphere BA22 helps with the generation and understanding of individual words. On the right side of the brain, BA22 helps to discriminate pitch and sound intensity, both of which are necessary to perceive melody and prosody. Wernicke's area is active in processing language and consists of the left Brodmann area 22 and Brodmann area 40, the supramarginal gyrus.
Associative visual agnosia is a form of visual agnosia. It is an impairment in recognition or assigning meaning to a stimulus that is accurately perceived and not associated with a generalized deficit in intelligence, memory, language or attention. The disorder appears to be very uncommon in a "pure" or uncomplicated form and is usually accompanied by other complex neuropsychological problems due to the nature of the etiology. Affected individuals can accurately distinguish the object, as demonstrated by the ability to draw a picture of it or categorize accurately, yet they are unable to identify the object, its features or its functions.
Speech is the use of the human voice as a medium for language. Spoken language combines vowel and consonant sounds to form units of meaning like words, which belong to a language's lexicon. There are many different intentional speech acts, such as informing, declaring, asking, persuading, directing; acts may vary in various aspects like enunciation, intonation, loudness, and tempo to convey meaning. Individuals may also unintentionally communicate aspects of their social position through speech, such as sex, age, place of origin, physiological and mental condition, education, and experiences.
Auditory verbal agnosia (AVA), also known as pure word deafness, is the inability to comprehend speech. Individuals with this disorder lose the ability to understand language, repeat words, and write from dictation. Some patients with AVA describe hearing spoken language as meaningless noise, often as though the person speaking were doing so in a foreign language. However, spontaneous speaking, reading, and writing are preserved. The ability to process non-speech auditory information, including music, also remains relatively more intact than spoken language comprehension. Individuals who exhibit pure word deafness are also still able to recognize non-verbal sounds. The ability to interpret language via lip reading, hand gestures, and context clues is preserved as well. Sometimes, this agnosia is preceded by cortical deafness; however, this is not always the case. Researchers have documented that in most patients exhibiting auditory verbal agnosia, the discrimination of consonants is more difficult than that of vowels, but as with most neurological disorders, there is variation among patients.
Cortical deafness is a rare form of sensorineural hearing loss caused by damage to the primary auditory cortex. Cortical deafness is an auditory disorder in which the patient is unable to hear sounds despite having no apparent damage to the structures of the ear. It has been argued to be the combination of auditory verbal agnosia and auditory agnosia. Patients with cortical deafness cannot hear any sounds; that is, they are not aware of non-speech sounds, voices, or speech sounds. Although patients appear and feel completely deaf, they can still exhibit some reflex responses such as turning their head towards a loud sound.
The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.
Neuroscientists have learned much about the role of the brain in numerous cognitive mechanisms by studying the corresponding disorders. Similarly, neuroscientists have come to learn much about music cognition by studying music-specific disorders. Even though music is most often viewed from a "historical perspective rather than a biological one", music has gained significant attention from neuroscientists around the world. For many centuries music has been strongly associated with art and culture. The reason for this increased interest in music is that it "provides a tool to study numerous aspects of neuroscience, from motor skill learning to emotion".
Phonagnosia is a type of agnosia, or loss of knowledge, that involves a disturbance in the recognition of familiar voices and an impairment of voice discrimination abilities, in which the affected individual does not suffer from comprehension deficits. Phonagnosia is an auditory agnosia, an acquired auditory processing disorder resulting from brain damage. Other auditory agnosias include cortical deafness and auditory verbal agnosia, also known as pure word deafness.
Sign language refers to any natural language which uses visual gestures produced by the hands and body language to express meaning. The brain's left side is the dominant side utilized for producing and understanding sign language, just as it is for speech. In 1861, Paul Broca studied patients with the ability to understand spoken language but the inability to produce it. The damaged area was named Broca's area and is located in the left hemisphere's inferior frontal gyrus. Soon after, in 1874, Carl Wernicke studied patients with the reverse deficits: patients could produce spoken language, but could not comprehend it. The damaged area was named Wernicke's area and is located in the left hemisphere's posterior superior temporal gyrus.