Speech

Speech production visualized by Real-time MRI

Speech is human vocal communication using language. Each language uses phonetic combinations of vowel and consonant sounds that form the sound of its words (that is, all English words sound different from all French words, even when they are "the same" word, e.g., "role" or "hotel"), and combines those words, in their semantic character as items in the lexicon, according to the syntactic constraints that govern their function in a sentence. In speaking, speakers perform many different intentional speech acts, e.g., informing, declaring, asking, persuading, and directing, and can use enunciation, intonation, degrees of loudness, tempo, and other non-representational or paralinguistic aspects of vocalization to convey meaning. In their speech, speakers also unintentionally communicate many aspects of their social position, such as sex, age, place of origin (through accent), physical state (alertness or sleepiness, vigor or weakness, health or illness), psychological state (emotions or moods), physico-psychological state (sobriety or drunkenness, normal consciousness or trance), education or experience, and the like.

Although people ordinarily use speech in dealing with other persons (or animals), when people swear they do not always mean to communicate anything to anyone, and sometimes in expressing urgent emotions or desires they use speech as a quasi-magical cause, as when they encourage a player in a game to do something or warn them not to do it. There are also many situations in which people engage in solitary speech. People talk to themselves sometimes in acts that are a development of what some psychologists (e.g., Lev Vygotsky) have maintained is the use of silent speech in an interior monologue to vivify and organize cognition, and sometimes in the momentary adoption of a dual persona, self addressing self as though addressing another person. Solo speech can be used to memorize or to test one's memorization of things, and in prayer or meditation (e.g., the use of a mantra).

Researchers study many different aspects of speech: speech production and speech perception of the sounds used in a language; speech repetition; speech errors; the ability to map heard spoken words onto the vocalizations needed to recreate them, which plays a key role in children's enlargement of their vocabulary; and the areas of the human brain, such as Broca's area and Wernicke's area, that underlie speech. Speech is the subject of study for linguistics, cognitive science, communication studies, psychology, computer science, speech pathology, otolaryngology, and acoustics. Speech contrasts with written language, [1] which may differ in its vocabulary, syntax, and phonetics from the spoken language, a situation called diglossia.

The evolutionary origins of speech are unknown and subject to much debate and speculation. While animals also communicate using vocalizations, and trained apes such as Washoe and Kanzi can use simple signs or lexigrams, no animal's vocalizations are articulated phonemically and syntactically, and so they do not constitute speech.

Evolution

Although related to the more general problem of the origin of language, the evolution of distinctively human speech capacities has become a distinct and in many ways separate area of scientific research. [2] [3] [4] [5] [6] The topic is a separate one because language is not necessarily spoken: it can equally be written or signed. Speech is in this sense optional, although it is the default modality for language.

Places of articulation (passive and active):
1. Exo-labial, 2. Endo-labial, 3. Dental, 4. Alveolar, 5. Post-alveolar, 6. Pre-palatal, 7. Palatal, 8. Velar, 9. Uvular, 10. Pharyngeal, 11. Glottal, 12. Epiglottal, 13. Radical, 14. Postero-dorsal, 15. Antero-dorsal, 16. Laminal, 17. Apical, 18. Sub-apical

Monkeys, non-human apes and humans, like many other animals, have evolved specialised mechanisms for producing sound for purposes of social communication. [7] On the other hand, no monkey or ape uses its tongue for such purposes. [8] [9] The human species' unprecedented use of the tongue, lips and other moveable parts seems to place speech in a quite separate category, making its evolutionary emergence an intriguing theoretical challenge in the eyes of many scholars. [10]

Determining the timeline of human speech evolution is made additionally challenging by the lack of data in the fossil record. The human vocal tract does not fossilize, and indirect evidence of vocal tract changes in hominid fossils has proven inconclusive. [10]

Production

Speech production is the unconscious, multi-step process by which thoughts are converted into spoken utterances. Production involves the unconscious selection of appropriate words and word forms from the lexicon and morphology, and the organization of those words through syntax. The phonetic properties of the words are then retrieved, and the sentence is articulated through the articulatory gestures associated with those phonetic properties. [11]

In linguistics, articulatory phonetics is the study of how the tongue, lips, jaw, vocal cords, and other speech organs are used to make sounds. Speech sounds are categorized by manner of articulation and place of articulation. Place of articulation refers to where in the neck or mouth the airstream is constricted. Manner of articulation refers to the way in which the speech organs interact, such as how closely the air is restricted, what form of airstream is used (e.g. pulmonic, implosive, ejective, or click), whether or not the vocal cords are vibrating, and whether the nasal cavity is opened to the airstream. [12] The concept is primarily used for the production of consonants, but can be used for vowels in qualities such as voicing and nasalization. For any place of articulation, there may be several manners of articulation, and therefore several homorganic consonants.
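
This two-way classification lends itself to a simple data-structure illustration. The sketch below (Python; the consonant inventory is a hypothetical toy subset of English, not a complete chart) records each consonant as a (place, manner, voicing) triple and groups homorganic consonants, i.e. those sharing a place of articulation.

```python
# Toy illustration, not an exhaustive inventory: each consonant is
# described by (place of articulation, manner of articulation, voicing).
from collections import defaultdict

CONSONANTS = {
    "p": ("bilabial", "plosive", "voiceless"),
    "b": ("bilabial", "plosive", "voiced"),
    "m": ("bilabial", "nasal", "voiced"),
    "t": ("alveolar", "plosive", "voiceless"),
    "d": ("alveolar", "plosive", "voiced"),
    "n": ("alveolar", "nasal", "voiced"),
    "s": ("alveolar", "fricative", "voiceless"),
    "k": ("velar", "plosive", "voiceless"),
    "g": ("velar", "plosive", "voiced"),
}

# Consonants sharing a place of articulation are homorganic,
# whatever their manner or voicing.
homorganic = defaultdict(list)
for symbol, (place, manner, voicing) in CONSONANTS.items():
    homorganic[place].append(symbol)

for place, symbols in sorted(homorganic.items()):
    print(f"{place}: {', '.join(symbols)}")
# alveolar: t, d, n, s
# bilabial: p, b, m
# velar: k, g
```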

Normal human speech is pulmonic, produced with pressure from the lungs; this creates phonation in the glottis of the larynx, which is then modified by the vocal tract and mouth into different vowels and consonants. However, humans can pronounce words without the use of the lungs and glottis in alaryngeal speech, of which there are three types: esophageal speech, pharyngeal speech, and buccal speech (better known as Donald Duck talk).
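
The lungs-source, tract-filter pipeline described above is commonly formalized as the source-filter model of speech production, and a minimal sketch of it fits in a few lines. In the code below (Python with NumPy and SciPy), an impulse train stands in for glottal pulses at a chosen fundamental frequency, and a cascade of two-pole resonators stands in for vocal-tract formants; the formant frequencies and bandwidths are illustrative values in the range typically cited for an /a/-like vowel, not figures from this article.

```python
# A minimal source-filter sketch: an impulse train ("glottal source")
# shaped by two-pole resonators ("vocal tract formants").
import numpy as np
from scipy.signal import lfilter

fs = 16000   # sample rate (Hz)
f0 = 120     # fundamental frequency of phonation (Hz)
dur = 0.5    # duration in seconds

# Source: impulses spaced one glottal period apart, crudely modelling
# the puffs of air released by the vibrating vocal folds.
n = int(fs * dur)
source = np.zeros(n)
source[:: int(fs / f0)] = 1.0

# Filter: each (frequency, bandwidth) pair is one vocal-tract resonance.
signal = source
for freq, bw in [(700, 110), (1200, 120), (2600, 160)]:
    r = np.exp(-np.pi * bw / fs)       # pole radius set by bandwidth
    theta = 2 * np.pi * freq / fs      # pole angle set by frequency
    a = [1.0, -2 * r * np.cos(theta), r * r]
    signal = lfilter([1.0], a, signal) # apply the resonator

signal /= np.max(np.abs(signal))       # normalize amplitude
```

Moving the resonator frequencies changes which vowel is heard, while changing f0 changes the perceived pitch; consonants would additionally require noise sources and time-varying constrictions.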

Errors

Speech production is a complex activity, and as a consequence errors are common, especially in children. Speech errors come in many forms and are used as evidence for hypotheses about the nature of speech. [13] As a result, speech errors are often used in the construction of models for language production and child language acquisition. For example, the fact that children often make the error of over-regularizing the -ed past tense suffix in English (e.g. saying 'singed' instead of 'sang') shows that the regular forms are acquired earlier. [14] [15] Speech errors associated with certain kinds of aphasia have been used to map certain components of speech onto the brain and to see the relation between different aspects of production; for example, the difficulty of expressive aphasia patients in producing regular past-tense verbs, but not irregulars like 'sing-sang', has been used to demonstrate that regular inflected forms of a word are not individually stored in the lexicon, but produced by affixation to the base form. [16]

Perception

Speech perception refers to the processes by which humans can interpret and understand the sounds used in language. The study of speech perception is closely linked to the fields of phonetics and phonology in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how listeners recognize speech sounds and use this information to understand spoken language. Research into speech perception also has applications in building computer systems that can recognize speech, as well as improving speech recognition for hearing- and language-impaired listeners. [17]
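
Off-the-shelf tooling makes the recognition task easy to experiment with. The following minimal sketch uses the third-party Python package SpeechRecognition; the file name utterance.wav is a hypothetical example, and the bundled Google Web Speech backend is only one of several recognizers the package supports.

```python
# A minimal speech-to-text sketch using the SpeechRecognition package
# (pip install SpeechRecognition). "utterance.wav" is a hypothetical file.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)  # read the entire file into memory

try:
    # Send the audio to the free Google Web Speech API and print the transcript.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The recognizer could not make out any speech.")
```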

Speech perception is categorical: people sort the sounds they hear into categories rather than perceiving them along a continuum. People are more likely to hear differences between sounds that fall across categorical boundaries than between sounds within the same category. A good example of this is voice onset time (VOT), one aspect of the phonetic production of consonant sounds. For example, Hebrew speakers, who distinguish voiced /b/ from voiceless /p/, will more easily detect a change in VOT from -10 ms (perceived as /b/) to 0 ms (perceived as /p/) than a change from +10 ms to +20 ms, or from -10 ms to -20 ms, despite these being equally large changes on the VOT continuum. [18]
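
The boundary effect can be made concrete with a toy identification model. In the sketch below (plain Python; the boundary location at 0 ms and the steepness value are hypothetical choices echoing the Hebrew example, not fitted data), the probability of hearing /p/ is a steep logistic function of VOT, so a 10 ms step across the boundary changes the reported percept far more than an equal step within a category.

```python
# Toy model of categorical perception: identification probability as a
# steep logistic function of voice onset time (VOT, in ms).
import math

def p_hears_p(vot_ms, boundary=0.0, steepness=0.5):
    """Probability that a listener labels the sound /p/ (voiceless)."""
    return 1.0 / (1.0 + math.exp(-steepness * (vot_ms - boundary)))

# A 10 ms change across the boundary shifts the percept a lot...
print(p_hears_p(0) - p_hears_p(-10))   # ~0.49 difference
# ...while an equally large change within a category barely registers.
print(p_hears_p(20) - p_hears_p(10))   # ~0.007 difference
```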

Development

Most human children develop proto-speech babbling behaviors when they are four to six months old. Most will begin saying their first words at some point during the first year of life. Typical children progress through two- and three-word phrases before three years of age, followed by short sentences by four years of age. [19]

Repetition

In speech repetition, heard speech is quickly mapped from sensory input onto the motor instructions needed for its immediate or delayed vocal imitation (in phonological memory). This type of mapping plays a key role in enabling children to expand their spoken vocabulary. Masur (1995) found that how often children repeat novel words, versus words already in their lexicon, is related to the size of their lexicon later on, with young children who repeat more novel words having a larger lexicon later in development. Speech repetition could help facilitate the acquisition of this larger lexicon. [20]

Problems

There are several organic and psychological factors that can affect speech. Among these are:

  1. Diseases and disorders of the lungs or the vocal cords, including paralysis, respiratory infections (bronchitis), vocal fold nodules and cancers of the lungs and throat.
  2. Diseases and disorders of the brain, including alogia, aphasias, dysarthria, dystonia and speech processing disorders, where impaired motor planning, nerve transmission, phonological processing or perception of the message (as opposed to the actual sound) leads to poor speech production.
  3. Hearing problems, such as otitis media with effusion, and listening problems, such as auditory processing disorders, can lead to phonological problems. In addition, dysphasia, anomia, and auditory processing disorder impair the quality of auditory perception, and therefore expression. Those who are deaf or hard of hearing may be considered to fall into this category.
  4. Articulatory problems, such as slurred speech, stuttering, lisping, cleft palate, ataxia, or nerve damage leading to problems in articulation. Tourette syndrome and tics can also affect speech. Various congenital and acquired tongue diseases can affect speech as can motor neuron disease.
  5. Psychiatric disorders have been shown to change speech acoustic features; for instance, the fundamental frequency of the voice (perceived as pitch) tends to be significantly lower in major depressive disorder than in healthy controls. [21] Speech is therefore being investigated as a potential biomarker for mental health disorders (a sketch of how this feature can be estimated follows this list).
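
A rough sense of how such an acoustic feature is extracted can be given in a few lines. The sketch below (Python with NumPy) estimates fundamental frequency from a single voiced frame by autocorrelation: the lag of the strongest self-similarity peak corresponds to one glottal period. The 40 ms frame length and the 50-400 Hz search band are common default choices, not parameters taken from the cited study.

```python
# Autocorrelation-based F0 estimate for one short frame of voiced speech.
import numpy as np

def estimate_f0(frame, fs, fmin=50.0, fmax=400.0):
    """Return an F0 estimate in Hz for one voiced frame (1-D array)."""
    frame = frame - np.mean(frame)                # remove DC offset
    ac = np.correlate(frame, frame, mode="full")  # full autocorrelation
    ac = ac[len(ac) // 2:]                        # keep non-negative lags
    lo, hi = int(fs / fmax), int(fs / fmin)       # plausible pitch periods
    lag = lo + np.argmax(ac[lo:hi])               # strongest periodicity
    return fs / lag

# Synthetic check: a 120 Hz two-harmonic signal should come back near 120 Hz.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs                # one 40 ms frame
frame = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
print(round(estimate_f0(frame, fs), 1))           # ≈ 120
```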

Speech and language disorders can also result from stroke, [22] brain injury, [23] hearing loss, [24] developmental delay, [25] a cleft palate, [26] cerebral palsy, [27] or emotional issues. [28]

Treatment

Speech-related diseases, disorders, and conditions can be treated by a speech-language pathologist (SLP) or speech therapist. SLPs assess levels of speech needs, make diagnoses based on the assessments, and then treat the diagnoses or address the needs. [29]

Brain physiology

Classical model

Broca's and Wernicke's areas

The classical or Wernicke-Geschwind model of the language system in the brain focuses on Broca's area in the inferior frontal gyrus, and Wernicke's area in the posterior superior temporal gyrus, in the dominant hemisphere of the brain (typically the left hemisphere for language). In this model, a linguistic auditory signal is first sent from the auditory cortex to Wernicke's area. The lexicon is accessed in Wernicke's area, and these words are sent via the arcuate fasciculus to Broca's area, where morphology, syntax, and instructions for articulation are generated. This is then sent from Broca's area to the motor cortex for articulation. [30]

In 1861, Paul Broca identified an approximate region of the brain which, when damaged in two of his patients, caused severe deficits in speech production: his patients were unable to speak beyond a few monosyllabic words. This deficit, known as Broca's or expressive aphasia, is characterized by difficulty in speech production where speech is slow and labored, function words are absent, and syntax is severely impaired, as in telegraphic speech. In expressive aphasia, speech comprehension is generally less affected except in the comprehension of grammatically complex sentences. [31] Wernicke's area is named after Carl Wernicke, who in 1874 proposed a connection between damage to the posterior area of the left superior temporal gyrus and aphasia, as he noted that not all aphasic patients had had damage to the prefrontal cortex. [32] Damage to Wernicke's area produces Wernicke's or receptive aphasia, which is characterized by relatively normal syntax and prosody but severe impairment in lexical access, resulting in poor comprehension and nonsensical or jargon speech. [31]

Modern research

Modern models of the neurological systems behind linguistic comprehension and production recognize the importance of Broca's and Wernicke's areas, but are not limited to them nor solely to the left hemisphere. [33] Instead, multiple streams are involved in speech production and comprehension. Damage to the left lateral sulcus has been connected with difficulty in processing and producing morphology and syntax, while lexical access and comprehension of irregular forms (e.g. eat-ate) remain unaffected. [34] Moreover, the circuits involved in human speech comprehension dynamically adapt with learning, for example, by becoming more efficient in terms of processing time when listening to familiar messages such as learned verses. [35]

Animal communication

Some non-human animals can produce sounds or gestures resembling those of a human language. [36] Several species or groups of animals have developed forms of communication which superficially resemble verbal language; however, these are usually not considered a language because they lack one or more of its defining characteristics, e.g. grammar, syntax, recursion, and displacement. Researchers have been successful in teaching some animals to make gestures similar to sign language, [37] [38] although whether this should be considered a language has been disputed. [39]

References

  1. "Speech". American Heritage Dictionary. Archived from the original on 2020-08-07. Retrieved 2018-09-13.
  2. Hockett, Charles F. (1960). "The Origin of Speech" (PDF). Scientific American. 203 (3): 88–96. Bibcode:1960SciAm.203c..88H. doi:10.1038/scientificamerican0960-88. PMID 14402211. Archived from the original (PDF) on 2014-01-06. Retrieved 2014-01-06.
  3. Corballis, Michael C. (2002). From Hand to Mouth: The Origins of Language. Princeton: Princeton University Press. ISBN 978-0-691-08803-7. OCLC 469431753.
  4. Lieberman, Philip (1984). The biology and evolution of language. Cambridge, Massachusetts: Harvard University Press. ISBN   9780674074132. OCLC   10071298.
  5. Lieberman, Philip (2000). Human Language and Our Reptilian Brain: The Subcortical Bases of Speech, Syntax, and Thought. Cambridge, Massachusetts: Harvard University Press. pp. 32–51. doi:10.1353/pbm.2001.0011. ISBN 9780674002265. OCLC 43207451. PMID 11253303. S2CID 38780927.
  6. Abry, Christian; Boë, Louis-Jean; Laboissière, Rafael; Schwartz, Jean-Luc (1998). "A new puzzle for the evolution of speech?". Behavioral and Brain Sciences . 21 (4): 512–513. doi:10.1017/S0140525X98231268. S2CID   145180611.
  7. Kelemen, G. (1963). Comparative anatomy and performance of the vocal organ in vertebrates. In R. Busnel (ed.), Acoustic behavior of animals. Amsterdam: Elsevier, pp. 489–521.
  8. Riede, T.; Bronson, E.; Hatzikirou, H.; Zuberbühler, K. (Jan 2005). "Vocal production mechanisms in a non-human primate: morphological data and a model" (PDF). J Hum Evol . 48 (1): 85–96. doi:10.1016/j.jhevol.2004.10.002. PMID   15656937. Archived (PDF) from the original on 2022-08-12. Retrieved 2022-08-12.
  9. Riede, T.; Bronson, E.; Hatzikirou, H.; Zuberbühler, K. (February 2006). "Multiple discontinuities in nonhuman vocal tracts – A reply". Journal of Human Evolution. 50 (2): 222–225. doi:10.1016/j.jhevol.2005.10.005.
  10. Fitch, W. Tecumseh (July 2000). "The evolution of speech: a comparative review". Trends in Cognitive Sciences. 4 (7): 258–267. CiteSeerX 10.1.1.22.3754. doi:10.1016/S1364-6613(00)01494-7. PMID 10859570. S2CID 14706592.
  11. Levelt, Willem J. M. (1999). "Models of word production". Trends in Cognitive Sciences. 3 (6): 223–32. doi:10.1016/s1364-6613(99)01319-4. PMID   10354575. S2CID   7939521.
  12. Catford, J.C.; Esling, J.H. (2006). "Articulatory Phonetics". In Brown, Keith (ed.). Encyclopedia of Language & Linguistics (2nd ed.). Amsterdam: Elsevier Science. pp. 425–42.
  13. Fromkin, Victoria (1973). "Introduction". Speech Errors as Linguistic Evidence. The Hague: Mouton. pp. 11–46.
  14. Plunkett, Kim; Juola, Patrick (1999). "A connectionist model of English past tense and plural morphology". Cognitive Science. 23 (4): 463–90. CiteSeerX 10.1.1.545.3746. doi:10.1207/s15516709cog2304_4.
  15. Nicoladis, Elena; Paradis, Johanne (2012). "Acquiring Regular and Irregular Past Tense Morphemes in English and French: Evidence From Bilingual Children". Language Learning. 62 (1): 170–97. doi:10.1111/j.1467-9922.2010.00628.x.
  16. Ullman, Michael T.; et al. (2005). "Neural correlates of lexicon and grammar: Evidence from the production, reading, and judgement of inflection in aphasia". Brain and Language. 93 (2): 185–238. doi:10.1016/j.bandl.2004.10.001. PMID 15781306. S2CID 14991615.
  17. Kennison, Shelia (2013). Introduction to Language Development. Los Angeles: Sage.
  18. Kishon-Rabin, Liat; Rotshtein, Shira; Taitelbaum, Riki (2002). "Underlying Mechanism for Categorical Perception: Tone-Onset Time and Voice-Onset Time Evidence of Hebrew Voicing". Journal of Basic and Clinical Physiology and Pharmacology. 13 (2): 117–34. doi:10.1515/jbcpp.2002.13.2.117. PMID   16411426. S2CID   9986779.
  19. "Speech and Language Developmental Milestones". National Institute on Deafness and Other Communication Disorders. National Insistitutes of Health. 13 October 2022.
  20. Masur, Elise (1995). "Infants' Early Verbal Imitation and Their Later Lexical Development". Merrill-Palmer Quarterly. 41 (3): 286–306.
  21. Low DM, Bentley KH, Ghosh, SS (2020). "Automated assessment of psychiatric disorders using speech: A systematic review". Laryngoscope Investigative Otolaryngology. 5 (1): 96–116. doi: 10.1002/lio2.354 . PMC   7042657 . PMID   32128436.
  22. Richards, Emma (June 2012). "Communication and swallowing problems after stroke". Nursing and Residential Care. 14 (6): 282–286. doi:10.12968/nrec.2012.14.6.282.
  23. Zasler, Nathan D.; Katz, Douglas I.; Zafonte, Ross D.; Arciniegas, David B.; Bullock, M. Ross; Kreutzer, Jeffrey S., eds. (2013). Brain injury medicine principles and practice (2nd ed.). New York: Demos Medical. pp. 1086–1104, 1111–1117. ISBN   9781617050572.
  24. Ching, Teresa Y. C. (2015). "Is early intervention effective in improving spoken language outcomes of children with congenital hearing loss?". American Journal of Audiology. 24 (3): 345–348. doi:10.1044/2015_aja-15-0007. PMC   4659415 . PMID   26649545.
  25. The Royal Children's Hospital, Melbourne. "Developmental Delay: An Information Guide for Parents" (PDF). The Royal Children's Hospital Melbourne. Archived (PDF) from the original on 29 March 2016. Retrieved 2 May 2016.
  26. Bauman-Waengler, Jacqueline (2011). Articulatory and phonological impairments: a clinical focus (4th ed., International ed.). Harlow: Pearson Education. pp. 378–385. ISBN   9780132719957.
  27. "Speech and Language Therapy". CerebralPalsy.org. Archived from the original on 8 May 2016. Retrieved 2 May 2016.
  28. Cross, Melanie (2011). Children with social, emotional and behavioural difficulties and communication problems: there is always a reason (2nd ed.). London: Jessica Kingsley Publishers.
  29. "Speech–Language Pathologists". ASHA.org. American Speech–Language–Hearing Association. Retrieved 6 April 2015.
  30. Kertesz, A. (2005). "Wernicke–Geschwind Model". In L. Nadel, Encyclopedia of cognitive science. Hoboken, NJ: Wiley.
  31. Hillis, A.E., & Caramazza, A. (2005). "Aphasia". In L. Nadel, Encyclopedia of cognitive science. Hoboken, NJ: Wiley.
  32. Wernicke, K. (1995). "The aphasia symptom-complex: A psychological study on an anatomical basis (1875)". In Paul Eling (ed.). Reader in the History of Aphasia: From Franz Gall to Norman Geschwind. Vol. 4. Amsterdam: John Benjamins Pub Co. pp. 69–89. ISBN 978-90-272-1893-3.
  33. Nakai, Y; Jeong, JW; Brown, EC; Rothermel, R; Kojima, K; Kambara, T; Shah, A; Mittal, S; Sood, S; Asano, E (2017). "Three- and four-dimensional mapping of speech and language in patients with epilepsy". Brain. 140 (5): 1351–70. doi:10.1093/brain/awx051. PMC   5405238 . PMID   28334963.
  34. Tyler, Lorraine K.; Marslen-Wilson, William (2009). "Fronto-temporal brain systems supporting spoken language comprehension". In Moore, Brian C.J.; Tyler, Lorraine K.; Marslen-Wilson, William D. (eds.). The Perception of Speech: from sound to meaning. Oxford: Oxford University Press. pp. 193–217. ISBN   978-0-19-956131-5.
  35. Cervantes Constantino, F; Simon, JZ (2018). "Restoration and Efficiency of the Neural Processing of Continuous Speech Are Promoted by Prior Knowledge". Frontiers in Systems Neuroscience. 12 (56): 56. doi: 10.3389/fnsys.2018.00056 . PMC   6220042 . PMID   30429778.
  36. "Can any animals talk and use language like humans?". BBC. 16 February 2015. Archived from the original on 31 January 2021. Retrieved 12 August 2022.
  37. Hillix, William A.; Rumbaugh, Duane M. (2004), "Washoe, the First Signing Chimpanzee", Animal Bodies, Human Minds: Ape, Dolphin, and Parrot Language Skills, Springer US, pp. 69–85, doi:10.1007/978-1-4757-4512-2_5, ISBN   978-1-4419-3400-0
  38. Hu, Jane C. (Aug 20, 2014). "What Do Talking Apes Really Tell Us?". Slate. Archived from the original on October 12, 2018. Retrieved Jan 19, 2020.
  39. Terrace, Herbert S. (December 1982). "Why Koko Can't Talk". The Sciences. 22 (9): 8–10. doi:10.1002/j.2326-1951.1982.tb02120.x. ISSN   0036-861X.
