Cohort model

The cohort model in psycholinguistics and neurolinguistics is a model of lexical retrieval first proposed by William Marslen-Wilson in the late 1970s. [1] It attempts to describe how visual or auditory input (i.e., hearing or reading a word) is mapped onto a word in a hearer's lexicon. [2] According to the model, when a person hears speech segments in real time, each segment "activates" every word in the lexicon that begins with that segment, and as more segments arrive, more words are ruled out, until only one word is left that still matches the input.

Background information

The cohort model relies on a number of concepts in the theory of lexical retrieval. The lexicon is the store of words in a person's mind; [3] it contains a person's vocabulary and is similar to a mental dictionary. A lexical entry is all the information about a word, and lexical storage is the way those entries are organized for efficient retrieval. Lexical access is the way that an individual accesses the information in the mental lexicon. A word's cohort is composed of all the lexical items that share an initial sequence of phonemes, [4] and is the set of words activated by the initial phonemes of the word.
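As a rough sketch, a cohort can be modeled as a prefix match over a store of phoneme strings. The toy lexicon and the `cohort` helper below are illustrative assumptions, not part of the model itself:

```python
# A word's cohort: all lexical items sharing an initial phoneme sequence.
# Toy lexicon mapping phoneme strings to words (illustrative only).
lexicon = {
    "kændəl": "candle",
    "kændi": "candy",
    "kænəpi": "canopy",
    "kætəl": "cattle",
    "daɪəl": "dial",
}

def cohort(prefix, lexicon):
    """Return every word whose phoneme string begins with `prefix`."""
    return {word for phonemes, word in lexicon.items()
            if phonemes.startswith(prefix)}
```

For instance, `cohort("kæn", lexicon)` returns the three words in this toy lexicon that begin with /kæn/.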

Model

The cohort model is based on the concept that auditory or visual input stimulates neurons as it enters the brain, rather than only at the end of a word. [5] This was demonstrated in the 1980s through experiments with speech shadowing, in which subjects listened to recordings and were instructed to repeat aloud exactly what they heard, as quickly as possible; Marslen-Wilson found that subjects often started to repeat a word before it had finished playing, which suggested that the word in the hearer's lexicon was activated before the entire word had been heard. [6] Findings such as these led Marslen-Wilson to propose the cohort model in 1987. [7]

The cohort model consists of three stages: access, selection, and integration. [8] Under this model, auditory lexical retrieval begins when the first one or two speech segments, or phonemes, reach the hearer's ear, at which time the mental lexicon activates every possible word that begins with that speech segment. [9] This occurs during the access stage, and the full set of activated words is known as the cohort. [10] The words that are activated by the speech signal but are not the intended word are often called competitors. [11] Identification of the target word is more difficult when there are more competitors. [12]

As more speech segments enter the ear and stimulate more neurons, competitors that no longer match the input are ruled out or decrease in activation. [9] [13] The processes by which words are activated and competitors rejected in the cohort model are frequently called activation and selection, or recognition and competition. These processes continue until the recognition point, [9] the instant at which only one word remains activated and all competitors have been ruled out. This process is initiated within the first 200 to 250 milliseconds of the onset of the given word. [4] The recognition point is also known as the uniqueness point, and it is the point where the most processing occurs. [10] Moreover, a word is processed differently before it reaches its recognition point than afterwards. The process prior to the recognition point can be viewed as bottom-up, where the phonemes are used to access the lexicon; the post-recognition-point process is top-down, because the information concerning the chosen word is tested against the word that is presented. [14] The selection stage occurs when only one word is left from the set. [10] Finally, in the integration stage, the semantic and syntactic properties of activated words are incorporated into the high-level utterance representation. [8]
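The access and selection stages above can be sketched as an incremental prefix filter. The toy lexicon and the `recognize` helper below are illustrative assumptions; with so few entries, the recognition point arrives earlier than it would against a full vocabulary:

```python
# Sketch of access and selection: each incoming phoneme narrows the set
# of activated candidates until only one word remains (recognition point).
lexicon = {
    "kændəl": "candle",
    "kændi": "candy",
    "kænəpi": "canopy",
    "kætəl": "cattle",
    "spəɡɛti": "spaghetti",
}

def recognize(phonemes, lexicon):
    """Yield (segments heard so far, activated cohort) after each segment."""
    heard = ""
    for segment in phonemes:
        heard += segment  # access: another speech segment arrives
        active = {w for p, w in lexicon.items() if p.startswith(heard)}
        yield heard, active  # selection: mismatching competitors drop out
        if len(active) == 1:  # recognition point: one candidate left
            break
```

Feeding it /kændəl/ segment by segment, the cohort shrinks from four /k/-initial candidates to "candle" alone.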

[Image: Candle cohort.JPG — increasing segments of the word "candle"]

For example, in the auditory recognition of the word "candle", the following steps take place. When the hearer hears the first two phonemes /k/ and /æ/ ((1) and (2) in the image), he or she would activate the word "candle", along with competitors such as "candy", "canopy", "cattle", and numerous others. Once the phoneme /n/ is added ((3) in the image), "cattle" would be ruled out; with /d/, "canopy" would be ruled out; and this process would continue until the recognition point, the final /l/ of "candle", was reached ((5) in the image). [15] The recognition point need not always be the final phoneme of the word; the recognition point of "slander", for example, occurs at the /d/ (since no other English words begin "sland-"); [6] all competitors for "spaghetti" are ruled out as early as /spəɡ/; [15] Jerome Packard has demonstrated that the recognition point of the Chinese word huŏchē ("train") occurs before huŏch-; [16] and a landmark study by Pienie Zwitserlood demonstrated that the recognition point of the Dutch word kapitein (captain) was at the vowel before the final /n/. [17]
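The recognition (uniqueness) point can likewise be sketched as the shortest prefix of a word that no other lexical entry shares. The mini-lexicon below uses ordinary spellings as stand-ins for phoneme strings and is an illustrative assumption, not an exhaustive word list:

```python
def uniqueness_point(word, lexicon):
    """Return the shortest prefix of `word` shared by no other entry."""
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(w != word and w.startswith(prefix) for w in lexicon):
            return prefix
    return word  # word is a prefix of another entry; no unique prefix exists

# Spellings standing in for phoneme strings (illustrative only).
lexicon = ["slander", "slant", "slang", "sly", "spaghetti", "spa", "spat"]
```

Against this toy lexicon, `uniqueness_point("slander", lexicon)` gives "sland", mirroring the observation that "slander" is recognized at the /d/.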

Since its original proposal, the model has been adjusted to allow for the role that context plays in helping the hearer rule out competitors, [9] and the fact that activation is "tolerant" to minor acoustic mismatches that arise because of coarticulation (a property by which language sounds are slightly changed by the sounds preceding and following them). [18]

Experimental evidence

Much evidence in favor of the cohort model has come from priming studies, in which a priming word is presented to a subject and closely followed by a target word, and the subject is asked to identify whether the target word is a real word; the theory behind the priming paradigm is that if a word is activated in the subject's mental lexicon, the subject will respond more quickly to the target word. [19] If the subject does respond more quickly, the target word is said to be primed by the priming word. Several priming studies have found that when a stimulus that does not reach the recognition point is presented, numerous target words are primed, whereas if a stimulus past the recognition point is presented, only one word is primed. For example, Pienie Zwitserlood's study of Dutch compared the words kapitein ("captain") and kapitaal ("capital" or "money"); in the study, the stem kapit- primed both boot ("boat", semantically related to kapitein) and geld ("money", semantically related to kapitaal), suggesting that both lexical entries were activated, whereas the full word kapitein primed only boot and not geld. [17] Furthermore, experiments have shown that in tasks where subjects must differentiate between words and non-words, reaction times are faster for longer words whose phonemic points of discrimination come earlier in the word. For example, when discriminating between "crocodile" and "dial", the recognition point of "crocodile" comes at the /d/, much earlier in the word than the /l/ sound in "dial". [20]

Later experiments refined the model. For example, some studies showed that "shadowers" (subjects who listen to auditory stimuli and repeat them as quickly as possible) could not shadow as quickly when words were jumbled so that they were meaningless; these results suggested that sentence structure and speech context also contribute to the process of activation and selection. [6]

Research in bilinguals has found that word recognition is influenced by the number of neighbors in both languages. [21]

References

  1. Marslen-Wilson, William D., and Alan Welsh (1978). "Processing Interactions and Lexical Access during Word Recognition in Continuous Speech." Cognitive Psychology, 10, 29–63.
  2. Kennison, Shelia (2019). Psychology of Language: Theories and Applications. Red Globe Press. ISBN 978-1137545268.
  3. The Free Dictionary.
  4. Fernandez, E. M., and Smith Cairns, H. (2011). Fundamentals of Psycholinguistics. Malden, MA: Wiley-Blackwell. ISBN 978-1-4051-9147-0.
  5. Altmann, 71.
  6. Altmann, 70.
  7. Marslen-Wilson, W. (1987). "Functional parallelism in spoken word recognition." Cognition, 25, 71–102.
  8. Gaskell, M. Gareth, and William D. Marslen-Wilson (1997). "Integrating Form and Meaning: A Distributed Model of Speech Perception." Language and Cognitive Processes, 12 (5/6), 613–656. doi:10.1080/016909697386646.
  9. Packard, 288.
  10. Harley, T. A. (2009). Psychology of Language: From Data to Theory. New York: Psychology Press.
  11. Ibrahim, Raphiq (2008). "Does Visual and Auditory Word Perception have a Language-Selective Input? Evidence from Word Processing in Semitic Languages." The Linguistics Journal, 3 (2).
  12. Goldwater, Sharon (2010). http://www.inf.ed.ac.uk/teaching/courses/cm/lectures/cm19_wordrec-2x2.pdf
  13. Altmann, 74.
  14. Taft, M., and Hambly, G. (1986). "Exploring the cohort model of spoken word recognition." Cognition, 22(3), 259–282.
  15. Brysbaert, Marc, and Ton Dijkstra (2006). "Changing views on word recognition in bilinguals." In Bilingualism and Second Language Acquisition, eds. Morais, J., and d'Ydewalle, G. Brussels: KVAB.
  16. Packard, 289.
  17. Altmann, 72.
  18. Altmann, 75.
  19. Packard, 295.
  20. Taft, 264.
  21. Van Heuven, W. J. B., Dijkstra, T., and Grainger, J. (1998). "Orthographic Neighborhood Effects in Bilingual Word Recognition." Journal of Memory and Language, 458–483.