Bilingual interactive activation plus (BIA+) is a model for understanding the process of bilingual language comprehension that consists of two interactive subsystems: the word identification subsystem and the task/decision subsystem. [1] It is the successor of the Bilingual Interactive Activation (BIA) model, [2] which was updated in 2002 to include phonological and semantic lexical representations, to revise the role of language nodes, and to specify the purely bottom-up nature of bilingual language processing.
The BIA+ model is one of many models defined on the basis of data from psycholinguistic and behavioral studies of how bilinguals manage their two languages during listening, reading, and speaking. BIA+ is now also supported by neuroimaging data linking it to more neurally inspired models, which focus on the brain areas and mechanisms involved in these tasks.
The two basic tools in these studies are event-related potentials (ERPs), which have high temporal resolution but low spatial resolution, and functional magnetic resonance imaging (fMRI), which typically has high spatial resolution and low temporal resolution. When used together, however, these two methods can generate a more complete picture of the time course and interactivity of bilingual language processing according to the BIA+ model. [1] Their results must nevertheless be interpreted carefully, as overlapping activation areas in the brain do not imply that there is no functional separation between the two languages at the neuronal or higher-order level. [3]
According to the BIA+ model shown in the figure, during word identification, the visual input activates the sublexical orthographic representations, which simultaneously activate both the orthographic whole-word lexical and the sublexical phonological representations. Both whole-word orthographic and phonological representations then activate the semantic representations and language nodes, which indicate membership in a particular language. All of this information is then used in the task/decision subsystem to carry out the remainder of the task at hand. The two subsystems are further described by the assumptions associated with them below.
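The bottom-up cascade described above can be sketched as a small graph traversal. This is only an illustrative reading of the model, not a published implementation: the node names and their connections below are a hypothetical encoding of the figure's levels, and a breadth-first walk stands in for the spread of activation.

```python
from collections import deque

# Hypothetical connectivity for the BIA+ word identification subsystem,
# encoding the levels described in the text.
LINKS = {
    "visual input": ["sublexical orthography"],
    "sublexical orthography": ["whole-word orthography", "sublexical phonology"],
    "sublexical phonology": ["whole-word phonology"],
    "whole-word orthography": ["semantics", "language nodes"],
    "whole-word phonology": ["semantics", "language nodes"],
    "semantics": [],
    "language nodes": [],
}

def activation_order(start="visual input"):
    """Breadth-first spread over LINKS: the order in which each
    representation level first becomes active under purely
    bottom-up flow (no feedback from the task/decision subsystem)."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in LINKS[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```

Running `activation_order()` confirms, for instance, that sublexical orthography becomes active before whole-word orthography, which in turn precedes semantics and the language nodes.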
The integrated lexicon assumption describes the interactivity of orthography (the visual representation of words or word parts), phonology (the auditory component of language processing), and semantics (the meaning representations of words). [4] This assumption was tested with orthographic neighbors, words of the same length that differ by only one letter (e.g. BALL and FALL). The number of target- and non-target-language neighbors influenced target word processing in both the first language (L1) and the second language (L2). [5] This cross-language neighborhood effect is thought to reflect co-activation of words regardless of the language they belong to, that is, lexical access that is language nonselective. Both the target and nontarget languages can be activated automatically and unconsciously even in a purely monolingual mode. This does not imply, however, that there may not be features unique to one language (e.g. the use of different alphabets) or that, at the semantic level, there are no shared features.
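The orthographic-neighbor definition used in these studies is concrete enough to state as code. The sketch below, with hypothetical word lists rather than stimuli from the cited experiments, shows how an integrated (cross-language) lexicon yields neighbors from both languages:

```python
def orthographic_neighbors(word, lexicon):
    """Words in `lexicon` of the same length as `word` that differ
    from it by exactly one letter (e.g. BALL and FALL)."""
    return sorted(
        w for w in lexicon
        if len(w) == len(word) and w != word
        and sum(a != b for a, b in zip(w, word)) == 1
    )

# Toy integrated lexicon: English and Spanish entries stored together,
# so neighbors are found across languages (nonselective access).
english = {"CASE", "CAST", "BALL", "FALL"}
spanish = {"CASA", "CASO", "COSA", "BOLA"}
integrated = english | spanish

neighbors = orthographic_neighbors("CASE", integrated)
# CASE has one within-language neighbor (CAST) and two
# cross-language neighbors (CASA, CASO)
```

Under the integrated lexicon assumption, all three neighbors are co-activated when CASE is read, whatever language the reader intends to use.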
The language nodes assumption states that language nodes, or tags, provide a representation of language membership based on information from the upstream orthographic and phonological word identification processes. According to the BIA+ model, these tags have no effect on the activation levels of word representations. [1] The activation of these nodes is postlexical: their existence enables bilingual individuals to avoid excessive interference from the nontarget language while they process one of their languages.
The parallel access assumption holds that lexical access is nonselective and that candidate words from both languages are activated in the bilingual brain when exposed to the same stimulus. For example, test subjects reading in their second language have been found to translate unconsciously into their first language. [6] N400 measurements show semantic priming effects in both languages, and an individual cannot consciously focus attention on only one language, even when told to ignore the other. [7] This language nonselective lexical access has been demonstrated not only during semantic activation across languages, but also at the orthographic and phonological levels.
The temporal delay assumption is based on the principle of resting-level activation, which reflects the frequency of word use by the bilingual: high-frequency words have high resting-level activations, and rarely used words have low resting-level activations. A high resting level is one that is less negative, i.e. closer to zero, the point of activation, and therefore requires less stimulation to become activated. Because the less commonly used words of L2 have lower resting-level activations, L1 words are likely to be activated before L2 words, as seen in N400 ERP patterns. [8]
This resting-level activation of words also reflects the proficiency of bilinguals and their frequency of usage of the two languages. When a bilingual's proficiency is lower in L2 than in L1, the activation of L2 lexical representations is further delayed, as more extensive or higher-level brain activation is necessary for language control. [4] Both low- and high-proficiency bilinguals show parallel activation of word representations; however, the less proficient language, L2, becomes active more slowly, contributing to the temporal delay.
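The arithmetic behind the temporal delay assumption can be made explicit with a toy accumulator. The numbers below are illustrative, not fitted parameters: integer units are used so the count of update cycles is exact, and the two resting levels simply stand for a frequent L1 word and a rarer L2 word.

```python
def cycles_to_threshold(resting_level, input_strength=1, threshold=0):
    """Count update cycles until activation climbs from its resting
    level to threshold (zero, the point of activation)."""
    activation, cycles = resting_level, 0
    while activation < threshold:
        activation += input_strength
        cycles += 1
    return cycles

# A frequent L1 word rests closer to zero than a rarer L2 word, so
# under identical input it reaches threshold in fewer cycles --
# the temporal delay assumption.
l1_cycles = cycles_to_threshold(resting_level=-4)   # high-frequency L1 word
l2_cycles = cycles_to_threshold(resting_level=-12)  # low-frequency L2 word
```

With the same input strength, the L1 word reaches threshold in 4 cycles and the L2 word in 12, reproducing the L1-before-L2 activation order in miniature.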
The locations of many of the word identification processing tasks have been determined with fMRI studies. Word retrieval is localized in Broca's area of the prefrontal cortex, [9] whereas storage of information is localized in the inferior temporal lobes. Globally, the same brain areas have been shown to be activated across L1 and L2 in highly proficient bilinguals. Some subtle differences between L1 and L2 activations emerge, though, when testing less proficient bilinguals.
The task/decision subsystem of the BIA+ model determines which actions must be executed for the task at hand based on the relevant information that becomes available after word identification processing. [1] This subsystem involves many of the executive processes, including monitoring and control, associated with the prefrontal cortex.
Action plans that meet the task at hand are executed by the task/decision system on the basis of activation information from the word identification subsystem. [7] Studies testing bilinguals with homographs showed that conflict between the target- and non-target-language readings of a homograph still produced a difference in activation relative to a control word, implying that bilinguals are not able to regulate activation in the word identification system. [10] Therefore, the action plans of the task/decision system have no direct influence on activations within the word identification subsystem.
The neural correlates of the task/decision subsystem consist of multiple components that map onto different areas of the prefrontal cortex responsible for executive control functions. For example, the general executive functions of language switching have been found to activate the anterior cingulate cortex and dorsolateral prefrontal cortex. [11] [12]
Translation, on the other hand, requires controlled action on language representations and has been associated with the left basal ganglia. [12] [13] The left caudate nucleus has been associated with control of the language in use, [14] and the left mid-prefrontal cortex with monitoring interference and suppressing competing responses between languages. [13] [15]
According to the BIA+ model, when a bilingual with English as their first language and Spanish as their second language translates the word advertencia from Spanish to English, several steps occur. The bilingual uses orthographic and phonological cues to differentiate this word from the similar English word advertisement. At the same time, however, the bilingual automatically derives the semantic meaning not only of the Spanish word advertencia, which means warning, but also of the English word advertisement, whose Spanish translation is publicidad.
This information is then stored in the bilingual's working memory and used in the task/decision system to determine which of the two translations best fits the task at hand. Since the original instructions were to translate from Spanish to English, the bilingual chooses the correct translation of advertencia, warning, rather than advertisement.
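The division of labor in this worked example can be sketched in a few lines. Everything below is hypothetical scaffolding: the mini-lexicon, the crude shared-prefix notion of orthographic similarity, and the selection rule are illustrative stand-ins, chosen only to show that identification is nonselective while selection happens downstream.

```python
# Hypothetical mini-lexicon; "gloss" holds the English rendering.
LEXICON = {
    "advertencia": {"language": "spanish", "gloss": "warning"},
    "advertisement": {"language": "english", "gloss": "advertisement"},
}

def identify_candidates(input_word, lexicon):
    """Word identification is bottom-up and nonselective: every entry
    orthographically similar to the input is activated, whatever its
    language. Similarity here is a crude shared six-letter prefix."""
    return [w for w in lexicon if w[:6] == input_word[:6]]

def task_decision(candidates, lexicon, source_language):
    """The task/decision subsystem selects among the activated
    candidates according to the task (translate from source_language)
    without feeding back into the identification subsystem."""
    for w in candidates:
        if lexicon[w]["language"] == source_language:
            return lexicon[w]["gloss"]

candidates = identify_candidates("advertencia", LEXICON)
translation = task_decision(candidates, LEXICON, "spanish")  # -> "warning"
```

Note that both lexical entries are activated by the input, mirroring the automatic cross-language co-activation in the example; only the task/decision step discards the English reading.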
While the BIA+ model shares several similarities with its predecessor, the BIA model, a few distinct differences exist between the two. First and most notable is the purely bottom-up nature of the BIA+ model, which assumes that information from the task/decision subsystem cannot influence the word identification subsystem, whereas the BIA model assumes that the two systems can fully interact.
Second, the language membership nodes of the BIA+ model do not affect the activation levels in the word identification system, whereas they play an inhibitory role in the BIA model.
Finally, participant expectations can potentially affect the task/decision system in the BIA+ model, whereas the BIA model assumes that expectations have no strong effect on the activation state of words. [1]
The BIA+ model has been supported by many quantitative neuroimaging studies, but more research is needed to strengthen it as a frontrunner among accepted models of bilingual language processing. In the task/decision system, the task components are well defined (e.g. translation, language switching), but the decision components involved in executing those tasks remain underspecified. The relationships among the components of this subsystem need further exploration to be fully understood.
Scientists are also considering the use of magnetoencephalography (MEG) in future studies. This technology would link spatial activation patterns with the temporal dynamics of the brain response more accurately than combining response data from the more limited ERP and fMRI methods.
Not only have studies suggested that the executive functioning of bilingualism extends beyond the language system, but bilinguals have also been shown to be faster processors who display fewer conflict effects than monolinguals in attentional tasks. [16] This research implies that there may be spillover effects of learning a second language on other areas of cognitive function that could be explored.
One future direction for theories of bilingual word recognition is the investigation of developmental aspects of bilingual lexical access. [17] Most studies have investigated highly proficient bilinguals; few have examined low-proficiency bilinguals or L2 learners. This direction should yield many educational applications.
Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension, production, and acquisition of language. As an interdisciplinary field, neurolinguistics draws methods and theories from fields such as neuroscience, linguistics, cognitive science, communication disorders and neuropsychology. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories, using aphasiology, brain imaging, electrophysiology, and computer modeling.
Brodmann area 45 (BA45) is part of the frontal cortex in the human brain. It is situated on the lateral surface, inferior to BA9 and adjacent to BA46.
Semantic memory is one of the two types of explicit memory. Semantic memory refers to general world knowledge that we have accumulated throughout our lives. This general knowledge is intertwined in experience and dependent on culture. Semantic memory is distinct from episodic memory, which is our memory of experiences and specific events that occur during our lives, and which we can recreate at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat. We can learn about new concepts by applying knowledge learned from things in the past. The counterpart to declarative or explicit memory is nondeclarative memory or implicit memory.
Language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.
The Levels of Processing model, created by Fergus I. M. Craik and Robert S. Lockhart in 1972, describes memory recall of stimuli as a function of the depth of mental processing. Deeper levels of analysis produce more elaborate, longer-lasting, and stronger memory traces than shallow levels of analysis. Depth of processing falls on a shallow to deep continuum. Shallow processing leads to a fragile memory trace that is susceptible to rapid decay. Conversely, deep processing results in a more durable memory trace.
Tip of the tongue is the phenomenon of failing to retrieve a word or term from memory, combined with partial recall and the feeling that retrieval is imminent. The phenomenon's name comes from the saying, "It's on the tip of my tongue." The tip of the tongue phenomenon reveals that lexical access occurs in stages.
Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language. A substantial number of bimodal bilinguals are Children of Deaf Adults or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language and culture, but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered, in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s. However, bimodal bilinguals share many of the same traits as traditional bilinguals, as well as differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience similar neurological benefits as do unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases, such as Alzheimer's and dementia.
Negative priming is an implicit memory effect in which prior exposure to a stimulus unfavorably influences the response to the same stimulus. It falls under the category of priming, which refers to a change in the response towards a stimulus due to a subconscious memory effect. Negative priming describes the slow and error-prone reaction to a stimulus that was previously ignored. For example, imagine a subject trying to pick a red pen from a pen holder. The red pen becomes the target of attention, so the subject responds by moving their hand towards it. At the same time, they mentally block out all other pens as distractors to aid in homing in on just the red pen. After repeatedly picking the red pen over the others, switching to the blue pen results in a momentary delay in picking the pen out. The slow reaction due to the change of the distractor stimulus into a target stimulus is called the negative priming effect.
Language production is the production of spoken or written language. In psycholinguistics, it describes all of the stages between having a concept to express and translating that concept into linguistic form. These stages have been described in two types of processing models: the lexical access models and the serial models. Through these models, psycholinguists can look into how speech is produced in different ways, such as when the speaker is bilingual. Psycholinguists learn more about these models and different kinds of speech by using language production research methods that include collecting speech errors and elicited production tasks.
Deep dyslexia is a form of dyslexia that disrupts reading processes. Deep dyslexia may occur as a result of a head injury, stroke, disease, or operation. This injury results in the occurrence of semantic errors during reading and the impairment of nonword reading.
Priming is a phenomenon whereby exposure to one stimulus influences a response to a subsequent stimulus, without conscious guidance or intention. For example, the word NURSE is recognized more quickly following the word DOCTOR than following the word BREAD. Priming can be perceptual, associative, repetitive, positive, negative, affective, semantic, or conceptual. Research has yet to firmly establish the duration of priming effects, though their onset can be almost instantaneous.
Various aspects of multilingualism have been studied in the field of neurology. These include the representation of different language systems in the brain, the effects of multilingualism on the brain's structural plasticity, aphasia in multilingual individuals, and bimodal bilinguals. Neurological studies of multilingualism are carried out with functional neuroimaging, electrophysiology, and through observation of people who have suffered brain damage.
The mental lexicon is defined as a mental dictionary that contains information regarding a word's meaning, pronunciation, syntactic characteristics, and so on.
Serial memory processing is the act of attending to and processing one item at a time. This is usually contrasted against parallel memory processing, which is the act of attending to and processing all items simultaneously.
Embodied cognition occurs when an organism's sensorimotor capacities, body, and environment play an important role in thinking. The way a person's body and surroundings interact also allows specific brain functions to develop and, later, to guide action. This means that not only does the mind influence the body's movements, but the body also influences the abilities of the mind, a relationship termed the bi-directional hypothesis. Three generalizations are assumed to hold for embodied cognition: a person's motor system is activated when (1) they observe manipulable objects, (2) they process action verbs, and (3) they observe another individual's movements.
The visual word form area (VWFA) is a functional region of the left fusiform gyrus and surrounding cortex that is hypothesized to be involved in identifying words and letters from lower-level shape images, prior to association with phonology or semantics. Because the alphabet is relatively new in human evolution, it is unlikely that this region developed as a result of selection pressures related to word recognition per se; however, this region may be highly specialized for certain types of shapes that occur naturally in the environment and are therefore likely to surface within written language.
Bilingual lexical access is an area of psycholinguistics that studies the activation or retrieval process of the mental lexicon for bilingual people.
The neurocircuitry that underlies executive function processes and emotional and motivational processes are known to be distinct in the brain. However, there are brain regions that show overlap in function between the two cognitive systems. Brain regions that exist in both systems are interesting mainly for studies on how one system affects the other. Examples of such cross-modal functions are emotional regulation strategies such as emotional suppression and emotional reappraisal, the effect of mood on cognitive tasks, and the effect of emotional stimulation of cognitive tasks.
Charles Perfetti is the director of, and Senior Scientist for, the Learning and Research Development Center at the University of Pittsburgh. His research is centered on the cognitive science of language and reading processes, including but not limited to lower- and higher-level lexical and syntactic processes and the nature of reading proficiency. He conducts cognitive behavioral studies involving ERP, fMRI and MEG imaging techniques. His goal is to develop a richer understanding of how language is processed in the brain.
Embodied bilingual language, also known as L2 embodiment, is the idea that people mentally simulate their actions, perceptions, and emotions when speaking and understanding a second language (L2), just as with their first language (L1). It is closely related to embodied cognition and embodied language processing, both of which refer only to native language thinking and speaking. An example of embodied bilingual language would be a situation in which an L1 English speaker learning Spanish as a second language hears the word rápido ("fast") in Spanish while taking notes and then proceeds to take notes more quickly.