Bimodal bilingualism

Bimodal bilingualism is an individual's or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language uses a vocal-aural modality, whereas a sign language uses a visual-spatial modality. [1] A substantial number of bimodal bilinguals are children of deaf adults (CODAs) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture, referred to as Deaf, [2] but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". [3] In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered, in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s (in discussions of bimodal bilingualism in the U.S., the two languages involved are generally ASL and English). However, bimodal bilinguals share many of the same traits as traditional bilinguals (those with competency in at least two spoken languages), while also differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience neurological benefits similar to those of unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity, as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases such as Alzheimer's disease and dementia.

Neurological implications and effects of bimodal bilingualism

Most modern neurological studies of bilingualism employ functional neuroimaging techniques to uncover the neurological underpinnings of multilingualism and how multilingualism benefits the brain. Neuroimaging and other neurological studies have demonstrated in recent years that multilingualism has a significant impact on the human brain. The mechanisms required by bilinguals to code-switch (that is, alternate rapidly between multiple languages within a conversation) not only demonstrate increased connectivity and density of the neural network in multilinguals, but also appear to provide protection against damage due to age and age-related pathologies, such as Alzheimer's. [4] Multilingualism, especially bimodal multilingualism, can help slow the process of cognitive decline in aging. This is thought to result from the increased workload that the executive system, housed mostly in the frontal cortex, must assume in order to successfully control the use of multiple languages at once. This means that the cortex must be more finely tuned, which results in a "neural reserve" that then has neuroprotective benefits.

Gray matter volume (GMV) has been shown to be significantly preserved in bimodal bilinguals as compared to monolinguals in multiple brain areas, including the hippocampus, amygdala, anterior temporal lobes, and left insula. Similarly, neuroimaging studies comparing monolinguals, unimodal bilinguals, and bimodal bilinguals provide evidence that deaf signers exhibit brain activation in patterns different from those of hearing signers, especially with regard to the left superior temporal sulcus. In deaf signers, activation of the superior temporal sulcus is highly lateralized to the left side during facial recognition tasks, while this lateralization is not present in hearing, bimodal signers. [5] Bilinguals also require an effective and fast neural control system that allows them to select and control their languages even while code-switching rapidly. Evidence indicates that the left caudate nucleus, a centrally located brain structure near the thalamus and the basal ganglia, is an important part of this mechanism, as bilinguals tend to have significantly increased GMV and activation in this region compared to monolinguals, especially during active code-switching tasks. [6] As implied by the significant preservation of gray matter in the hippocampi (an area of the brain largely associated with memory consolidation and higher cognitive functions, such as decision-making) of bimodal bilinguals, areas of the brain that help control phonological working memory also tend to show higher activation in individuals who are proficient in two or more languages. There is also evidence that the age at which an individual acquires a second language may play a significant role in the varying brain functions associated with bilingualism. For example, individuals who acquired their second language early (before the age of 10) tend to have drastically different activation patterns than late learners.
However, late learners who achieve full proficiency in their second language tend to show similar patterns of activation during auditory tasks regardless of which language is being used, whereas early learners tend to activate different brain areas depending upon which language is being used. [7]

Along with the neuroprotective benefits that help to prevent the onset of age-related cognitive issues such as dementia, bimodal bilinguals also exhibit a slightly different pattern of language organization in the brain. While hearing bimodal bilinguals showed less parietal activation than deaf signers when asked to use only sign language, those same bimodal bilinguals demonstrated greater left parietal activation than did monolinguals. [8] Parietal activation is not typically associated with language production but rather with motor activity. Therefore, it is logical that bimodal bilinguals, when switching between speech- and sign-based language, stimulate their left parietal areas as a result of their increased need to combine motor action with language production. Moreover, it has been shown that a bilingual's two languages are activated in parallel, or simultaneously, during language use. This activation occurs whenever the bilingual uses language, regardless of whether the L1 or the L2 is being used, and regardless of which language is dominant. The same parallel activation occurs in any language modality, whether the language is written, signed, or spoken. [9]

A 2021 study used event-related potential (ERP) to track the brain's language switch in bimodal bilinguals and measured a brain response pattern not found in unimodal bilinguals. [10]

Similarities to oral-language bilingualism

Diverse range of language competency

To be defined as bilingual, an individual need not have perfect fluency or equal skill in both languages. [11] Bimodal bilinguals, like oral-language bilinguals, exhibit a wide range of language competency in their first and second languages. For Deaf people (the majority of bimodal bilinguals in the U.S.), level of competency in ASL and English may be influenced by factors such as degree of hearing loss, whether the individual is prelingually or post-lingually deaf, the style of and language used in their education, and whether the individual comes from a hearing or Deaf family. [12] Historically, assessments of bilingual children measured proficiency in only one of their languages, a design flaw that linguists and educators have identified in more recent research. That research concludes that most bilingual children achieve phonological, lexical, and grammatical milestones at the same rate as monolingual children. The same pattern has been found when comparing unimodal and bimodal bilinguals: in a study by Fish & Morford (2012), bimodal bilingual CODAs demonstrated the same rate of success in these areas as their unimodal bilingual peers. [13]

Regardless of English competency in other areas, no Deaf individual is likely to comprehend spoken English in the same way as a hearing person, because only a small percentage of English phonemes are clearly visible through lip reading. Additionally, many Deaf bilinguals who are fluent in written English choose not to speak it because of the general social unacceptability of their voices, or because they are unable to monitor factors like pitch and volume. [12] The simultaneous production of speech and sign is referred to as code-blending, a bimodal form of code-switching. Using the fundamental concepts of Minimalism and Distributed Morphology, one study examined this phenomenon through the Synthesis model, drawing on WH-question data. The model accommodates the second, signed modality and is designed to capture a variety of data from, and to predict bilingual effects for, any two language pairs. [14]

Denial of their own bilingualism

Like hearing oral-language bilinguals, Deaf bimodal bilinguals generally "do not judge themselves to be bilingual". [15] Whether because they do not believe the sign language to be a legitimate language separate from the majority oral language, or because they do not consider themselves sufficiently fluent in one of their languages, denial of one's own bilingualism is a common and well-known phenomenon among bilinguals, be they hearing or Deaf. [15]

Everyday shifts along the language mode continuum

Bimodal bilinguals, Deaf or hearing, move in their day-to-day lives among various points on the language mode continuum depending on the situation and on the language competency and skills of those with whom they are interacting. For example, when conversing with a monolingual, all bilinguals will restrict themselves to the language of the individual with whom they are conversing; when interacting with another bilingual, however, they can use a mixture of the two common languages. [15] While bimodal bilinguals who acquired both languages early have more than one modality in which to communicate, they are just as susceptible as unimodal bilinguals to confusing domains and using the "wrong" language in a given situation. [16] Code-switching is a common phenomenon among bilinguals; for bimodal bilinguals, an equivalent phenomenon is code-blending, which "involves simultaneous production of parts of an utterance in speech and sign." Examples of code-blending would be using ASL word order in a spoken English utterance, or producing an ASL classifier while simultaneously speaking the equivalent English phrase. [16] Like unimodal bilinguals, bimodal bilinguals will activate, deactivate, or adjust their use of each language according to the domain. For ASL–English bilingualism, "deaf students' vocabulary knowledge in each language will be related to the contexts where the two languages are used." That is, vocabulary and topics learned and discussed in ASL will be recognized and recalled in ASL, and "English vocabulary will reflect the contexts where English is accessible to deaf students." [17]

Unequal social status of the languages involved

As is the case in many situations of oral-language bilingualism, bimodal bilingualism in the U.S. involves two languages with vastly different social status. ASL has traditionally not even had the status of being considered a legitimate language, and Deaf children have been prevented from learning it through such "methods" as having their hands tied together. Hearing parents of Deaf children have historically been advised not to allow their children to learn ASL, having been told it would prevent the acquisition of English. Although Deaf children's early exposure to ASL has now been shown to enhance their aptitude for acquiring English competency, the unequal social status of ASL and English, and of sign languages and oral languages, remains. [12] [18] Consequently, CODAs experience a wide range of both positive and negative impacts depending on their individual circumstances.

Differences from oral-language bilingualism

Lack of societal acknowledgment of bilingual community status

Because linguists did not recognize ASL as a true language until the second half of the twentieth century, very little acknowledgment, attention, or study has been devoted to the bilingual status of the American Deaf community. [15] Deaf people are often "still seen by many as monolingual in the majority language whereas in fact many are bilingual in that language and in sign". [15]

Bilingual language mode: Contact signing

Because almost all members of the American Deaf community are to some extent bilingual in ASL and English, it is rare for a Deaf person to find themselves conversing with someone who is monolingual in ASL. Therefore, unless an American Deaf person is communicating with someone who is monolingual in English (the majority language), he or she can expect to be conversing in a "bilingual language mode". [15] The result of this prolonged bilingual contact and mixing between a sign language and an oral language is known as contact sign. [12] Deaf children and their hearing parents often communicate through several modalities, such as oral-aural and visual-gestural, and this sustained mixing of ASL and spoken English gives rise to contact signing, which combines ASL and spoken English expressions in complex layers and is a common occurrence in the Deaf community. [19]

Unlikelihood of large-scale language shift

Language shift "occurs when speakers in a community give up speaking their language and take up the use of another in its place". [3] ASL in particular, and sign languages in general, are undeniably influenced by their close contact with English or other oral languages, as evidenced by phenomena such as "loan signs" or lexicalized fingerspelling (the sign language equivalent of loanwords), and through the influence of Contact Sign. However, due to the physical fact of deafness or hearing loss, deaf people generally cannot acquire and speak the majority language in the same way or with the same competency that the hearing population does. Simultaneously, Deaf people still often have a need or desire to learn some form of English in order to communicate with family members and the majority culture. [18] Thus, Deaf communities and individuals, in contrast to many hearing bilingual communities and individuals, will tend to "remain bilingual throughout their lives and from generation to generation". [15]

Comprehension and Expression of Language

Unlike unimodal bilinguals, bimodal bilinguals are able to produce and perceive a spoken and a signed language simultaneously; unimodal individuals can perceive only one spoken language at a given time and cannot process a signed language concurrently unless they are proficient in ASL. [20] However, those who produce and perceive a spoken and a signed language simultaneously demonstrate a slower speech rate, decreased lexical richness, and lower syntactic complexity compared with the speech-only condition. [20] In addition, ASL users rely more on pragmatic inferences and background context than on syntactic information. [21]

Bimodal Bilingual Education

In more recent research related to bilingualism and ASL, early exposure and adequate access to a first language have proven imperative to children's development of language, academic and social opportunities, and critical thinking and reasoning skills, all of which can be "applied to literacy development in a spoken language (such as English)." [13] This research emphasizes the need for more additive models of bilingual education, as opposed to subtractive or transitional models, which are designed to shift the learner away from the native language toward complete use of and reliance on the majority language. For deaf children, subtractive models of bilingual education, combined with the lack of a native-language foundation, typically result in language deprivation and delayed cognitive development. In contrast, the aim of the maintenance model, an additive model, is "to support the development of the native language while also fostering acquisition and use of the majority language." This model is embedded in bimodal, bilingual education and may include "comparative and integrative pedagogic strategies such as translation, fingerspelling, and chaining/sandwiching strategies." [22]

Simultaneous communication, or SimCom, is a method of signing that represents English in its structure and elements, typically following English word order while using one sign per word. However, research has shown that this method of communication is not ideal for bilingual language learning. In a study of bimodal bilingual teachers' and students' vocabulary levels, the results revealed a "slower speech rate, lower lexical richness, and lower syntactic complexity in the SimCom [teaching] condition compared with the speech-only condition." These findings suggest that "the [teachers'] production of the less dominant language (ASL) during SimCom entails inhibition of the dominant [spoken English] language relative to the speech-only condition." The study also acknowledges that SimCom is a "complex communication unit that cannot be reduced to the combination of two languages." [20]

The methodologies, strategies, and supports used in bimodal bilingual education, as well as the language background and linguistic capital of bimodal bilingual educators, are key to the language competence that bimodal bilingual learners achieve.

Sign–print bilingualism

The written form of a language can be considered another modality. Sign languages do not have widely accepted written forms, so deaf individuals learn to read and write an oral language. This is known as sign–print bilingualism: a deaf individual has fluency in (at least) one sign language as their primary language and has literacy skills in the written form of (at least) one oral language, without access to the resources of the oral language that are gained through auditory stimuli. [23] Orthographic systems employ the morphology, syntax, lexical choices, and often phonetic representation of their target language in at least superficial ways; one must learn these features of the target language in order to read or write it. In communities with standardized education for the deaf, such as the United States and the Netherlands, deaf individuals do gain skills in reading and writing the oral language of the community. In such cases, bilingualism is achieved between a sign language and the written form of the community's oral language. In this view, all sign–print bilinguals are bimodal bilinguals, but not all bimodal bilinguals are sign–print bilinguals.

How deaf children learn to read

Children who are deaf and employ a sign language as their primary language learn to read in slightly different ways than their hearing counterparts. Much as speakers of oral languages most frequently achieve spoken fluency before they learn to read and write, the most successful profoundly deaf readers first learn to communicate in a sign language. [24] Research suggests that there is a mapping process in which features from the sign language are accessed as a basis for the written language, similar to the way hearing unimodal bilinguals access their primary language when communicating in their second language. [25] [26] Among profoundly deaf ASL signers, fluency in ASL is the best predictor of proficiency in written English. [24] In addition, highly proficient signing deaf children use more evaluative devices when writing than less proficient signing deaf children, and the relatively frequent omission of articles when proficient signers write in English may reflect a stage in which the transfer effect (which normally helps deaf children in reading) produces a mix of the morphosyntactic systems of written English and ASL. [25] Deaf children thus appear to map the new morphology, syntax, and lexical choices of their written language onto the existing structures of their primary sign language. One study compared deaf and hearing readers' responses to syntactic manipulations using self-paced reading methods; the stimuli included animate and inanimate subjects, actives and passives, and subject and object relatives. Hearing readers showed higher comprehension accuracy than deaf readers, but deaf readers read and grasped sentences faster, leading to the conclusion that self-paced reading is a better method for deaf readers. [17]

Using phonological information

There are mixed results regarding how important phonological information is to deaf individuals when reading, and when that information is obtained. Alphabets, abugidas, abjads, and syllabaries all seem to require the reader/writer to know something about the phonology of the target language prior to learning the system. Profoundly deaf children do not have access to the same auditory base that hearing children do. [24] Orally trained deaf children do not always use phonological information in reading, word recognition, or homophone tasks; however, deaf signers who are not orally trained do utilize phonological information in word-rhyming tasks. [24] Furthermore, when performing tasks with phonologically confusable initial sounds, hearing readers made more errors than deaf readers. [27] Yet when given sentences that are sublexically confusable when translated into ASL, deaf readers made more errors than hearing readers. [27] The body of literature clearly shows that skilled deaf readers can employ phonological skills, even if they do not do so all the time; without additional longitudinal studies it remains uncertain whether a profoundly deaf person must know something about the phonology of the target language to become a skilled reader (less than 75% of the deaf population) or whether, by becoming a skilled reader, a deaf person learns how to employ the phonological skills of the target language. [24]

Pedagogical challenges for sign–print bilinguals

In 2012, "About one in five deaf students who graduate from high school have reading skills at or below the second grade level; about one in three deaf students who graduate from high school have reading skills between the second and fourth grade level. Compared to deaf students, hard of hearing students (i.e., those with mild to moderate hearing loss) fare better overall, but even mild hearing losses can create significant challenges for developing reading skills". [28] These concerning numbers are generally the result of varying levels of early language exposure. Most deaf children are born to hearing parents, which usually leaves a deficit in their language exposure and development compared to children whose parents use the same modality they do to communicate. These children acquire a wide range of proficiency in a first language, which then affects their ability to become proficient in a second (though sometimes effectively a first) language in the written modality. [24] Children exposed to Manually Coded English (MCE) as their primary form of communication show lower literacy levels than their ASL-signing peers. However, in countries such as Sweden, which has adopted a bilingual–bicultural policy in its schools for the deaf, literacy rates are higher than in school systems favoring an oral tradition. [23]

See also

Related Research Articles

<span class="mw-page-title-main">American Sign Language</span> Sign language used predominately in the United States

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States of America and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues, in different locations near the mouth to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips. This allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.

Simultaneous communication, SimCom, or sign supported speech (SSS) is a technique sometimes used by deaf, hard-of-hearing or hearing sign language users in which both a spoken language and a manual variant of that language are used simultaneously. While the idea of communicating using two modes of language seems ideal in a hearing/deaf setting, in practice the two languages are rarely relayed perfectly. Often the native language of the user is the language that is strongest, while the non-native language degrades in clarity. In an educational environment this is particularly difficult for deaf children as a majority of teachers who teach the deaf are hearing. Results from surveys taken indicate that communication for students is indeed signing, and that the signing leans more toward English rather than ASL.

A contact sign language, or contact sign, is a variety or style of language that arises from contact between deaf individuals using a sign language and hearing individuals using an oral language. Contact languages also arise between different sign languages, although the term pidgin rather than contact sign is used to describe such phenomena.

Simultaneous bilingualism is a form of bilingualism that takes place when a child becomes bilingual by learning two languages from birth. According to Annick De Houwer, in an article in The Handbook of Child Language, simultaneous bilingualism takes place in "children who are regularly addressed in two spoken languages from before the age of two and who continue to be regularly addressed in those languages up until the final stages" of language development. Both languages are acquired as first languages. This is in contrast to sequential bilingualism, in which the second language is learned not as a native language but a foreign language.

Metalinguistics is the branch of linguistics that studies language and its relationship to other cultural behaviors. It is the study of dialogue relationships between units of speech communication as manifestations and enactments of co-existence. Jacob L. Mey in his book, Trends in Linguistics, describes Mikhail Bakhtin's interpretation of metalinguistics as "encompassing the life history of a speech community, with an orientation toward a study of large events in the speech life of people and embody changes in various cultures and ages."

Bilingualism, a subset of multilingualism, means having proficiency in two or more languages. A bilingual individual is traditionally defined as someone who understands and produces two or more languages on a regular basis. A bilingual individual's initial exposure to both languages may start in early childhood, e.g. before age 3, but exposure may also begin later in life, in monolingual or bilingual education. Equal proficiency in a bilingual individuals' languages is rarely seen as it typically varies by domain. For example, a bilingual individual may have greater proficiency for work-related terms in one language, and family-related terms in another language.

Ellen Bialystok, OC, FRSC is a Canadian psychologist and professor. She carries the rank of Distinguished Research Professor at York University, in Toronto, where she is director of the Lifespan Cognition and Development Lab, and is also an associate scientist at the Rotman Research Institute of the Baycrest Centre for Geriatric Care.

Bilingual–Bicultural or Bi-Bi deaf education programs use sign language as the native, or first, language of Deaf children. In the United States, for example, Bi-Bi proponents claim that American Sign Language (ASL) should be the natural first language for deaf children in the United States, although the majority of deaf and hard of hearing being born to hearing parents. In this same vein, the spoken or written language used by the majority of the population is viewed as a secondary language to be acquired either after or at the same time as the native language.

<span class="mw-page-title-main">Laura-Ann Petitto</span> American psychologist and neuroscientist (born c. 1954)

Laura-Ann Petitto is a cognitive neuroscientist and a developmental cognitive neuroscientist known for her research and scientific discoveries involving the language capacity of chimpanzees, the biological bases of language in humans, especially early language acquisition, early reading, and bilingualism, bilingual reading, and the bilingual brain. Significant scientific discoveries include the existence of linguistic babbling on the hands of deaf babies and the equivalent neural processing of signed and spoken languages in the human brain. She is recognized for her contributions to the creation of the new scientific discipline, called educational neuroscience. Petitto chaired a new undergraduate department at Dartmouth College, called "Educational Neuroscience and Human Development" (2002-2007), and was a Co-Principal Investigator in the National Science Foundation and Dartmouth's Science of Learning Center, called the "Center for Cognitive and Educational Neuroscience" (2004-2007). At Gallaudet University (2011–present), Petitto led a team in the creation of the first PhD in Educational Neuroscience program in the United States. Petitto is the Co-Principal Investigator as well as Science Director of the National Science Foundation and Gallaudet University’s Science of Learning Center, called the "Visual Language and Visual Learning Center (VL2)". Petitto is also founder and Scientific Director of the Brain and Language Laboratory for Neuroimaging (“BL2”) at Gallaudet University.

Neuroscience of multilingualism is the study of multilingualism within the field of neurology. These studies include the representation of different language systems in the brain, the effects of multilingualism on the brain's structural plasticity, aphasia in multilingual individuals, and bimodal bilinguals. Neurological studies of multilingualism are carried out with functional neuroimaging, electrophysiology, and through observation of people who have suffered brain damage.

<span class="mw-page-title-main">Bilingual memory</span>

Bilingualism is the regular, fluent use of two languages, and bilinguals are those individuals who need and use two languages in their everyday lives. A person's bilingual memories depend heavily on the person's fluency, the age at which the second language was acquired, and proficiency in both languages. High proficiency provides mental flexibility across all domains of thought and leads bilinguals to adopt strategies that accelerate cognitive development. People who are bilingual integrate and organize the information of two languages, which creates advantages in many cognitive abilities, such as intelligence, creativity, analogical reasoning, classification skills, problem solving, learning strategies, and thinking flexibility.

Viorica Marian is a Moldovan-born American psycholinguist, cognitive scientist, and psychologist known for her research on bilingualism and multilingualism. She is the Ralph and Jean Sundin Endowed Professor of Communication Sciences and Disorders, and Professor of Psychology, at Northwestern University. Marian is the Principal Investigator of the Bilingualism and Psycholinguistics Research Group. She received her PhD in psychology from Cornell University and master's degrees from Emory University and Cornell University. Marian studies language, cognition, the brain, and the consequences of knowing more than one language for linguistic, cognitive, and neural architectures.

The sociolinguistics of sign languages is the application of sociolinguistic principles to the study of sign languages. The study of sociolinguistics in the American Deaf community did not start until the 1960s. Until recently, the study of sign language and sociolinguistics existed in two separate domains. Nonetheless, it is now clear that many sociolinguistic aspects do not depend on modality and that the combined examination of sociolinguistics and sign language offers countless opportunities to test and understand sociolinguistic theories. The sociolinguistics of sign languages focuses on the relationship between social variables and linguistic variables and their effect on sign languages. Social variables external to language include age, region, social class, ethnicity, and sex. External factors are social by nature and may correlate with the behavior of the linguistic variable. The choice among internal linguistic variant forms is systematically constrained by a range of factors at both the linguistic and the social levels. The internal variables are linguistic in nature: a sound, a handshape, a syntactic structure. What makes the sociolinguistics of sign languages different from the sociolinguistics of spoken languages is that sign languages have several variables, both internal and external to the language, that are unique to the Deaf community. Such variables include the audiological status of a signer's parents, age of acquisition, and educational background. Perceived differences in socioeconomic status between "grassroots" deaf people and middle-class deaf professionals exist, but these have not been studied in a systematic way: "The sociolinguistic reality of these perceptions has yet to be explored". Many variations in dialects correspond to or reflect the values of particular identities of a community.

Sign language refers to any natural language which uses visual gestures produced by the hands and body language to express meaning. The brain's left side is the dominant side utilized for producing and understanding sign language, just as it is for speech. In 1861, Paul Broca studied patients with the ability to understand spoken languages but the inability to produce them. The damaged area was named Broca's area, and located in the left hemisphere’s inferior frontal gyrus. Soon after, in 1874, Carl Wernicke studied patients with the reverse deficits: patients could produce spoken language, but could not comprehend it. The damaged area was named Wernicke's area, and is located in the left hemisphere’s posterior superior temporal gyrus.

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference from oral languages in that sign-language phonemes are not based on sound, and are spatial in addition to being temporal, they fulfill the same role as phonemes in oral languages.

Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth, and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes. However, profoundly deaf children who receive cochlear implants and auditory habilitation early in life often achieve expressive and receptive language skills within the norms of their hearing peers; age at implantation is strongly and positively correlated with speech recognition ability. Early access to language, whether through signed language or technology, has been shown to prepare children who are deaf to achieve fluency in literacy skills.

Black American Sign Language (dialect of American Sign Language)

Black American Sign Language (BASL) or Black Sign Variation (BSV) is a dialect of American Sign Language (ASL) used most commonly by deaf African Americans in the United States. The divergence from ASL was influenced largely by the segregation of schools in the American South. Like other schools at the time, schools for the deaf were segregated based upon race, creating two language communities among deaf signers: black deaf signers at black schools and white deaf signers at white schools. As of the mid-2010s, BASL was still used by signers in the South despite public schools having been legally desegregated since 1954.

Language deprivation in deaf and hard-of-hearing children is a delay in language development that occurs when sufficient exposure to language, spoken or signed, is not provided in the first few years of a deaf or hard-of-hearing child's life, often called the critical or sensitive period. Early intervention, parental involvement, and other resources all work to prevent language deprivation. Children who experience limited access to language—spoken or signed—may not develop the necessary skills to successfully assimilate into the academic learning environment. There are various educational approaches for teaching deaf and hard-of-hearing individuals. Decisions about language instruction depend on a number of factors, including the extent of hearing loss, the availability of programs, and family dynamics.

Karen Denise Emmorey is a linguist and cognitive neuroscientist known for her research on the neuroscience of sign language and what sign languages reveal about the brain and human languages more generally. Emmorey holds the position of Distinguished Professor in the School of Speech, Language, and Hearing Sciences at San Diego State University, where she directs the Laboratory for Language and Cognitive Neuroscience and the Center for Clinical and Cognitive Neuroscience.

References

  1. "What is sign language and what is not". www.handspeak.com. Retrieved 2022-03-13.
  2. "The Deaf Community: An Introduction". National Deaf Center. 2017-09-27. Retrieved 2022-03-13.
  3. Ann J (2001). "Bilingualism and Language Contact" (PDF). In Lucas C (ed.). The Sociolinguistics of Sign Languages. Cambridge: Cambridge University Press. pp. 33–60. ISBN 978-0-521-79137-3. OCLC 441595914.
  4. Li L, Abutalebi J, Emmorey K, Gong G, Yan X, Feng X, et al. (August 2017). "How bilingualism protects the brain from aging: Insights from bimodal bilinguals". Human Brain Mapping. 38 (8): 4109–4124. doi:10.1002/hbm.23652. PMC 5503481. PMID 28513102.
  5. Emmorey K, McCullough S (2009). "The bimodal bilingual brain: effects of sign language experience". Brain and Language. 109 (2–3): 124–132. doi:10.1016/j.bandl.2008.03.005. PMC 2680472. PMID 18471869.
  6. Zou L, Ding G, Abutalebi J, Shu H, Peng D (October 2012). "Structural plasticity of the left caudate in bimodal bilinguals". Cortex. 48 (9): 1197–1206. doi:10.1016/j.cortex.2011.05.022. PMID 21741636. S2CID 206984201.
  7. Abutalebi J, Cappa SF, Perani D (August 2001). "The bilingual brain as revealed by functional neuroimaging". Bilingualism: Language and Cognition. 4 (2): 179–190. doi:10.1017/s136672890100027x. S2CID 96477886.
  8. Kovelman I, Shalinsky MH, Berens MS, Petitto LA (2014). "Words in the bilingual brain: an fNIRS brain imaging investigation of lexical processing in sign-speech bimodal bilinguals". Frontiers in Human Neuroscience. 8: 606. doi:10.3389/fnhum.2014.00606. PMC 4139656. PMID 25191247.
  9. Kroll JF, Dussias PE, Bice K, Perrotti L (2015). "Bilingualism, Mind, and Brain". Annual Review of Linguistics. 1 (1): 377–394. doi:10.1146/annurev-linguist-030514-124937. PMC 5478196. PMID 28642932.
  10. Declerck M, Meade G, Midgley KJ, Holcomb PJ, Roelofs A, Emmorey K (October 2021). "Language control in bimodal bilinguals: Evidence from ERPs". Neuropsychologia. 161: 108019. doi:10.1016/j.neuropsychologia.2021.108019. PMID 34487737. S2CID 237407166.
  11. Savic J (1996). Code-Switching: Theoretical and Methodological Issues. Belgrade: Belgrade University Press. ISBN 978-8680267210.
  12. Lucas C, Valli C (1992). Language Contact in the American Deaf Community. San Diego: Academic Press. ISBN 978-0-12-458040-4.
  13. Traxler MJ, Corina DP, Morford JP, Hafer S, Hoversten LJ (January 2014). "Deaf readers' response to syntactic complexity: evidence from self-paced reading". Memory & Cognition. 42 (1): 97–111. doi:10.3758/s13421-013-0346-1. PMC 3864115. PMID 23868696.
  14. Lillo-Martin D, de Quadros RM, Pichler DC (2016). "The Development of Bimodal Bilingualism: Implications for Linguistic Theory". Linguistic Approaches to Bilingualism. 6 (6): 719–755. doi:10.1075/lab.6.6.01lil. PMC 5461974. PMID 28603576.
  15. Grosjean F (1992). "The Bilingual and Bicultural Person in the Hearing and Deaf World" (PDF). Sign Language Studies. 77: 307–320. doi:10.1353/sls.1992.0020. S2CID 144263426.
  16. Hill JC, Lillo-Martin DC, Wood SK (December 2018). Sign Languages: Structures and Contexts (1st ed.). New York: Routledge. doi:10.4324/9780429020872. ISBN 978-0-429-02087-2. S2CID 189700971.
  17. Traxler MJ, Corina DP, Morford JP, Hafer S, Hoversten LJ (January 2014). "Deaf readers' response to syntactic complexity: evidence from self-paced reading". Memory & Cognition. 42 (1): 97–111. doi:10.3758/s13421-013-0346-1. PMC 3864115. PMID 23868696.
  18. Davis J (January 1989). "Distinguishing language contact phenomena in ASL interpretation". In Lucas C (ed.). The Sociolinguistics of the Deaf Community. New York: Academic Press. pp. 85–102. doi:10.1016/B978-0-12-458045-9.50010-0. ISBN 978-0-12-458045-9.
  19. Berent GP (2012). "Sign Language–Spoken Language Bilingualism and the Derivation of Bimodally Mixed Sentences". In Bhatia TK, Ritchie WC (eds.). The Handbook of Bilingualism and Multilingualism (1st ed.). Wiley. pp. 351–374. doi:10.1002/9781118332382.ch14. ISBN 978-1-4443-3490-6.
  20. Rozen-Blay O, Novogrodsky R, Degani T (February 2022). "Talking While Signing: The Influence of Simultaneous Communication on the Spoken Language of Bimodal Bilinguals". Journal of Speech, Language, and Hearing Research. 65 (2): 785–796. doi:10.1044/2021_JSLHR-21-00326. PMID 35050718. S2CID 246145393.
  21. Piñar P, Carlson MT, Morford JP, Dussias PE (November 2017). "Bilingual deaf readers' use of semantic and syntactic cues in the processing of English relative clauses". Bilingualism. 20 (5): 980–998. doi:10.1017/S1366728916000602. PMC 5754007. PMID 29308049.
  22. Strong M (1988). "A bilingual approach to the education of young deaf children: ASL and English". Language Learning and Deafness. Cambridge University Press. pp. 113–130. doi:10.1017/cbo9781139524483.007. ISBN 9780521340465.
  23. Neuroth-Gimbrone C, Logiodic CM (1992). "A Cooperative Bilingual Language Program for Deaf Adolescents". Language Studies (74): 79–91.
  24. Goldin-Meadow S, Mayberry IR (2001). "How Do Profoundly Deaf Children Learn to Read?". Learning Disabilities Research & Practice. 16 (4): 222–229. doi:10.1111/0938-8982.00022.
  25. van Beijsterveldt LM, van Hell J (2010). "Lexical noun phrases in texts written by deaf children and adults with different proficiency levels in sign language". International Journal of Bilingual Education and Bilingualism. 13 (4): 439–486. doi:10.1080/13670050903477039. S2CID 214654221.
  26. Mayer C, Leigh G (2010). "The changing context for sign bilingual education programs: issues in language and the development of literacy". International Journal of Bilingual Education and Bilingualism. 13 (2): 175–186. doi:10.1080/13670050903474085. S2CID 145185707.
  27. Treiman R, Hirsh-Pasek K (January 1983). "Silent reading: insights from second-generation deaf readers". Cognitive Psychology. 15 (1): 39–65. doi:10.1016/0010-0285(83)90003-8. PMID 6831857. S2CID 10156902.
  28. "Search Funded Research Grants and Contracts - Details". ies.ed.gov. Retrieved 2022-03-13.