Classifier constructions in sign languages

In sign languages, the term classifier construction (also known as classifier predicate) refers to a morphological system that can express events and states. [1] These constructions use handshape classifiers to represent movement, location, and shape. Classifiers differ from lexical signs in their morphology: a sign consists of a single morpheme, composed of three meaningless phonological features (handshape, location, and movement), whereas a classifier construction consists of many morphemes, with the handshape, location, and movement each meaningful on their own. [2] The handshape represents an entity, and the hand's movement iconically represents the movement of that entity. The relative location of multiple entities can be represented iconically in two-handed constructions.

Classifiers share some limited similarities with the gestures of hearing non-signers. Those who do not know a sign language can often guess the meaning of these constructions, because they are often iconic (non-arbitrary). [3] Many unrelated sign languages have also been found to use similar handshapes for specific entities. Children master these constructions by the age of 8 or 9. [4] Two-handed classifier constructions exhibit a figure-ground relationship: the first classifier represents the background, whereas the second represents the entity in focus. The right hemisphere of the brain is involved in using classifiers. They may also be used creatively for storytelling and poetic purposes.

Frishberg coined the word "classifier" in this context in her 1975 paper on American Sign Language. Various connections have been made to classifiers in spoken languages. Linguists have since debated how best to analyze these constructions. Analyses differ in how much they rely on morphology to explain them. Some have questioned their linguistic status, as well as the very use of the term "classifier". [5] Not much is known about their syntax or phonology.

Description

In classifier constructions, the handshape is the classifier representing an entity, such as a horse. [6] The signer can represent its movement and/or speed in an iconic fashion, meaning that the meaning of the movement can be guessed from its form. [6] [7] A horse jumping over a fence may be represented by having the stationary hand be the fence and the moving hand be the horse. [8] However, not all combinations of handshape and movement are possible. [6] Classifier constructions act as verbs. [9]

The handshape, movement and relative location in these constructions are meaningful on their own. [2] This is in contrast to two-handed lexical signs, in which the two hands do not contribute to the meaning of the sign on their own. [10] The handshapes in a two-handed classifier construction are signed in a specific order if they represent an entity's location. The first sign usually represents the unmoving ground (for example, a surface); the second represents the smaller figure in focus (for example, a person walking). [11] [12] [13] While the handshape is usually determined by the visual aspects of the entity in question, [14] there are other factors: the way in which the doer interacts with the entity [15] or the entity's movement [16] can also affect the handshape choice. Classifiers also often co-occur with verbs. [13] Not much is known yet about their syntax [17] or phonology. [18]

Classifier constructions are produced from the perspective of the signer, so the addressee must mentally flip the construction horizontally to understand it correctly. For example, if the signer places an object on the side that appears right from the addressee's perspective, the addressee must mentally flip the scene to understand that the object was placed on the left from the signer's perspective. Native signers seem to be able to do this automatically. [19]
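The mental flip amounts to mirroring positions across the vertical midline between signer and addressee. A minimal sketch in Python; the coordinate convention and function name are illustrative assumptions, not anything from the sign language literature:

```python
def addressee_view(x: float) -> float:
    """Mirror a horizontal position across the vertical midline.

    x is a position in the signer's egocentric frame
    (positive = signer's right). Facing the signer, the
    addressee perceives the same object at -x in their own
    frame, so interpreting the scene means undoing this flip.
    """
    return -x

# An object placed at the signer's right (+1.0) appears on the
# addressee's left (-1.0); flipping twice recovers the original.
assert addressee_view(1.0) == -1.0
assert addressee_view(addressee_view(0.5)) == 0.5
```

The involution property (flipping twice restores the original position) is what lets native signers switch between the two frames without apparent effort.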

Two-handed lexical signs are limited in form by two constraints. The Dominance Condition states that the non-dominant hand cannot move and that its handshape comes from a restricted set. The Symmetry Condition states that both hands must have the same handshape, movement and orientation. [20] Classifier constructions, on the other hand, can break both of these restrictions. This further exemplifies the difference in phonology and morphology between lexical signs and classifiers. [21]
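The two conditions can be read as a well-formedness check on two-handed lexical signs. A minimal Python sketch, under the stated assumptions that one static hand triggers the Dominance Condition and two moving hands trigger the Symmetry Condition; the feature bundles and the UNMARKED handshape set are illustrative placeholders, not an actual phonological inventory:

```python
# Illustrative stand-in for the restricted set of handshapes the
# non-dominant hand may take; the real inventory is analysis-specific.
UNMARKED = {"B", "A", "S", "C", "O", "1", "5"}

def symmetry_ok(dom: dict, nondom: dict) -> bool:
    """Symmetry Condition: same handshape, movement and orientation."""
    return all(dom[k] == nondom[k] for k in ("shape", "movement", "orientation"))

def lexically_well_formed(dom: dict, nondom: dict) -> bool:
    """If the non-dominant hand is static, the Dominance Condition
    applies; if both hands move, the Symmetry Condition applies."""
    if nondom["movement"] is None:
        return nondom["shape"] in UNMARKED   # Dominance Condition
    return symmetry_ok(dom, nondom)          # Symmetry Condition

# A classifier construction such as "person walks past a seated cat"
# (static marked bent-V hand, moving 1-hand) fails the check that
# two-handed lexical signs must pass.
cat = {"shape": "bent-V", "movement": None, "orientation": "down"}
person = {"shape": "1", "movement": "straight", "orientation": "up"}
assert not lexically_well_formed(person, cat)
```

The point of the sketch is the asymmetry: lexical signs must pass one of the two conditions, while classifier constructions are free to fail both.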

Unlike spoken languages, sign languages have two articulators that can move independently. [22] The more active hand is termed the dominant hand; the less active hand is the non-dominant hand. [23] The more active hand is usually the signer's dominant hand, although the hands' roles can be switched. [24] The two hands allow signers to represent two entities at the same time, although with some limitations. For example, a woman walking past a zigzagging car cannot be signed simultaneously, because two simultaneous constructions cannot have differing movements; one would have to sign them sequentially. [22]

Argument structure

Classifier constructions may show agreement with various arguments in their domain. In the example below, the handshape agrees with the direct object, using a "thin object" handshape for flowers and a "round object" handshape for apples. Agreement between subject and indirect object is marked with a path movement from the former to the latter. This manner of marking agreement is shared with some lexical signs. [25]

CHILD1 MOTHER2 FLOWER CL(thin-object)-1GIVE2

‘The child gives a flower to the mother.’

CHILD1 MOTHER2 APPLE CL(round-object)-1GIVE2

‘The child gives an apple to the mother.’
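The agreement pattern in these examples can be sketched as handshape selection by object class plus a path indexed from the subject's locus to the indirect object's locus. A small Python illustration; the class mapping and gloss format are simplified placeholders, not an actual grammar:

```python
# Hypothetical object-class mapping; real classifier choice depends on
# the language and the entity's visual and handling properties.
SHAPE_CLASS = {"FLOWER": "thin-object", "APPLE": "round-object"}

def give_gloss(direct_object: str, subj_locus: int, iobj_locus: int) -> str:
    """Build a gloss: the handshape agrees with the direct object's
    class; the path movement runs from subject to indirect object."""
    return f"CL({SHAPE_CLASS[direct_object]})-{subj_locus}GIVE{iobj_locus}"

assert give_gloss("FLOWER", 1, 2) == "CL(thin-object)-1GIVE2"
assert give_gloss("APPLE", 1, 2) == "CL(round-object)-1GIVE2"
```

Swapping the loci (e.g. `give_gloss("APPLE", 2, 1)`) would reverse the path and mark the mother as giver, which is how the same mechanism marks subject agreement.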

There are also correlations in American Sign Language (ASL) between specific types of classifier constructions and the kind of argument structure they have: [26]

  1. Predicates with a handling classifier are transitive (with an external and an internal argument)
  2. Predicates with a whole entity classifier are intransitive unaccusative (one single internal argument)
  3. Predicates with a body part classifier are intransitive unergative (one single external argument)
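The three correlations amount to a lookup from classifier type to argument frame. A minimal Python sketch with descriptive placeholder labels:

```python
# Reported ASL correlations between classifier type and argument structure.
ARGUMENT_FRAME = {
    "handling":     ("transitive",                ["external", "internal"]),
    "whole entity": ("intransitive unaccusative", ["internal"]),
    "body part":    ("intransitive unergative",   ["external"]),
}

valency, args = ARGUMENT_FRAME["handling"]
assert valency == "transitive" and args == ["external", "internal"]
```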

Classification

There have been many attempts at classifying the types of classifiers; the number of proposed types has ranged from two to seven. [27] Overlap in terminology across the classification systems can cause confusion. [28] In 1993, Engberg-Pedersen grouped the handshapes used in classifier constructions into four categories. [29] [30]

The handshapes' movements are grouped similarly. [29] [30]

Whole entity classifiers and handling classifiers are the most established classifier types. [33] The former occur with intransitive verbs; the latter occur with transitive verbs. [35] Most linguists do not consider extension and surface classifiers to be true classifiers, [33] because they appear in a larger range of syntactic positions. They also cannot be referred back to anaphorically in the discourse, nor can they be combined with motion verbs. [33]

Certain types of classifiers and movements cannot be combined for grammatical reasons. For example, in ASL manner of motion cannot be combined with limb classifiers. To indicate a person limping in a circle, one must first sign the manner of motion (limping), then the limb classifiers (the legs). [36]

There is little research on the differences in classifier constructions across sign languages. [37] Most sign languages seem to have them, and the constructions can be described in similar terms. [37] Many unrelated languages encode the same entity with similar handshapes. [38] This is even the case for children not exposed to language who use a home sign system to communicate. [38] Handling classifiers, along with extension and surface classifiers, are especially likely to be the same across languages. [38]

Relation to gestures

Gestures are manual structures that are not as conventionalized as linguistic signs. [39] Hearing non-signers use forms similar to classifiers when asked to communicate through gesture. There is a 70% overlap in how signers and non-signers use movement and location, but only a 25% overlap for handshapes. Non-signers use a greater number of handshapes, but signers' handshapes are phonologically more complex. [40] Non-signers also do not constrain their gestures to a morphological system as sign language users do. [38]

Lexicalization

Certain classifier constructions may also, over time, lose their general meaning and become fully-fledged signs. This process is referred to as lexicalization, [41] [42] and the resulting signs are referred to as frozen signs. [43] For example, the ASL sign FALL seems to have come from a classifier construction consisting of a V-shaped hand, representing the legs, moving down. As it became more sign-like, it could also be used with inanimate referents, like apples or boxes. As a sign, the former classifier construction now conforms to the usual constraints of a word, such as consisting of one syllable. [44] The resulting sign need not be a simple sum of its combined parts, but can have a different meaning entirely. [45] Frozen signs may serve as root morphemes that act as bases for aspectual and derivational affixes; classifiers cannot take these types of affixes. [46]

History

Sign languages only began to be studied seriously in the 1960s. [47] Initially, classifier constructions were not regarded as full linguistic systems [8] [48] because of their high degree of apparent variability and iconicity. [48] Consequently, early analyses described them in terms of visual imagery. [37] Linguists then focused on proving that sign languages were real languages, paying less attention to their iconic properties and more to the way they are organized. [47]

Frishberg was the first [49] [50] to use the term "classifier" in her 1975 paper on arbitrariness and iconicity in ASL to refer to the handshape unit used in classifier constructions. [51]

The start of the study of sign language classifiers coincided with a renewed interest in spoken language classifiers. [52] In 1977, Allan performed a survey of classifier systems in spoken languages. He compared classifier constructions to the "predicate classifiers" used in the Athabaskan languages, [53] a family of indigenous spoken languages of North America. [54] Reasons for comparing them included standardizing terminology and demonstrating that sign languages are similar to spoken languages. [55] Allan described predicate classifiers as separate verbal morphemes that denote some salient aspect of the associated noun. [53] However, Schembri pointed out the "terminological confusion" surrounding classifiers, [56] and Allan's description and comparison came to draw criticism. Later analyses showed that these predicate classifiers did not constitute separate morphemes; they were better described as classificatory verb stems rather than classifiers. [57] [58] [59]

In 1982, Supalla showed that classifier constructions were part of a complex morphological system in ASL. [60] [61] [48] He split the classifier handshapes into two main categories: semantic classifiers (also called "entity classifiers") and size and shape specifiers (SASSes). [62] SASSes use handshapes to describe the visual properties of an entity. Entity classifiers are less iconic; they refer to a general semantic class of objects such as "thin and straight" or "flat and round". [63] Handling classifiers, the third type to be described, imitate the hand holding or handling an instrument. [63] A fourth type, the body part classifier, represents human or animal body parts, usually the limbs. [64] Linguists adopted and modified Supalla's morphological analysis for other sign languages. [28]

In the 1990s, interest in the relation between sign languages and gesture was renewed. [47] Some linguists, such as Liddell (2000), called the linguistic status of classifier constructions into question, especially their location and movement components. [65] There were two reasons for doing so. First, the imitative gestures of non-signers are similar to classifiers. [47] Second, a very large number of movement and location types can be used in these constructions. Liddell suggested that it would be more accurate to consider them a mixture of linguistic and extra-linguistic elements, such as gesture. [66] [67] [68] Schembri and colleagues similarly suggested in 2005 that classifier constructions are "blends of linguistic and gestural elements". [69] Despite the high degree of variability, Schembri and colleagues argue that classifier constructions are still grammatically constrained by various factors; for example, they are more abstract and categorical than the gestural forms made by non-signers. [38] It is now generally accepted that classifiers have both linguistic and gestural properties. [70]

Similar to Allan, Grinevald also compared sign language classifiers to spoken classifiers in 2000. [71] Specifically, she focused on verbal classifiers, which act as verbal affixes. [72] She lists the following example from Cayuga, an Iroquoian language: [73]

Skitu ake’-treht-ae’

skidoo I-CL(vehicle)-have

‘I have a car.’

The classifier for the word vehicle in Cayuga, -treht-, is similar to whole entity classifiers in sign languages. Similar examples have been found in Digueño, which has morphemes that act like extension and surface classifiers in sign languages. In both languages, the classifiers are attached to the verb and cannot stand alone. [74] It is now accepted that classifiers in spoken and signed languages are similar, contrary to what was previously believed. [75] They both track referents grammatically, can form new words and may emphasize a salient aspect of an entity. [75] The main difference is that sign languages have only verbal classifiers. [75] The classifier systems of spoken languages are more diverse in function and distribution. [76]

Despite the many alternative names proposed for the term classifier, [77] and its questionable relationship to spoken language classifiers, [78] it continues to be a commonly used term in sign language research. [78]

Linguistic analyses

There is no consensus on how to analyze classifier constructions. [3] Linguistic analyses can be divided into three major categories: representational, morphological, and lexical. Representational analyses were the first attempt at describing classifiers. [8] This analysis views them as manual representations of movements in the world. Because classifier constructions are highly iconic, representational analyses argue that this form-meaning connection should be the basis for linguistic analysis; finite sets of morphemes or parameters cannot account for all potentially meaningful classifier constructions. [79] [80] This view has been criticized because it predicts impossible constructions. For example, in ASL, a walking classifier handshape cannot be used to represent the movement of an animal in the animal noun class, even though it is an iconic representation of the event. [81]

Lexical analyses view classifiers as partially lexicalized words. [82]

A morphological analysis views classifiers as a series of morphemes, [83] [60] and this is currently the predominant school of thought. [84] [85] In this analysis, classifier verbs are combinations of verbal roots with numerous affixes. [86] If the handshape is taken to consist of several morphemes, it is not clear how they should be segmented or analyzed. [8] [87] For example, the fingertips in Swedish Sign Language can be bent to represent the front of a car being damaged in a crash; this led Supalla to posit that each finger might act as a separate morpheme. [87] The morphological analysis has been criticized for its complexity. [86] Liddell found that analyzing an ASL classifier construction in which one person walks to another would require anywhere between 14 and 28 morphemes. [88] Other linguists, however, consider the handshape to consist of a single morpheme. [89] In 2003, Schembri stated that there is no convincing evidence that all handshapes are multi-morphemic, based on grammaticality judgments from native signers. [89]

Morphological analyses differ in which aspect of the construction they consider the root. Supalla argued that the morpheme expressing motion or location is the verbal root, to which the handshape morpheme is affixed. [60] Engberg-Pedersen disagreed with Supalla, arguing that the choice of handshape can fundamentally change how the movement is interpreted; therefore, she claims the handshape should be the root. For example, putting a book on a shelf and a cat jumping on a shelf both use the same movement in ASL, despite being fundamentally different acts. [90] [91] [9] Classifiers are affixes, meaning that they cannot occur alone and must be bound. [92] Classifiers on their own are not specified for place of articulation or movement, which might explain why they are bound: the missing information is filled in by the root. [92]

Certain classifiers are similar to pronouns. [9] [91] [93] As with pronouns, the signer has to first introduce the referent, usually by signing or fingerspelling the noun. [94] The classifier is then taken to refer to this referent. [9] Signers do not have to re-introduce the same referent in later constructions; the classifier is understood to still refer to that referent. [9] Some classifiers also denote a specific group, in the same way that the pronoun "she" can refer to women or waitresses. [94] Similarly, ASL has a classifier that refers to vehicles, but not to people or animals. [94] In this view, verbal classifiers may be seen as agreement markers for their referents, with the movement as the root. [9]

Acquisition

The gestures of speaking children sometimes resemble classifier constructions. [95] However, signing children learn these constructions as part of a grammatical system, not as iconic representations of events. Owing to their complexity, they take a long time to master: [96] [97] children do not master classifier constructions until the age of eight or nine. [98] There are many reasons for this relatively late mastery: children must learn to express different viewpoints correctly, select the correct handshape and order the construction properly. [96] Schick found that handling classifiers were the most difficult to master, followed by extension and surface classifiers; whole entity classifiers had the fewest production errors. [99] Young children prefer to substitute simpler, more general classifiers for complex ones. [98]

Children start using classifiers at the age of two. [96] These early forms are mostly handling and whole entity classifiers. [96] Simple movements are produced correctly as early as 2.6 years of age. [100] Complex movements, such as arcs, are more difficult for children to express. The acquisition of location in classifier constructions depends on the complexity of the spatial relations between the referents. [100] Simple extension and surface classifiers are produced correctly at 4.5 years of age. [100] By the age of five to six, children usually select the correct handshape. [101] [96] At age six to seven, children still make mistakes in representing spatial relationships; in signs with a figure-ground relationship, they will sometimes omit the ground entirely. [96] This could be because mentioning figure and ground together requires proper coordination of both hands. Another explanation is that children have more trouble learning optional structures in general. [100] Even at age nine, when the system is otherwise mostly mastered, children still have difficulty understanding the locative relations between classifiers. [97]

It is widely accepted that iconicity helps in learning spoken languages, although the picture is less clear for sign languages. [102] [103] Some have argued that iconicity plays no role in acquiring classifier constructions, because the constructions are highly complex and are not mastered until late childhood. [102] Other linguists claim that children as young as three years old can produce adult-like constructions, [102] although only with one hand. [104] Slobin found that children under three years of age seem to "bootstrap" natural gesture to make learning the handshape easier. [105] Most young children do not seem to represent spatial situations iconically. [98] They also do not express complex path movements at once, but rather do so sequentially. [98] In adults, it has been shown that iconicity can help in learning lexical signs. [39] [40]

Brain structures

As with spoken languages, the left hemisphere of the brain is dominant for sign language production. [106] However, the right hemisphere is superior in some respects: it is better at processing concrete words, like bed or flower, than abstract ones. [107] It is also important in showing spatial relations between entities iconically, [106] and especially in using and understanding classifier constructions. [108] Signers with damage to the right hemisphere cannot properly describe items in a room; they can remember the items themselves, but cannot use classifiers to express their location. [107]

The parietal cortex is activated in both hemispheres when perceiving the spatial location of objects. [107] For spoken languages, describing spatial relationships engages only the left parietal cortex; for sign languages, both the left and right parietal cortex are needed when using classifier constructions. [107] This might explain why people with right hemisphere damage have trouble expressing these constructions: they cannot encode external spatial relations and use them while signing. [109]

In order to use certain classifier constructions, the signer must be able to visualize the entity and its shape, orientation and location. [110] It has been shown that deaf signers are better at generating spatial mental images than hearing non-signers. [110] The spatial memory span of deaf signers is also superior. [111] This is linked to their use of sign language rather than to deafness itself, [111] which suggests that using sign language might change the way the brain organizes non-linguistic information. [110]

Stylistic and creative use

A signer can "hold" the non-dominant hand of a classifier construction in place, usually the hand representing the background. This may serve to keep relevant information present during the conversation. [112] During the hold, the dominant hand may also articulate other signs that are relevant to the first classifier. [113]

In performative storytelling and poetry, classifiers may also serve creative purposes. [114] [115] Just as in spoken language, skilled language use can indicate eloquence. It has been observed in ASL poetry that skilled signers may combine classifiers and lexical signs. [115] In British Sign Language, the signs for BAT and DARK are identical, and both are articulated at the face. This may be exploited for poetic effect, for example likening bats to darkness by using an entity classifier showing a bat flying at the face. [116] Classifiers may also be used to expressively characterize animals or non-human objects. [117]

Citations

  1. Sandler & Lillo-Martin 2006, p. 76.
  2. Hill, Lillo-Martin & Wood 2019, p. 49.
  3. Brentari 2010, p. 254.
  4. Emmorey 2008, p. 194-195.
  5. Brentari 2010, p. 253-254.
  6. Emmorey 2008, p. 74.
  7. Kimmelman, Pfau & Aboh 2019.
  8. Zwitserlood 2012, p. 159.
  9. Zwitserlood 2012, p. 166.
  10. Sandler & Lillo-Martin 2006, p. 78-79.
  11. Hill, Lillo-Martin & Wood 2019, p. 51.
  12. Emmorey 2008, p. 86.
  13. Zwitserlood 2012, p. 164.
  14. Schembri 2003, p. 22.
  15. Schembri 2003, p. 22-23.
  16. Schembri 2003, p. 24.
  17. Marschark & Spencer 2003, p. 316.
  18. Zwitserlood 2012, p. 169.
  19. Brozdowski, Secora & Emmorey 2019.
  20. Emmorey 2008, p. 36-38.
  21. Sandler & Lillo-Martin 2006, p. 90.
  22. Emmorey 2008, p. 85-86.
  23. Hill, Lillo-Martin & Wood 2019, p. 34.
  24. Crasborn 2006, p. 69.
  25. Carlo 2014, p. 49-50.
  26. Carlo 2014, p. 52.
  27. Schembri 2003, p. 9-10.
  28. Zwitserlood 2012, p. 161.
  29. Engberg-Pedersen 1993.
  30. Emmorey 2008, p. 76.
  31. Emmorey 2008, p. 78.
  32. Zwitserlood 2012, p. 163.
  33. Zwitserlood 2012, p. 162.
  34. Emmorey 2008, p. 80.
  35. Zwitserlood 2012, p. 167.
  36. Emmorey 2008, p. 81.
  37. Zwitserlood 2012, p. 158.
  38. Schembri 2003, p. 26.
  39. Ortega, Schiefner & Özyürek 2019.
  40. Marshall & Morgan 2015.
  41. Brentari 2010, p. 260.
  42. Sandler & Lillo-Martin 2006, p. 87.
  43. Zwitserlood 2012, p. 169-170.
  44. Aronoff et al. 2003, p. 69-70.
  45. Zwitserlood 2012, p. 179.
  46. Zwitserlood 2012, p. 170.
  47. Brentari, Fenlon & Cormier 2018.
  48. Schembri 2003, p. 11.
  49. Brentari 2010, p. 252.
  50. Emmorey 2008, p. 9.
  51. Frishberg 1975.
  52. Zwitserlood 2012, p. 160.
  53. Allan 1977.
  54. Fernald & Platero 2000, p. 3.
  55. Schembri 2003, p. 10-11.
  56. Schembri 2003, p. 15.
  57. Schembri 2003, p. 13-14.
  58. Emmorey 2008, p. 88.
  59. Zwitserlood 2012, p. 175.
  60. Supalla 1982.
  61. Zwitserlood 2012, p. 161; 165.
  62. Sandler & Lillo-Martin 2006, p. 77.
  63. Sandler & Lillo-Martin 2006, p. 77-78.
  64. Hill, Lillo-Martin & Wood 2019, p. 50.
  65. Crasborn 2006, p. 68.
  66. Liddell 2000.
  67. Schembri 2003, p. 9.
  68. Brentari 2010, p. 256.
  69. Schembri, Jones & Burnham 2005.
  70. Cormier, Schembri & Woll 2010, p. 2664-2665.
  71. Grinevald 2000.
  72. Aronoff et al. 2003, p. 63-64.
  73. Grinevald 2000, p. 67.
  74. Sandler & Lillo-Martin 2006, p. 84.
  75. Zwitserlood 2012, p. 180.
  76. Zwitserlood 2012, p. 175-176.
  77. Schembri 2003, p. 4.
  78. Emmorey 2008, p. 90.
  79. DeMatteo 1977.
  80. Brentari 2010, p. 256-257.
  81. Brentari 2010, p. 258-259.
  82. Liddell 2003a.
  83. Benedicto & Brentari 2004.
  84. Zwitserlood 2012, p. 159; 165.
  85. Schembri 2003, p. 18.
  86. Zwitserlood 2012, p. 165.
  87. Schembri 2003, p. 18-20.
  88. Liddell 2003b, p. 205-206.
  89. Schembri 2003, p. 19.
  90. Schembri 2003, p. 21-22.
  91. Emmorey 2008, p. 88-91.
  92. Zwitserlood 2012, p. 168.
  93. Marschark & Spencer 2003, p. 321.
  94. Baker-Shenk & Cokely 1981, p. 287.
  95. Emmorey 2008, p. 198.
  96. Marschark & Spencer 2003, p. 223.
  97. Zwitserlood 2012, p. 174.
  98. Zwitserlood 2012, p. 173.
  99. Schick 1990.
  100. Emmorey 2008, p. 196.
  101. Morgan & Woll 2003, p. 300.
  102. Ortega 2017.
  103. Thompson 2011, p. 609.
  104. Slobin 2003, p. 275.
  105. Slobin 2003, p. 272.
  106. Marschark & Spencer 2003, p. 365.
  107. Marschark & Spencer 2003, p. 370.
  108. Marschark & Spencer 2003, p. 373.
  109. Marschark & Spencer 2003, p. 371.
  110. Emmorey 2008, p. 266.
  111. Emmorey 2008, p. 266-267.
  112. Sandler & Lillo-Martin 2006, p. 88.
  113. Marschark & Spencer 2003, p. 334.
  114. Sutton-Spence 2012, p. 1003.
  115. Sandler & Lillo-Martin 2006, p. 88-89.
  116. Sutton-Spence 2012, p. 1011.
  117. Sutton-Spence 2012, p. 1012.

Related Research Articles

<span class="mw-page-title-main">American Sign Language</span> Sign language used predominately in the United States

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

<span class="mw-page-title-main">Fingerspelling</span> Form of communication using one or both hands

Fingerspelling is the representation of the letters of a writing system, and sometimes numeral systems, using only the hands. These manual alphabets have often been used in deaf education and have subsequently been adopted as a distinct part of a number of sign languages. There are about forty manual alphabets around the world. Historically, manual alphabets have had a number of additional applications—including use as ciphers, as mnemonics and in silent religious settings.

<span class="mw-page-title-main">Sign language</span> Language that uses manual communication and body language to convey meaning

Sign languages are languages that use the visual-manual modality to convey meaning, instead of spoken words. Sign languages are expressed through manual articulation in combination with non-manual markers. Sign languages are full-fledged natural languages with their own grammar and lexicon. Sign languages are not universal and are usually not mutually intelligible, although there are also similarities among different sign languages.

Sutton SignWriting, or simply SignWriting, is a system of writing sign languages. It is highly featural and visually iconic, both in the shapes of the characters, which are abstract pictures of the hands, face, and body, and in their spatial arrangement on the page, which does not follow a sequential order like the letters that make up written English words. It was developed in 1974 by Valerie Sutton, a dancer who had, two years earlier, developed DanceWriting. Some newer standardized forms are known as the International Sign Writing Alphabet (ISWA).

<span class="mw-page-title-main">International Sign</span> Sign language, used particularly at international meetings

International Sign (IS) is a pidgin sign language which is used in a variety of different contexts, particularly as an international auxiliary language at meetings such as the World Federation of the Deaf (WFD) congress, in some European Union settings, and at some UN conferences, at events such as the Deaflympics, the Miss & Mister Deaf World, and Eurovision, and informally when travelling and socialising.

In functional-cognitive linguistics, as well as in semiotics, iconicity is the conceived similarity or analogy between the form of a sign and its meaning, as opposed to arbitrariness. The principle of iconicity is also shared by the approach of linguistic typology.

Signing Exact English is a system of manual communication that strives to be an exact representation of English language vocabulary and grammar. It is one of a number of such systems in use in English-speaking countries. It is related to Seeing Essential English (SEE-I), a manual sign system created in 1945, based on the morphemes of English words. SEE-II models much of its sign vocabulary from American Sign Language (ASL), but modifies the handshapes used in ASL in order to use the handshape of the first letter of the corresponding English word.

The American Manual Alphabet (AMA) is a manual alphabet that augments the vocabulary of American Sign Language.

Home sign is a gestural communication system, often invented spontaneously by a deaf child who lacks accessible linguistic input. Home sign systems often arise in families where a deaf child is raised by hearing parents and is isolated from the Deaf community. Because the deaf child does not receive signed or spoken language input, these children are referred to as linguistically isolated.

Stokoe notation is the first phonemic script used for sign languages. It was created by William Stokoe for American Sign Language (ASL), with Latin letters and numerals used for the shapes they have in fingerspelling, and iconic glyphs to transcribe the position, movement, and orientation of the hands. It was first published as the organizing principle of Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf (1960), and later also used in A Dictionary of American Sign Language on Linguistic Principles, by Stokoe, Casterline, and Croneberg (1965). In the 1965 dictionary, signs are themselves arranged alphabetically, according to their Stokoe transcription, rather than being ordered by their English glosses as in other sign-language dictionaries. This made it the only ASL dictionary where the reader could look up a sign without first knowing how to translate it into English. The Stokoe notation was later adapted to British Sign Language (BSL) in Kyle et al. (1985) and to Australian Aboriginal sign languages in Kendon (1988). In each case the researchers modified the alphabet to accommodate phonemes not found in ASL.

American Sign Language (ASL) has grammatical rules just like any other sign language or spoken language. The study of ASL grammar dates back to William Stokoe in the 1960s. The language is built on a set of parameters that determine many of its other grammatical rules. Typical word order in ASL follows SVO/OSV and topic-comment patterns, supplemented by noun-adjective order and time-sequenced ordering of clauses. ASL has large CP and DP syntactic systems, and it lacks many of the conjunctions found in some other languages.

American Sign Language literature is one of the most important shared cultural experiences in the American deaf community. Literary genres initially developed in residential Deaf institutes, such as the American School for the Deaf in Hartford, Connecticut, where American Sign Language developed as a language in the early 19th century. There are many genres of ASL literature, such as narratives of personal experience, poetry, cinematographic stories, folktales, translated works, original fiction, and stories with handshape constraints. Authors of ASL literature use their bodies as the text of their work, which is read visually by their audience. In the early development of ASL literary genres, the works were generally not analyzed as written texts are, but the increased dissemination of ASL literature on video has led to greater analysis of these genres.

ASL-phabet, or the ASL Alphabet, is a writing system developed by Samuel Supalla for American Sign Language (ASL). It is based on a system called SignFont, which Supalla modified and streamlined for use in an educational setting with Deaf children.

Nepalese Sign Language or Nepali Sign Language is the main sign language of Nepal. It is a partially standardized language based informally on the variety used in Kathmandu, with some input from varieties from Pokhara and elsewhere. As an indigenous sign language, it is not related to oral Nepali. The Nepali Constitution of 2015 specifically mentions the right to have education in Sign Language for the deaf. Likewise, the newly passed Disability Rights Act of 2072 BS defined language to include "spoken and sign languages and other forms of speechless language." In practice, it is recognized by the Ministry of Education and the Ministry of Women, Children and Social Welfare, and is used in all schools for the deaf. In addition, legislation underway in Nepal, in line with the UN Convention on the Rights of Persons with Disabilities (UNCRPD) that Nepal has ratified, should give Nepalese Sign Language equal status with the oral languages of the country.

In sign language, an initialized sign is one produced with a handshape or handshapes corresponding to the fingerspelling of its equivalent in the locally dominant oral language, based on the manual alphabet representing that oral language's orthography. The handshapes of these signs thus represent the initial letter of their written equivalents. In some cases, this is due to the local oral language having more than one equivalent to a basic sign. For example, in ASL, the signs for "class" and "family" are the same, except that "class" is signed with a 'C' handshape and "family" with an 'F' handshape. In other cases, initialization is used for disambiguation even though the signs are not semantically related. For example, in ASL, "water" is signed with a 'W' handshape touching the mouth, while "dentist" is similar apart from using a 'D' handshape. In still other cases, initialization serves no disambiguating purpose; the ASL sign for "elevator", for example, is an 'E' handshape moving up and down along the upright index finger of the other hand.

Inuit Sign Language is one of the Inuit languages and the indigenous sign language of the Inuit people. It is a language isolate native to Inuit communities in the Canadian Arctic. It is currently only attested within certain communities in Nunavut, particularly Baker Lake and Rankin Inlet. Although there is a possibility that it may be used in other places where Inuit live in the Arctic, this has not been confirmed.

si5s is a writing system for American Sign Language that resembles a handwritten form of SignWriting. It was devised in 2003 in New York City by Robert Arnold, with an unnamed collaborator. In July 2010 at the Deaf Nation World Expo in Las Vegas, Nevada, it was presented and formally announced to the public. Soon after its release, si5s development split into two branches: the "official" si5s track monitored by Arnold and a new set of partners at ASLized, and the "open source" ASLwrite. In 2015, Arnold had a falling out with his ASLized partners, took down the si5s.org website, and made his Twitter account private. ASLized has since removed any mention of si5s from their website.

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference from oral languages in that sign-language phonemes are not based on sound, and are spatial in addition to being temporal, they fulfill the same role as phonemes in oral languages.

Ted Supalla is a deaf linguist whose research centers on sign language in its developmental and global context, including studies of the grammatical structure and evolution of American Sign Language and other sign languages.

Karen Denise Emmorey is a linguist and cognitive neuroscientist known for her research on the neuroscience of sign language and what sign languages reveal about the brain and human languages more generally. Emmorey holds the position of Distinguished Professor in the School of Speech, Language, and Hearing Sciences at San Diego State University, where she directs the Laboratory for Language and Cognitive Neuroscience and the Center for Clinical and Cognitive Neuroscience.

References