Speech-generating device

Stephen Hawking (1942–2018), astrophysicist and prominent SGD user

Speech-generating devices (SGDs), also known as voice output communication aids, are electronic augmentative and alternative communication (AAC) systems used to supplement or replace speech or writing for individuals with severe speech impairments, enabling them to communicate verbally. [1] SGDs are important for people who have limited means of interacting verbally, as they allow individuals to become active participants in communication interactions. They are particularly helpful for patients with amyotrophic lateral sclerosis (ALS), but have more recently been used for children with predicted speech deficiencies. [2]

There are several input and display methods for users of varying abilities to make use of SGDs. Some SGDs have multiple pages of symbols to accommodate a large number of utterances, and thus only a portion of the symbols available are visible at any one time, with the communicator navigating the various pages. Speech-generating devices can produce electronic voice output by using digitized recordings of natural speech or through speech synthesis—which may carry less emotional information but can permit the user to speak novel messages. [3]

The content, organization, and updating of the vocabulary on an SGD is influenced by a number of factors, such as the user's needs and the contexts that the device will be used in. [4] The development of techniques to improve the available vocabulary and rate of speech production is an active research area. Vocabulary items should be of high interest to the user, be frequently applicable, have a range of meanings, and be pragmatic in functionality. [5]

There are multiple methods of accessing messages on devices: directly or indirectly, or using specialized access devices—although the specific access method will depend on the skills and abilities of the user. [1] SGD output is typically much slower than speech, although rate enhancement strategies can increase the user's rate of output, resulting in enhanced efficiency of communication. [6]

The first known SGD was prototyped in the mid-1970s, and rapid progress in hardware and software development has meant that SGD capabilities can now be integrated into devices like smartphones. Notable users of SGDs include Stephen Hawking, Roger Ebert, Tony Proudfoot, and Pete Frates (founder of the ALS Ice Bucket Challenge).

Speech-generating systems may be dedicated devices developed solely for AAC, or non-dedicated devices such as computers running additional software to allow them to function as AAC devices. [7] [8]

History

The patient-operated selector mechanism (POSM or POSSUM) was developed in the early 1960s

SGDs have their roots in early electronic communication aids. The first such aid was a sip-and-puff typewriter controller named the patient-operated selector mechanism (POSSUM) prototyped by Reg Maling in the United Kingdom in 1960. [9] [10] POSSUM scanned through a set of symbols on an illuminated display. [9] Researchers at Delft University in the Netherlands created the lightspot operated typewriter (LOT) in 1970, which made use of small movements of the head to point a small spot of light at a matrix of characters, each equipped with a photoelectric cell. Although it was commercially unsuccessful, the LOT was well received by its users. [11]

In 1966, Barry Romich, a freshman engineering student at Case Western Reserve University, and Ed Prentke, an engineer at Highland View Hospital in Cleveland, Ohio formed a partnership, creating the Prentke Romich Company. [12] In 1969, the company produced its first communication device, a typing system based on a discarded Teletype machine.

During the 1970s and early 1980s, several other companies began to emerge that have since become prominent manufacturers of SGDs. Toby Churchill founded Toby Churchill Ltd in 1973, after losing his speech following encephalitis. [13] In the US, Dynavox (then known as Sentient Systems Technology) grew out of a student project at Carnegie-Mellon University, created in 1982 to help a young woman with cerebral palsy to communicate. [14] Beginning in the 1980s, improvements in technology led to a greatly increased number, variety, and performance of commercially available communication devices, and a reduction in their size and price. Alternative methods of access became available on communication devices, including eye pointing, which calibrates the movement of a user's eyes to direct an SGD to produce the desired speech phrase, and scanning, in which alternatives are presented to the user sequentially. [10] [15] Speech output possibilities included both digitized and synthesized speech. [10]

Rapid progress in hardware and software development continued, including projects funded by the European Community. The first commercially available dynamic screen speech-generating devices were developed in the 1990s. Software programs were developed that allowed the computer-based production of communication boards. [10] [15] High-tech devices have continued to become smaller and lighter, [15] while increasing accessibility and capability; communication devices can be accessed using eye-tracking systems, perform as a computer for word-processing and Internet use, and as an environmental control device for independent access to other equipment such as TV, radio and telephones. [16]

Stephen Hawking came to be associated with the unique voice of his particular synthesis equipment. Hawking was unable to speak due to a combination of disabilities caused by ALS and an emergency tracheotomy. [17] Over the past 20 or so years, SGDs have gained popularity among young children with speech deficiencies, such as those associated with autism, Down syndrome, and anticipated brain damage due to surgery.

Starting in the early 2000s, specialists saw the benefit of using SGDs not only for adults but for children as well. Neuro-linguists found that SGDs were just as effective in helping children who were at risk for temporary language deficits after undergoing brain surgery as they are for patients with ALS. In particular, digitized SGDs have been used as communication aids for pediatric patients during the recovery process.

Access methods

There are many methods of accessing messages on devices: directly, indirectly, and with specialized access devices. Direct access methods involve physical contact with the system, by using a keyboard or a touch screen. Users accessing SGDs indirectly and through specialized devices must manipulate an object in order to access the system, such as maneuvering a joystick, head mouse, optical head pointer, light pointer, infrared pointer, or switch access scanner. [1]

The specific access method will depend on the skills and abilities of the user. With direct selection, a body part, pointer, adapted mouse, joystick, or eye tracking could be used, [18] whereas switch access scanning is often used for indirect selection. [8] [19] Unlike direct selection (e.g., typing on a keyboard, touching a screen), users of scanning can only make selections when the scanning indicator (or cursor) of the electronic device is on the desired choice. [20] Those who are unable to point typically calibrate their eyes to use eye gaze as a way to point and blinking as a way to select desired words and phrases. The speed and pattern of scanning, as well as the way items are selected, are individualized to the physical, visual and cognitive capabilities of the user. [20]
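As an illustration of the scanning loop described above, the following is a minimal sketch of linear switch scanning in Python. The dwell time, the console "highlight", and the switch_pressed callable are hypothetical stand-ins; a real device would drive a display and poll actual switch hardware.

```python
import time

def linear_scan(items, switch_pressed, dwell=1.0):
    """Cycle a highlight through items; return the one highlighted
    when the user activates the switch."""
    while True:
        for item in items:
            print(f"> {item}")                # "highlight" the current item
            deadline = time.time() + dwell    # how long the item stays highlighted
            while time.time() < deadline:
                if switch_pressed():          # poll the (hypothetical) switch
                    return item
                time.sleep(0.01)
```

Row-column scanning, a common refinement, applies the same loop first to the rows of a grid and then to the items within the chosen row, reducing the number of steps to reach a given item.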

Message construction

A screenshot of the Dasher rate enhancement program

Augmentative and alternative communication is typically much slower than speech, [6] with users generally producing 8–10 words per minute. [21] Rate enhancement strategies can increase the user's rate of output to around 12–15 words per minute, [21] and as a result enhance the efficiency of communication.

In any given SGD there may be a large number of vocal expressions that facilitate efficient and effective communication, including greetings, expressing desires, and asking questions. [22] Some SGDs have multiple pages of symbols to accommodate a large number of vocal expressions, and thus only a portion of the symbols available are visible at any one time, with the communicator navigating the various pages. [23] Speech-generating devices generally display a set of selections either using a dynamically changing screen, or a fixed display. [24]

There are two main options for increasing the rate of communication for an SGD: encoding and prediction. [6]

Encoding permits a user to produce a word, sentence or phrase using only one or two activations of their SGD. [6] Iconic encoding strategies such as Semantic compaction combine sequences of icons (picture symbols) to produce words or phrases. [25] In numeric, alpha-numeric, and letter encoding (also known as Abbreviation-Expansion), words and sentences are coded as sequences of letters and numbers. For example, typing "HH" or "G1" (for Greeting 1) may retrieve "Hello, how are you?". [25]
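Mechanically, letter encoding amounts to a stored lookup table. The sketch below, with invented codes and messages, shows the abbreviation-expansion idea: a short code typed on the device retrieves a full pre-stored utterance.

```python
# Hypothetical abbreviation-expansion table; real devices let users
# and their care teams define their own codes and messages.
CODES = {
    "HH": "Hello, how are you?",
    "G1": "Hello, how are you?",      # "Greeting 1"
    "TY": "Thank you very much.",
}

def expand(keystrokes: str) -> str:
    """Return the stored message for a code, or the input unchanged."""
    return CODES.get(keystrokes.upper(), keystrokes)

print(expand("G1"))   # -> Hello, how are you?
```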

Prediction is a rate enhancement strategy in which the SGD attempts to reduce the number of keystrokes used by predicting the word or phrase being written by the user. The user can then select the correct prediction without needing to write the entire word. Word prediction software may determine the choices to be offered based on their frequency in language, association with other words, past choices of the user, or grammatical suitability. [6] [25] [26] However, users have been shown to produce more words per minute (using a scanning interface) with a static keyboard layout than with a predictive grid layout, suggesting that the cognitive overhead of reviewing a new arrangement cancels out the benefits of the predictive layout when using a scanning interface. [27]
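A minimal sketch of frequency-based word prediction follows; the usage log is invented, and a real SGD would also weight word associations, the user's past choices, and grammatical context as described above.

```python
from collections import Counter

# Hypothetical log of the user's past output.
history = "hello how are you hello there how was your day hello".split()
frequency = Counter(history)

def predict(prefix: str, n: int = 3) -> list[str]:
    """Offer the n most frequently used words that match the typed prefix."""
    matches = [w for w in frequency if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -frequency[w])[:n]

print(predict("h"))   # e.g. ['hello', 'how']
```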

Another approach to rate-enhancement is Dasher, [28] which uses language models and arithmetic coding to present alternative letter targets on the screen with size relative to their likelihood given the history. [29] [30]
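The core of Dasher's display can be sketched as follows: each candidate next letter is assigned a slice of the screen whose size is proportional to its probability under the language model, mirroring how arithmetic coding divides an interval. The bigram counts below are invented for illustration.

```python
# Assign each candidate letter a slice of [0, 1) sized by its
# conditional probability given the previous character.
BIGRAM_COUNTS = {"t": {"h": 6, "o": 3, "e": 1}}   # invented counts

def letter_targets(last_char):
    counts = BIGRAM_COUNTS.get(last_char, {})
    total = sum(counts.values())
    targets, low = [], 0.0
    for letter, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        size = count / total
        targets.append((letter, low, low + size))  # (letter, top, bottom)
        low += size
    return targets

for letter, top, bottom in letter_targets("t"):
    print(f"after 't', '{letter}' occupies [{top:.2f}, {bottom:.2f})")
```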

The rate of words produced can depend greatly on the conceptual level of the system: the TALK system, which allows users to choose between large numbers of sentence-level utterances, demonstrated output rates in excess of 60 wpm. [31]

Fixed and dynamic display devices

Fixed display devices

A speech-generating device with a fixed display

Fixed display devices refer to those in which the symbols and items are "fixed" in a particular format; some sources refer to these as "static" displays. [32] Such display devices have a simpler learning curve than some other devices.

Fixed display devices replicate the typical arrangement of low-tech AAC devices (low-tech being defined as devices that do not need batteries, electricity or electronics), such as communication boards. They share some of the same disadvantages; for example, they are typically restricted to a limited number of symbols and hence messages. [24] With the technological advances made in the twenty-first century, fixed display SGDs are no longer commonly used.

Dynamic display devices

Dynamic display devices are usually also touchscreen devices. They typically generate electronically produced visual symbols that, when pressed, change the set of selections displayed. The user can change the symbols available using page links to navigate to appropriate pages of vocabulary and messages.

A speech-generating device with dynamic display, capable of outputting both synthesized and digitized speech

The "home" page of a dynamic display device may show symbols related to many different contexts or conversational topics. Pressing any one of these symbols may open a different screen with messages related to that topic. [24] For example, when watching a volleyball game, a user may press the "sport" symbol to open a page with messages relating to sport, then press the symbol showing a scoreboard to utter the phrase "What's the score?".

Advantages of dynamic display devices include the availability of a much larger vocabulary, and the ability to see the sentence under construction. [22] A further advantage of dynamic display devices is that the underlying operating system is capable of providing options for multiple communication channels, including cell phone, text messaging and e-mail. [33] Work by Linköping University has shown that such email writing practices allowed children who were SGD users to develop new social skills and increase their social participation. [34]

Talking keyboards

Keyboard used to create speech over a telephone using a text-to-speech converter

Low-cost systems can also include a keyboard and audio speaker combination without a dynamic display or visual screen. This type of keyboard sends typed text directly to an audio speaker, permitting any phrase to be spoken without the need for a visual screen. One simple benefit is that a talking keyboard, used with a standard telephone or speakerphone, can enable a voice-impaired individual to have a two-way conversation over a telephone.[citation needed]
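In essence, a talking keyboard is a loop that feeds each typed line to a speech synthesizer. A minimal sketch using the pyttsx3 text-to-speech library (one of several offline options; the choice of library is an assumption, not something mandated by any particular product):

```python
import pyttsx3   # offline text-to-speech library

engine = pyttsx3.init()
print("Type a phrase and press Enter to speak it (Ctrl-C to quit).")
while True:
    line = input("> ")
    if line.strip():
        engine.say(line)        # queue the typed text
        engine.runAndWait()     # speak it through the audio speaker
```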

Output

The output of an SGD may be digitized and/or synthesized: digitized systems play directly recorded words or phrases, while synthesized speech uses text-to-speech software that can carry less emotional information but permits the user to speak novel messages by typing new words. [35] [36] Today, individuals use a combination of recorded messages and text-to-speech techniques on their SGDs. [36] However, some devices are limited to only one type of output.

Digitized speech

Simple switch-operated speech-generating device

Words, phrases or entire messages can be digitized and stored on the device for playback, to be activated by the user. [1] This process is formally known as voice banking. [37] Advantages of recorded speech include that it (a) provides natural prosody and speech naturalness for the listener [3] (e.g., a person of the same age and gender as the AAC user can be selected to record the messages), [3] and (b) provides additional sounds that may be important for the user, such as laughing or whistling. Moreover, digitized SGDs provide a degree of normalcy both for patients and for their families when they lose the ability to speak on their own.
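Functionally, digitized output is a mapping from selection keys to banked recordings. A minimal sketch, assuming the simpleaudio package and invented file names:

```python
import simpleaudio as sa   # simple WAV playback; one of several options

# Hypothetical message bank: each key is tied to a pre-recorded WAV file,
# ideally voiced by a speaker of similar age and gender to the user.
MESSAGES = {
    "greeting": "recordings/hello_how_are_you.wav",
    "laugh":    "recordings/laugh.wav",
}

def play_message(key: str) -> None:
    """Play the banked recording associated with a selection key."""
    wave = sa.WaveObject.from_wave_file(MESSAGES[key])
    wave.play().wait_done()    # block until the recording finishes

play_message("greeting")
```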

A major disadvantage of using only recorded speech is that users are unable to produce novel messages; they are limited to the messages pre-recorded into the device. [3] [38] Depending on the device, there may be a limit to the length of the recordings. [3] [38]

Synthesized speech

SGDs that use synthesized speech apply the phonetic rules of the language to translate the user's message into voice output (speech synthesis). [1] [36] Users have the freedom to create novel words and messages and are not limited to those that have been pre-recorded on their device by others. [36]

The use of synthesized speech has increased due to the creation of software that takes advantage of the user's existing computers and smartphones. AAC apps like Spoken or Avaz are available on Android and iOS, providing a way to use a speech-generating device without having to visit a doctor's office or learn to use specialized machinery. In many cases, these options are also more affordable than a dedicated device.

Synthesized SGDs may allow multiple methods of message creation that can be used individually or in combination: messages can be created from letters, words, phrases, sentences, pictures, or symbols. [1] [38] With synthesized speech there is virtually unlimited storage capacity for messages with few demands on memory space. [3]

Synthesized speech engines are available in many languages, [36] [38] and the engine's parameters, such as speech rate, pitch range, gender, stress patterns, pauses, and pronunciation exceptions can be manipulated by the user. [38]
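As an example of manipulating such engine parameters, the sketch below uses the pyttsx3 library again; the exact properties exposed (rate, volume, the set of installed voices) depend on the underlying engine.

```python
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)     # speech rate in words per minute
engine.setProperty("volume", 0.9)   # volume from 0.0 to 1.0

# Choose a different voice if the installed engine provides one.
voices = engine.getProperty("voices")
if len(voices) > 1:
    engine.setProperty("voice", voices[1].id)

engine.say("This message was composed on the spot, not pre-recorded.")
engine.runAndWait()
```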

Keyboard text-to-speech generating device

Selection set and vocabulary

The selection set of an SGD is the set of all messages, symbols and codes that are available to a person using that device. [39] The content, organization, and updating of this selection set are areas of active research and are influenced by a number of factors, including the user's ability, interests and age. [4] The selection set for an AAC system may include words that the user does not know yet – they are included for the user to "grow into". [4] The content installed on any given SGD may include a large number of preset pages provided by the manufacturer, with a number of additional pages produced by the user or the user's care team depending on the user's needs and the contexts that the device will be used in. [4]

Initial content selection

Researchers Beukelman and Mirenda list a number of possible sources (such as family members, friends, teachers, and care staff) for the selection of initial content for an SGD. A range of sources is required because, in general, no single individual has the knowledge and experience to generate all the vocal expressions needed in any given environment. [4] For example, parents and therapists might not think to add slang terms, such as "innit". [40]

Previous work has analyzed both vocabulary use of typically developing speakers and word use of AAC users to generate content for new AAC devices. Such processes work well for generating a core set of utterances or vocal expressions but are less effective in situations where a particular vocabulary is needed (for example, terms related directly to a user's interest in horse riding). The term "fringe vocabulary" refers to vocabulary that is specific or unique to the individual's personal interests or needs. A typical technique to develop fringe vocabulary for a device is to conduct interviews with multiple "informants": siblings, parents, teachers, co-workers and other involved persons. [4]

Other researchers, such as Musselwhite and St. Louis suggest that initial vocabulary items should be of high interest to the user, be frequently applicable, have a range of meanings and be pragmatic in functionality. [5] These criteria have been widely used in the AAC field as an ecological check of SGD content. [4]

Automatic content maintenance

AAC user with custom-built device

Beukelman and Mirenda emphasize that vocabulary selection also involves ongoing vocabulary maintenance; [4] however, a difficulty in AAC is that users or their carers must program in any new utterances manually (e.g. names of new friends or personal stories), and there are no existing commercial solutions for automatically adding content. [21] A number of research approaches have attempted to overcome this difficulty; [41] these range from "inferred input", such as generating content based on a log of conversations with a user's friends and family, [42] to data mined from the Internet to find language materials, such as the Webcrawler Project. [43] Moreover, by making use of lifelogging-based approaches, a device's content can be changed based on events that occur to the user during their day. [41] [44] By accessing more of a user's data, more high-quality messages can be generated, at the risk of exposing sensitive user data. [41] For example, by making use of global positioning systems, a device's content can be changed based on geographical location. [45] [46]
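A minimal sketch of the location-based idea, with invented places and pages: when the device's GPS fix falls within a saved place, the matching vocabulary page is brought forward.

```python
import math

# Hypothetical geofenced vocabulary pages.
PLACES = {
    "school": ((52.95, -1.15), "classroom_page"),
    "pool":   ((52.93, -1.18), "swimming_page"),
}

def page_for_location(lat: float, lon: float, radius_km: float = 0.5) -> str:
    """Return the page for the first saved place within range."""
    for _name, ((plat, plon), page) in PLACES.items():
        # Equirectangular approximation; adequate over sub-kilometre distances.
        dx = math.radians(lon - plon) * math.cos(math.radians(plat))
        dy = math.radians(lat - plat)
        if 6371 * math.hypot(dx, dy) <= radius_km:
            return page
    return "home_page"

print(page_for_location(52.9501, -1.1502))   # -> classroom_page
```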

Ethical concerns

Many recently developed SGDs include performance measurement and analysis tools to help monitor the content used by an individual. This raises concerns about privacy, and some argue that the device user should be involved in the decision to monitor use in this way. [47] [48] Similar concerns have been raised regarding the proposals for devices with automatic content generation, [44] and privacy is increasingly a factor in design of SGDs. [40] [49] As AAC devices are designed to be used in all areas of a user's life, there are sensitive legal, social, and technical issues centred on a wide family of personal data management problems that can be found in contexts of AAC use. For example, SGDs may have to be designed so that they support the user's right to delete logs of conversations or content that has been added automatically. [50]

Challenges

Programming of dynamic speech-generating devices is usually done by augmentative communication specialists. Specialists are required to cater to the needs of individual patients, who usually choose the kinds of words and phrases they want; patients use different phrases based on their age, disability, interests, and so on. Content organization is therefore extremely time-consuming. Additionally, SGDs are rarely covered by health insurance companies. As a result, resources are very limited with regards to both funding and staffing. Dr. John Costello of Boston Children's Hospital has been a driving force in soliciting donations to keep these programs running and well-staffed, both within his hospital and in hospitals across the country.


References

  1. Aetna Inc. (2010)
  2. Blischak et al. (2003)
  3. Glennen & Decoste, pp. 88–90
  4. Beukelman & Mirenda, Chapter 2
  5. Musselwhite & St. Louis
  6. University of Washington (2009)
  7. Glennen, pp. 62–63
  8. Jans & Clark (1998), pp. 37–38
  9. Vanderheiden (2002)
  10. Zangari (1994)
  11. Stassen et al., p. 127
  12. PRC History
  13. Toby Churchill (About Us)
  14. Dynavox (Company History)
  15. Hourcade (2004)
  16. Robitaille, pp. 151–153
  17. Stephen Hawking and ALS
  18. Mathy (2000)
  19. Glennen & Decoste, pp. 62–63
  20. Beukelman & Mirenda, pp. 97–101
  21. Higginbotham et al. (2007)
  22. Beukelman & Mirenda
  23. Hochstein et al. (2004)
  24. Beukelman & Mirenda, pp. 84–85
  25. Venkatagiri (1995)
  26. Augmentative Communication, Incorporated
  27. Johansen et al. (2003)
  28. Ward et al. (2000)
  29. Roark et al. (2010)
  30. MacKay (2003), p. 119
  31. Todman (2000)
  32. Hochstein et al. (2003)
  33. Dynavox at www.speechbubble.org.uk
  34. Sundqvist & Rönnberg (2010)
  35. Schlosser, Blischak & Koul (2003)
  36. Beukelman & Mirenda, pp. 105–106
  37. Beukelman & Mirenda, p. 105
  38. Radomski et al. (2007)
  39. Beukelman & Mirenda, p. 83
  40. Wickenden, M. (2011). "Whose Voice is That?: Issues of Identity, Voice and Representation Arising in an Ethnographic Study of the Lives of Disabled Teenagers who use Augmentative and Alternative Communication (AAC)". Disability Studies Quarterly. 31 (4). doi:10.18061/dsq.v31i4.1724.
  41. Reddington & Tintarev (2011)
  42. Ashraf et al. (2002)
  43. Luo et al. (2007)
  44. Black et al. (2010)
  45. Dominowska et al.
  46. Patel & Radhakrishnan
  47. Beukelman & Mirenda, p. 30
  48. Blackstone et al. (2002)
  49. Rackensperger et al. (2005)
  50. Reddington & Coles-Kemp (2011)

Bibliography