Electropalatography

Electropalatography (EPG) is a technique used to monitor contacts between the tongue and hard palate, particularly during articulation and speech. [1]

A custom-made artificial palate is moulded to fit against a speaker's hard palate. The artificial palate contains electrodes exposed to the lingual surface. When contact occurs between the tongue surface and any of the electrodes, particularly between the lateral margins of the tongue and the borders of the hard palate, electronic signals are sent to an external processing unit. [2] EPG provides dynamic real-time visual feedback of the location and timing of tongue contacts with the hard palate.
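
To make the kind of display concrete, the following minimal Python sketch renders one contact frame as a text grid, with 'O' marking a contacted electrode and '.' no contact. The simplified 8×8 electrode layout, the example frame and the render_frame helper are assumptions made for illustration; commercial palates use different electrode counts and arrangements.

    # Minimal sketch of rendering an EPG contact frame as a text grid for
    # visual feedback. The 8x8 layout and the example frame are assumptions
    # made for illustration, not any manufacturer's actual data format.

    ROWS, COLS = 8, 8  # row 0 = front of the palate (alveolar region)

    def render_frame(frame):
        """Render a frame (ROWS lists of 0/1 values) as lines of 'O' and '.'."""
        return "\n".join(
            " ".join("O" if contact else "." for contact in row) for row in frame
        )

    # Example: contact across the alveolar ridge plus the lateral margins,
    # roughly the pattern a complete /t/ closure produces on an EPG display.
    t_closure = [
        [1] * COLS if r == 0 else [1] + [0] * (COLS - 2) + [1] for r in range(ROWS)
    ]

    print(render_frame(t_closure))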

This procedure can record details of tongue activity during speech. It can provide direct articulatory information that children can use in therapy to monitor and improve their articulation patterns. Visual feedback is particularly important to the success of therapy with deaf children.

History

Fig. 1: Example of an electropalate

Electropalatography was originally conceptualized and developed as a tool for phonetics research to improve upon traditional palatography methods. Both military and academic language researchers used early electropalatography tools to obtain accurate information regarding tongue-to-palate contact in a number of foreign languages.

Early EPG devices used direct current electricity to power moisture-activated sensors on the mouthpiece. Mouthpieces (electropalates) originally closely resembled modern dental impression plates. Mouthpieces became more customized over time, which allowed for more accurate research. Fig. 1 shows a typical electropalate from the Reading system.

EPG added significant insight into the academic understanding of articulatory phonetics. In the 1960s and 1970s a number of independent individuals and companies recognized EPG's potential for pedagogical and therapeutic applications. Despite multiple attempts to reverse engineer EPG tools for speech therapy, most companies failed to commercialize EPG effectively. EPG systems remain fairly expensive for speech therapy and phonetics research, though the information they provide is difficult to obtain with other methods of visual feedback on articulation. [3]

In phonetic research

Although much of the development of EPG has been dedicated to clinical applications, it has been used in a number of laboratory phonetic investigations; Stone (1997) identifies three main areas of such research. [4]

Fig. 2: Electropalatography printout

When electropalatography is used for speech research, the data from tongue-palate contact is sampled by the controlling computer at up to 100 frames per second. In the early days (when digital displays were less ubiquitous and more limited), the data was printed out on paper for analysis. An example of the printout can be seen in Fig.2, where the sequence runs from top to bottom, and where the 'O' symbol indicates contact and '.' indicates no contact. The utterance shown is 'catkin' /kæt.kɪn/; the sample numbered 344 shows when the /t/ closure is complete, and at frame 350 there is a complete velar closure. The alveolar closure is released at 351. The articulatory overlap (which is inaudible) is thus clearly shown. [5] Individual frames of EPG contact data may be used to illustrate descriptions of consonant articulations, and this is done by Cruttenden for all the English (RP) consonants. [6] In some research, multiple repetitions may be summed to produce graphical representations of tongue-palate contact in a way that minimizes effects of random variation on single tokens. This was done by Farnetani in studies of Italian and French coarticulation. [7]
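
A hedged sketch of how such printout data might be analysed programmatically is given below, in Python. The frame format (rows of 'O' and '.', front row first) follows the description above, but the row ranges treated as "alveolar" and "velar" are assumptions made for illustration; real palates differ in electrode counts and numbering.

    # Sketch of detecting articulatory overlap in a sequence of EPG frames
    # like those in Fig. 2. The row ranges used for the alveolar and velar
    # regions are illustrative assumptions, not a standard layout.

    def parse_frame(text):
        """Convert a printed frame (lines of 'O' and '.') into boolean rows."""
        return [[ch == "O" for ch in line.strip()] for line in text.strip().splitlines()]

    def complete_closure(frame, rows):
        """True if every electrode in the given rows shows contact."""
        return all(all(frame[r]) for r in rows)

    def overlap_frames(frames, alveolar_rows=(0, 1), velar_rows=(6, 7)):
        """Indices of frames where alveolar and velar closures are both complete,
        i.e. the inaudible overlap seen in /t/ + /k/ sequences such as 'catkin'."""
        return [i for i, f in enumerate(frames)
                if complete_closure(f, alveolar_rows) and complete_closure(f, velar_rows)]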

Providers

Three primary manufacturers of EPG tools exist: CompleteSpeech in the United States, and Articulate Instruments and icSpeech in Great Britain. CompleteSpeech is a private company that specializes in speech-therapy-oriented EPG tools, branded as the SmartPalate System. The SmartPalate System uses a standard-size sensor sheet with 126 sensors that is fitted to individual mouthpiece moulds. [8] Articulate Instruments provides both speech-therapy- and research-oriented EPG mouthpieces, branded as the Reading Palate and the Articulate Palate. Articulate Instruments EPG sensors are placed by hand on individual users' mouthpieces. [9] The icSpeech system, branded as LinguaGraph, [10] also uses the Reading Palate.

In therapy

Electropalatography has been studied in a variety of populations, including children with cleft palate, children with Down syndrome, children who are deaf, children with cochlear implants, children with cerebral palsy, and adults with Parkinson's disease. Therapy has proved successful in the populations tested. [citation needed] Longitudinal studies with large sample sizes are needed to determine the long-term success of therapy.

Related Research Articles

Manner of articulation: configuration and interaction of the articulators when making a speech sound

In articulatory phonetics, the manner of articulation is the configuration and interaction of the articulators when making a speech sound. One parameter of manner is stricture, that is, how closely the speech organs approach one another. Others include those involved in the r-like sounds, and the sibilancy of fricatives.

Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or in the case of sign languages, the equivalent aspects of sign. Phoneticians (linguists who specialize in phonetics) study the physical properties of speech. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved: how humans plan and execute movements to produce speech, how various movements affect the properties of the resulting sound, and how humans convert sound waves into linguistic information. Traditionally, the minimal linguistic unit of phonetics is the phone, a speech sound in a language, which differs from the phonological unit, the phoneme; the phoneme is an abstract categorization of phones.

Place of articulation: place in the mouth where consonants are articulated

In articulatory phonetics, the place of articulation of a consonant is the point of contact where an obstruction occurs in the vocal tract between an articulatory gesture, an active articulator, and a passive location. Along with the manner of articulation and the phonation, it gives the consonant its distinctive sound.

The field of articulatory phonetics is a subfield of phonetics that studies articulation and ways that humans produce speech. Articulatory phoneticians explain how humans produce speech sounds via the interaction of different physiological structures. Generally, articulatory phonetics is concerned with the transformation of aerodynamic energy into acoustic energy. Aerodynamic energy refers to the airflow through the vocal tract. Its potential form is air pressure; its kinetic form is the actual dynamic airflow. Acoustic energy is variation in the air pressure that can be represented as sound waves, which are then perceived by the human auditory system as sound.

The velar ejective is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨kʼ⟩.

Voiced palatal plosive: consonantal sound

The voiced palatal plosive or stop is a type of consonantal sound in some vocal languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ɟ⟩, a barred dotless ⟨j⟩ that was initially created by turning the type for a lowercase letter ⟨f⟩. The equivalent X-SAMPA symbol is J\.

Voiceless alveolo-palatal fricative: consonant used in some oral languages

The voiceless alveolo-palatal sibilant fricative is a type of consonantal sound, used in some spoken languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨ɕ⟩. It is the sibilant equivalent of the voiceless palatal fricative, and as such it can be transcribed in IPA with ⟨ç˖⟩.

The voiceless palatal plosive or stop is a type of consonantal sound used in some vocal languages. The symbol in the International Phonetic Alphabet that represents this sound is ⟨c⟩, and the equivalent X-SAMPA symbol is c.

In phonetics, a trill is a consonantal sound produced by vibrations between the active articulator and passive articulator. Standard Spanish ⟨rr⟩ as in perro, for example, is an alveolar trill.

Alveolo-palatal consonant: type of consonant

In phonetics, alveolo-palatal consonants, sometimes synonymous with pre-palatal consonants, are intermediate in articulation between the coronal and dorsal consonants, or which have simultaneous alveolar and palatal articulation. In the official IPA chart, alveolo-palatals would appear between the retroflex and palatal consonants but for "lack of space". Ladefoged and Maddieson characterize the alveolo-palatals as palatalized postalveolars (palato-alveolars), articulated with the blade of the tongue behind the alveolar ridge and the body of the tongue raised toward the palate, whereas Esling describes them as advanced palatals (pre-palatals), the furthest front of the dorsal consonants, articulated with the body of the tongue approaching the alveolar ridge. These descriptions are essentially equivalent, since the contact includes both the blade and body of the tongue. They are front enough that the fricatives and affricates are sibilants, the only sibilants among the dorsal consonants.

Laminal consonant

A laminal consonant is a phone produced by obstructing the air passage with the blade of the tongue, the flat top front surface just behind the tip, in contact with the upper lip, teeth, alveolar ridge, or possibly as far back as the prepalatal arch; in the last case the contact may also involve parts behind the blade. It contrasts with an apical consonant, produced by creating an obstruction with the tongue apex only. Sometimes laminal is used exclusively for an articulation that involves only the blade of the tongue, with the tip lowered, and apicolaminal for an articulation that involves both the blade of the tongue and the raised tongue tip. The distinction applies only to coronal consonants, which use the front of the tongue.

Apical consonant

An apical consonant is a phone produced by obstructing the air passage with the tip of the tongue (apex) in conjunction with upper articulators from lips to postalveolar, and possibly prepalatal. It contrasts with laminal consonants, which are produced by creating an obstruction with the blade of the tongue, just behind the tip. Sometimes apical is used exclusively for an articulation that involves only the tip of the tongue and apicolaminal for an articulation that involves both the tip and the blade of the tongue. However, the distinction is not always made and the latter one may be called simply apical, especially when describing an apical dental articulation. As there is some laminal contact in the alveolar region, the apicolaminal dental consonants are also labelled as denti-alveolar.

Voiced postalveolar affricate: consonantal sound

The voiced palato-alveolar sibilant affricate, voiced post-alveolar affricate or voiced domed postalveolar sibilant affricate, is a type of consonantal sound, used in some spoken languages. The sound is transcribed in the International Phonetic Alphabet with ⟨d͡ʒ⟩, or in some broad transcriptions ⟨ɟ⟩, and the equivalent X-SAMPA representation is dZ. Alternatives commonly used in linguistic works, particularly in older or American literature, are ⟨ǰ⟩, ⟨ǧ⟩, ⟨ǯ⟩, and ⟨dž⟩. It is familiar to English speakers as the pronunciation of ⟨j⟩ in jump.

Coarticulation in its general sense refers to a situation in which a conceptually isolated speech sound is influenced by, and becomes more like, a preceding or following speech sound. There are two types of coarticulation: anticipatory coarticulation, when a feature or characteristic of a speech sound is anticipated (assumed) during the production of a preceding speech sound; and carryover or perseverative coarticulation, when the effects of a sound are seen during the production of sound(s) that follow. Many models have been developed to account for coarticulation. They include the look-ahead, articulatory syllable, time-locked, window, coproduction and articulatory phonology models.

Laryngeal consonants are consonants with their primary articulation in the larynx. The laryngeal consonants comprise the pharyngeal consonants, the glottal consonants, and for some languages uvular consonants.

Patricia Ann Keating is an American linguist and noted phonetician. She is Distinguished Professor and Chair of the UCLA Linguistics Department.

Catherine Phebe Browman was an American linguist and speech scientist. She received her Ph.D. in linguistics from the University of California, Los Angeles (UCLA) in 1978. Browman was a research scientist at Bell Laboratories in New Jersey (1967–1972). While at Bell Laboratories, she was known for her work on speech synthesis using demisyllables. She later worked as a researcher at Haskins Laboratories in New Haven, Connecticut (1982–1998). She was best known for developing, with Louis Goldstein, the theory of articulatory phonology, a gesture-based approach to phonological and phonetic structure. The theoretical approach is incorporated in a computational model that generates speech from a gesturally specified lexicon. Browman was made an honorary member of the Association for Laboratory Phonology.

In phonetics, the basis of articulation, also known as articulatory setting, is the default position or standard settings of a speaker's organs of articulation when ready to speak. Different languages each have their own basis of articulation, which means that native speakers will share a certain position of tongue, lips, jaw, possibly even uvula or larynx, when preparing to speak. These standard settings enable them to produce the sounds and prosody of their native language more efficiently. Beatrice Honikman suggests thinking of it in terms of having a "gear" for English, another for French, and so on depending on which language is being learned; in the classroom, when working on pronunciation, the first thing the learner must do is to think themselves into the right gear before starting on pronunciation exercises. Jenner (2001) gives a detailed account of how this idea arose and how Honikman has been credited with its invention despite a considerable history of prior study.

Electromagnetic articulography: method to measure the position of parts of the mouth

Electromagnetic articulography (EMA) is a method of measuring the position of parts of the mouth. EMA uses sensor coils placed on the tongue and other parts of the mouth to measure their position and movement over time during speech and swallowing. Induction coils around the head produce an electromagnetic field that creates, or induces, a current in the sensors in the mouth. Because the current induced is inversely proportional to the cube of the distance, a computer is able to analyse the current produced and determine the sensor coil's location in space.
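
Because the induced current falls off with the cube of the distance, the distance can be recovered from a measured current once the system is calibrated. The short Python sketch below illustrates only this inverse-cube relationship; the calibration constant and current values are made-up numbers, and real EMA systems combine several transmitter coils to solve for three-dimensional position.

    # Illustration of the inverse-cube relationship used in EMA:
    # I = k / d**3  implies  d = (k / I) ** (1/3).
    # The calibration constant k and the current values are invented.

    def distance_from_current(current, k=1.0):
        """Estimate transmitter-to-sensor distance (arbitrary units) from induced current."""
        return (k / current) ** (1.0 / 3.0)

    # Halving the distance multiplies the induced current by 8 (= 2**3),
    # so the recovered distance ratio below is 2.0.
    print(distance_from_current(1.0) / distance_from_current(8.0))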

References

  1. Zanuy, Marcos Faúndez (2005). Nonlinear analyses and algorithms for speech processing: International Conference on Non-Linear Speech Processing, NOLISP 2005, Barcelona, Spain, April 19–22, 2005: revised selected papers. Springer Science & Business Media. p. 186. ISBN 3-540-31257-9.
  2. Baken, R.J. (1987). Clinical Measurement of Speech and Voice. Taylor and Francis. p. 442.
  3. "A history of EPG". Articulate Technologies.
  4. Stone, M. (1999) 'Laboratory Techniques for Investigating Speech Articulation', in Hardcastle, W.J. and Laver, J. (eds.) The Handbook of Phonetic Sciences, pp. 28–31, Blackwell.
  5. Hardcastle, W.J. and Roach, P.J. (1979) 'An instrumental investigation of coarticulation in stop consonant sequences', in H. Hollien and P. Hollien (eds.) Current Issues in the Phonetic Sciences, pp. 533–550, Amsterdam, John Benjamins.
  6. Cruttenden, A. (2014). Gimson's Pronunciation of English (8th ed.). Arnold.
  7. Farnetani, E. (1989) 'V-C-V lingual coarticulation and its spatiotemporal domain', in Hardcastle, W.J. and Marchal, A. (eds.) Speech Production and Speech Modelling, NATO ASI Series, 55, Kluwer (ISBN 0-7923-0746-1), pp. 98–100 and 112–116.
  8. Plauche, Tanner. "What is SmartPalate". CompleteSpeech. Archived from the original on 2014-05-14. Retrieved 2014-05-13.
  9. Hardcastle, William. "Making plaster models for EPG palates" (PDF). Articulate Instruments. Retrieved 2014-05-13.
  10. "The LinguaGraph Electropalatography System". icSpeech. Retrieved 2022-02-20.