Phonetotopy

Phonetotopy is the concept that the articulatory and perceptual features of speech sounds are spatially ordered in the brain, in a way analogous to tone (tonotopy), articulation and its somatosensory feedback (somatotopy), or the visual location of an object (retinotopy). A phonetotopic ordering of speech sounds as well as of syllables is assumed to exist at a supramodal level of speech processing (i.e. a phonetic level of speech processing) within the brain.

Tonotopy

In physiology, tonotopy is the spatial arrangement of where sounds of different frequency are processed in the brain. Tones close to each other in terms of frequency are represented in topologically neighbouring regions in the brain. Tonotopic maps are a particular case of topographic organization, similar to retinotopy in the visual system.
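
The cochlear frequency-to-place mapping that underlies tonotopy is often described by the empirical Greenwood function. The sketch below (an illustration, not part of the original article) uses the constants commonly quoted for the human cochlea:

```python
import math

def greenwood_place(frequency_hz, A=165.4, a=2.1, k=0.88):
    """Inverse Greenwood function: fractional distance along the
    basilar membrane (0 = apex, 1 = base) responding most strongly
    to a given frequency, using human-cochlea constants."""
    return math.log10(frequency_hz / A + k) / a

# Nearby frequencies map to nearby places -- the essence of tonotopy.
for f in (250, 500, 1000, 2000, 4000, 8000):
    print(f"{f:5d} Hz -> place {greenwood_place(f):.3f}")
```

Because the mapping is smooth and monotone, tones that are close in frequency land at neighbouring cochlear positions, which is the property the article generalizes to phonetic features.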

Retinotopy

Retinotopy is the mapping of visual input from the retina to neurons, particularly those within the visual stream.

The concept of phonetotopy was introduced by Kröger et al. (2009) on the basis of modeling speech production, speech perception, and speech acquisition. [1] Moreover, fMRI measurements of the ordering of vowels with respect to phonetic features [2] and EEG-array measurements of vowel and syllable articulation [3] support this concept.
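
The kind of topographic ordering attributed to a phonetic map can be sketched with a one-dimensional Kohonen self-organizing map trained on rough first-formant (F1) values of five vowels. The formant numbers, map size, and training parameters below are illustrative assumptions, not values from the cited work:

```python
import numpy as np

# Illustrative first-formant (F1) values in Hz for five vowels
# (rough textbook numbers, chosen for demonstration only).
vowels = {"i": 280.0, "u": 310.0, "e": 440.0, "o": 500.0, "a": 730.0}

rng = np.random.default_rng(0)
n_units = 10
# One-dimensional map of units, weights initialised inside the F1 range.
weights = rng.uniform(250.0, 800.0, size=n_units)

def best_matching_unit(x, w):
    """Index of the map unit whose weight is closest to the input."""
    return int(np.argmin(np.abs(w - x)))

# Kohonen training: the winning unit and its map neighbours move
# toward each input, with shrinking learning rate and neighbourhood.
n_epochs = 200
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)
    radius = max(1.0, 4.0 * (1.0 - epoch / n_epochs))
    for x in vowels.values():
        bmu = best_matching_unit(x, weights)
        for i in range(n_units):
            h = np.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
            weights[i] += lr * h * (x - weights[i])

# After training, acoustically similar vowels tend to activate
# neighbouring map units -- a phonetotopic-style ordering.
bmus = {v: best_matching_unit(f1, weights) for v, f1 in vowels.items()}
print(bmus)
```

The neighbourhood function is what produces the ordering: because a winning unit drags its map neighbours along, units that end up adjacent on the map respond to acoustically adjacent inputs.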

The concept of phonetotopy underpins the concept of distinctive features: phonetically based features of speech sounds (i.e. grounded in both the perceptual and the articulatory domain) that are also linguistically (or phonologically) relevant, and thus are realized in a language-specific way.

Phonology is a branch of linguistics concerned with the systematic organization of sounds in spoken languages and signs in sign languages. It used to be only the study of the systems of phonemes in spoken languages, but it may also cover any linguistic analysis either at a level beneath the word or at all levels of language where sound or signs are structured to convey linguistic meaning.

Related Research Articles

Approximants are speech sounds that involve the articulators approaching each other but not narrowly enough nor with enough articulatory precision to create turbulent airflow. Therefore, approximants fall between fricatives, which do produce a turbulent airstream, and vowels, which produce no turbulence. This class of sounds includes lateral approximants like [l], non-lateral approximants like [ɹ], and semivowels like [j] and [w].


In articulatory phonetics, a consonant is a speech sound that is articulated with complete or partial closure of the vocal tract. Examples are [p], pronounced with the lips; [t], pronounced with the front of the tongue; [k], pronounced with the back of the tongue; [h], pronounced in the throat; [f] and [s], pronounced by forcing air through a narrow channel (fricatives); and [m] and [n], which have air flowing through the nose (nasals). Contrasting with consonants are vowels.

The International Phonetic Alphabet (IPA) is an alphabetic system of phonetic notation based primarily on the Latin alphabet. It was devised by the International Phonetic Association in the late 19th century as a standardized representation of the sounds of spoken language. The IPA is used by lexicographers, foreign language students and teachers, linguists, speech-language pathologists, singers, actors, constructed language creators and translators.

Phonetics is a branch of linguistics that studies the sounds of human speech, or—in the case of sign languages—the equivalent aspects of sign. It is concerned with the physical properties of speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs.

Theoretical linguistics, or general linguistics, is the branch of linguistics which inquires into the nature of language itself and seeks to answer fundamental questions as to what language is; how it works; how universal grammar (UG) as a domain-specific mental organ operates, if it exists at all; what are its unique properties; how does language relate to other cognitive processes, etc. Theoretical linguists are most concerned with constructing models of linguistic knowledge, and ultimately developing a linguistic theory.

Voice or voicing is a term used in phonetics and phonology to characterize speech sounds. Speech sounds can be described as either voiceless or voiced.

In phonetics, vowel reduction is any of various changes in the acoustic quality of vowels, which are related to changes in stress, sonority, duration, loudness, articulation, or position in the word, and which are perceived as "weakening". It most often makes the vowels shorter as well.

In linguistics, a segment is "any discrete unit that can be identified, either physically or auditorily, in the stream of speech". The term is most used in phonetics and phonology to refer to the smallest elements in a language, and this usage can be synonymous with the term phone.

In speech communication, intelligibility is a measure of how comprehensible speech is in given conditions. Intelligibility is affected by the level and quality of the speech signal, the type and level of background noise, reverberation, and, for speech over communication devices, the properties of the communication system. A common standard measurement for the quality of the intelligibility of speech is the Speech Transmission Index (STI). The concept of speech intelligibility is relevant to several fields, including phonetics, human factors, acoustical engineering, and audiometry.

Speech

Speech is human vocal communication using language. Each language uses phonetic combinations of a limited set of vowel and consonant sounds to form the sound of its words, and combines those words, drawn from its lexicon, according to the syntactic constraints that govern their function in a sentence. In speaking, speakers perform many different intentional speech acts (e.g. informing, declaring, asking, persuading, directing) and can use enunciation, intonation, degrees of loudness, tempo, and other non-representational or paralinguistic aspects of vocalization to convey meaning. In their speech, speakers also unintentionally communicate many aspects of their social position, such as sex, age, place of origin, physical and psychological state, education, or experience.

Speech perception is the process by which the sounds of language are heard, interpreted and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.

Articulatory synthesis

Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The shape of the vocal tract can be controlled in a number of ways, usually by modifying the positions of the speech articulators, such as the tongue, jaw, and lips. Speech is created by digitally simulating the flow of air through the representation of the vocal tract.
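
A very reduced form of this idea is the classic Kelly–Lochbaum tube model, in which the vocal tract is approximated by concatenated cylindrical sections and sound waves are scattered at the area discontinuities between them. The area function, boundary reflection values, and all other numbers in this sketch are illustrative assumptions:

```python
def synthesize_tube(source, areas, glottal_refl=0.75, lip_refl=-0.85):
    """Propagate a source signal through a lossless concatenated-tube
    model of the vocal tract (Kelly-Lochbaum scattering) and return
    the pressure radiated at the lips."""
    n = len(areas)
    # Reflection coefficient at each junction between adjacent sections.
    k = [(areas[i] - areas[i + 1]) / (areas[i] + areas[i + 1])
         for i in range(n - 1)]
    fwd = [0.0] * n  # right-going (toward lips) pressure waves
    bwd = [0.0] * n  # left-going (toward glottis) pressure waves
    output = []
    for s in source:
        new_fwd = [0.0] * n
        new_bwd = [0.0] * n
        # Glottis boundary: inject source, partially reflect returning wave.
        new_fwd[0] = s + glottal_refl * bwd[0]
        # Scattering at the interior junctions.
        for i in range(n - 1):
            new_fwd[i + 1] = (1 + k[i]) * fwd[i] - k[i] * bwd[i + 1]
            new_bwd[i] = k[i] * fwd[i] + (1 - k[i]) * bwd[i + 1]
        # Lip boundary: partial (inverting) reflection; the rest radiates.
        new_bwd[n - 1] = lip_refl * fwd[n - 1]
        output.append((1 + lip_refl) * fwd[n - 1])
        fwd, bwd = new_fwd, new_bwd
    return output

# Crude cross-sectional area function (cm^2): a tube narrowed in the
# middle, standing in for a particular articulator configuration.
areas = [2.6, 2.6, 1.0, 0.6, 1.0, 2.6, 3.2, 4.0]
impulse = [1.0] + [0.0] * 499
response = synthesize_tube(impulse, areas)
```

Changing the area function (i.e. the simulated articulator positions) changes the reflection coefficients and hence the resonances of the output, which is the sense in which such models "articulate" different sounds.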

Speech production is the process by which thoughts are translated into speech. This includes the selection of words, the organization of relevant grammatical forms, and then the articulation of the resulting sounds by the motor system using the vocal apparatus. Speech production can be spontaneous such as when a person creates the words of a conversation, reactive such as when they name a picture or read aloud a written word, or imitative, such as in speech repetition. Speech production is not the same as language production since language can also be produced manually by signs.

Apraxia of speech (AOS) is an acquired oral motor speech disorder affecting an individual's ability to translate conscious speech plans into motor plans, which results in limited and difficult speech ability. By the definition of apraxia, AOS affects volitional movement patterns; however, AOS usually also affects automatic speech.

Motor theory of speech perception

The motor theory of speech perception is the hypothesis that people perceive spoken words by identifying the vocal tract gestures with which they are pronounced rather than by identifying the sound patterns that speech generates. It originally claimed that speech perception is done through a specialized module that is innate and human-specific. Though the idea of a module has been qualified in more recent versions of the theory, the idea remains that the role of the speech motor system is not only to produce speech articulations but also to detect them.

Speech repetition

Speech repetition occurs when one individual speaks the sounds they have heard another person pronounce or say. It requires the person repeating the utterance to map the sounds they hear from the other person's pronunciation onto similar places and manners of articulation in their own vocal tract.

Neurocomputational speech processing is the computer simulation of speech production and speech perception by reference to the natural neuronal processes of speech production and speech perception as they occur in the human nervous system. This topic is based on neuroscience and computational neuroscience.

Speech acquisition focuses on the development of spoken language by a child. Speech consists of an organized set of sounds or phonemes that are used to convey meaning while language is an arbitrary association of symbols used according to prescribed rules to convey meaning. While grammatical and syntactic learning can be seen as a part of language acquisition, speech acquisition focuses on the development of speech perception and speech production over the first years of a child's lifetime. There are several models to explain the norms of speech sound or phoneme acquisition in children.

Bernd J. Kröger is a German phonetician and professor at RWTH Aachen University. He is known for his contributions in the field of neurocomputational speech processing, in particular the ACT model.

References

  1. Kröger BJ, Kannampuzha J, Lowit A, Neuschaefer-Rube C (2009) Phonetotopy within a neurocomputational model of speech production and speech acquisition. In: Fuchs S, Loevenbruck H, Pape D, Perrier P (eds.) Some aspects of speech and the brain. (Peter Lang, Berlin) pp. 59-90
  2. Obleser J, Boecker H, Drzezga A, Haslinger B, Hennenlotter A, Roettinger M, Eulitz C, Rauschecker JP (2006) Vowel sound extraction in anterior superior temporal cortex. Human Brain Mapping 27, 562–571
  3. Bouchard KE, Mesgarani N, Johnson K, Chang EF (2013) Functional organization of human sensorimotor cortex for speech articulation. Nature 495, 327–332