Holger Mitterer

Born: 4 January 1973 (age 51)
Education: University of Maastricht (PhD); University of Bielefeld; University of Leiden
Known for: computational architecture of spoken-word recognition; applied psycholinguistics
Awards: Excellence Initiative (2017); DFG grant
Scientific career
Fields: Cognitive neuroscience, linguistics, experimental psychology
Institutions: University of Malta; Max Planck Institute for Psycholinguistics; University of Tübingen
Thesis: Understanding 'gardem bench': Studies on the perception of assimilated word forms (2003)
Academic advisors: Odmar Neumann, Alexander van der Heijden
Website: http://www.holgermitterer.eu/

Holger Mitterer (born 4 January 1973) is a German cognitive scientist and linguist, and an associate professor at the University of Malta. He is known for his work on applied psycholinguistics.[1][2][3] Mitterer is co-editor-in-chief, with Cynthia Clopper, of Language and Speech.[4][5][6] He is a former associate editor of Laboratory Phonology (2013–2018) and a member of the editorial board of the Journal of Phonetics.

Related Research Articles

Phonetics is a branch of linguistics that studies how humans produce and perceive sounds or, in the case of sign languages, the equivalent aspects of sign. Linguists who specialize in studying the physical properties of speech are phoneticians. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved: how humans plan and execute movements to produce speech (articulatory phonetics), how various movements affect the properties of the resulting sound (acoustic phonetics), and how humans convert sound waves to linguistic information (auditory phonetics). Traditionally, the minimal linguistic unit of phonetics is the phone, a speech sound in a language, which differs from the phonological unit, the phoneme; the phoneme is an abstract categorization of phones and is also defined as the smallest unit that discerns meaning between sounds in any given language.

Phonology is the branch of linguistics that studies how languages systematically organize their phones or, for sign languages, their constituent parts of signs. The term can also refer specifically to the sound or sign system of a particular language variety. At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages, but it may now relate to any linguistic analysis either at a level beneath the word (such as the syllable, onset and rime, articulatory gestures, articulatory features, and the mora) or at all levels of language where sound or signs are structured to convey linguistic meaning.

An affricate is a consonant that begins as a stop and releases as a fricative, generally with the same place of articulation. It is often difficult to decide whether a stop and fricative form a single phoneme or a consonant pair. English has two affricate phonemes, /tʃ/ and /dʒ/, often spelled ch and j, respectively.

In sociolinguistics, an accent is a way of pronouncing a language that is distinctive to a country, area, social class, or individual. An accent may be identified with the locality in which its speakers reside, the socioeconomic status of its speakers, their ethnicity, their caste or social class, or influence from their first language.

Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of the range of lip reading vary, with some figures as low as 30% of words, because lip reading relies on context, language knowledge, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.

Voiced alveolar and postalveolar approximants

The voiced alveolar approximant is a type of consonantal sound used in some spoken languages. The symbol in the International Phonetic Alphabet that represents the alveolar and postalveolar approximants is ⟨ɹ⟩, a lowercase letter r rotated 180 degrees. The equivalent X-SAMPA symbol is r\.
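Since the paragraph gives both IPA and X-SAMPA notation, the correspondence can be expressed as a simple lookup. The Python sketch below is a hand-picked excerpt for the sounds discussed here, not a complete X-SAMPA chart.

```python
# Illustrative mapping from X-SAMPA to IPA for the approximants discussed
# above; a hand-picked excerpt, not a complete X-SAMPA chart.
XSAMPA_TO_IPA = {
    "r\\": "ɹ",    # voiced alveolar approximant (X-SAMPA r\)
    "r\\_-": "ɹ̠",  # retracted (postalveolar) variant, via the _- diacritic
}

print(XSAMPA_TO_IPA["r\\"])  # -> ɹ
```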

Language processing in the brain

In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability: it is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.

The language module or language faculty is a hypothetical structure in the human brain which is thought to contain innate capacities for language, originally posited by Noam Chomsky. There is ongoing research into brain modularity in the fields of cognitive science and neuroscience, although the current idea is much weaker than what was proposed by Chomsky and Jerry Fodor in the 1980s. In today's terminology, 'modularity' refers to specialisation: language processing is specialised in the brain to the extent that it occurs partially in different areas than other types of information processing such as visual input. The current view is, then, that language is neither compartmentalised nor based on general principles of processing. It is modular to the extent that it constitutes a specific cognitive skill or area in cognition.

The phonology of Japanese features a phonemic inventory including five vowels and 12 or more consonants. The phonotactics are relatively simple, allowing for few consonant clusters. Japanese phonology has been affected by the presence of several layers of vocabulary in the language: in addition to native Japanese vocabulary, Japanese has a large amount of Chinese-based vocabulary and loanwords from other languages.

Speech

Speech is human vocal communication using spoken language. Each language uses phonetic combinations of vowel and consonant sounds that form the sound of its words, and uses those words, in their semantic character, as items in its lexicon according to the syntactic constraints that govern their function in a sentence. In speaking, speakers perform many different intentional speech acts (e.g., informing, declaring, asking, persuading, and directing) and can use enunciation, intonation, degrees of loudness, tempo, and other non-representational or paralinguistic aspects of vocalization to convey meaning. In their speech, speakers also unintentionally communicate many aspects of their social position, such as sex, age, place of origin, physical and psychological states, education, experience, and the like.

Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.

TRACE is a connectionist model of speech perception, proposed by James McClelland and Jeffrey Elman in 1986. It is based on a structure called "the TRACE," a dynamic processing structure made up of a network of units, which performs as the system's working memory as well as the perceptual processing mechanism. TRACE was made into a working computer program for running perceptual simulations. These simulations are predictions about how a human mind/brain processes speech sounds and words as they are heard in real time.
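For readers unfamiliar with connectionist models, the sketch below illustrates the kind of interactive-activation dynamics such models rely on: units receive bottom-up excitation from the input and inhibit their within-layer competitors until one candidate dominates. It is a minimal Python illustration, not McClelland and Elman's implementation; the word units, evidence values, and weights are invented for the example.

```python
# A minimal sketch of interactive-activation dynamics of the kind used in
# TRACE-style models. This is NOT McClelland and Elman's implementation:
# the word layer, evidence values, and weights below are illustrative.

DECAY = 0.1     # passive decay toward the resting level each cycle
EXCITE = 0.4    # bottom-up excitation weight (illustrative)
INHIBIT = 0.2   # within-layer lateral inhibition weight (illustrative)

def update_layer(activations, bottom_up):
    """One update cycle: bottom-up excitation, lateral inhibition, decay."""
    total = sum(max(a, 0.0) for a in activations.values())
    new = {}
    for unit, act in activations.items():
        excitation = EXCITE * bottom_up.get(unit, 0.0)
        # Each unit is inhibited by its competitors' positive activation.
        inhibition = INHIBIT * (total - max(act, 0.0))
        new[unit] = max(0.0, act + excitation - inhibition - DECAY * act)
    return new

# Word units competing over ambiguous input that slightly favours "plug".
words = {"plug": 0.0, "plus": 0.0, "blood": 0.0}
evidence = {"plug": 0.6, "plus": 0.5, "blood": 0.1}

for _ in range(20):
    words = update_layer(words, evidence)

print(max(words, key=words.get))  # -> plug
```

Over repeated cycles the unit with the strongest evidence suppresses its competitors, which is how such models account for lexical competition during real-time recognition.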

Phonological development refers to how children learn to organize sounds into meaning or language (phonology) during their stages of growth.

Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech at a delay from the onset of hearing the phrase. The time between hearing the speech and responding indicates how long the brain takes to process and produce speech. The task instructs participants to shadow speech, which generates an intent to reproduce the phrase while motor regions in the brain unconsciously process the syntax and semantics of the words spoken. Words repeated during the shadowing task also tend to imitate the parlance of the shadowed speech.
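As a toy illustration of the measurement itself, shadowing latency can be computed as the lag between stimulus and response onsets; the timestamps below are invented.

```python
# A toy illustration of the shadowing-latency measure: the lag between the
# onset of each heard word and the onset of the subject's repetition.
# All timestamps below are invented for illustration.

stimulus_onsets = [0.00, 0.62, 1.31, 1.95]  # seconds: heard words
response_onsets = [0.21, 0.84, 1.58, 2.19]  # seconds: shadowed repetitions

latencies = [r - s for s, r in zip(stimulus_onsets, response_onsets)]
mean_latency = sum(latencies) / len(latencies)

print([round(l, 2) for l in latencies])  # [0.21, 0.22, 0.27, 0.24]
print(round(mean_latency, 3))            # 0.235
```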

Speech repetition

Speech repetition occurs when individuals speak the sounds that they have heard another person pronounce or say; in other words, one individual reproduces aloud the spoken vocalizations made by another. Speech repetition requires the person repeating the utterance to map the sounds they hear from the other person's oral pronunciation onto similar places and manners of articulation in their own vocal tract.

Phonemic contrast refers to a minimal phonetic difference, that is, a small difference in speech sounds, that makes a difference in how a sound is perceived by listeners and can therefore lead to different mental lexical entries for words. For example, whether a sound is voiced or unvoiced matters for how it is perceived in many languages, such that changing this phonetic feature can yield a different word; see Phoneme. An example of a phonemic contrast in English is the difference between leak and league: the minimal difference of voicing between [k] and [g] leads to the two utterances being perceived as different words. On the other hand, an example that is not a phonemic contrast in English is the difference in vowel length between [siːt] and [siːːt]: because vowel length is not contrastive in English, those two forms would be perceived as different pronunciations of the same word, seat.
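As a small mechanical illustration, a candidate minimal pair can be detected from phonemic transcriptions with a one-line comparison. The transcriptions below are simplified, and the code checks only the formal difference, not whether the language treats it as contrastive.

```python
# A toy check for minimal pairs: two words whose phonemic transcriptions
# differ in exactly one segment. The transcriptions are simplified, and
# whether the differing segments are contrastive phonemes in a given
# language is a separate (language-specific) question.

def is_minimal_pair(a, b):
    """True if the phoneme sequences differ in exactly one position."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

leak = ["l", "i:", "k"]
league = ["l", "i:", "g"]  # voicing contrast [k]/[g] yields a new word

print(is_minimal_pair(leak, league))  # -> True
```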

Patrice (Pam) Speeter Beddor is the John C. Catford Collegiate Professor of Linguistics at the University of Michigan, focusing on phonology and phonetics. Her research has dealt with phonetics, including work on coarticulation, speech perception, and the relationship between perception and production.

Alejandrina Cristia is an Argentinian linguist known for research on infant-directed speech, daylong audio recordings of children's diverse linguistic environments, and language acquisition across cultures. Cristia is interested in how phonetic and phonological representations are formed during infancy and their interactions with other linguistic formats and cognitive mechanisms. She holds the position of Research Director of the Laboratoire de Sciences Cognitives et Psycholinguistique (LSCP) at the Paris Sciences et Lettres University.

Mirjam Ernestus is professor of psycholinguistics and scientific director of the Centre for Language Studies at Radboud University Nijmegen in the Netherlands.

Georgia Zellou is an American linguistics professor at the University of California, Davis. Her research focuses on topics in phonetics and laboratory phonology.

References

  1. "Filmuntertitel verbessern das Sprachenlernen" (PDF). SpektrumDirekt.
  2. "Foreign Subtitles Improve Speech Perception". ScienceDaily .
  3. "DVDs: Film-Untertitel helfen beim Sprachenlernen". Der Spiegel (in German). 11 November 2009.
  4. "Holger Mitterer". Google Scholar.
  5. Coleman, John (2012). "Review of Laboratory phonology 10". Phonology. 29 (2): 331–336. doi:10.1017/S0952675712000140. ISSN   0952-6757. JSTOR   23325579. S2CID   62551704.
  6. "Effects of native-language on compensation for coarticulation" (PDF).{{cite journal}}: Cite journal requires |journal= (help)