Georg Heike

Georg Heike (German: [ˈɡeːɔʁk ˈhaɪkə]; born July 21, 1933) is a German phonetician and linguist.[1]


He studied musicology, phonetics, communication science and psychology at the University of Bonn and completed his doctoral thesis in 1960 at the Department of Phonetics and Communication Research headed by Prof. Dr. Werner Meyer-Eppler. He was a senior scientist at Marburg University before moving to the University of Cologne, where he headed the Department of Phonetics from 1969 to 1998. His research topics include phonetics, phonology, articulatory synthesis, and musicology.

Georg Heike has also been intensively involved in composing and performing contemporary music, violin making, and musical acoustics.

Related Research Articles

Phonology is a branch of linguistics that studies how languages or dialects systematically organize their sounds. The term also refers to the sound system of any particular language variety. At one time, the study of phonology related only to the systems of phonemes in spoken languages; it has since been extended to any level of language at which sound or signs are considered to be structured for conveying linguistic meaning.

Ripuarian language

Ripuarian is a German dialect group, part of the West Central German language group. Together with Moselle Franconian, which includes the Luxembourgish language, Ripuarian belongs to the larger Central Franconian dialect family and also to the Rhinelandic linguistic continuum with the Low Franconian languages.

Sj-sound

The sj-sound is a voiceless fricative phoneme found in the sound system of most dialects of Swedish. It has a variety of realisations, whose precise phonetic characterisation is a matter of debate, but which usually feature distinct labialization. The sound is represented in Swedish orthography by a number of spellings, including the digraph ⟨sj⟩, from which the common Swedish name for the sound is derived, as well as ⟨stj⟩, ⟨skj⟩, and ⟨sk⟩. The sound should not be confused with the Swedish tj-sound, often spelled ⟨tj⟩, ⟨kj⟩, or ⟨k⟩.

Kenneth N. Stevens

Kenneth Noble Stevens was the Clarence J. LeBel Professor of Electrical Engineering and Computer Science, and Professor of Health Sciences and Technology at the Research Laboratory of Electronics at MIT. Stevens was head of the Speech Communication Group in MIT's Research Laboratory of Electronics (RLE), and was one of the world's leading scientists in acoustic phonetics.

Werner Meyer-Eppler was a Belgian-born German physicist, experimental acoustician, phonetician and information theorist.

Articulatory synthesis

Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The shape of the vocal tract can be controlled in a number of ways, usually by modifying the position of the speech articulators, such as the tongue, jaw, and lips. Speech is created by digitally simulating the flow of air through the representation of the vocal tract.
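
The tube-model idea behind this can be illustrated with a minimal Kelly-Lochbaum digital waveguide: the vocal tract is approximated as a chain of cylindrical sections whose cross-sectional areas determine the reflection coefficients at the junctions between them. The area values and boundary reflection coefficients below are illustrative assumptions, not measurements; this is a sketch of the technique, not a production synthesizer.

```python
def kelly_lochbaum(areas, excitation, glottal_refl=0.75, lip_refl=-0.85):
    """Minimal Kelly-Lochbaum waveguide: pressure waves scattering through
    a chain of cylindrical tube sections, one sample of delay per section."""
    n = len(areas)
    # Reflection coefficient at each junction between adjacent sections.
    ks = [(areas[i + 1] - areas[i]) / (areas[i + 1] + areas[i])
          for i in range(n - 1)]
    fwd = [0.0] * n  # right-going wave arriving at each section's right end
    bwd = [0.0] * n  # left-going wave arriving at each section's left end
    output = []
    for x in excitation:
        nf = [0.0] * n
        nb = [0.0] * n
        # Glottis end: inject the excitation plus a partial reflection.
        nf[0] = x + glottal_refl * bwd[0]
        # Scattering at each interior junction.
        for i, k in enumerate(ks):
            nf[i + 1] = (1 + k) * fwd[i] - k * bwd[i + 1]
            nb[i] = k * fwd[i] + (1 - k) * bwd[i + 1]
        # Lip end: part of the wave radiates out, part reflects back.
        output.append((1 + lip_refl) * fwd[n - 1])
        nb[n - 1] = lip_refl * fwd[n - 1]
        fwd, bwd = nf, nb  # one-sample propagation through each section
    return output

# Illustrative 8-section area function (cm^2), loosely open-vowel shaped.
AREAS = [2.6, 0.8, 0.65, 1.3, 2.6, 5.2, 8.0, 10.5]
# An impulse train standing in for the glottal pulses.
pulses = [1.0 if t % 80 == 0 else 0.0 for t in range(400)]
signal = kelly_lochbaum(AREAS, pulses)
```

Formant-like resonances emerge purely from the scattering pattern; a real articulatory synthesizer would instead drive a time-varying area function from modelled articulator positions and treat losses and lip radiation more carefully.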

Articulatory phonology is a linguistic theory originally proposed in 1986 by Catherine Browman of Haskins Laboratories and Louis M. Goldstein of Yale University and Haskins. The theory identifies theoretical discrepancies between phonetics and phonology and aims to unify the two by treating them as low- and high-dimensional descriptions of a single system.

Katherine Safford Harris is a noted psychologist and speech scientist. She is Distinguished Professor Emerita in Speech and Hearing at the CUNY Graduate Center and a member of the Board of Directors of Haskins Laboratories. She is also the former President of the Acoustical Society of America and Vice President of Haskins Laboratories.

Louis M. Goldstein is an American linguist and cognitive scientist. He was previously a professor and chair of the Department of Linguistics and a professor of psychology at Yale University, and is now a professor in the Department of Linguistics at the University of Southern California. He is a senior scientist at Haskins Laboratories in New Haven, Connecticut and a founding member of the Association for Laboratory Phonology.

Catherine P. Browman (1945–2008) was an American linguist and speech scientist. She was a research scientist at Bell Laboratories in New Jersey and Haskins Laboratories in New Haven, Connecticut, from which she retired due to illness. While at Bell Laboratories, she was known for her work on speech synthesis using demisyllables. She was best known for development, with Louis Goldstein, of the theory of articulatory phonology, a gesture-based approach to phonological and phonetic structure. The theoretical approach is incorporated in a computational model that generates speech from a gesturally-specified lexicon. She received her Ph.D. in linguistics from UCLA in 1978 and was a founding member of the Association for Laboratory Phonology.

Elliot Saltzman is an American psychologist and speech scientist. He is a professor in the Department of Physical Therapy at Boston University and a Senior Scientist at Haskins Laboratories in New Haven, Connecticut. He is best known for his development, with J. A. Scott Kelso, of "task dynamics." He is also known for his contributions to the development of a gestural-computational model at Haskins Laboratories that combines task dynamics with articulatory phonology and articulatory synthesis. His research interests include the application of theories and methods of nonlinear dynamics and complexity theory to understanding the dynamical and biological bases of sensorimotor coordination and control. He is the co-founder, with Philip Rubin, of the IS group.

Gnuspeech is an extensible text-to-speech computer software package that produces artificial speech output based on real-time articulatory speech synthesis by rules. That is, it converts text strings into phonetic descriptions, aided by a pronouncing dictionary, letter-to-sound rules, and rhythm and intonation models; transforms the phonetic descriptions into parameters for a low-level articulatory speech synthesizer; uses these to drive an articulatory model of the human vocal tract, producing an output suitable for the normal sound output devices used by various computer operating systems; and, for adult speech, does all of this at the same rate as, or faster than, the speech is spoken.
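
The staged pipeline described above (text → phonetic description via a pronouncing dictionary with letter-to-sound fallback → parameter frames for a low-level synthesizer) can be sketched as follows. The dictionary entries, fallback rules, phone symbols, and parameter values are invented for illustration and do not reflect Gnuspeech's actual data or API.

```python
# Toy pronouncing dictionary (invented entries, not Gnuspeech's data).
PRONOUNCING_DICT = {
    "speech": ["s", "p", "ii", "ch"],
    "rules":  ["r", "uu", "l", "z"],
}

# Crude single-letter fallback "rules" for words missing from the dictionary.
LETTER_TO_SOUND = {c: c for c in "bdfghjklmnprstvwz"}
LETTER_TO_SOUND.update({"a": "aa", "e": "eh", "i": "ih", "o": "oh", "u": "uh",
                        "c": "k", "q": "k", "x": "ks", "y": "ih"})

def text_to_phones(text):
    """Stage 1: text string -> phonetic description.
    Dictionary lookup first, letter-to-sound fallback for unknown words."""
    phones = []
    for word in text.lower().split():
        word = word.strip(".,;:!?")
        if word in PRONOUNCING_DICT:
            phones.extend(PRONOUNCING_DICT[word])
        else:
            phones.extend(LETTER_TO_SOUND.get(c, "") for c in word)
    return [p for p in phones if p]

def phones_to_parameters(phones, frame_ms=10, phone_ms=80):
    """Stage 2: phonetic description -> per-frame parameter vectors that
    a low-level articulatory synthesizer would consume (values invented)."""
    frames = []
    for phone in phones:
        target = {"phone": phone,
                  "pitch_hz": 120.0,
                  "open_tract": phone in {"aa", "eh", "ih", "oh", "uh",
                                          "ii", "uu"}}
        frames.extend([target] * (phone_ms // frame_ms))
    return frames

frames = phones_to_parameters(text_to_phones("speech rules okay"))
```

The real system adds rhythm and intonation models at stage 1 and drives a vocal-tract model rather than emitting static targets, but the division of labour between the stages is the same.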

Bergish dialects

Bergish is a collective name for a group of West Germanic dialects spoken in the Bergisches Land region east of the Rhine in western Germany. The name is commonly used among its speakers but is of little linguistic relevance, because the varieties belong to several quite distinct groups within the continental West Germanic dialect continuum. As is usual within a dialect continuum, neighbouring varieties have a high degree of mutual intelligibility and share many similarities, while more distant ones may be considerably different and completely mutually unintelligible. Speakers therefore usually perceive the differences in their immediate neighbourhood as merely dialectal oddities of an otherwise larger, solid group or language that they are all part of, such as "Bergish". Bergish itself is commonly classified as a form of "Rhinelandic", which in turn is part of German. Bergish in the strict sense, located in the north-west of the region, is the easternmost part of the Limburgish language group, which extends far beyond the rivers Rhine and Maas into the Netherlands and Belgium. It is also part of the East Limburgish group, that is, the varieties of Limburgish spoken in Germany. These combine Low Franconian properties with some Ripuarian properties and are seen as transitional dialects between the two in the dialect continuum of Dutch and German. The Bergish varieties in the northern areas are also counted as part of Meuse-Rhenish, which refers exclusively to the Low Franconian varieties, that is, Limburgish including Bergish.

Franz Nikolaus Finck

Franz Nikolaus Finck was a German philologist, born in Krefeld. He was a professor of General Linguistics at the University of Berlin.

Neurocomputational speech processing is the computer simulation of speech production and speech perception with reference to the natural neuronal processes of speech production and perception as they occur in the human nervous system. The topic is based on neuroscience and computational neuroscience.

Itzgründisch dialect

Itzgründisch is a Main Franconian dialect spoken in the eponymous Itz valley and the valleys of its tributaries Grümpen, Effelder, Röthen/Röden, Lauter, Füllbach and Rodach, the valleys of the Neubrunn, Biber and the upper Werra, and in the valley of the Steinach. In this small language area, which extends from the Itzgrund in Upper Franconia to the southern side of the Thuringian Highlands, “Fränkische” still exists in its original form. Because the area was remote and isolated by the end of the 19th century, and again later during the division of Germany, the dialect has kept many linguistic features to this day. The Itzgründisch dialect was first studied scientifically in the middle of the 19th century by the linguist August Schleicher.

Bernd J. Kröger is a German phonetician and professor at RWTH Aachen University. He is known for his contributions in the field of neurocomputational speech processing, in particular the ACT model.

Klaus J. Kohler is one of the leading German phoneticians.

Werner Georg Kümmel was a German New Testament scholar and professor at the University of Marburg.


  1. "Georg Heike: Kurzbiographie" (in German). Retrieved 2011-12-26.