Phonology

Phonology is the branch of linguistics that studies how languages systematically organize their phones or, for sign languages, their constituent parts of signs. The term can also refer specifically to the sound or sign system of a particular language variety. At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages, but may now relate to any linguistic analysis either:

  1. at a level beneath the word (including syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.), or
  2. all levels of language in which sound or signs are structured to convey linguistic meaning. [1]

Sign languages have a phonological system equivalent to the system of sounds in spoken languages. The building blocks of signs are specifications for movement, location, and handshape. [2] At first, a separate terminology was used for the study of sign phonology ("chereme" instead of "phoneme", etc.), but the concepts are now considered to apply universally to all human languages.

Terminology

The word "phonology" (as in "phonology of English") can refer either to the field of study or to the phonological system of a given language. [3] This is one of the fundamental systems that a language is considered to comprise, like its syntax, its morphology and its lexicon. The word phonology comes from Ancient Greek φωνή, phōnḗ, 'voice, sound', and the suffix -logy (which is from Greek λόγος, lógos, 'word, speech, subject of discussion').

Phonology is typically distinguished from phonetics, which concerns the physical production, acoustic transmission and perception of the sounds or signs of language. [4] [5] Phonology describes the way these function within a given language or across languages to encode meaning. For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, although in some theories establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. The distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology overlap with phonetics in descriptive disciplines such as psycholinguistics and speech perception, giving rise to specific areas such as articulatory phonology and laboratory phonology.

Definitions of the field of phonology vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Ferdinand de Saussure's distinction between langue and parole). [6] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, and in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items." [4] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying that use. [7]

History

Early evidence for a systematic study of the sounds in a language appears in the 4th century BCE Ashtadhyayi, a Sanskrit grammar composed by Pāṇini. In particular, the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what may be considered a list of the phonemes of Sanskrit, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics.

Ibn Jinni of Mosul, a pioneer in phonology, wrote prolifically in the 10th century on Arabic morphology and phonology in works such as Kitāb Al-Munṣif, Kitāb Al-Muḥtasab, and Kitāb Al-Khaṣāʾiṣ. [8]

The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay, [9] :17 who (together with his students Mikołaj Kruszewski and Lev Shcherba in the Kazan School) shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the 24 May meeting of the Société de Linguistique de Paris, [10] Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut. [11] Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology) and may have had an influence on the work of Saussure, according to E. F. K. Koerner. [12]

Nikolai Trubetzkoy, 1920s

An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology), [6] published posthumously in 1939, is among the most important works in the field from that period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although the concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, one of the most prominent linguists of the 20th century. Louis Hjelmslev's glossematics also contributed, with a focus on linguistic structure independent of phonetic realization or semantics. [9] :175

In 1968, Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In that view, phonological representations are sequences of segments made up of distinctive features. The features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are from a universally fixed set and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems.
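
The derivational architecture can be illustrated with a small sketch. The following Python fragment (the rules, symbols and forms are invented for illustration, not taken from SPE) applies a short list of ordered rewrite rules to an underlying form; because the rules are ordered, the output of each rule is the input to the next.

```python
import re

NASAL_TILDE = "\u0303"  # combining tilde, used here to mark a nasalized vowel

# Each rule rewrites the string form of a representation. Rules apply in a
# fixed order, so the output of one rule is the input to the next.
# The rules and forms are hypothetical and greatly simplified.
RULES = [
    # Aspirate a word-initial voiceless stop.
    ("aspiration", lambda form: re.sub(r"^([ptk])", r"\1ʰ", form)),
    # Nasalize a vowel immediately before a nasal consonant.
    ("nasalization",
     lambda form: re.sub(r"([aeiou])([mn])", r"\1" + NASAL_TILDE + r"\2", form)),
]

def derive(underlying: str) -> str:
    """Apply the ordered rules to an underlying form, printing each step."""
    form = underlying
    for name, rule in RULES:
        new_form = rule(form)
        if new_form != form:
            print(f"{name}: {form} -> {new_form}")
        form = new_form
    return form

derive("pan")  # toy derivation: underlying /pan/ -> surface [pʰãn]
```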

Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and, more explicitly, in 1979. In this view, phonology is based on a set of universal phonological processes that interact with one another; which ones are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously, but the output of one process may be the input to another. The second most prominent natural phonologist is Patricia Donegan, Stampe's wife; there are many natural phonologists in Europe and a few in the US, such as Geoffrey Nathan. The principles of natural phonology were extended to morphology by Wolfgang U. Dressler, who founded natural morphology.

In 1976, John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments (phonemes or feature combinations), but rather as involving several parallel sequences of features that reside on multiple tiers. Autosegmental phonology later evolved into feature geometry, which became the standard theory of representation for theories of the organization of phonology as different as lexical phonology and optimality theory.
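
The idea of parallel tiers can be pictured with a small data structure. The sketch below is an illustrative assumption (the class name, the tone melody and the example word are invented), loosely modeled on the common convention of associating tones to segments one-to-one from left to right and then spreading the last tone.

```python
from dataclasses import dataclass, field

@dataclass
class AutosegmentalWord:
    """Toy two-tier representation: tones and segments sit on separate tiers
    and are linked by association lines (pairs of tier positions)."""
    segments: list[str]        # segmental tier (here just syllable nuclei)
    tones: list[str]           # tonal tier, e.g. "H" (high) and "L" (low)
    links: list[tuple[int, int]] = field(default_factory=list)  # (tone index, segment index)

    def associate_left_to_right(self) -> None:
        """Link tones to segments one-to-one, left to right, then spread the
        last tone onto any remaining segments."""
        self.links = [(i, i) for i in range(min(len(self.tones), len(self.segments)))]
        for j in range(len(self.tones), len(self.segments)):
            self.links.append((len(self.tones) - 1, j))

# Invented example: a three-vowel word carrying a two-tone (H L) melody.
word = AutosegmentalWord(segments=["a", "i", "u"], tones=["H", "L"])
word.associate_left_to_right()
print(word.links)  # [(0, 0), (1, 1), (1, 2)]: the L tone spreads onto the last vowel
```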

Government phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, but parameters may sometimes come into conflict. Prominent figures in this field include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris.

In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory, an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance; a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince and has become a dominant trend in phonology. The appeal to phonetic grounding of constraints and representational elements (e.g. features) in various approaches has been criticized by proponents of "substance-free phonology", especially by Mark Hale and Charles Reiss. [13] [14]
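
The constraint-ranking idea can be sketched schematically. In the fragment below, the constraint definitions, the candidate forms and the ranking are simplified inventions; the point is only that candidates are compared by their violation profiles under a strict ranking, so a lower-ranked constraint decides the outcome only when all higher-ranked constraints tie.

```python
# Hypothetical constraints: each returns a violation count for a candidate.
def no_coda(candidate: str) -> int:
    """Markedness: penalize a word-final consonant (a crude stand-in for NoCoda)."""
    return 1 if candidate and candidate[-1] not in "aeiou" else 0

def max_io(candidate: str, underlying: str) -> int:
    """Faithfulness: penalize every underlying segment missing from the candidate."""
    return max(len(underlying) - len(candidate), 0)

# Strict ranking, highest-ranked constraint first; all wrapped to one signature.
RANKING = [
    lambda cand, ur: no_coda(cand),   # NoCoda outranks Max-IO in this toy grammar
    lambda cand, ur: max_io(cand, ur),
]

def evaluate(underlying: str, candidates: list[str]) -> str:
    """Pick the candidate whose violation profile is lexicographically best,
    i.e. compared constraint by constraint from the top of the ranking."""
    def profile(cand: str):
        return tuple(constraint(cand, underlying) for constraint in RANKING)
    return min(candidates, key=profile)

# Toy input /pat/ with two candidates: faithful [pat] or coda-deleting [pa].
print(evaluate("pat", ["pat", "pa"]))  # -> "pa": deleting the coda wins under this ranking
```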

An integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns was initiated with Evolutionary Phonology in recent years. [15]

Analysis of phonemes

An important part of traditional, pre-generative schools of phonology is studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]) while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones, which cannot form minimal pairs) of the same phonological category, that is, of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes. For example, in Thai, Bengali, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words can have different meanings but with the only difference in pronunciation being that one has an aspirated sound where the other has an unaspirated one).

The vowels of modern (Standard) Arabic (left) and (Israeli) Hebrew (right) from the phonemic point of view. Note the intersection of the two circles—the distinction between short a, i and u is made by both speakers, but Arabic lacks the mid articulation of short vowels, while Hebrew lacks the distinction of vowel length.

The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonetic point of view. The two circles are totally separate—none of the vowel-sounds made by speakers of one language is made by speakers of the other.

Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.
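
A minimal sketch of the minimal-pair criterion, using a tiny invented word list (the transcriptions and the helper function are hypothetical), searches transcriptions for pairs of words that differ in exactly one phone: finding such a pair for two phones is evidence that they belong to separate phonemes, while finding none is consistent with, though it does not prove, an allophonic analysis.

```python
from itertools import combinations

# Toy "corpus" of phonetic transcriptions (invented; aspiration marked with ʰ).
lexicon = {
    ("pʰ", "ɑ", "t"): "pot",
    ("s", "p", "ɑ", "t"): "spot",
    ("pʰ", "ɪ", "n"): "pin",
    ("b", "ɪ", "n"): "bin",
}

def minimal_pairs(phone_a: str, phone_b: str, lexicon: dict) -> list[tuple[str, str]]:
    """Return word pairs whose transcriptions differ only in phone_a vs. phone_b
    at a single position."""
    pairs = []
    for (trans1, word1), (trans2, word2) in combinations(lexicon.items(), 2):
        if len(trans1) != len(trans2):
            continue
        diffs = [(x, y) for x, y in zip(trans1, trans2) if x != y]
        if len(diffs) == 1 and set(diffs[0]) == {phone_a, phone_b}:
            pairs.append((word1, word2))
    return pairs

print(minimal_pairs("pʰ", "b", lexicon))  # [('pin', 'bin')]: [pʰ] and [b] contrast
print(minimal_pairs("pʰ", "p", lexicon))  # []: no minimal pair found in this toy data
```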

The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v], two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the same phoneme in English, but later came to belong to separate phonemes. Such shifts in phonemic contrast are one of the main factors in the historical change of languages, as described in historical linguistics.

The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception.

Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.

Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.

Other topics

In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, or replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, tone, and intonation.

Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order that can be feeding or bleeding [16]), as well as prosody, the study of suprasegmentals and topics such as stress and intonation.
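
The effect of rule ordering can be shown with a small sketch. The two rules below are invented for illustration: applying final-vowel deletion before final devoicing feeds the devoicing rule by exposing a word-final obstruent, while the opposite order is counterfeeding, so the two orders yield different surface forms.

```python
import re

# Two hypothetical rules over a string transcription.
def delete_final_vowel(form: str) -> str:
    """Rule A: delete a word-final vowel."""
    return re.sub(r"[aeiou]$", "", form)

def devoice_final_obstruent(form: str) -> str:
    """Rule B: devoice a word-final voiced stop (b/d/g -> p/t/k)."""
    return re.sub(r"[bdg]$", lambda m: {"b": "p", "d": "t", "g": "k"}[m.group()], form)

def derive(form: str, rules) -> str:
    """Apply the rules in the given order."""
    for rule in rules:
        form = rule(form)
    return form

underlying = "rada"
# Feeding order: vowel deletion exposes /d/ word-finally, so devoicing can apply.
print(derive(underlying, [delete_final_vowel, devoice_final_obstruent]))  # "rat"
# Counterfeeding order: devoicing runs first, finds no final obstruent, and never reapplies.
print(derive(underlying, [devoice_final_obstruent, delete_final_vowel]))  # "rad"
```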

The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sublexical units are not instantiated as speech sounds.

Notes

  1. Brentari, Diane; Fenlon, Jordan; Cormier, Kearsy (July 2018). "Sign Language Phonology". Oxford Research Encyclopedia of Linguistics. doi:10.1093/acrefore/9780199384655.013.117. ISBN   9780199384655. S2CID   60752232.
  2. Stokoe, William C. (1978) [1960]. Sign Language Structure: An outline of the visual communication systems of the American deaf. Department of Anthropology and Linguistics, University at Buffalo. Studies in linguistics, Occasional papers. Vol. 8 (2nd ed.). Silver Spring, MD: Linstok Press.
  3. "Definition of PHONOLOGY". www.merriam-webster.com. Retrieved 3 January 2022.
  4. Lass, Roger (1998). Phonology: An Introduction to Basic Concepts. Cambridge, UK; New York; Melbourne, Australia: Cambridge University Press. p. 1. ISBN 978-0-521-23728-4. Retrieved 8 January 2011. Paperback ISBN 0-521-28183-0.
  5. Carr, Philip (2003). English Phonetics and Phonology: An Introduction. Massachusetts, US; Oxford, UK; Victoria, Australia; Berlin, Germany: Blackwell Publishing. ISBN 978-0-631-19775-1. Retrieved 8 January 2011. Paperback ISBN 0-631-19776-1.
  6. Trubetzkoy, N., Grundzüge der Phonologie (published 1939), translated by C. Baltaxe as Principles of Phonology, University of California Press, 1969.
  7. Clark, John; Yallop, Colin; Fletcher, Janet (2007). An Introduction to Phonetics and Phonology (3rd ed.). Massachusetts, US; Oxford, UK; Victoria, Australia: Blackwell Publishing. ISBN 978-1-4051-3083-7. Retrieved 8 January 2011. Alternative ISBN 1-4051-3083-0.
  8. Bernards, Monique, "Ibn Jinnī", in: Encyclopaedia of Islam, THREE, edited by Kate Fleet, Gudrun Krämer, Denis Matringe, John Nawas, Everett Rowson. Consulted online on 27 May 2021; first published online 2021; first print edition: 9789004435964, 2021-4.
  9. Anderson, Stephen R. (2021). Phonology in the twentieth century (Second, revised and expanded ed.). Berlin: Language Science Press. doi:10.5281/zenodo.5509618. ISBN 978-3-96110-327-0. ISSN 2629-172X. Retrieved 28 December 2021.
  10. Anon (probably Louis Havet). (1873) "Sur la nature des consonnes nasales". Revue critique d'histoire et de littérature 13, No. 23, p. 368.
  11. Roman Jakobson, Selected Writings: Word and Language, Volume 2, Walter de Gruyter, 1971, p. 396.
  12. E. F. K. Koerner, Ferdinand de Saussure: Origin and Development of His Linguistic Thought in Western Studies of Language. A contribution to the history and theory of linguistics, Braunschweig: Friedrich Vieweg & Sohn [Oxford & Elmsford, N.Y.: Pergamon Press], 1973.
  13. Hale, Mark; Reiss, Charles (2008). The Phonological Enterprise. Oxford, UK: Oxford University Press. ISBN   978-0-19-953397-8.
  14. Hale, Mark; Reiss, Charles (2000). "'Substance abuse' and 'dysfunctionalism': Current trends in phonology". Linguistic Inquiry. 31 (1): 157–169. JSTOR   4179099.
  15. Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge University Press.
  16. Goldsmith 1995:1.
