Sonority hierarchy

A sonority hierarchy or sonority scale is a hierarchical ranking of speech sounds (or phones). Sonority is loosely defined as the loudness of speech sounds relative to other sounds of the same pitch, length and stress; [1] sonority rankings are therefore often related to the amplitude of phones. [2] For example, pronouncing the vowel [a] produces a louder sound than the stop [t], so [a] ranks higher in the hierarchy. However, grounding sonority in amplitude is not universally accepted. [2] Instead, many researchers refer to sonority as the resonance of speech sounds, [2] that is, the degree to which producing a phone sets air particles vibrating. Sounds described as more sonorous are thus less subject to masking by ambient noise. [2]

Sonority hierarchies are especially important in analyzing syllable structure: rules about which segments may appear together in onsets or codas, such as the sonority sequencing principle (SSP), are formulated in terms of the difference between their sonority values. Some languages also have assimilation rules based on the sonority hierarchy, for example the Finnish potential mood, in which a less sonorous segment changes to copy a more sonorous adjacent segment (e.g. -tne- → -nne-).

Sonority hierarchy

Sonority hierarchies vary somewhat in which sounds are grouped together. The one below is fairly typical:

                  vowels   approximants            nasals   fricatives   affricates   stops
                           (glides and liquids)
syllabic            +          −                     −          −            −          −
approximant         +          +                     −          −            −          −
sonorant            +          +                     +          −            −          −
continuant          +          +                     +          +            −          −
delayed release     +          +                     +          +            +          −

Sound types on the left side of the scale are the most sonorous; they become progressively less sonorous towards the right (e.g., fricatives are less sonorous than nasals).

The labels on the left refer to distinctive features, and categories of sounds can be grouped together according to whether they share a feature. For instance, as shown in the sonority hierarchy above, vowels are considered [+syllabic], whereas all consonants (including stops, affricates, fricatives, etc.) are considered [−syllabic]. All sound categories falling under [+sonorant] are sonorants, whereas those falling under [−sonorant] are obstruents. In this way, any contiguous set of sound types may be grouped together on the basis of no more than two features (for instance, glides, liquids, and nasals are [−syllabic, +sonorant]).
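As a rough illustration (not part of the standard presentation), the feature values in the table above can be encoded as a small lookup structure and queried with at most two features. The Python sketch below is hypothetical: the dictionary layout and the helper select_class are chosen only to mirror the table.

```python
# Feature values per sound class, following the table above (+ = True, - = False).
FEATURES = {
    "vowels":       {"syllabic": True,  "approximant": True,  "sonorant": True,  "continuant": True,  "delayed_release": True},
    "approximants": {"syllabic": False, "approximant": True,  "sonorant": True,  "continuant": True,  "delayed_release": True},
    "nasals":       {"syllabic": False, "approximant": False, "sonorant": True,  "continuant": True,  "delayed_release": True},
    "fricatives":   {"syllabic": False, "approximant": False, "sonorant": False, "continuant": True,  "delayed_release": True},
    "affricates":   {"syllabic": False, "approximant": False, "sonorant": False, "continuant": False, "delayed_release": True},
    "stops":        {"syllabic": False, "approximant": False, "sonorant": False, "continuant": False, "delayed_release": False},
}

def select_class(**wanted):
    """Return the sound classes whose features match every requested value."""
    return [name for name, feats in FEATURES.items()
            if all(feats[f] == value for f, value in wanted.items())]

# Glides/liquids and nasals are exactly the [-syllabic, +sonorant] classes.
print(select_class(syllabic=False, sonorant=True))  # ['approximants', 'nasals']
```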

Sonority scale

Most sonorous (weakest consonantality) to least sonorous (strongest consonantality), with English examples:

low vowels (open vowels): /a ə/
mid vowels: /e o/
high vowels (close vowels) / glides (semivowels): /i u j w/ (the first two are close vowels, the last two are semivowels)
flaps: [ɾ]
laterals: /l/
nasals: /m n ŋ/
voiced fricatives: /v ð z/
voiceless fricatives: /f θ s/
voiced plosives: /b d g/
voiceless plosives: /p t k/

[3] [4]

In English, the sonority scale, from highest to lowest, is the following: /a/ > /e o/ > /i u j w/ > /l/ > /m n ŋ/ > /z v ð/ > /f θ s/ > /b d ɡ/ > /p t k/ [5] [6]
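As a minimal sketch, the scale can be treated as a lookup table. The Python below is a hypothetical illustration: it assigns arbitrary integer ranks to the classes listed in the table above, and only the relative order of the numbers is meaningful.

```python
# English sonority scale, least to most sonorous; higher rank = more sonorous.
# Rank values are arbitrary placeholders; only their ordering matters.
SONORITY = {}
for rank, phones in enumerate([
    "p t k",    # voiceless plosives (least sonorous)
    "b d g",    # voiced plosives
    "f θ s",    # voiceless fricatives
    "v ð z",    # voiced fricatives
    "m n ŋ",    # nasals
    "l",        # laterals
    "ɾ",        # flaps
    "i u j w",  # high vowels and glides
    "e o",      # mid vowels
    "a ə",      # low vowels (most sonorous)
]):
    for phone in phones.split():
        SONORITY[phone] = rank

print(SONORITY["a"] > SONORITY["t"])  # True: [a] outranks [t]
```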

In simpler terms, members of the same group on the scale share the same sonority, and the scale runs from the greatest to the smallest presence of vocal-fold vibration. Vowels involve the most vibration, whereas consonants are characterized in part by a lack of vibration or a break in it. At the top of the scale, open vowels use the most air for vibration; at the bottom, the least. This can be demonstrated by putting a few fingers on one's throat and pronouncing an open vowel such as [a], then one of the plosives (also known as stop consonants) of the [p t k] class. For vowels, the lungs and diaphragm generate a consistent level of pressure, and the difference between the pressure inside the body and outside the mouth is minimal. For plosives, the pressure generated by the lungs and diaphragm changes significantly, and the difference between the pressure inside the body and outside the mouth is maximal just before release (no air is flowing, and the vocal folds are not resisting the airflow).

More finely nuanced hierarchies often exist within classes whose members cannot otherwise be said to differ in relative sonority. In North American English, for example, within the set /p t k/, /t/ is by far the most subject to weakening before an unstressed vowel (the usual American pronunciation has /t/ as a flap in later, but normally no weakening of /p/ in caper or of /k/ in faker).

In Portuguese, intervocalic /n/ and /l/ were typically lost historically (e.g. Lat. LUNA > /lua/ 'moon', DONARE > /doar/ 'donate', COLORE > /kor/ 'color'), but /r/ remained (CERA > /sera/ 'wax'). Romanian, by contrast, transformed intervocalic non-geminate /l/ into /r/ (SOLEM > /so̯are/ 'sun') and reduced geminate /ll/ to /l/ (OLLA > /o̯alə/ 'pot'), but left /n/ (LUNA > /lunə/ 'moon') and /r/ (PIRA > /parə/ 'pear') unchanged. Similarly, in the Romance languages geminate /mm/ is often weaker than /nn/, and geminate /rr/ is often stronger than other geminates, including /pp tt kk/. In such cases, many phonologists refer not to sonority but to a more abstract notion of relative strength, which was once posited to be universal in its arrangement but is now known to be language-specific.

Sonority in phonotactics

Syllable structure tends to be highly influenced and motivated by the sonority scale, with the general rule that more sonorous elements are internal (i.e., close to the syllable nucleus) and less sonorous elements are external. For instance, the sequence /plant/ is permissible in many languages, while /lpatn/ is much less likely. This is the sonority sequencing principle (SSP). The rule is applied with varying levels of strictness cross-linguistically, and many languages allow exceptions: in English, for example, /s/ can appear external to stops even though it is more sonorous (e.g. "strong", "hats").
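To make the sequencing rule concrete, here is a rough sketch in Python that reuses the hypothetical SONORITY table from the earlier example. It checks that sonority rises through the onset to the nucleus and falls through the coda; language-specific exceptions such as English /s/ + stop clusters would have to be special-cased.

```python
def obeys_ssp(onset, nucleus, coda, sonority):
    """Check that sonority rises through the onset, peaks at the nucleus,
    and falls through the coda (the sonority sequencing principle)."""
    profile = [sonority[p] for p in (*onset, *nucleus, *coda)]
    peak = len(onset)  # index of the nucleus in the profile
    rising = all(a < b for a, b in zip(profile[:peak], profile[1:peak + 1]))
    falling = all(a > b for a, b in zip(profile[peak:], profile[peak + 1:]))
    return rising and falling

print(obeys_ssp("pl", "a", "nt", SONORITY))  # True: /plant/ fits the principle
print(obeys_ssp("lp", "a", "tn", SONORITY))  # False: /lpatn/ violates it
```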

In many languages, the presence of two non-adjacent highly sonorous elements is a reliable indication of how many syllables are in the word: /ata/ is most likely two syllables, and many languages would deal with sequences like /mbe/ or /lpatn/ by pronouncing them as multiple syllables with syllabic sonorants: [m̩.be] and [l̩.pat.n̩].
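A rough sketch of that heuristic, again in Python with the same hypothetical SONORITY table: counting local sonority maxima gives an estimate of the number of syllables.

```python
def sonority_peaks(word, sonority):
    """Count local sonority maxima, a rough proxy for the number of syllables."""
    values = [sonority[p] for p in word]
    peaks = 0
    for i, value in enumerate(values):
        left = values[i - 1] if i > 0 else -1
        right = values[i + 1] if i < len(values) - 1 else -1
        if value > left and value > right:
            peaks += 1
    return peaks

print(sonority_peaks("ata", SONORITY))    # 2: two vowel peaks
print(sonority_peaks("mbe", SONORITY))    # 2: as in [m̩.be]
print(sonority_peaks("lpatn", SONORITY))  # 3: as in [l̩.pat.n̩]
```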

Ecological patterns in sonority

The sonority ranking of speech sounds plays an important role in the development of phonological patterns in a language, which allow speech to be transmitted intelligibly between individuals in a society. Numerous researchers have observed differences in the occurrence of particular sounds in languages around the world, and it has been suggested that these differences result from ecological pressures.

This understanding developed from the acoustic adaptation hypothesis, a theory initially used to explain differences in bird song across varying habitats. [7] Researchers have since applied the theory as a basis for understanding why spoken languages around the world differ in their speech sounds. [8]

Climates

Maddieson and Coupé's [8] study of 633 languages worldwide found that some of the variation in the sonority of speech sounds across languages can be accounted for by differences in climate. The pattern is that languages in warmer climatic zones are more sonorous than languages in cooler climatic zones, which favour the use of consonants. To explain these differences, they emphasise the influence of atmospheric absorption and turbulence in warmer ambient air, which may disrupt the integrity of acoustic signals; employing more sonorous sounds may therefore reduce the distortion of sound waves in warmer climates. Fought and Munroe [9] instead argue that these disparities in speech sounds result from differences in the daily activities of people in different climates. They propose that, throughout history, individuals in warmer climates have tended to spend more time outdoors (likely engaging in agricultural work or social activities), so speech must propagate effectively through the air over long distances to reach the listener, unlike in cooler climates, where people spend more time indoors and communicate over shorter distances. Another explanation is that languages have adapted to maintain homeostasis. [10] Thermoregulation keeps body temperature within a certain range so that cells function properly, and it has been argued that differences in how frequently particular phones occur in a language are an adaptation that helps regulate internal body temperature. Producing a highly sonorous open vowel such as /a/ requires opening the vocal articulators, which allows air to flow out of the mouth and water to evaporate with it, lowering internal body temperature. In contrast, voiceless plosives such as /t/ are more common in cooler climates; producing them obstructs airflow out of the mouth through the constriction of the vocal articulators, reducing the transfer of heat out of the body, which benefits individuals residing in cooler climates.

Vegetation

In general, a positive correlation exists: as temperature increases, so does the use of more sonorous speech sounds. However, dense vegetation cover reverses the correlation, [11] so that warmer climates favour less sonorous speech sounds when the area is densely vegetated. This is said to be because, in warm climates with dense vegetation cover, individuals communicate over shorter distances and therefore favour speech sounds ranked lower in the sonority hierarchy.

Altitude

Everett (2013) [12] suggested that in high-elevation regions such as the Andes, languages regularly employ ejective plosives. Everett argued that at high altitude, where ambient air pressure is reduced, ejectives are easier to articulate. Moreover, because no air flows out through the vocal folds, water is conserved while communicating, reducing dehydration in individuals residing at high elevations.

A range of additional factors affecting the degree of sonority of a language has also been observed, such as precipitation and sexual restrictiveness. [11] Inevitably, the patterns become more complex when several ecological factors are considered simultaneously. Moreover, a large amount of variation remains, which may be due to patterns of migration.

Mechanisms underlying differences in sonority

The existence of these differences in speech sounds in modern-day human languages is said to be driven by cultural evolution. [13] Language is an important part of culture, and speech sounds on the sonority scale are more or less likely to be selected for in different environments because a language favours phonetic structures that allow messages to be transmitted successfully under the prevailing ecological conditions. Henrich highlights the role of dual inheritance, which propels changes in language that persist across generations: slight differences in language patterns may be selected for because they are advantageous to individuals in a given environment, and biased transmission then allows those speech patterns to be adopted by other members of the society. [13]

References

  1. Ladefoged, Peter; Johnson, Keith (2010). A Course in Phonetics. Cengage Learning. ISBN 978-1-4282-3126-9.
  2. Ohala, John J. (1992). "Alternatives to the sonority hierarchy for explaining segmental sequential constraints" (PDF). Papers on the Parasession on the Syllable: 319–338.
  3. "What is the sonority scale?". www-01.sil.org. Archived from the original on 2017-06-13. Retrieved 2016-11-21.
  4. Burquest, Donald A.; Payne, David L. (1993). Phonological Analysis: A Functional Approach. Dallas, TX: Summer Institute of Linguistics. p. 101.
  5. O'Grady, W. D.; Archibald, J. (2012). Contemporary Linguistic Analysis: An Introduction (7th ed.). Toronto: Pearson Longman. p. 70.
  6. "Consonants: Fricatives". facweb.furman.edu. Archived from the original on 2018-09-17. Retrieved 2016-11-28.
  7. Boncoraglio, Giuseppe; Saino, Nicola (2007). "Habitat structure and the evolution of bird song: a meta-analysis of the evidence for the acoustic adaptation hypothesis". Functional Ecology. 21 (1). doi:10.1111/j.1365-2435.2006.01207.x. ISSN 0269-8463.
  8. Maddieson, Ian (2018). "Language Adapts to Environment: Sonority and Temperature". Frontiers in Communication. 3. doi:10.3389/fcomm.2018.00028. ISSN 2297-900X.
  9. Fought, John G.; Munroe, Robert L.; Fought, Carmen R.; Good, Erin M. (2016). "Sonority and Climate in a World Sample of Languages: Findings and Prospects". Cross-Cultural Research. 38 (1): 27–51. doi:10.1177/1069397103259439. ISSN 1069-3971. S2CID 144410953.
  10. Van de Vliert, Evert (2008). Climate, Affluence, and Culture. Cambridge University Press. pp. 5–. ISBN 978-1-139-47579-2.
  11. Ember, Carol R.; Ember, Melvin (2007). "Climate, Econiche, and Sexuality: Influences on Sonority in Language". American Anthropologist. 109 (1): 180–185. doi:10.1525/aa.2007.109.1.180. ISSN 0002-7294.
  12. Aronoff, Mark; Everett, Caleb (2013). "Evidence for Direct Geographic Influences on Linguistic Sounds: The Case of Ejectives". PLOS ONE. 8 (6): e65275. Bibcode:2013PLoSO...865275E. doi:10.1371/journal.pone.0065275. ISSN 1932-6203. PMC 3680446. PMID 23776463.
  13. Henrich, Joseph (2017). The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter. Princeton University Press. ISBN 978-0-691-17843-1.