Hypocorrection is a sociolinguistic phenomenon that involves the purposeful addition of slang or a shift in pronunciation, word form, or grammatical construction [1] and is propelled by a desire to appear less intelligible or to strike up a rapport. This contrasts with hesitation and modulation: rather than lacking the right words or choosing to avoid them, the speaker deliberately adopts a nonstandard form of speech as a strategy to establish distance from, or to draw closer to, their interlocutor.
Hypocorrection may also be a phonetic or phonological phenomenon. Most sound changes originate from two types of phonetically motivated mechanisms: hypocorrection and hypercorrection. A hypocorrective sound change occurs when a listener fails to identify and correct perturbations in the speech signal and instead takes the signal at face value. [2]
Originally, hypocorrection, or an accented pronunciation of words, may have stemmed from physical properties involved in sound production, such as aeroacoustics, anatomy and vocal tract shape. [3]
Hypocorrection may also result from the failure to cancel coarticulatory effects. Ohala notes that hypocorrection happens when a listener fails to make use of compensation: either the listener lacks the experience with contextual variation that would allow them to carry out such correction, or they cannot detect the conditioning environment for reasons such as noise and the filtering associated with communication channels. [4]
Normal speech perception involves a process of correction in which the listener restores a phoneme from its contextually influenced realisation. This is in accordance with a model proposed by John Ohala that involves synchronic unintended variation, hypocorrection, and hypercorrection. For example, in a language with no nasality contrast for vowels, the utterance [kɑ̃n] can be reconstructed, that is, "corrected", by the listener as the intended phoneme sequence /kɑn/, because the listener knows that every vowel is nasalised before a nasal consonant. Hypocorrection occurs if the listener fails to restore the phoneme, perhaps because the [n] was not pronounced very clearly, and analyses the utterance as /kɑ̃/. [5]
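The correction/hypocorrection distinction can be illustrated with a small sketch. The following Python fragment is a hypothetical, much-simplified model of such a listener (the data structures and rule are illustrative, not taken from the cited sources): a listener who knows that vowels are nasalised before nasal consonants "undoes" that nasalisation, while a listener who misses the conditioning [n] takes the nasal vowel at face value.

```python
# Minimal sketch of listener correction vs. hypocorrection.
# The representation (strings of segment symbols) and the single rule
# are illustrative only.

NASAL_CONSONANTS = {"m", "n"}

def denasalise(vowel: str) -> str:
    """Strip the nasalisation diacritic from a vowel symbol, e.g. 'ɑ̃' -> 'ɑ'."""
    return vowel.replace("\u0303", "")  # U+0303 is the combining tilde

def perceive(signal: list[str]) -> list[str]:
    """Reconstruct the intended phoneme string from a phonetic signal.

    If a nasalised vowel is followed by an audible nasal consonant, the
    listener attributes the nasality to context and restores the oral
    vowel (correction).  If the conditioning nasal is missing or masked,
    the nasality is taken at face value (hypocorrection).
    """
    perceived = []
    for i, seg in enumerate(signal):
        is_nasal_vowel = "\u0303" in seg
        followed_by_nasal = i + 1 < len(signal) and signal[i + 1] in NASAL_CONSONANTS
        if is_nasal_vowel and followed_by_nasal:
            perceived.append(denasalise(seg))   # correction: [ɑ̃] -> /ɑ/
        else:
            perceived.append(seg)               # taken at face value

    return perceived

# [k ɑ̃ n] heard in full: corrected to /k ɑ n/.
print(perceive(["k", "ɑ\u0303", "n"]))   # ['k', 'ɑ', 'n']

# Same utterance with the final [n] masked by noise: hypocorrected to /k ɑ̃/.
print(perceive(["k", "ɑ\u0303"]))        # ['k', 'ɑ̃']
```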
However, further studies suggest another possible source of hypocorrection: variation in compensation. For example, Beddor and Krakow (1999) tested American listeners' nasality judgments of nasalised vowels (e.g. [õ]) between nasal consonants ([mṼn]), of the corresponding oral vowels between oral consonants ([bVd]), and of the same oral vowels in isolation ([#V#]). They found that 25% of [Ṽ] tokens in nasal contexts were heard as more nasal than [V] in oral contexts, which shows that compensation was incomplete or irregular. In addition, Harrington et al. (2008) demonstrated systematic variation in compensation between younger and older listeners. They contrasted the two groups' identification of a vowel from an /i/-to-/u/ continuum in palatal ([j_st]) and labial ([sw_p]) contexts. Both groups' category boundaries were at comparable points on the palatal continuum, closer to the /i/-end than on the labial continuum, which shows a compensation effect. However, the younger group's boundary on the labial continuum was much closer to its boundary on the palatal continuum, which demonstrates less compensation than in the older group. These results reflect a difference in the listeners' own speech production: the /u/ of younger speakers was in general more fronted than that of older speakers. The findings indicate that listeners compensate only for as much coarticulation as is expected in their own grammar, and that this "grammar" is shaped by the listener's previous linguistic experience. A further cause of hypocorrection can thus be added to Ohala's list: differences in the coarticulation/compensation norms of speaker and listener, which can lead to cases in which a listener applies compensation and still fails to extract from a heavily coarticulated speech segment "the same pronunciation target intended by the speaker." [4]
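Compensation effects of this kind are usually quantified as shifts in a listener's category boundary along the synthesised continuum. The sketch below uses invented identification proportions (not Harrington et al.'s data) to show the basic computation: the 50% crossover point is estimated by linear interpolation, and a boundary pushed further towards the /i/-end in the palatal context indicates compensation, while a smaller shift indicates less compensation.

```python
# Estimate the category boundary (50% crossover) on an /i/-to-/u/ continuum
# from identification data and compare two contexts.  The response
# proportions below are invented for illustration.

def boundary(steps, p_u):
    """Return the continuum step at which /u/ responses cross 50%,
    interpolating linearly between adjacent steps."""
    for (s0, p0), (s1, p1) in zip(zip(steps, p_u), zip(steps[1:], p_u[1:])):
        if p0 < 0.5 <= p1:
            return s0 + (0.5 - p0) / (p1 - p0) * (s1 - s0)
    raise ValueError("no 50% crossover found")

steps = [1, 2, 3, 4, 5, 6, 7]                          # step 1 = /i/-like, step 7 = /u/-like

# Proportion of /u/ responses at each step for one hypothetical listener group.
palatal = [0.10, 0.35, 0.60, 0.85, 0.95, 1.00, 1.00]   # [j_st] context
labial  = [0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00]   # [sw_p] context

b_pal, b_lab = boundary(steps, palatal), boundary(steps, labial)
print(f"palatal boundary: {b_pal:.2f}, labial boundary: {b_lab:.2f}")

# A larger shift of the palatal boundary towards the /i/-end reflects
# more compensation for /u/-fronting.
print(f"boundary shift (labial - palatal): {b_lab - b_pal:.2f}")
```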
On the social side, the intentional use of hypocorrection, for example affecting a Southeastern American accent to sound less elitist, involves "make-believe hesitations and colloquial language" that "work as affiliative strategies (softeners) etc." [6] Over time, hypocorrection has emerged both from the physical features of voice production and from affected accents, and it is typically used by people who do not wish to associate themselves with overly sophisticated local dialects. Hypocorrection also works as a softener. [7] Some forms of hypocorrection are attempts to give one's discourse a clumsy, colloquial, or even broken and dysfluent style when introducing clever or innovative statements or ideas. More often than not, hypocorrection allows the speaker, by toning down a potentially self-flattering image, to avoid sounding pretentious or pedantic, thus reducing the risk of threat to the recipients' faces. This can be linked to politeness theory, which accounts for politeness in terms of the "redressing of affronts" to a person's sociological face by face-threatening acts. [8] The theory elaborates on the concept of face (to "save" or "lose" face) and treats politeness as a means of alleviating or avoiding face-threatening acts such as insults and requests. Hypocorrection may therefore be used in such situations to allow people to save face.
Hypocorrection may have a part in the innovation of sound change. Ohala proposed a theory of sound change arising from the listener's misperception. [9] [10] The theory highlights important variations in "the phonetic form of functionally equivalent speech units" and puts forth that when faced with coarticulatory speech variation, listeners do one of the following:

They attribute the contextual perturbation to its conditioning environment and factor it out, recovering the pronunciation intended by the speaker.

They take the perturbed signal at face value and treat the contextual effect as an intended property of the speech sound.
The first situation describes normal speech perception and the second describes hypocorrection, the type of misperception implicated in incomplete perceptual compensation for /u/-fronting. Hypocorrection is the underlying mechanism for many assimilatory sound changes, and its central idea is that a contextually induced perturbation is taken by the listener to be a deliberate feature of the speech sound. Hypocorrection therefore has the potential to change the listener's phonological grammar through what Hyman called "phonologisation", a process by which intrinsic or automatic variation becomes extrinsic or controlled. [11] For years, many researchers have analysed sound changes as outcomes of phonologisation, [12] [13] [14] [15] which underscores the theoretical significance of hypocorrection as a precondition for sound change via phonologisation. [4]
The listener misperception hypothesis of sound change [16] [17] [18] has been a worthwhile domain of inquiry over the years, partly because it makes testable predictions. On this view, phonological rules arise from mechanical or physical constraints inherent to speech production and perception, with listener hypocorrection and hypercorrection among the perceptual mechanisms involved. Cross-linguistic tendencies in grammars are therefore thought of as "the phonologization of inherent, universal phonetic biases". [19] Hypocorrection, however, is formally symmetrical, so it offers no basis for the unidirectionality of many sound changes. For example, consonants normally palatalise, rather than depalatalise, before front vowels, and this directionality has no inherent explanation: the ambiguity invites reanalysis, but something else must determine the direction of the change. Assimilation and dissimilation also differ in other ways, since dissimilation (by hypothesis, hypercorrection) never gives rise to new phonemes, whereas assimilation (via hypocorrection) does. Such inherent asymmetries are not predicted by the theory as it stands. [20]
Hypocorrection manifests in a few ways:
African Americans who have a native grasp of Standard English (SE) are a minority group within a minority group. In an attempt to show solidarity with inner-city African Americans, many such speakers will accommodate and shift style, using vernacular African American speech in appropriate ethnographic contexts. Those efforts sometimes exceed the prevailing linguistic norms of vernacular African American English (AAE) and result in hypocorrect utterances: cases of linguistic overcompensation beyond the nonstandard target. [21]
Syntactic hypocorrection was observed in sentences produced by black speakers of Standard English during conversational interviews conducted by black fieldworkers, in which the speakers were accommodating towards African American Vernacular English (AAVE). The fieldworkers were encouraged to use vernacular norms, including slang, to provide conversational contexts in which AAVE would be appropriate, regardless of the informants' backgrounds.
Some well-documented grammatical forms of AAVE that were frequently used by the African American interviewers were:
It was observed during the interviews that once the informants became more comfortable, or felt that they wanted to emphasise a point with the black fieldworkers, they would use more AAVE features in their speech, although they used mainly Standard English in other circumstances. This demonstrates how syntactic hypocorrection is used in some scenarios to help speakers achieve certain objectives or express how they feel. [21]
Hypoarticulation is one of the interactional-communicative factors in connected speech, and it has long been noted and widely studied as "a reduction of less important tokens in relation to the more important ones." [22] Some features of hypoarticulation include more fronted articulation and diminished lip protrusion.
Many believe that infant-directed speech has characteristics that facilitate learning. However, it is not known for certain whether the speech registers actually used with infants and with adults differ.
In a study conducted by Englund, a large sample of vowels in infant-directed speech was investigated, and speech used in natural situations was elicited from both mothers and infants. This was achieved by recording infant-directed speech from direct face-to-face interactions between mothers and their infants. The experimenter interacted with the mothers to elicit their adult-directed speech but was not present when the infant-directed speech was recorded.
Instead, the mothers recorded the infant-directed speech themselves so as to approximate daily activities as closely as possible. The participants were recruited from maternity groups at various healthcare centres, and their infants ranged from just under 4 to 24 weeks old. Recordings were made over a period of 6 months and were analysed with Praat.
Acoustic and statistical analyses of /æ:, æ, ø:, ɵ, o:, ɔ, y:, y, ʉ:, ʉ, e:, ɛ/ showed a selective increase in formant frequencies for some vowel qualities. Furthermore, vowels had a higher fundamental frequency and were longer in infant-directed speech. The more fronted articulation and reduced lip protrusion in infant-directed speech, compared with adult-directed speech, led Englund to conclude that infant-directed speech is hypoarticulated. Although hypoarticulation may complicate infants' auditory language learning, it most likely facilitates their perception of the visual aspects of speech and of the emotional aspects of communication. So although infant-directed speech carries an emotional and attention-getting message, it remains a perceptual challenge for infants. [23]
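Analyses of this kind ultimately come down to comparing mean acoustic measures per vowel across the two registers. The sketch below, with invented measurements (not Englund's data), shows the basic bookkeeping: averaging fundamental frequency, a formant frequency, and duration for each vowel in infant-directed (IDS) versus adult-directed (ADS) tokens; in a real study the token values would come from Praat measurements of the recordings.

```python
# Compare mean acoustic measures of vowel tokens across speech registers.
# Token values are invented for illustration only.
from collections import defaultdict
from statistics import mean

# (vowel, register, f0 in Hz, F2 in Hz, duration in ms)
tokens = [
    ("e:", "IDS", 310, 2450, 180),
    ("e:", "IDS", 325, 2480, 195),
    ("e:", "ADS", 210, 2300, 120),
    ("e:", "ADS", 205, 2280, 115),
    ("o:", "IDS", 305, 1050, 175),
    ("o:", "ADS", 200,  950, 110),
]

# Group measurements by (vowel, register).
by_condition = defaultdict(list)
for vowel, register, f0, f2, dur in tokens:
    by_condition[(vowel, register)].append((f0, f2, dur))

# Report per-condition means; higher f0, higher F2 (fronting), and longer
# durations in IDS would mirror the pattern described above.
for (vowel, register), values in sorted(by_condition.items()):
    f0s, f2s, durs = zip(*values)
    print(f"{vowel} ({register}): "
          f"mean f0 = {mean(f0s):.0f} Hz, "
          f"mean F2 = {mean(f2s):.0f} Hz, "
          f"mean duration = {mean(durs):.0f} ms")
```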
Perceptual compensation (PC) refers to listeners' ability to handle phonetic variation that arises from the coarticulatory influence of surrounding context. Errors in PC have been hypothesised to be an important source of sound change, but little research has addressed when such errors might happen. Depending on the relative context-specific frequencies of competing sound categories, diminished PC results in hypocorrection, while exaggerated PC results in hypercorrection. [24] Hence, attenuated PC may give rise to listener hypocorrection.
For example, liquid dissimilation is predicted to originate largely from listener hypercorrection of liquid coarticulation. Liquid dissimilation is a co-occurrence restriction on identical features within a phonological domain, typically a word. In an experiment conducted by Abrego-Collier, listeners' PC patterns for co-occurring liquids were tested by examining their identification of targets along an /r/-/l/ continuum, specifically when two liquids were present. The experiment sought to determine how perception of a synthesised segment on a continuum between /r/ and /l/ was affected by the presence of another, conditioning liquid consonant (/r/ or /l/). As a control, listeners also categorised ambiguous liquids in words containing no other liquid.
The two hypotheses of the experiment were as follows:
Hypothesis A: When the conditioning consonant is /r/, listeners will be more likely to hear the continuum consonant as /l/ (the category space of /l/ will widen).
Hypothesis B: When the conditioning consonant is /l/, listeners will be more likely to hear the continuum consonant as /r/ than in the control (/d/) condition (the category space of /r/ will widen).
Abrego-Collier found that listeners' identification of the continuum liquid was affected by a conditioning /l/ in a way that strengthened, rather than reversed, the impact of coarticulation: the conditioning /l/ caused the continuum liquid to be perceived more often as /l/. It was concluded that if dissimilation has its roots in listeners' (mis)perception of coarticulation, listeners' categorisation of co-occurring liquids reflects hypocorrection rather than hypercorrection. [19]
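In data terms, the two hypotheses make opposite predictions about how often the ambiguous continuum liquid is labelled /l/ in each conditioning context relative to the control. The sketch below uses invented response counts (not Abrego-Collier's results) to show how such identification data would be scored: a higher /l/ rate next to a conditioning /r/ than in the control would support the hypercorrection (dissimilative) account, while a higher /l/ rate next to a conditioning /l/ points to hypocorrection, the pattern described above.

```python
# Score liquid-identification responses by conditioning context.
# Counts are invented for illustration only.

# context -> (number of /l/ responses, total responses) for ambiguous steps
responses = {
    "conditioning /r/": (48, 120),
    "conditioning /l/": (84, 120),
    "control /d/":      (60, 120),
}

rates = {ctx: n_l / total for ctx, (n_l, total) in responses.items()}
control = rates["control /d/"]

for ctx, rate in rates.items():
    print(f"{ctx}: {rate:.2f} /l/ responses")

# Compare each conditioning context against the control to classify the pattern.
if rates["conditioning /l/"] > control and rates["conditioning /r/"] < control:
    print("Assimilative pattern: consistent with hypocorrection.")
elif rates["conditioning /r/"] > control and rates["conditioning /l/"] < control:
    print("Dissimilative pattern: consistent with hypercorrection.")
else:
    print("Mixed pattern: no clear directionality.")
```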
However, this list of manifestations may not be exhaustive.