In linguistics, a segment is "any discrete unit that can be identified, either physically or auditorily, in the stream of speech".[1] The term is most often used in phonetics and phonology to refer to the smallest elements in a language, and in this usage it can be synonymous with the term phone.
In spoken languages, segments will typically be grouped into consonants and vowels, but the term can be applied to any minimal unit of a linear sequence meaningful to the given field of analysis, such as a mora or a syllable in prosodic phonology, a morpheme in morphology, or a chereme in sign language analysis.[2]
Segments are called "discrete" because they are, at least at some analytical level, separate and individual, and temporally ordered. Segments are generally not completely discrete in speech production or perception, however. The articulatory, visual and acoustic cues that encode them often overlap. Examples of overlap for spoken languages can be found in discussions of phonological assimilation, coarticulation, and other areas in the study of phonetics and phonology, especially autosegmental phonology.
Other articulatory, visual or acoustic cues, such as prosody (tone, stress), and secondary articulations such as nasalization, may overlap multiple segments and cannot be discretely ordered with them. These elements are known as suprasegmentals.
In phonetics, the smallest perceptible segment is a phone. In phonology, there is a subfield of segmental phonology that deals with the analysis of speech into phonemes (or segmental phonemes), which correspond fairly well to phonetic segments of the analysed speech.
The segmental phonemes of sign language (formally called "cheremes") are visual movements of hands, face, and body. They occur in a distinct spatial and temporal order. The SignWriting script represents the spatial order of the segments with a spatial cluster of graphemes. Other notations for sign language use a temporal order that implies a spatial order.
When analyzing the inventory of segmental units in any given language, some segments will be found to be marginal, in the sense that they are only found in onomatopoeic words, interjections, loan words, or a very limited number of ordinary words, but not throughout the language. Marginal segments, especially in loan words, are often the source of new segments in the general inventory of a language.
Some contrastive elements of speech cannot be easily analyzed as distinct segments but rather belong to a syllable or word. These elements are called suprasegmental, and include intonation and stress. In some languages, nasality and vowel harmony are considered suprasegmental or prosodic by some phonologists.[3][4]
In linguistics, a liquid consonant or simply liquid is any of a class of consonants that consists of rhotics and voiced lateral approximants, which are also sometimes described as "R-like sounds" and "L-like sounds". The word liquid seems to be a calque of the Ancient Greek word ὑγρός, initially used by grammarian Dionysius Thrax to describe Greek sonorants.
In phonology, minimal pairs are pairs of words or phrases in a particular language, spoken or signed, that differ in only one phonological element, such as a phoneme, toneme or chroneme, and have distinct meanings. They are used to demonstrate that two phones represent two separate phonemes in the language.
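As a rough illustration, a minimal-pair search can be mechanized over a phonemically transcribed word list: two transcriptions of equal length that differ in exactly one segment form a candidate pair. The lexicon and transcriptions below are invented for the sketch, not drawn from any particular corpus.

```python
from itertools import combinations

def is_minimal_pair(a, b):
    """True if two phoneme sequences differ in exactly one segment."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

# Toy lexicon: word -> tuple of phonemes (broad transcription, illustrative).
lexicon = {
    "pat": ("p", "æ", "t"),
    "bat": ("b", "æ", "t"),
    "pit": ("p", "ɪ", "t"),
    "spa": ("s", "p", "ɑ"),
}

pairs = [
    (w1, w2)
    for (w1, p1), (w2, p2) in combinations(lexicon.items(), 2)
    if is_minimal_pair(p1, p2)
]
print(pairs)  # [('pat', 'bat'), ('pat', 'pit')]
```

The pair pat/bat, differing only in the initial segment, is the kind of evidence used to argue that /p/ and /b/ are separate phonemes of English.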
A phoneme is any set of similar speech sounds that is perceptually regarded by the speakers of a language as a single basic sound—a smallest possible phonetic unit—that helps distinguish one word from another. All languages contain phonemes, and all spoken languages include both consonant and vowel phonemes. Phonemes are primarily studied under the branch of linguistics known as phonology.
Phonetics is a branch of linguistics that studies how humans produce and perceive sounds or, in the case of sign languages, the equivalent aspects of sign. Linguists who specialize in studying the physical properties of speech are phoneticians. The field of phonetics is traditionally divided into three sub-disciplines according to the questions involved: how humans plan and execute movements to produce speech (articulatory phonetics), how various movements affect the properties of the resulting sound (acoustic phonetics), and how humans convert sound waves to linguistic information (auditory phonetics). Traditionally, the minimal linguistic unit of phonetics is the phone, a speech sound in a language, which differs from the phonological unit, the phoneme; the phoneme is an abstract categorization of phones, defined as the smallest unit that distinguishes meaning between sounds in a given language.
Phonology is the branch of linguistics that studies how languages systematically organize their phonemes or, for sign languages, their constituent parts of signs. The term can also refer specifically to the sound or sign system of a particular language variety. At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages, but it has since broadened to cover other levels of linguistic analysis as well.
A vowel is a syllabic speech sound pronounced without any stricture in the vocal tract. Vowels are one of the two principal classes of speech sounds, the other being the consonant. Vowels vary in quality, in loudness and also in quantity (length). They are usually voiced and are closely involved in prosodic variation such as tone, intonation and stress.
A syllable is a basic unit of organization within a sequence of speech sounds, such as within a word, typically defined by linguists as a nucleus with optional sounds before or after that nucleus. In phonology and studies of languages, syllables are often considered the "building blocks" of words. They can influence the rhythm of a language, its prosody, its poetic metre and its stress patterns. Speech can usually be divided up into a whole number of syllables: for example, the word ignite is made of two syllables: ig and nite.
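The observation that speech divides into a whole number of syllables is often approximated computationally with orthographic heuristics. The sketch below counts vowel-letter groups and discounts a silent final 'e'; it is only a rough proxy for phonological syllables, since English spelling makes exact counting unreliable.

```python
import re

def estimate_syllables(word):
    """Rough heuristic: count vowel-letter groups, ignoring a silent final 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    # Words like 'ignite' end in a silent 'e'; words like 'table' do not.
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return count

print(estimate_syllables("ignite"))    # 2  (ig + nite)
print(estimate_syllables("syllable"))  # 3
```

A dictionary of phonemic transcriptions would give exact counts; heuristics like this are used where no such resource is available.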
Voice or voicing is a term used in phonetics and phonology to characterize speech sounds. Speech sounds can be described as either voiceless or voiced.
English phonology is the system of speech sounds used in spoken English. Like many other languages, English has wide variation in pronunciation, both historically and from dialect to dialect. In general, however, the regional dialects of English share a largely similar phonological system. Among other things, most dialects have vowel reduction in unstressed syllables and a complex set of phonological features that distinguish fortis and lenis consonants.
In linguistics, prosody is the study of elements of speech, including intonation, stress, rhythm and loudness, that occur simultaneously with individual phonetic segments: vowels and consonants. Often, prosody specifically refers to such elements, known as suprasegmentals, when they extend across more than one phonetic segment.
Auditory phonetics is the branch of phonetics concerned with the hearing of speech sounds and with speech perception. It thus entails the study of the relationships between speech stimuli and a listener's responses to such stimuli as mediated by mechanisms of the peripheral and central auditory systems, including certain areas of the brain. It is said to constitute one of the three main branches of phonetics along with acoustic and articulatory phonetics, though their methods and questions overlap.
In linguistics, a chroneme is an abstract phonological suprasegmental feature used to signify contrastive differences in the length of speech sounds. Both consonants and vowels can be viewed as displaying this feature. The noun chroneme is derived from Ancient Greek χρόνος (khrónos) 'time' and the suffix -eme, which is analogous to the -eme in phoneme or morpheme. Two words with different meanings that are pronounced identically except for the length of one segment constitute a minimal pair. The term was coined by the British phonetician Daniel Jones to avoid using the term phoneme to characterize a feature above the segmental level.
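One way to model a chronemic contrast is to attach a length value to each segment, so that two words count as a length-based minimal pair when they differ only in the length of a single segment. The Japanese pair /obasan/ 'aunt' vs. /obaasan/ 'grandmother' (long /a/) is a standard example of such a contrast; the flat transcription below is a simplification for the sketch.

```python
def length_minimal_pair(a, b):
    """True if two words differ only in the length of exactly one segment.

    Words are sequences of (phone, length) pairs.
    """
    if len(a) != len(b):
        return False
    diffs = [(x, y) for x, y in zip(a, b) if x != y]
    # Exactly one differing segment, and it differs in length, not in phone.
    return len(diffs) == 1 and diffs[0][0][0] == diffs[0][1][0]

# Simplified segmental transcriptions; 1 = short, 2 = long.
obasan  = [("o", 1), ("b", 1), ("a", 1), ("s", 1), ("a", 1), ("n", 1)]
obaasan = [("o", 1), ("b", 1), ("a", 2), ("s", 1), ("a", 1), ("n", 1)]
print(length_minimal_pair(obasan, obaasan))  # True
```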
Japanese phonology is the system of sounds used in the pronunciation of the Japanese language. Unless otherwise noted, this article describes the standard variety of Japanese based on the Tokyo dialect.
Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.
Clinical linguistics is a sub-discipline of applied linguistics involved in the description, analysis, and treatment of language disabilities, especially the application of linguistic theory to the field of Speech-Language Pathology. The study of the linguistic aspect of communication disorders is of relevance to a broader understanding of language and linguistic theory.
Catherine Phebe Browman was an American linguist and speech scientist. She received her Ph.D. in linguistics from the University of California, Los Angeles (UCLA) in 1978. Browman was a research scientist at Bell Laboratories in New Jersey (1967–1972), where she was known for her work on speech synthesis using demisyllables. She later worked as a researcher at Haskins Laboratories in New Haven, Connecticut (1982–1998). She was best known for developing, with Louis Goldstein, the theory of articulatory phonology, a gesture-based approach to phonological and phonetic structure. This theoretical approach is incorporated in a computational model that generates speech from a gesturally specified lexicon. Browman was made an honorary member of the Association for Laboratory Phonology.
Phonological development refers to how children learn to organize sounds into meaning or language (phonology) during their stages of growth.
The phonology of second languages differs from the phonology of first languages in various ways. The differences are considered to stem both from general characteristics of second-language speech, such as slower speech rate and lower proficiency relative to native speakers, and from the interaction between non-native speakers' first and second languages.
Speech tempo is a measure of the number of speech units of a given type produced within a given amount of time. Speech tempo is believed to vary within the speech of one person according to contextual and emotional factors, between speakers and also between different languages and dialects. However, there are many problems involved in investigating this variance scientifically.
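A simple operationalization of the definition above divides a count of units (here, syllables) by elapsed time. Whether pauses are included in the denominator distinguishes what phoneticians commonly call speaking rate from articulation rate; the figures below are invented for illustration.

```python
def speech_tempo(n_syllables, total_seconds, pause_seconds=0.0):
    """Syllables per second; excluding pause time gives the articulation rate."""
    speaking_time = total_seconds - pause_seconds
    return n_syllables / speaking_time

# A hypothetical utterance: 12 syllables over 3.0 s, with 0.6 s of pauses.
print(round(speech_tempo(12, 3.0), 1))       # speaking rate:     4.0 syll/s
print(round(speech_tempo(12, 3.0, 0.6), 1))  # articulation rate: 5.0 syll/s
```

The gap between the two figures is one reason tempo comparisons across speakers and languages are methodologically tricky: the choice of unit (syllable, phone, word) and the treatment of pauses both change the result.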
Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to those of oral languages. Phonemes serve the same role in oral and signed languages, the main difference being that oral languages are based on sound while signed languages are spatial and temporal. There is debate about phonotactics in ASL, but the literature has largely agreed on the Symmetry and Dominance Conditions as phonotactic constraints. Allophones behave in ASL as they do in spoken languages, occurring in free variation or in complementary and contrastive distribution. There is assimilation between phonemes depending on the context around a sign as it is produced. The brain processes spoken and signed language in the same way with respect to their linguistic properties; however, there are differences in activation between the auditory and visual cortices during language perception.