Categorical perception

Categorical perception is the phenomenon by which a gradual change in a variable along a continuum is perceived as a set of distinct categories rather than as a smooth gradient. It was originally observed for auditory stimuli but has since been found in other perceptual modalities. [1] [2]

Motor theory of speech perception

And what about the very building blocks of the language we use to name categories: are our speech sounds, /ba/, /da/, /ga/, innate or learned? The first question to answer about them is whether they are discrete categories at all, or merely arbitrary points along a continuum. It turns out that if one analyzes the sound spectrogram of /ba/ and /pa/, for example, both are found to lie along an acoustic continuum called voice onset time. With a technique similar to the one used in "morphing" visual images continuously into one another, it is possible to "morph" a /ba/ gradually into a /pa/ and beyond by gradually increasing the voicing parameter.
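
The sketch below illustrates such a continuum in the simplest possible terms: a set of synthetic stimuli generated by varying a single parameter (here, voice onset time in milliseconds) in equal steps between a /ba/-like and a /pa/-like endpoint. The function name, the 0-60 ms range, and the number of steps are illustrative assumptions, not the parameters of any particular experiment.

```python
# A minimal sketch of constructing a stimulus continuum by varying a single
# parameter in equal steps between two endpoints, here expressed as
# voice-onset-time values in milliseconds. The function name and the 0-60 ms
# range are illustrative assumptions, not values from any particular study.
import numpy as np

def make_vot_continuum(vot_ba_ms=0.0, vot_pa_ms=60.0, n_steps=9):
    """Return equally spaced VOT values from a /ba/-like to a /pa/-like endpoint."""
    return np.linspace(vot_ba_ms, vot_pa_ms, n_steps)

print(make_vot_continuum())
# [ 0.   7.5 15.  22.5 30.  37.5 45.  52.5 60. ]
```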

Alvin Liberman and colleagues [3] (though that paper did not discuss voice onset time) reported that when people listen to sounds that vary along the voicing continuum, they hear only /ba/s and /pa/s, nothing in between. This effect, in which a perceived quality jumps abruptly from one category to another at a certain point along a continuum instead of changing gradually, he dubbed "categorical perception" (CP). He suggested that CP was unique to speech, that CP made speech special, and, in what came to be called "the motor theory of speech perception," that CP's explanation lay in the anatomy of speech production.

According to the (now abandoned) motor theory of speech perception, the reason people perceive an abrupt change between /ba/ and /pa/ is that the way we hear speech sounds is influenced by how people produce them when they speak. What varies along this continuum is voice onset time: the "b" in /ba/ is voiced and the "p" in /pa/ is not. But unlike the synthetic "morphing" apparatus, the natural vocal apparatus is not capable of producing anything in between /ba/ and /pa/. So when listeners hear a sound from the voicing continuum, their brains perceive it by trying to match it with what they would have had to do to produce it. Since the only things they can produce are /ba/ and /pa/, they perceive any synthetic stimulus along the continuum as either /ba/ or /pa/, whichever it is closer to. A similar CP effect is found with /ba/ and /da/; these too lie along an acoustic continuum, but vocally, /ba/ is formed with the two lips and /da/ with the tip of the tongue against the alveolar ridge, and our anatomy does not allow any intermediates.

The motor theory of speech perception explained how speech was special and why speech-sounds are perceived categorically: sensory perception is mediated by motor production. Wherever production is categorical, perception will be categorical; where production is continuous, perception will be continuous. And indeed vowel categories like a/u were found to be much less categorical than ba/pa or ba/da.

Acquired distinctiveness

If motor production mediates sensory perception, then one would expect this CP effect to be a result of learning to produce speech. Eimas et al. (1971), however, found that infants already show speech CP before they begin to speak. Perhaps, then, it is an innate effect, evolved to "prepare" us to learn to speak. [4] But Kuhl (1987) found that chinchillas also show "speech CP" even though they never learn to speak and presumably did not evolve to do so. [5] Lane (1965) showed that CP effects can be induced by learning alone, using a purely sensory (visual) continuum in which there is no motor production discontinuity to mediate the perceptual discontinuity. [6] He concluded that speech CP is not special after all, but merely a special case of Lawrence's classic demonstration that stimuli to which you learn to make a different response become more distinctive and stimuli to which you learn to make the same response become more similar.

It also became clear that CP was not quite the all-or-none effect Liberman had originally thought it was: it is not that all /pa/s are indistinguishable and all /ba/s are indistinguishable. We can hear the differences, just as we can see the differences between different shades of red. It is just that the within-category differences (pa1/pa2 or red1/red2) sound or look much smaller than the between-category differences (pa2/ba1 or red2/yellow1), even when the sizes of the underlying physical differences (voicing, wavelength) are actually the same.

Identification and discrimination tasks

The study of categorical perception often uses experiments involving discrimination and identification tasks in order to categorize participants' perceptions of sounds. Voice onset time (VOT) is measured along a continuum rather than as a binary distinction. English bilabial stops /b/ and /p/ are voiced and voiceless counterparts of the same place and manner of articulation, yet native speakers distinguish the sounds primarily by where they fall on the VOT continuum. Participants in these experiments establish clear phoneme boundaries on the continuum; two sounds with different VOT will be perceived as the same phoneme if they fall on the same side of the boundary. [7] Participants take longer to discriminate between two sounds falling in the same category of VOT than between two on opposite sides of the phoneme boundary, even if the difference in VOT is greater between the two in the same category. [8]
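
As a minimal sketch of the boundary claim above, the following code treats perception as depending only on which side of an assumed VOT boundary a stimulus falls; two stimuli count as the same phoneme whenever they fall on the same side, even if they are physically farther apart than a pair that straddles the boundary. The 25 ms boundary and the function names are hypothetical placeholders.

```python
# A hedged sketch of the boundary claim above: a stimulus is heard as /b/ or
# /p/ depending only on which side of an assumed VOT boundary it falls, so two
# stimuli on the same side count as the same phoneme even when they are
# physically farther apart than a pair that straddles the boundary.
# The 25 ms boundary and the function names are hypothetical placeholders.
def perceived_category(vot_ms, boundary_ms=25.0):
    return "/p/" if vot_ms > boundary_ms else "/b/"

def heard_as_same_phoneme(vot1_ms, vot2_ms, boundary_ms=25.0):
    return perceived_category(vot1_ms, boundary_ms) == perceived_category(vot2_ms, boundary_ms)

print(heard_as_same_phoneme(30, 55))  # True: both /p/, despite a 25 ms physical difference
print(heard_as_same_phoneme(20, 30))  # False: only 10 ms apart, but they straddle the boundary
```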

Identification

In a categorical perception identification task, participants are asked to identify stimuli, such as speech sounds. An experimenter testing the perception of the VOT boundary between /p/ and /b/ may play several sounds falling on various parts of the VOT continuum and ask volunteers whether they hear each sound as /p/ or /b/. [9] In such experiments, sounds on one side of the boundary are heard almost universally as /p/ and on the other as /b/. Stimuli on or near the boundary take longer to identify and are reported differently by different volunteers, but are perceived as either /b/ or /p/, rather than as a sound somewhere in the middle. [7]
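
Identification data of this kind typically trace out a steep S-shaped curve over the continuum. The sketch below models that shape with a logistic function: stimuli far from the boundary are labelled almost unanimously, while stimuli near it draw mixed responses. The boundary location and slope are assumed values for illustration, not fitted data.

```python
# A sketch of the typical S-shaped identification function: the probability of
# reporting /p/ rises sigmoidally with VOT, so stimuli far from the boundary
# are labelled almost unanimously while stimuli near it draw mixed responses.
# The 25 ms boundary and the slope are assumed values, not fitted data.
import numpy as np

def p_identified_as_p(vot_ms, boundary_ms=25.0, slope_per_ms=0.5):
    """Logistic identification function over the VOT continuum."""
    return 1.0 / (1.0 + np.exp(-slope_per_ms * (vot_ms - boundary_ms)))

for vot in [5, 15, 24, 26, 35, 45]:
    print(f"VOT {vot:2d} ms -> P(/p/) = {p_identified_as_p(vot):.2f}")
# Endpoints come out near 0.00 or 1.00; stimuli near the boundary come out near 0.50.
```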

Discrimination

A simple AB discrimination task presents participants with two stimuli, and participants must decide whether they are identical. [9] Predictions for a discrimination task are often based on the preceding identification task. An ideal discrimination experiment validating categorical perception of stop consonants would show volunteers correctly discriminating stimuli that fall on opposite sides of the boundary far more often, while discriminating at chance level between stimuli on the same side of the boundary. [8]

In an ABX discrimination task, volunteers are presented with three stimuli. A and B must be distinct stimuli and volunteers decide which of the two the third stimulus X matches. This discrimination task is much more common than a simple AB task. [9] [8]
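
One classic way to derive discrimination predictions from identification data, in the spirit of the prediction described above, is a labeling-only model: assume listeners compare only the category labels they assign to A, B, and X, and guess whenever the labels are uninformative. Under that assumption the predicted ABX accuracy works out to 0.5 + 0.5(pA - pB)^2, where pA and pB are the probabilities of labeling stimuli A and B as, say, /p/. The sketch below is illustrative and is not necessarily the analysis used in the cited studies.

```python
# A sketch of a labeling-only prediction: if listeners compare only the
# category labels they assign to A, B, and X, and guess whenever the labels
# are uninformative, the predicted ABX accuracy is 0.5 + 0.5 * (pA - pB)**2,
# where pA and pB are the probabilities of labeling A and B as /p/.
# Illustrative only; not necessarily the analysis used in the cited studies.
def predicted_abx_accuracy(p_a, p_b):
    return 0.5 + 0.5 * (p_a - p_b) ** 2

# Within-category pair: both reliably labelled /p/ -> near-chance prediction.
print(predicted_abx_accuracy(0.95, 0.99))  # ~0.50
# Cross-boundary pair: labels differ reliably -> high predicted accuracy.
print(predicted_abx_accuracy(0.10, 0.90))  # 0.82
```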

Whorf hypothesis

According to the Sapir–Whorf hypothesis (of which Lawrence's acquired similarity/distinctiveness effects would simply be a special case), language affects the way that people perceive the world. On this view, colors are perceived categorically only because they happen to be named categorically: our subdivisions of the spectrum are arbitrary, learned, and vary across cultures and languages. But Berlin & Kay (1969) suggested that this was not so: not only do most cultures and languages subdivide and name the color spectrum the same way, but even for those that do not, the regions of compression and separation are the same. [10] We all see blues as more alike and greens as more alike, with a fuzzy boundary in between, whether or not we have named the difference. This view has been challenged in a review article by Regier and Kay (2009), who distinguish between the questions "1. Do color terms affect color perception?" and "2. Are color categories determined by largely arbitrary linguistic convention?". They report evidence that linguistic categories, stored in the left hemisphere of the brain for most people, do affect categorical perception, but primarily in the right visual field, and that this effect is eliminated by a concurrent verbal interference task. [11]

Universalism, in contrast to the Sapir–Whorf hypothesis, posits that perceptual categories are innate and are unaffected by the language that one speaks. [12]

Support

Support for the Sapir–Whorf hypothesis comes from instances in which speakers of one language demonstrate categorical perception in a way that differs from speakers of another language. Examples of such evidence are provided below:

Regier and Kay (2009) reported evidence that linguistic categories affect categorical perception primarily in the right visual field. [13] The right visual field is processed by the left hemisphere of the brain, which also houses language faculties. Davidoff (2001) presented evidence that in color discrimination tasks, native English speakers discriminated more easily between color stimuli that crossed a defined blue-green boundary than between stimuli on the same side of it, but did not show categorical perception when given the same task with the Berinmo categories "nol" and "wor"; Berinmo speakers showed the opposite pattern. [14]

A popular theory in current research is "weak Whorfianism", the theory that although there is a strong universal component to perception, cultural differences still have an impact. For example, a 1998 study found that while there was evidence of universal perception of color between speakers of Setswana and English, there were also marked differences between the two language groups. [15]

Evolved categorical perception

The signature of categorical perception (CP) is within-category compression and/or between-category separation. The size of the CP effect is merely a scaling factor; it is this compression/separation "accordion effect" that is CP's distinctive feature. In this respect, the "weaker" CP effect for vowels, whose motor production is continuous rather than categorical but whose perception is by this criterion categorical, is every bit as much a CP effect as the ba/pa and ba/da effects. But, as with colors, it looks as if the effect is an innate one: our sensory category detectors for both color and speech sounds are born already "biased" by evolution, so our perceived color and speech-sound spectra are already "warped" with these compressions and separations.
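
A simple way to make the compression/separation signature concrete is to compare perceived distances for physically equal steps that stay within a category with steps that cross the boundary; a ratio above 1 indicates the "accordion effect". The perceived-distance numbers in the sketch below are invented purely for illustration.

```python
# A minimal sketch quantifying the "accordion effect": compare perceived
# distances for physically equal steps that stay within a category with steps
# that cross the boundary. The perceived-distance numbers are invented purely
# for illustration; a ratio above 1 indicates compression/separation.
import numpy as np

within_category = np.array([0.8, 0.9, 1.0])    # e.g. pa1-pa2, pa2-pa3, pa3-pa4
between_category = np.array([2.6, 2.9, 3.1])   # e.g. steps that cross the ba/pa boundary

cp_index = between_category.mean() / within_category.mean()
print(f"separation/compression ratio: {cp_index:.2f}")  # ~3.19 here
```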

Learned categorical perception

The Lane/Lawrence demonstrations, later replicated and extended by Goldstone (1994), showed that CP can be induced by learning alone. [16] There are also the countless categories catalogued in our dictionaries that are unlikely to be inborn. Nativist theorists such as Fodor (1983) have sometimes seemed to suggest that all of our categories are inborn. [17] There are recent demonstrations that, although the primary color and speech categories may be inborn, their boundaries can be modified or even lost as a result of learning, and weaker secondary boundaries can be generated by learning alone. [18]

In the case of innate CP, our categorically biased sensory detectors pick out their prepared color and speech-sound categories far more readily and reliably than if our perception had been continuous.

Learning is a cognitive process that results in a relatively permanent change in behavior, and it can influence perceptual processing. [19] It does so by altering the way in which an individual perceives a given stimulus based on prior experience or knowledge: how something is perceived is changed by how it was seen, observed, or experienced before. The effects of learning on categorical perception can be studied by looking at the processes involved. [20]

Learned categorical perception can be divided into different processes through comparisons, which fall into between-category and within-category groups. [21] Between-category comparisons are made between two separate sets of objects, while within-category comparisons are made within one set of objects. Between-category comparisons lead to a categorical expansion effect: the classifications and boundaries for the category become broader, encompassing a larger set of objects; in other words, the "edge lines" defining the category become wider. Within-category comparisons lead to a categorical compression effect, the narrowing of category boundaries to include a smaller set of objects (the "edge lines" are closer together). [21] Therefore, between-category comparisons lead to less rigid group definitions, whereas within-category comparisons lead to more rigid definitions.

Another method of comparison is to look at both supervised and unsupervised group comparisons. Supervised groups are those for which categories have been provided, meaning that the category has been defined previously or given a label; unsupervised groups are groups for which categories are created, meaning that the categories will be defined as needed and are not labeled. [22]

Themes are important in learned categorical perception: the learning of categories is influenced by their presence, and they increase the quality of learning, especially when the existing themes are opposites. [22] Themes serve as cues for different categories, designating what to look for when placing objects into them. For example, when perceiving shapes, angles are a theme: the number of angles and their size provide information about the shape and cue different categories. Three angles would cue a triangle, whereas four might cue a rectangle or a square. Opposite to the theme of angles would be the theme of circularity. The stark contrast between the sharp contour of an angle and the round curvature of a circle makes the categories easier to learn.

Similar to themes, labels are also important to learned categorical perception. [21] Labels are "noun-like" titles that can encourage categorical processing with a focus on similarities. [21] The strength of a label can be determined by three factors: its affective (or emotional) strength, the permeability (the ability to break through) of its boundaries, and a judgment (measurement of rigidity) of its discreteness. [21] Sources of labels differ and, similar to unsupervised/supervised categories, labels are either created or already exist. [21] [22] Labels affect perception regardless of their source: peers, individuals, experts, cultures, and communities can all create labels, and the source appears to matter less than the mere presence of a label. There is a positive correlation between the strength of the label (the combination of the three factors) and the degree to which the label affects perception, meaning that the stronger the label, the more it affects perception. [21]

Cues used in learned categorical perception can foster easier recall and access of prior knowledge in the process of learning and using categories. [22] An item in a category can be easier to recall if the category has a cue for the memory. As discussed, labels and themes both function as cues for categories, and, therefore, aid in the memory of these categories and the features of the objects belonging to them.

Several brain structures are at work in learned categorical perception, including neurons generally, the prefrontal cortex, and the inferotemporal cortex. [20] [23] Neurons are involved in all processes in the brain and therefore facilitate learned categorical perception; they send messages between brain areas and support the visual and linguistic processing of the category. The prefrontal cortex is involved in "forming strong categorical representations." [20] The inferotemporal cortex has cells that code for different object categories and are tuned along diagnostic category dimensions, the dimensions that distinguish category boundaries. [20]

The learning of categories and categorical perception can be improved by adding verbal labels, making themes relevant to the self, making more separate categories, and targeting similar features that make it easier to form and define categories.

Learned categorical perception occurs not only in humans but has been demonstrated in other animal species as well. Studies have targeted categorical perception in humans, monkeys, rodents, birds, and frogs. [23] [24] These studies have led to numerous discoveries. They focus primarily on learning the boundaries of categories, where inclusion begins and ends, and they support the hypothesis that categorical perception has a learned component.

Computational and neural models

Computational modeling (Tijsseling & Harnad 1997; Damper & Harnad 2000) has shown that many types of category-learning mechanisms (e.g. both back-propagation and competitive networks) display CP-like effects. [25] [26] In back-propagation nets, the hidden-unit activation patterns that "represent" an input build up within-category compression and between-category separation as they learn; other kinds of nets display similar effects. CP seems to be a means to an end: Inputs that differ among themselves are "compressed" onto similar internal representations if they must all generate the same output; and they become more separate if they must generate different outputs. The network's "bias" is what filters inputs onto their correct output category. The nets accomplish this by selectively detecting (after much trial and error, guided by error-correcting feedback) the invariant features that are shared by the members of the same category and that reliably distinguish them from members of different categories; the nets learn to ignore all other variation as irrelevant to the categorization.
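
The following is a self-contained sketch of that effect, not the code of the cited models: a small back-propagation network is trained to map inputs drawn from two categories onto two outputs, and the mean within-category and between-category distances among its hidden-unit activation patterns are measured before and after learning. The architecture, data, and hyperparameters are illustrative assumptions.

```python
# A sketch of CP-like warping in a back-propagation net: hidden-unit
# representations of inputs that must produce the same output become more
# similar (compression), and those that must produce different outputs become
# more different (separation). Data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Two categories of inputs drawn along one continuous dimension.
n_per_class = 50
x_a = rng.normal(-1.0, 0.6, size=(n_per_class, 1))
x_b = rng.normal(+1.0, 0.6, size=(n_per_class, 1))
X = np.vstack([x_a, x_b])
y = np.vstack([np.zeros((n_per_class, 1)), np.ones((n_per_class, 1))])

# One hidden layer of sigmoid units, trained by gradient descent on squared
# error (plain back-propagation).
n_hidden = 8
W1 = rng.normal(0, 0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden(X):
    return sigmoid(X @ W1 + b1)

def cp_measures(H, y):
    """Mean pairwise distance within each category and between categories."""
    same = (y == y.T)
    d = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    off_diag = ~np.eye(len(H), dtype=bool)
    return d[same & off_diag].mean(), d[~same].mean()

print("before training (within, between):", cp_measures(hidden(X), y))

lr = 0.5
for epoch in range(2000):
    H = hidden(X)                        # forward pass
    out = sigmoid(H @ W2 + b2)
    err = out - y                        # derivative of squared error w.r.t. out
    d_out = err * out * (1 - out)        # back-propagate through output sigmoid
    d_hid = (d_out @ W2.T) * H * (1 - H) # back-propagate through hidden sigmoids
    W2 -= lr * H.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X); b1 -= lr * d_hid.mean(axis=0)

print("after training (within, between):", cp_measures(hidden(X), y))
# Typically the between-category distance grows relative to the within-category
# distance: the learned hidden representation shows CP-like warping.
```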

Brain basis

Neural data provide correlates of CP and of learning. [27] Differences between event-related potentials recorded from the brain have been found to be correlated with differences in the perceived category of the stimulus viewed by the subject. Neural imaging studies have shown that these effects are localized and even lateralized to certain brain regions in subjects who have successfully learned the category, and are absent in subjects who have not. [28] [29]

Categorical perception of speech units has been identified with the left prefrontal cortex, whereas posterior areas earlier in the processing stream, such as areas in the left superior temporal gyrus, do not show it. [30]

Language-induced

Both innate and learned CP are sensorimotor effects: The compression/separation biases are sensorimotor biases, and presumably had sensorimotor origins, whether during the sensorimotor life-history of the organism, in the case of learned CP, or the sensorimotor life-history of the species, in the case of innate CP. The neural net I/O models are also compatible with this fact: Their I/O biases derive from their I/O history. But when we look at our repertoire of categories in a dictionary, it is highly unlikely that many of them had a direct sensorimotor history during our lifetimes, and even less likely in our ancestors' lifetimes. How many of us have seen a unicorn in real life? We have seen pictures of them, but what had those who first drew those pictures seen? And what about categories I cannot draw or see (or taste or touch): What about the most abstract categories, such as goodness and truth?

Some of our categories must originate from a source other than direct sensorimotor experience, and here we return to language and the Whorf hypothesis: can categories, and their accompanying CP, be acquired through language alone? Again, there are some neural net simulation results suggesting that once a set of category names has been "grounded" through direct sensorimotor experience, those names can be combined into Boolean combinations (man = male & human) and into still higher-order combinations (bachelor = unmarried & man), which not only pick out the more abstract, higher-order categories much the way the direct sensorimotor detectors do, but also inherit their CP effects, as well as generating some of their own. Bachelor inherits the compression/separation of unmarried and man, and adds a layer of separation/compression of its own. [31] [32]
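
A toy sketch of that compositional idea, under the assumption that a few basic detectors are already grounded: new categories are defined purely as Boolean combinations of existing category names, and the higher-order detector is obtained by composition. The predicate names and the dictionary-style representation are hypothetical stand-ins for learned sensorimotor detectors.

```python
# A toy sketch of grounding transfer: once a few category detectors are
# grounded, new categories can be defined purely by Boolean combinations of
# their names and still act as detectors. The predicates below are stand-ins
# for learned sensorimotor detectors, not a model of any cited simulation.
def is_male(x):    return x.get("male", False)      # assumed grounded directly
def is_human(x):   return x.get("human", False)     # assumed grounded directly
def is_married(x): return x.get("married", False)   # assumed grounded directly

def is_man(x):      return is_male(x) and is_human(x)        # man = male & human
def is_bachelor(x): return is_man(x) and not is_married(x)   # bachelor = unmarried man

person = {"male": True, "human": True, "married": False}
print(is_bachelor(person))  # True: the higher-order category is picked out by composition
```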

These language-induced CP effects remain to be directly demonstrated in human subjects; so far only learned and innate sensorimotor CP have been demonstrated. [33] [34] The latter shows the Whorfian power of naming and categorization in warping our perception of the world. That is enough to rehabilitate the Whorf hypothesis from its apparent failure on color terms (and perhaps also from its apparent failure on Eskimo snow terms [35]), but to show that it is a full-blown language effect, and not merely a vocabulary effect, it will have to be shown that our perception of the world can also be warped, not just by how things are named, but by what we are told about them.

Emotion

Emotions are an important characteristic of the human species. An emotion is an abstract concept that is most easily observed by looking at facial expressions, and emotions and their relation to categorical perception are therefore often studied using facial expressions. [36] [37] [38] [39] [40] Faces contain a large amount of valuable information. [38]

Emotions are divided into categories because they are discrete from one another: each emotion entails a separate and distinct set of reactions, consequences, and expressions. The feeling and expression of emotions is a natural occurrence and, for some emotions, a universal one. Six basic emotions are considered universal to the human species across age, gender, race, country, and culture, and are considered categorically distinct: happiness, disgust, sadness, surprise, anger, and fear. [39] According to the discrete emotions approach, people experience one emotion and not others, rather than a blend. [39] Categorical perception of emotional facial expressions does not require lexical categories. [39] Of these six emotions, happiness is the most easily identified.

The perception of emotions from facial expressions reveals slight gender differences [36] based on the definition and boundaries (essentially, the "edge line" where one emotion ends and the next begins) of the categories. Anger is perceived more easily and more quickly when displayed by males, whereas the same effect is seen for happiness when portrayed by women. [36] These effects are observed because the categories of the two emotions (anger and happiness) are more closely associated with other features of these specific genders.

Although verbal labels are provided for emotions, they are not required for perceiving them categorically. Infants can distinguish emotional expressions before they acquire language, and the categorical perception of emotions appears to rely on a "hardwired mechanism". [39] Additional evidence comes from cultures that lack a verbal label for a specific emotion yet still perceive it categorically as its own emotion, discrete and isolated from other emotions. [39] The categorical perception of emotions has also been studied by tracking eye movements, which revealed an implicit response with no verbal requirement, since the eye-movement response required only the movement and no subsequent verbal report. [37]

The categorical perception of emotions is sometimes a result of joint processing, and other factors may be involved. Emotional expression and invariable features (features that remain relatively consistent) often work together. [38] Race is one of the invariable features that contribute to categorical perception in conjunction with expression, and race can also be considered a social category. [38] Emotional categorical perception can also be seen as a mix of categorical and dimensional perception, where dimensional perception involves visual imagery; categorical perception occurs even when processing is dimensional. [40]


References

  1. Fugate, Jennifer M. B. (2013). "Categorical Perception for Emotional Faces". Emotion Review. 5 (1): 84–89. doi:10.1177/1754073912451350. PMC 4267261. PMID 25525458.
  2. Crystal, D. (1987). The Cambridge Encyclopedia of Language. Cambridge: Cambridge University Press.
  3. Liberman, A. M.; Harris, K. S.; Hoffman, H. S.; Griffith, B. C. (1957). "The discrimination of speech sounds within and across phoneme boundaries". Journal of Experimental Psychology. 54 (5): 358–368. doi:10.1037/h0044417. PMID 13481283. S2CID 10117886.
  4. Eimas, P.D.; Siqueland, E.R.; Jusczyk, P.W. & Vigorito, J. (1971). "Speech perception in infants". Science. 171 (3968): 303–306. Bibcode:1971Sci...171..303E. doi:10.1126/science.171.3968.303. hdl: 11858/00-001M-0000-002B-0DB3-1 . PMID   5538846. S2CID   15554065.
  5. Kuhl, P. K. (1987). "The Special-Mechanisms Debate in Speech Perception: Nonhuman Species and Nonspeech Signals". In S. Harnad (ed.). Categorical perception: The groundwork of Cognition. New York: Cambridge University Press.
  6. Lane, H. (1965). "The motor theory of speech perception: A critical review". Psychological Review. 72 (4): 275–309. doi:10.1037/h0021986. PMID   14348425.
  7. Fernández, Eva; Cairns, Helen (2011). Fundamentals of Psycholinguistics. West Sussex, United Kingdom: Wiley-Blackwell. pp. 175–179. ISBN 978-1-4051-9147-0.
  8. Repp, Bruno (1984). "Categorical Perception: Issues, Methods, Findings" (PDF). Speech and Language: Advances in Basic Research and Practice. 10: 243–335.
  9. Brandt, Jason; Rosen, Jeffrey (1980). "Auditory Phonemic Perception in Dyslexia: Categorical Identification and Discrimination of Stop Consonants". Brain and Language. 9 (2): 324–337. doi:10.1016/0093-934x(80)90152-2. PMID 7363076. S2CID 19098726.
  10. Berlin, B.; Kay, P. (1969). Basic color terms: Their universality and evolution. Berkeley: University of California Press. ISBN   978-1-57586-162-3.
  11. Regier, T.; Kay, P. (2009). "Language, thought, and color: Whorf was half right". Trends in Cognitive Sciences. 13 (10): 439–447. doi:10.1016/j.tics.2009.07.001. PMID   19716754. S2CID   2564005.
  12. Penn, Julia M. (1972). Linguistic relativity versus innate ideas: The origins of the Sapir-Whorf hypothesis in German thought. Walter de Gruyter. p. 11.
  13. Regier, T.; Kay, P. (2009). "Language, thought, and color: Whorf was half right". Trends in Cognitive Sciences. 13 (10): 439–447. doi:10.1016/j.tics.2009.07.001. PMID   19716754. S2CID   2564005.
  14. Davidoff, Jules (September 2001). "Language and perceptual categorisation" (PDF). Trends in Cognitive Sciences. 5 (9): 382–387. doi:10.1016/s1364-6613(00)01726-5. PMID   11520702. S2CID   12975180.
  15. Davies, I.R.L.; Sowden, P.T.; Jerrett, D.T.; Jerrett, T.; Corbett, G.G. (1998). "A cross-cultural study of English and Setswana speakers on a colour triads task: A test of the Sapir-Whorf hypothesis". British Journal of Psychology. 89: 1–15. doi:10.1111/j.2044-8295.1998.tb02669.x.
  16. Goldstone, R. L. (1994). "Influences of categorization on perceptual discrimination". Journal of Experimental Psychology. General. 123 (2): 178–200. doi:10.1037/0096-3445.123.2.178. PMID   8014612.
  17. Fodor, J. (1983). The modularity of mind. MIT Press. ISBN   978-0-262-06084-4.
  18. Roberson, D.; Davies, I.; Davidoff, J. (2000). "Color categories are not universal: Replications and new evidence from a stone-age culture" (PDF). Journal of Experimental Psychology: General. 129 (3): 369–398. doi:10.1037/0096-3445.129.3.369. PMID 11006906.
  19. Notman, Leslie; Paul Sowden; Emre Ozgen (2005). "The Nature of Learned Categorical Perception Effects: A Psychophysical Approach". Cognition. 95 (2): B1–B14. doi:10.1016/j.cognition.2004.07.002. PMID   15694641. S2CID   19535670.
  20. Casey, Matthew; Paul Sowden (2012). "Modeling learned categorical perception in human vision" (PDF). Neural Networks. 33: 114–126. doi:10.1016/j.neunet.2012.05.001. PMID 22622262.
  21. Foroni, Francesco; Myron Rothbart (2011). "Category Boundaries and Category Labels: When Does a Category Name Influence the Perceived Similarity of Category Members". Social Cognition. 29 (5): 547–576. doi:10.1521/soco.2011.29.5.547.
  22. Clapper, John (2012). "The Effects of Prior Knowledge on Incidental Category Learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 38 (6): 1558–1577. doi:10.1037/a0028457. PMID 22612162.
  23. Prather, Jonathan; Stephen Nowicki; Rindy Anderson; Susan Peters; Richard Mooney (2009). "Neural Correlates of Categorical Perception in Learned Vocal Communication". Nature Neuroscience. 12 (2): 221–228. doi:10.1038/nn.2246. PMC 2822723. PMID 19136972.
  24. Eriksson, Jan L.; Villa, Alessandro E.P. (2006). "Learning of auditory equivalence classes for vowels by rats". Behavioural Processes. 73 (3): 348–359. doi:10.1016/j.beproc.2006.08.005. PMID   16997507. S2CID   20165526.
  25. Damper, R.I.; Harnad, S. (2000). "Neural Network Modeling of Categorical Perception". Perception and Psychophysics. 62 (4): 843–867. doi: 10.3758/BF03206927 . PMID   10883589.
  26. Tijsseling, A.; Harnad, S. (1997). "Warping Similarity Space in Category Learning by Backprop Nets". In Ramscar, M.; Hahn, U.; Cambouropoulos, E.; Pain, H. (eds.). Proceedings of SimCat 1997: Interdisciplinary Workshop on Similarity and Categorization. Department of Artificial Intelligence, Edinburgh University. pp. 263–269.
  27. Sharma, A.; Dorman, M.F. (1999). "Cortical auditory evoked potential correlates of categorical perception of voice-onset time". Journal of the Acoustical Society of America. 106 (2): 1078–1083. Bibcode:1999ASAJ..106.1078S. doi:10.1121/1.428048. PMID   10462812.
  28. Seger, Carol A.; Poldrack, Russell A.; Prabhakaran, Vivek; Zhao, Margaret; Glover, Gary H.; Gabrieli, John D. E. (2000). "Hemispheric asymmetries and individual differences in visual concept learning as measured by functional MRI". Neuropsychologia. 38 (9): 1316–1324. doi:10.1016/S0028-3932(00)00014-2. PMID   10865107. S2CID   14991837.
  29. Raizada, RDS; Poldrack; RA (2007). "Selective Amplification of Stimulus Differences during Categorical Processing of Speech". Neuron. 56 (4): 726–740. doi: 10.1016/j.neuron.2007.11.001 . PMID   18031688.
  30. Myers, EB; Blumstein, SE; Walsh, E; Eliassen, J.; Batton, D; Kirk, JS (2009). "Inferior frontal regions underlie the perception of phonetic category invariance". Psychol Sci. 20 (7): 895–903. doi:10.1111/j.1467-9280.2009.02380.x. PMC   2851201 . PMID   19515116.
  31. Cangelosi, A.; Harnad, S. (2001). "The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories". Evolution of Communication. 4 (1): 117–142. doi:10.1075/eoc.4.1.07can. hdl: 10026.1/3619 . S2CID   15837328.
  32. Cangelosi A.; Greco A.; Harnad S. (2000). "From robotic toil to symbolic theft: Grounding transfer from entry-level to higher-level categories". Connection Science. 12 (2): 143–162. doi:10.1080/09540090050129763. hdl: 10026.1/3618 . S2CID   6597188.
  33. Pevtzow, R.; Harnad, S. (1997). "Warping Similarity Space in Category Learning by Human Subjects: The Role of Task Difficulty". In Ramscar, M.; Hahn, U.; Cambouropolos, E.; Pain, H. (eds.). Proceedings of SimCat 1997: Interdisciplinary Workshop on Similarity and Categorization. Department of Artificial Intelligence, Edinburgh University. pp. 189–195.
  34. Livingston, K. Andrews; Harnad, S. (1998). "Categorical Perception Effects Induced by Category Learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 24 (3): 732–753. doi:10.1037/0278-7393.24.3.732. PMID   9606933.
  35. Pullum, G. K. (1989). "The great eskimo vocabulary hoax". Natural Language and Linguistic Theory . 7 (2): 275–281. doi:10.1007/bf00138079. S2CID   170321854.
  36. Hess, Ursula; Reginald Adams; Robert Kleck (2009). "The Categorical Perception of Emotions and Traits". Social Cognition. 27 (2): 320–326. doi:10.1521/soco.2009.27.2.320.
  37. Cheal, Jenna; M. D. Rutherford (2012). "Mapping Emotion Category Boundaries Using a Visual Expectation Paradigm". Perception. 39 (11): 1514–1525. doi:10.1068/p6683. PMID 21313948. S2CID 26578275.
  38. Otten, Marte; Mahzarin Banaji (2012). "Social Categories Shape the Neural Representation of Emotion: Evidence From a Visual Face Adaptation Task". Frontiers in Integrative Neuroscience. 6: 9. doi:10.3389/fnint.2012.00009. PMC 3289861. PMID 22403531.
  39. Sauter, Disa; Oliver LeGuen; Daniel Haun (2011). "Categorical Perception of Emotional Facial Expressions Does Not Require Lexical Categories" (PDF). Emotion. 11 (6): 1479–1483. doi:10.1037/a0025336. hdl:11858/00-001M-0000-0011-A074-B. PMID 22004379. S2CID 14676908.
  40. Fujimura, Tomomi; Yoshi-Taka Matsuda; Kentaro Katahira; Masato Okada; Kazuo Okanoya (2012). "Categorical and Dimensional Perceptions in Decoding Emotional Facial Expressions". Cognition & Emotion. 26 (4): 587–601. doi:10.1080/02699931.2011.595391. PMC 3379784. PMID 21824015.

This article incorporates text by Stevan Harnad, available under the CC BY-SA 3.0 license.
