Pattern recognition (psychology)

In psychology and cognitive neuroscience, pattern recognition describes a cognitive process that matches information from a stimulus with information retrieved from memory. [1]

Pattern recognition occurs when information from the environment is received and entered into short-term memory, causing automatic activation of specific contents of long-term memory. An early example of this is learning the alphabet in order. When a carer repeats 'A, B, C' to a child multiple times, the child, using pattern recognition, says 'C' after hearing 'A, B' in order. Recognizing patterns allows us to predict and expect what is coming. The process of pattern recognition involves matching the information received with the information already stored in the brain. Making the connection between memories and information perceived is a step of pattern recognition called identification. Pattern recognition requires repetition of experience. Semantic memory, which is used implicitly and subconsciously, is the main type of memory involved in recognition. [2]

Pattern recognition is not only crucial to humans, but to other animals as well. Even koalas, which possess less-developed thinking abilities, use pattern recognition to find and consume eucalyptus leaves. The human brain has developed further, but it holds similarities to the brains of birds and lower mammals. The development of neural networks in the outer layer of the human brain has allowed for better processing of visual and auditory patterns. Spatial positioning in the environment, remembering findings, and detecting hazards and resources to increase chances of survival are examples of how humans and animals apply pattern recognition. [3]

There are six main theories of pattern recognition: template matching, prototype matching, feature analysis, recognition-by-components theory, bottom-up and top-down processing, and Fourier analysis. These theories are not mutually exclusive in their application to everyday life. Pattern recognition allows us to read words, understand language, recognize friends, and even appreciate music. Each of the theories applies to various activities and domains where pattern recognition is observed. Facial, music, and language recognition, as well as seriation, are a few such domains. Facial recognition and seriation occur through the encoding of visual patterns, while music and language recognition use the encoding of auditory patterns.

Theories

Template matching

Template matching theory describes the most basic approach to human pattern recognition. It is a theory that assumes every perceived object is stored as a "template" in long-term memory. [4] Incoming information is compared to these templates to find an exact match. [5] In other words, all sensory input is compared to multiple representations of an object to form one single conceptual understanding. The theory defines perception as a fundamentally recognition-based process. It assumes that everything we see, we understand only through past exposure, which then informs our future perception of the external world. [6] For example, the letter A is recognized as an A whether it is printed, italicized, or handwritten, but not as a B. This viewpoint is limited, however, in explaining how new experiences can be understood without being compared to an internal memory template.[citation needed]
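
A minimal sketch of the exact-match idea follows, assuming a toy 5x3 "pixel" encoding and a hypothetical template store; it is illustrative only, not a model from the cited literature:

```python
# Template matching sketch: input is compared against stored templates
# and recognized only on an exact match. Templates are hypothetical.
TEMPLATES = {
    "A": (" X ", "X X", "XXX", "X X", "X X"),
    "B": ("XX ", "X X", "XX ", "X X", "XX "),
}

def recognize(stimulus):
    """Return the label whose template matches the stimulus exactly, or None."""
    for label, template in TEMPLATES.items():
        if stimulus == template:
            return label
    return None  # no stored template matches: the theory's weak spot

print(recognize((" X ", "X X", "XXX", "X X", "X X")))  # -> "A"
print(recognize(("XXX", "X  ", "XXX", "X  ", "XXX")))  # -> None (novel stimulus)
```

The `None` result for a novel stimulus mirrors the limitation noted above: exact matching cannot explain recognition of inputs that have no stored template.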

Prototype matching

Unlike the exact, one-to-one template matching theory, prototype matching instead compares incoming sensory input to one average prototype.[citation needed] This theory proposes that exposure to a series of related stimuli leads to the creation of a "typical" prototype based on their shared features. [6] It reduces the number of stored templates by standardizing them into a single representation. [4] The prototype supports perceptual flexibility, because unlike template matching, it allows for variability in the recognition of novel stimuli.[citation needed] For instance, a child who had never seen a lawn chair before would still recognize it as a chair, because they understand its essential characteristics: four legs and a seat. This idea, however, has trouble with objects that cannot readily be "averaged" into one representation, such as types of canines. Even though dogs, wolves, and foxes are all typically furry, four-legged, moderately sized animals with ears and a tail, they are not all the same, and thus cannot be strictly perceived in terms of a single prototype.
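
A minimal sketch of prototype formation and matching, assuming hypothetical feature vectors (e.g. [number of legs, has a seat, has a back]) and an arbitrary distance threshold:

```python
# Prototype matching sketch: related exemplars are averaged into a single
# prototype, and a novel stimulus is recognized if it lies close enough.
def make_prototype(exemplars):
    """Average a list of feature vectors into one prototype."""
    n = len(exemplars)
    return [sum(values) / n for values in zip(*exemplars)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

chairs_seen = [[4, 1, 1], [4, 1, 1], [4, 1, 0]]  # kitchen chair, office chair, stool-like
chair_prototype = make_prototype(chairs_seen)

lawn_chair = [4, 1, 1]  # never seen before, but near the prototype
print(distance(lawn_chair, chair_prototype) < 1.0)  # -> True: recognized as a chair
```

Unlike the exact comparison in template matching, the distance threshold tolerates variation, which is exactly what lets the novel lawn chair be recognized.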

Multiple discrimination scaling

Template and feature analysis approaches to the recognition of objects (and situations) have been reconciled, and arguably superseded, by multiple discrimination theory. This states that, in any perceptual judgment, the amount of each salient feature of a template present in a test stimulus is recognized as lying at some distance, measured in the universal unit of 50% discrimination (the objective-performance "JND", or just-noticeable difference [7]), from the amount of that feature in the template. [8]
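
One hedged way to formalize this claim (the notation is ours, for illustration, and not taken from the cited papers): if $s_i$ is the amount of salient feature $i$ in the test stimulus and $t_i$ the amount in the template, the perceived discrepancy in JND units is

```latex
% Illustrative formalization (notation assumed, not from the cited papers):
% d_i = perceived discrepancy on salient feature i, in JND units, where
% s_i is the feature amount in the stimulus, t_i the amount in the
% template, and JND_i the 50%-discrimination unit for that feature.
\[
  d_i = \frac{\lvert s_i - t_i \rvert}{\mathrm{JND}_i}
\]
```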

Recognition by components theory

Image showing the breakdown of common geometric shapes (geons)

Similar to feature detection theory, recognition by components (RBC) focuses on the bottom-up features of the stimuli being processed. First proposed by Irving Biederman (1987), this theory states that humans recognize objects by breaking them down into their basic 3D geometric shapes, called geons (e.g. cylinders, cubes, and cones). An example is how we break down a common item like a coffee cup: we recognize the hollow cylinder that holds the liquid and the curved handle off the side that allows us to hold it. Even though not every coffee cup is exactly the same, these basic components help us to recognize the consistency across examples (or pattern). RBC suggests that there are fewer than 36 unique geons that, when combined, can form a virtually unlimited number of objects. To parse and dissect an object, RBC proposes we attend to two specific features: edges and concavities. Edges enable the observer to maintain a consistent representation of the object regardless of the viewing angle and lighting conditions. Concavities are where two edges meet and enable the observer to perceive where one geon ends and another begins.
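
A minimal sketch of the geon idea, assuming a hypothetical object inventory and geon labels (the labels and arrangement tags are invented for illustration):

```python
# RBC sketch: objects are stored as sets of geons plus their arrangement,
# and a stimulus is recognized by matching its parsed geons against the
# stored structural descriptions.
KNOWN_OBJECTS = {
    "coffee cup": {("cylinder", "body"), ("curved handle", "side")},
    "bucket":     {("cylinder", "body"), ("curved handle", "top")},
    "flashlight": {("cylinder", "body"), ("cylinder", "end")},
}

def recognize(parsed_geons):
    """Return the stored object whose geon description best overlaps the input."""
    best = max(KNOWN_OBJECTS, key=lambda name: len(KNOWN_OBJECTS[name] & parsed_geons))
    return best if KNOWN_OBJECTS[best] & parsed_geons else None

# A cup seen from a new angle still parses into the same geon arrangement.
print(recognize({("cylinder", "body"), ("curved handle", "side")}))  # -> "coffee cup"
```

Note that the cup and the bucket share the same geons and differ only in arrangement, which is why the structural tags, not just the geon inventory, drive the match.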

The RBC principles of visual object recognition can be applied to auditory language recognition as well. In place of geons, language researchers propose that spoken language can be broken down into basic components called phonemes. For example, there are 44 phonemes in the English language.

Top-down and bottom-up processing

Top-down processing

Top-down processing refers to the use of background information in pattern recognition. [9] It always begins with a person's previous knowledge and makes predictions based on that already acquired knowledge. [10] Psychologist Richard Gregory estimated that about 90% of visual information is lost between the eye and the brain, which is why the brain must guess what the person sees based on past experiences. In other words, we construct our perception of reality, and these perceptions are hypotheses or propositions based on past experiences and stored information. The formation of incorrect propositions leads to errors of perception, such as visual illusions. [9] Given a paragraph written in difficult handwriting, it is easier to understand what the writer wants to convey by reading the whole paragraph rather than reading the words as separate terms. The brain may be able to perceive and understand the gist of the paragraph due to the context supplied by the surrounding words. [11]
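
A minimal sketch of how prior knowledge can resolve ambiguous sensory input, in the spirit of the handwriting example; the candidate words, scores, and context priors are hypothetical:

```python
# Top-down processing sketch: ambiguous bottom-up evidence is combined with
# prior knowledge (here, plausibility in sentence context) so the more
# likely interpretation wins.

# The handwriting leaves one letter ambiguous between "a" and "o":
candidates = {"cat": 0.5, "cot": 0.5}  # bottom-up evidence alone is a tie

# Prior knowledge from the surrounding sentence "the ___ purred":
context_prior = {"cat": 0.9, "cot": 0.1}

posterior = {word: candidates[word] * context_prior[word] for word in candidates}
best = max(posterior, key=posterior.get)
print(best)  # -> "cat": context disambiguates what vision alone could not
```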

Bottom-up processing

Bottom-up processing is also known as data-driven processing, because it originates with the stimulation of the sensory receptors. [10] Psychologist James Gibson opposed the top-down model and argued that perception is direct and not subject to hypothesis testing, as Gregory proposed. He stated that sensation is perception and that there is no need for extra interpretation, as there is enough information in our environment to make sense of the world in a direct way. His theory is sometimes known as the "ecological theory" because of the claim that perception can be explained solely in terms of the environment. An example of bottom-up processing involves presenting a flower at the center of a person's visual field. The sight of the flower and all the information about the stimulus are carried from the retina to the visual cortex in the brain. The signal travels in one direction. [11]

Seriation

A simple seriation task involving arranging shapes by size

In psychologist Jean Piaget's theory of cognitive development, the third stage is called the concrete operational stage. It is during this stage that the abstract principle of thinking called "seriation" is naturally developed in a child. [12] Seriation is the ability to arrange items in a logical order along a quantitative dimension such as length, weight, or age. [13] It is a general cognitive skill which is not fully mastered until after the nursery years. [14] To seriate means to understand that objects can be ordered along a dimension, [12] and to do so effectively, the child needs to be able to answer the question "What comes next?" [14] Seriation skills also help to develop problem-solving skills, which are useful in recognizing and completing patterning tasks.

Piaget's work on seriation

Piaget studied the development of seriation along with Szeminska in an experiment in which they used rods of varying lengths to test children's skills. [15] They found that there were three distinct stages in the development of the skill. In the first stage, children around the age of 4 could not arrange the first ten rods in order. They could make smaller groups of 2–4 rods, but could not put all the elements together. In the second stage, where the children were 5–6 years of age, they could succeed in the seriation task with the first ten rods through trial and error. They could insert the other set of rods into order, also through trial and error. In the third stage, the 7–8-year-old children could arrange all the rods in order without much trial and error. The children used the systematic method of first looking for the smallest rod, and then the smallest among the rest. [15]
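
The systematic strategy of the oldest children, repeatedly picking the smallest remaining rod, is essentially what computer science calls selection sort. A minimal sketch, with hypothetical rod lengths:

```python
# Seriation sketch: repeatedly take the smallest remaining rod, as the
# 7-8-year-olds in Piaget's experiment did (selection sort).
def seriate(rods):
    """Arrange rod lengths in order by repeatedly taking the smallest remaining."""
    remaining = list(rods)
    ordered = []
    while remaining:
        smallest = min(remaining)   # "look for the smallest among the rest"
        remaining.remove(smallest)
        ordered.append(smallest)
    return ordered

print(seriate([9.0, 10.6, 9.8, 11.4, 10.2]))  # hypothetical lengths in cm
```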

Development of problem-solving skills

To develop the skill of seriation, which then helps advance problem-solving skills, children should be provided with opportunities to arrange things in order using the appropriate language, such as "big" and "bigger", when working with size relationships. They should also be given the chance to arrange objects in order based on texture, sound, flavor, and color. [14] Along with specific seriation tasks, children should be given the chance to compare the different materials and toys they use during play. Through activities like these, a true understanding of the characteristics of objects will develop. To aid them at a young age, the differences between the objects should be obvious. [14] Lastly, a more complicated task of arranging two different sets of objects and seeing the relationship between the two sets should also be provided. A common example of this is having children attempt to fit saucepan lids to saucepans of different sizes, or fitting together different sizes of nuts and bolts. [14]

Application of seriation in schools

To help build up math skills in children, teachers and parents can help them learn seriation and patterning. Young children who understand seriation can put numbers in order from lowest to highest. Eventually, they will come to understand that 6 is higher than 5, and 20 is higher than 10. [16] Similarly, having children copy patterns or create patterns of their own, like ABAB patterns, is a great way to help them recognize order and prepare for later math skills, such as multiplication. Child care providers can begin exposing children to patterns at a very young age by having them make groups and count the total number of objects. [16]
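
A minimal sketch of the ABAB-style patterning task, finding the repeating unit and predicting what comes next; the detection rule is a simple hypothetical heuristic, not a model of children's reasoning:

```python
# Patterning sketch: detect the shortest repeating unit in a sequence
# such as ABAB and return the element that should come next.
def next_item(sequence):
    """Find the shortest repeating unit and return the next element."""
    for size in range(1, len(sequence) // 2 + 1):
        unit = sequence[:size]
        if all(sequence[i] == unit[i % size] for i in range(len(sequence))):
            return unit[len(sequence) % size]
    return None  # no repeating unit found

print(next_item(["A", "B", "A", "B"]))                          # -> "A"
print(next_item(["red", "red", "blue", "red", "red", "blue"]))  # -> "red"
```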

Facial pattern recognition

Recognizing faces is one of the most common forms of pattern recognition. Humans are extremely effective at remembering faces, but this ease and automaticity belie a very challenging problem. [17] [18] All faces are physically similar: faces have two eyes, one mouth, and one nose, all in predictable locations, yet humans can recognize a face from several different angles and in various lighting conditions. [18]

Neuroscientists posit that recognizing faces takes place in three phases. The first phase starts with visually focusing on the physical features. The facial recognition system then needs to reconstruct the identity of the person from previous experiences. This provides us with the signal that this might be a person we know. The final phase of recognition is complete when the face elicits the name of the person. [19]

Although humans are great at recognizing faces under normal viewing angles, upside-down faces are tremendously difficult to recognize. This demonstrates not only the challenges of facial recognition but also how humans have specialized procedures and capacities for recognizing faces under normal upright viewing conditions. [18]

Neural mechanisms

Brain animation highlighting the fusiform face area, thought to be where facial processing and recognition take place

Scientists agree that there is a certain area in the brain specifically devoted to processing faces. This structure is called the fusiform gyrus, and brain imaging studies have shown that it becomes highly active when a subject is viewing a face. [20]

Several case studies have reported that patients with lesions or tissue damage localized to this area have tremendous difficulty recognizing faces, even their own. Although most of this research is circumstantial, a study at Stanford University provided conclusive evidence for the fusiform gyrus' role in facial recognition. In a unique case study, researchers were able to send direct signals to a patient's fusiform gyrus. The patient reported that the faces of the doctors and nurses changed and morphed in front of him during this electrical stimulation. Researchers agree this demonstrates a convincing causal link between this neural structure and the human ability to recognize faces. [20]

Facial recognition development

Although facial recognition is fast and automatic in adults, children do not reach adult levels of performance (in laboratory tasks) until adolescence. [21] Two general theories have been put forth to explain how facial recognition normally develops. The first, general cognitive development theory, proposes that the perceptual ability to encode faces is fully developed early in childhood, and that the continued improvement of facial recognition into adulthood is attributable to other general factors. These general factors include improved attentional focus, deliberate task strategies, and metacognition. Research supports the argument that these other general factors improve dramatically into adulthood. [21] Face-specific perceptual development theory argues that the improved facial recognition between childhood and adulthood is due to the continued refinement of face perception itself. The cause of this continuing development is proposed to be ongoing experience with faces.

Developmental issues

Several developmental issues manifest as a decreased capacity for facial recognition. Using what is known about the role of the fusiform gyrus, research has shown that impaired social development along the autism spectrum is accompanied by a behavioral marker, in which these individuals tend to look away from faces, and a neurological marker, characterized by decreased neural activity in the fusiform gyrus. Similarly, those with developmental prosopagnosia (DP) struggle with facial recognition to the extent that they are often unable to identify even their own faces. Many studies report that around 2% of the world's population have developmental prosopagnosia, and that individuals with DP have a family history of the trait. [18] Individuals with DP are behaviorally indistinguishable from those with physical damage or lesions to the fusiform gyrus, again implicating its importance for facial recognition. Even setting aside those with DP or neurological damage, there remains large variability in facial recognition ability in the general population. [18] It is unknown what accounts for these differences, whether they reflect a biological or an environmental disposition. Recent research analyzing identical and fraternal twins showed that facial recognition ability was significantly more highly correlated in identical twins, suggesting a strong genetic component to individual differences in facial recognition ability. [18]

Language development

Pattern recognition in language acquisition

Recent[when?] research reveals that infant language acquisition is linked to cognitive pattern recognition. [22] Unlike classical nativist and behavioral theories of language development, [23] scientists now believe that language is a learned skill. [22] Studies at the Hebrew University and the University of Sydney both show a strong correlation between the ability to identify visual patterns and the ability to learn a new language. [22] [24] Children with high shape recognition showed better grammar knowledge, even when controlling for the effects of intelligence and memory capacity. [24] This supports the theory that language learning is based on statistical learning, [22] the process by which infants perceive common combinations of sounds and words in language and use them to inform future speech production.

Phonological development

The first step in infant language acquisition is to distinguish among the most basic sound units of their native language. This includes every consonant, every short and long vowel sound, and any additional sounds such as those spelled "th" and "ph" in English. These units, called phonemes, are detected through exposure and pattern recognition. Infants use their "innate feature detector" capabilities to distinguish between the sounds of words. [23] They split them into phonemes through a mechanism of categorical perception. Then they extract statistical information by recognizing which combinations of sounds are most likely to occur together, [23] like "qu" or "h" plus a vowel. In this way, their ability to learn words is based directly on the accuracy of their earlier phonetic patterning.
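
A minimal sketch of this statistical-learning idea, tallying which sound transitions co-occur in heard input; the toy "corpus" of crude sound strings is hypothetical:

```python
# Statistical learning sketch: count which sound combinations follow one
# another, so frequent transitions (like "q" -> "u") come to be expected.
from collections import Counter, defaultdict

heard = ["kwin", "kwik", "hat", "hit", "kwit", "hot"]  # hypothetical sound strings

transitions = defaultdict(Counter)
for word in heard:
    for first, second in zip(word, word[1:]):
        transitions[first][second] += 1

def probability(first, second):
    """Estimated probability that `second` follows `first` in the heard input."""
    total = sum(transitions[first].values())
    return transitions[first][second] / total if total else 0.0

print(probability("k", "w"))  # -> 1.0: "w" always follows "k" in this input
print(probability("h", "a"))  # -> ~0.33
```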

Grammar development

The transition from phonemic differentiation to higher-order word production [23] is only the first step in the hierarchical acquisition of language. Pattern recognition is also used in the detection of prosodic cues, the stress and intonation patterns among words. [23] Then it is applied to sentence structure and the understanding of typical clause boundaries. [23] This entire process is reflected in reading as well. First, a child recognizes patterns of individual letters, then words, then groups of words together, then paragraphs, and finally entire chapters in books. [25] Learning to read and learning to speak a language are based on the "stepwise refinement of patterns" [25] in perceptual pattern recognition.

Music pattern recognition

Music provides deep and emotional experiences for the listener. [26] These experiences become contents in long-term memory, and every time we hear the same tunes, those contents are activated. Recognizing the content through the pattern of the music affects our emotions. The mechanisms that form the pattern recognition of music and the resulting experience have been studied by multiple researchers. The sensation felt when listening to our favorite music is evident in the dilation of the pupils, the increase in pulse and blood pressure, the streaming of blood to the leg muscles, and the activation of the cerebellum, the brain region associated with physical movement. [26] While retrieving the memory of a tune demonstrates general recognition of musical pattern, pattern recognition also occurs while listening to a tune for the first time. The recurring nature of the metre allows the listener to follow a tune, recognize the metre, expect its upcoming occurrence, and follow the rhythm. The excitement of following a familiar musical pattern arises when the pattern breaks and becomes unpredictable. This following and breaking of a pattern creates a problem-solving opportunity for the mind that forms the experience. [26] Psychologist Daniel Levitin argues that the repetition, melodic nature, and organization of music create meaning for the brain. [27] The brain stores information in arrangements of neurons, which retrieve the same information when activated by the environment. By constantly referencing stored information and additional stimulation from the environment, the brain constructs musical features into a perceptual whole. [27]

The medial prefrontal cortex – one of the last brain regions to be affected by Alzheimer's disease – is a region activated by music.

Cognitive mechanisms

To understand music pattern recognition, we need to understand the underlying cognitive systems that each handle a part of this process. Various processes are at work in the recognition of a piece of music and its patterns. Researchers have begun to unveil the reasons behind the stimulated reactions to music. Montreal-based researchers asked ten volunteers who got "chills" listening to music to listen to their favorite songs while their brain activity was being monitored. [26] The results show the significant role of the nucleus accumbens (NAcc) region – involved in cognitive processes such as motivation, reward, and addiction – in creating the neural arrangements that make up the experience. [26] A sense of reward prediction is created by anticipation before the climax of the tune, which resolves when the climax is reached. The longer the listener is denied the expected pattern, the greater the emotional arousal when the pattern returns. Musicologist Leonard Meyer used fifty measures of the fifth movement of Beethoven's String Quartet in C-sharp minor, Op. 131, to examine this notion. [26] The stronger this experience is, the more vivid the memory it will create and store. This strength affects the speed and accuracy of retrieving and recognizing the musical pattern. The brain not only recognizes specific tunes; it also distinguishes among standard acoustic features, speech, and music.

MIT researchers conducted a study to examine this notion. [28] The results showed six neural clusters in the auditory cortex responding to the sounds. Four were triggered by standard acoustic features, one responded specifically to speech, and the last responded exclusively to music. Researchers who studied the correlation between the temporal evolution of timbral, tonal, and rhythmic features of music concluded that music engages the brain regions connected to motor actions, emotions, and creativity. The research indicates that the whole brain "lights up" when listening to music. [29] This amount of activity boosts memory preservation, and hence pattern recognition.

Recognizing patterns of music is different for a musician and for a listener. Although a musician may play the same notes every time, the details of the frequency will always be different. The listener will recognize the musical pattern and its type despite the variations. These musical types are conceptual and learned, meaning they may vary culturally. [30] While listeners are involved with recognizing (implicit) musical material, musicians are involved with recalling it (explicit). [2]

A UCLA study found that when people watch or hear music being played, the neurons associated with the muscles needed for playing the instrument fire. Mirror neurons light up when musicians and non-musicians listen to a piece. [31]

Developmental issues

Pattern recognition of music can build and strengthen other skills, such as musical synchrony, attentional performance, and engagement with musical notation. Even a few years of musical training enhance memory and attention levels. Scientists at the University of Newcastle conducted a study on patients with severe acquired brain injuries (ABIs) and healthy participants, using popular music to examine music-evoked autobiographical memories (MEAMs). [29] The participants were asked to record their familiarity with the songs, whether they liked them, and what memories they evoked. The results showed that the ABI patients had the highest number of MEAMs, and that all the participants had MEAMs of a person, people, or life period that were generally positive. [29] The participants completed the task by utilizing pattern recognition skills. Memory evocation caused the songs to sound more familiar and better liked. This research can be beneficial for rehabilitating patients with autobiographical amnesia who do not have a fundamental deficit in autobiographical recall memory and whose pitch perception is intact. [29]

A study at the University of California, Davis mapped the brains of participants while they listened to music. [32] The results showed links between brain regions associated with autobiographical memories and emotions and the familiar music that activated them. This study may explain the strong response of patients with Alzheimer's disease to music. This research can help such patients with pattern recognition-enhancing tasks.

False pattern recognition

Whale, submarine or sheep?

The human tendency to see patterns that do not actually exist is called apophenia. Examples include the Man in the Moon, faces or figures in shadows, in clouds, and in patterns with no deliberate design, such as the swirls on a baked confection, as well as the perception of causal relationships between events which are, in fact, unrelated. Apophenia figures prominently in conspiracy theories, gambling, misinterpretation of statistics and scientific data, and some kinds of religious and paranormal experiences. The misperception of patterns in random data is called pareidolia. Recent research in neuroscience and cognitive science suggests understanding "false pattern recognition" within the paradigm of predictive coding.

References

1. Eysenck, Michael W.; Keane, Mark T. (2003). Cognitive Psychology: A Student's Handbook (4th ed.). Hove; Philadelphia; New York: Taylor & Francis. ISBN 9780863775512. OCLC 894210185. Retrieved 27 November 2014.
2. Snyder, B. (2000). Music and Memory: An Introduction. MIT Press.
3. Mattson, M. P. (2014). Superior pattern processing is the essence of the evolved human brain. Frontiers in Neuroscience, 8.
4. Shugen, Wang (2002). "Framework of pattern recognition model based on the cognitive psychology". Geo-spatial Information Science. 5 (2): 74–78. Bibcode:2002GSIS....5...74W. doi:10.1007/BF02833890. ISSN 1009-5020. S2CID 124159004.
5. "Perception and Perceptual Illusions | Psychology Today". www.psychologytoday.com. Retrieved 2023-08-16.
6. "Top-down and bottom-up theories of perception - Cognitive Psychology". cognitivepsychology.wikidot.com. Retrieved 2023-08-16.
7. Torgerson, 1958.
8. Booth & Freeman, 1993, Acta Psychologica.
9. "Visual Perception Theory In Psychology". 2022-11-03. Retrieved 2023-08-16.
10. "Bottom-up and Top-down Processing: A Collaborative Duality | Psych 256: Introduction to Cognitive Psychology". sites.psu.edu. Retrieved 2023-08-16.
11. "Top-Down VS Bottom-Up Processing". explorable.com. Retrieved 2023-08-16.
12. Kidd, Julie K.; Curby, Timothy W.; Boyer, Caroline E.; Gadzichowski, K. Marinka; Gallington, Deborah A.; Machado, Jessica A.; Pasnak, Robert (2012). "Benefits of an Intervention Focused on Oddity and Seriation". Early Education & Development. 23 (6): 900–918. doi:10.1080/10409289.2011.621877. ISSN 1040-9289. S2CID 143509212.
13. Berk, L. E. (2013). Development Through the Lifespan (6th ed.). Pearson. ISBN 9780205957606.
14. Curtis, A. (2002). Curriculum for the Pre-School Child. Routledge. ISBN 9781134770458.
15. Inhelder, B., & Piaget, J. (1964). The Early Growth of Logic in the Child: Classification and Seriation. New York: Routledge and Paul.
16. Basic Math Skills in Child Care: Creating Patterns and Arranging Objects in Order. Retrieved 2017-10-20 from http://articles.extension.org/pages/25597/basic-math-skills-in-child-care:-creating-patterns-and-arranging-objects-in-order Archived 2018-10-23 at the Wayback Machine.
17. Sheikh, Knvul. "How We Save Face – Researchers Crack the Brain's Facial-Recognition Code". Scientific American. Retrieved 2023-08-16.
18. Duchaine, B. (2015). Individual differences in face recognition ability: Impacts on law enforcement, criminal justice and national security. APA: Psychological Science Agenda. Retrieved from http://www.apa.org/science/about/psa/2015/06/face-recognition.aspx
19. Wlassoff, V. (2015). How the Brain Recognizes Faces. Brain Blogger. Retrieved from http://brainblogger.com/2015/10/17/how-the-brain-recognizes-faces/
20. "Identifying the Brain's Own Facial Recognition System". Retrieved 2023-08-16.
21. McKone, E., et al. (2012). A critical review of the development of face recognition: Experience is less important than previously believed. Cognitive Neuropsychology. doi:10.1080/02643294.2012.660138.
22. "Language Ability Linked to Pattern Recognition". VOA. 2013-05-29. Retrieved 2023-08-16.
23. Kuhl, Patricia K. (2000-10-24). "A new view of language acquisition". Proceedings of the National Academy of Sciences. 97 (22): 11850–11857. Bibcode:2000PNAS...9711850K. doi:10.1073/pnas.97.22.11850. ISSN 0027-8424. PMC 34178. PMID 11050219.
24. University of Sydney. (2016, May 5). Pattern learning key to children's language development. ScienceDaily. Retrieved October 25, 2017 from http://www.sciencedaily.com/releases/2016/05/160505222938.htm
25. Basulto, D. (2013, July 24). Humans are the world's best pattern-recognition machines, but for how long? Retrieved October 25, 2017 from http://bigthink.com/endless-innovation/humans-are-the-worlds-best-pattern-recognition-machines-but-for-how-long
26. Lehrer, Jonah. "The Neuroscience of Music". Wired, Conde Nast, 3 June 2017, www.wired.com/2011/01/the-neuroscience-of-music/.
27. Levitin, D. J. (2006). This Is Your Brain on Music: The Science of a Human Obsession. Penguin.
28. "This Is Your Brain On Music: How Our Brains Process Melodies That Pull On Our Heartstrings". Medical Daily. 2014-03-11. Retrieved 2023-08-16.
29. "Why Do the Songs from Your Past Evoke Such Vivid Memories? | Psychology Today". www.psychologytoday.com. Retrieved 2023-08-16.
30. Agus, T. R., Thorpe, S. J., & Pressnitzer, D. (2010). Rapid formation of robust auditory memories: insights from noise. Neuron, 66(4), 610–618.
31. "How Do Our Brains Process Music?". Smithsonian Magazine. Retrieved 2023-08-16.
32. Greensfelder, Liese (2009-02-23). "Study Finds Brain Hub That Links Music, Memory and Emotion". UC Davis. Retrieved 2023-08-16.

Media related to Visual pattern recognition at Wikimedia Commons