Fusiform face area
The fusiform face area (FFA, meaning "spindle-shaped face area") is a part of the human visual system that is specialized for facial recognition [2] and is also activated in people blind from birth. [1] It is located in the inferior temporal cortex (IT), in the fusiform gyrus (Brodmann area 37).
The FFA is located in the ventral stream on the ventral surface of the temporal lobe on the lateral side of the fusiform gyrus. It is lateral to the parahippocampal place area. It displays some lateralization, usually being larger in the right hemisphere.
The FFA was discovered and continues to be investigated in humans using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies. Usually, a participant views images of faces, objects, places, bodies, scrambled faces, scrambled objects, scrambled places, and scrambled bodies. This is called a functional localizer. Comparing the neural response between faces and scrambled faces will reveal areas that are face-responsive, while comparing cortical activation between faces and objects will reveal areas that are face-selective.
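The contrast logic of a functional localizer can be sketched numerically: given per-trial response estimates for each voxel and condition, face-responsive voxels are those where faces exceed scrambled faces, and face-selective voxels are those where faces also exceed intact objects. The sketch below uses simulated data; the variable names, effect sizes, and threshold are illustrative, not taken from any particular study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_trials = 1000, 40

# Simulated per-trial responses (arbitrary units) for three conditions.
faces = rng.normal(1.0, 1.0, (n_voxels, n_trials))
scrambled_faces = rng.normal(0.2, 1.0, (n_voxels, n_trials))
objects = rng.normal(0.5, 1.0, (n_voxels, n_trials))

# Make the first 100 voxels strongly face-preferring, like an FFA patch.
faces[:100] += 1.5

def contrast(a, b, alpha=0.001):
    """Voxels where condition a > condition b (one-tailed t-test)."""
    t, p = stats.ttest_ind(a, b, axis=1)
    return (t > 0) & (p / 2 < alpha)

face_responsive = contrast(faces, scrambled_faces)  # faces > scrambled faces
face_selective = contrast(faces, objects)           # faces > intact objects

print(face_responsive.sum(), face_selective.sum())
```

Because intact objects drive visual cortex more than scrambled faces do, the face-selective contrast is the stricter of the two, and it picks out fewer voxels than the face-responsive contrast.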
The human FFA was first described by Justine Sergent in 1992 [3] and later named by Nancy Kanwisher in 1997, [2] who proposed that its existence is evidence for domain specificity in the visual system. Studies have since shown that the FFA is composed of functional clusters at a finer spatial scale than prior investigations had measured. [4] Electrical stimulation of these functional clusters selectively distorts face perception, providing causal support for their role in perceiving faces. [5]

While it is generally agreed that the FFA responds more strongly to faces than to most other categories, there is debate about whether the FFA is uniquely dedicated to face processing, as proposed by Nancy Kanwisher and others, or whether it also participates in the processing of other objects. The expertise hypothesis, championed by Isabel Gauthier and others, offers an explanation for how the FFA becomes selective for faces in most people: it holds that the FFA is a critical part of a network for individuating visually similar objects that share a common configuration of parts. Gauthier et al., in an adversarial collaboration with Kanwisher, [6] tested both car and bird experts, and found some FFA activation when car experts were identifying cars and when bird experts were identifying birds. [7] This finding has been replicated, [8] [9] and expertise effects in the FFA have been found for other categories such as chess displays [10] and X-rays. [11] The thickness of the cortex in the FFA has also been found to predict the ability to recognize faces as well as vehicles. [12]
A 2009 magnetoencephalography study found that objects incidentally perceived as faces, an example of pareidolia, evoke an early (165-millisecond) activation in the FFA, at a time and location similar to that evoked by faces, whereas other common objects do not evoke such activation. This activation is similar to a face-specific ERP component N170. The authors suggest that face perception evoked by face-like objects is a relatively early process, and not a late cognitive reinterpretation phenomenon. [13]
One case study of agnosia provided evidence that faces are processed in a special way. A patient known as C. K., who suffered brain damage as a result of a car accident, later developed object agnosia. He experienced great difficulty with basic-level object recognition, also extending to body parts, but performed very well at recognizing faces. [14] A later study showed that C. K. was unable to recognize faces that were inverted or otherwise distorted, even in cases where they could easily be identified by normal subjects. [15] This is taken as evidence that the fusiform face area is specialized for processing faces in a normal orientation.
Studies using functional magnetic resonance imaging and electrocorticography have demonstrated that activity in the FFA codes for individual faces [16] [17] [18] [19] and the FFA is tuned for behaviorally relevant facial features. [16] An electrocorticography study found that the FFA is involved in multiple stages of face processing, continuously from when people see a face until they respond to it, demonstrating the dynamic and important role the FFA plays as part of the face perception network. [16]
Another study found stronger FFA activity when a person sees a familiar face than an unfamiliar one. Participants were shown pairs of face pictures that either shared the same identity (familiar) or showed different identities (unfamiliar), and they were more accurate at matching familiar faces than unfamiliar ones. Using fMRI, the researchers also found that participants who were more accurate at identifying familiar faces showed more activity in the right fusiform face area, while participants who were poor at matching showed less activity there. [20]
In 2020, scientists showed the area is also activated in people born blind. [1]
The fusiform face area (FFA) is a part of the brain located in the fusiform gyrus whose purpose is debated. Some researchers believe that the FFA evolved specifically for face perception; others believe that it discriminates between any highly familiar stimuli.
Psychologists debate whether the FFA is activated by faces for evolutionary reasons or as a result of acquired expertise. The conflicting hypotheses stem from ambiguity in FFA activation, as the FFA is activated by both familiar objects and faces. A study using novel objects called greebles demonstrated this phenomenon. [21] When first exposed to greebles, a person's FFA was activated more strongly by faces than by greebles; after training with individual greebles and becoming a greeble expert, the same person's FFA was activated equally by faces and greebles. Likewise, children with autism have been shown to develop object recognition at a similarly impaired pace as face recognition. [22] Post-mortem studies have found that autistic people have lower neuron densities in the FFA. [23] This raises an interesting question: is poor face perception due to a reduced number of cells, or is there a reduced number of cells because autistic people seldom perceive faces? [24] Asked simply: are faces simply objects with which every person has expertise?
There is evidence supporting an evolved face-perception role for the FFA. Case studies of other dedicated areas of the brain suggest that the FFA may be intrinsically specialized for recognizing faces. Studies have identified areas of the brain essential to recognizing environments and bodies; [25] [26] without these dedicated areas, people are incapable of recognizing places and bodies. Similar research on prosopagnosia has determined that the FFA is essential to the recognition of individual faces, [27] [28] although these patients can still recognize the same people by other means, such as voice. Studies involving written characters have also been conducted to ascertain the role of the FFA in face recognition. These studies have found that objects such as Chinese characters elicit a high response in different areas of the FFA than the areas that respond strongly to faces. [29] These data imply that certain areas of the FFA are specialized for face perception.
The FFA is underdeveloped in children and does not fully develop until adolescence. This calls the evolutionary purpose of the FFA into question, because children can nonetheless differentiate faces. Three-day-old babies have been shown to prefer the face of their mother, [30] and babies as young as three months old can distinguish between faces. [31] During this time, babies may exhibit the ability to differentiate between genders, with some evidence suggesting that they prefer faces of the same sex as their primary caregiver. [32] One evolutionary account holds that babies focus on women as a source of food, although the preference could simply reflect a bias toward the caregivers they experience. Infants do not appear to use this area for the perception of faces: recent fMRI work has found no face-selective area in the brains of infants four to six months old. [33] However, given that the adult human brain has been studied far more extensively than the infant brain, and that infants are still undergoing major neurodevelopmental processes, it may simply be that the infant FFA is not located in an anatomically familiar area. Alternatively, activation for many different percepts and cognitive tasks may be neurally diffuse in infants, who are still undergoing neurogenesis and synaptic pruning; this could make it difficult to distinguish the signal of complex familiar visual objects such as faces from noise, including baseline neuronal firing and activity dedicated to entirely different tasks. Early infant vision is limited to light-dark contrast and the major features of the face, and face-like stimuli activate the amygdala. These findings call the evolutionary purpose of the FFA into question.
Studies of what else may trigger the FFA help validate arguments about its evolutionary purpose. Humans use countless facial expressions that disturb the structure of the face. These disruptions and emotions are first processed in the amygdala and only later transmitted to the FFA, which uses the information to determine more static properties of the face. [34] The fact that the FFA is so far downstream in the processing of emotion suggests that it has little to do with emotion perception and instead deals in face perception.
Recent evidence, however, shows that the FFA also has functions related to emotion: it is differentially activated by faces exhibiting different emotions. One study determined that the FFA is activated more strongly by fearful faces than by neutral faces. [35] This implies that the FFA plays a role in processing emotion despite its downstream position, again calling into question a purely face-identifying evolutionary purpose.
Facial perception is an individual's understanding and interpretation of the face. Here, perception implies the presence of consciousness and hence excludes automated facial recognition systems. Although facial recognition is found in other species, this article focuses on facial perception in humans.
Prosopagnosia, also known as face blindness, is a cognitive disorder of face perception in which the ability to recognize familiar faces, including one's own face (self-recognition), is impaired, while other aspects of visual processing and intellectual functioning remain intact. The term originally referred to a condition following acute brain damage, but a congenital or developmental form of the disorder also exists, with a prevalence of 2.5%. The brain area usually associated with prosopagnosia is the fusiform gyrus, which activates specifically in response to faces. The functionality of the fusiform gyrus allows most people to recognize faces in more detail than they do similarly complex inanimate objects. For those with prosopagnosia, the method for recognizing faces depends on the less sensitive object-recognition system. The right hemisphere fusiform gyrus is more often involved in familiar face recognition than the left. It remains unclear whether the fusiform gyrus is specific for the recognition of human faces or if it is also involved in highly trained visual stimuli.
The fusiform gyrus, also known as the lateral occipitotemporal gyrus, is part of the temporal lobe and occipital lobe in Brodmann area 37. The fusiform gyrus is located between the lingual gyrus and parahippocampal gyrus above, and the inferior temporal gyrus below. Though the functionality of the fusiform gyrus is not fully understood, it has been linked with various neural pathways related to recognition. Additionally, it has been linked to various neurological phenomena such as synesthesia, dyslexia, and prosopagnosia.
Visual processing is a term that is used to refer to the brain's ability to use and interpret visual information from the world around us. The process of converting light energy into a meaningful image is a complex process that is facilitated by numerous brain structures and higher level cognitive processes. On an anatomical level, light energy first enters the eye through the cornea, where the light is bent. After passing through the cornea, light passes through the pupil and then the lens of the eye, where it is bent to a greater degree and focused upon the retina. The retina is where a group of light-sensing cells, called photoreceptors, are located. There are two types of photoreceptors: rods and cones. Rods are sensitive to dim light and cones are better able to transduce bright light. Photoreceptors connect to bipolar cells, which induce action potentials in retinal ganglion cells. These retinal ganglion cells form a bundle at the optic disc, which is a part of the optic nerve. The optic nerves from the two eyes meet at the optic chiasm, where nerve fibers from each nasal retina cross, so that the right half of each eye's visual field is represented in the left hemisphere and the left half of each eye's visual field is represented in the right hemisphere. The optic tract then diverges into two visual pathways, the geniculostriate pathway and the tectopulvinar pathway, which send visual information to the visual cortex of the occipital lobe for higher level processing.
Neuronal tuning refers to the hypothesized property of brain cells by which they selectively represent a particular type of sensory, association, motor, or cognitive information. Some neuronal responses have been hypothesized to be optimally tuned to specific patterns through experience. Neuronal tuning can be strong and sharp, as observed in primary visual cortex, or weak and broad, as observed in neural ensembles. Single neurons are hypothesized to be simultaneously tuned to several modalities, such as visual, auditory, and olfactory. Neurons hypothesized to be tuned to different signals are often hypothesized to integrate information from the different sources. In computational models called neural networks, such integration is the major principle of operation. The best examples of neuronal tuning can be seen in the visual, auditory, olfactory, somatosensory, and memory systems, although due to the small number of stimuli tested the generality of neuronal tuning claims is still an open question.
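The idea of a tuned response can be illustrated with the classic Gaussian tuning-curve model often used for orientation-selective cells in primary visual cortex: firing rate peaks at the neuron's preferred stimulus value and falls off with distance from it. This is a minimal illustrative sketch; the parameter values (tuning width, peak and baseline rates) are assumptions, not measurements, and real orientation tuning is circular rather than Gaussian.

```python
import numpy as np

def tuning_curve(stimulus, preferred, width=20.0, peak_rate=50.0, baseline=5.0):
    """Gaussian tuning: firing rate (spikes/s) peaks at the preferred
    stimulus value and decays with squared distance from it."""
    return baseline + (peak_rate - baseline) * np.exp(
        -0.5 * ((stimulus - preferred) / width) ** 2
    )

orientations = np.arange(0, 181, 45)  # stimulus orientations in degrees
rates = tuning_curve(orientations, preferred=90.0)

print(dict(zip(orientations.tolist(), np.round(rates, 1).tolist())))
```

A "sharp" neuron corresponds to a small `width`, a "broad" one to a large `width`; a population of such curves with different `preferred` values is the usual starting point for modeling how an ensemble encodes a stimulus.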
Visual agnosia is an impairment in recognition of visually presented objects. It is not due to a deficit in vision, language, memory, or intellect. While cortical blindness results from lesions to primary visual cortex, visual agnosia is often due to damage to more anterior cortex such as the posterior occipital and/or temporal lobe(s) in the brain. [2] There are two types of visual agnosia: apperceptive agnosia and associative agnosia.
The inferior temporal gyrus is one of three gyri of the temporal lobe and is located below the middle temporal gyrus, connected behind with the inferior occipital gyrus; it also extends around the infero-lateral border on to the inferior surface of the temporal lobe, where it is limited by the inferior sulcus. This region is one of the higher levels of the ventral stream of visual processing, associated with the representation of objects, places, faces, and colors. It may also be involved in face perception, and in the recognition of numbers and words.
Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature among other objects or features. Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of using visual search can be seen in everyday life, such as when one is picking out a product on a supermarket shelf, when animals are searching for food among piles of leaves, when trying to find a friend in a large crowd of people, or simply when playing visual search games such as Where's Wally?
Nancy Gail Kanwisher FBA is the Walter A. Rosenblith Professor of Cognitive Neuroscience in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology and an investigator at the McGovern Institute for Brain Research. She studies the neural and cognitive mechanisms underlying human visual perception and cognition.
In cognitive neuroscience, visual modularity is an organizational concept concerning how vision works. The way in which the primate visual system operates is currently under intense scientific scrutiny. One dominant thesis is that different properties of the visual world require different computational solutions which are implemented in anatomically/functionally distinct regions that operate independently – that is, in a modular fashion.
The greebles are artificial objects designed to be used as stimuli in psychological studies of object and face recognition. They were named by the American psychologist Robert Abelson. The greebles were created for Isabel Gauthier's dissertation work at Yale, so as to share constraints with faces: they have a small number of parts in a common configuration. Greebles have appeared in psychology textbooks, and in more than 25 scientific articles on perception. They are often used in mental rotation task experiments.
Visual object recognition refers to the ability to identify the objects in view based on visual input. One important signature of visual object recognition is "object invariance", or the ability to identify objects across changes in the detailed context in which objects are viewed, including changes in illumination, object pose, and background context.
In neuroscience, functional specialization is a theory which suggests that different areas in the brain are specialized for different functions.
Form perception is the recognition of visual elements of objects, specifically those to do with shapes, patterns and previously identified important characteristics. An object is perceived by the retina as a two-dimensional image, but the image can vary for the same object in terms of the context with which it is viewed, the apparent size of the object, the angle from which it is viewed, how illuminated it is, as well as where it resides in the field of vision. Despite the fact that each instance of observing an object leads to a unique retinal response pattern, the visual processing in the brain is capable of recognizing these experiences as analogous, allowing invariant object recognition. Visual processing occurs in a hierarchy with the lowest levels recognizing lines and contours, and slightly higher levels performing tasks such as completing boundaries and recognizing contour combinations. The highest levels integrate the perceived information to recognize an entire object. Essentially object recognition is the ability to assign labels to objects in order to categorize and identify them, thus distinguishing one object from another. During visual processing information is not created, but rather reformatted in a way that draws out the most detailed information of the stimulus.
Isabel Gauthier is a cognitive neuroscientist currently holding the position of David K. Wilson Professor of Psychology and head of the Object Perception Lab at Vanderbilt University’s Department of Psychology. In 2000, with the support of the James S. McDonnell Foundation, she founded the Perceptual Expertise Network (PEN), which now comprises over ten labs based across North America. In 2006 PEN became part of the NSF-funded Temporal Dynamics of Learning Center (TDLC).
The extrastriate body area (EBA) is a subpart of the extrastriate visual cortex involved in the visual perception of human body and body parts, akin in its respective domain to the fusiform face area, involved in the perception of human faces. The EBA was identified in 2001 by the team of Nancy Kanwisher using fMRI.
A neurological look at race is multifaceted. The cross-race effect has been neurologically explained by there being differences in brain processing while viewing same-race and other-race faces. There is a debate over the cause of the cross-race effect.
The fusiform body area (FBA) is a part of the extrastriate visual cortex, an object-representation system involved in the visual processing of whole human bodies as opposed to body parts. Its function is similar to but distinct from the extrastriate body area (EBA), which perceives bodies in relation to body parts, and the fusiform face area (FFA), which is involved in the perception of faces. Marius Peelen and Paul Downing identified this brain region in 2004 through an fMRI study; in 2005 Rebecca Schwarzlose and a team of cognitive researchers named it the fusiform body area.
The face inversion effect is a phenomenon where identifying inverted (upside-down) faces compared to upright faces is much more difficult than doing the same for non-facial objects.
The occipital face area (OFA) is a region of the human cerebral cortex which is specialised for face perception. The OFA is located on the lateral surface of the occipital lobe adjacent to the inferior occipital gyrus. The OFA comprises a network of brain regions including the fusiform face area (FFA) and posterior superior temporal sulcus (STS) which support facial processing.