N170

The N170 is a component of the event-related potential (ERP) that reflects the neural processing of faces, familiar objects, or words. [1] The N170 is also modulated by prediction-error processes. [2] [3]

When potentials evoked by images of faces are compared with those elicited by other visual stimuli, the former show increased negativity 130–200 ms after stimulus presentation. This response is maximal over occipito-temporal electrode sites, consistent with a source in the fusiform and inferior temporal gyri, a localization confirmed by electrocorticography. [4] [5] The N170 is generally right-lateralized and has been linked with the structural encoding of faces, and is therefore considered to be primarily face-sensitive. [6] [7] A study combining transcranial magnetic stimulation with EEG found that the N170 can be modulated by top-down influences from prefrontal cortex. [8]
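In practice, the N170 is typically quantified as the most negative voltage within the 130–200 ms window at an occipito-temporal electrode. The sketch below illustrates that measurement on a toy averaged waveform; the function name, sampling rate, and all waveform parameters are illustrative assumptions, not taken from any study cited here.

```python
import numpy as np

def n170_peak(erp, times, window=(0.130, 0.200)):
    """Most negative point of an averaged ERP within a latency window.

    erp    : 1-D array of mean voltage (microvolts) at one electrode
    times  : 1-D array of sample times in seconds (0 = stimulus onset)
    Returns (peak_latency_s, peak_amplitude_uV).
    """
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask])           # N170 is a negative deflection
    return times[mask][idx], erp[mask][idx]

# Toy averaged ERP: a -4 uV Gaussian deflection centered at 170 ms,
# sampled at 1 kHz from -100 to +500 ms
times = np.arange(-0.1, 0.5, 0.001)
erp = -4.0 * np.exp(-((times - 0.170) ** 2) / (2 * 0.015 ** 2))
lat, amp = n170_peak(erp, times)
```

Real pipelines would apply this per subject and condition after artifact rejection and baseline correction, which this toy example omits.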

History

The N170 was first described by Shlomo Bentin and colleagues in 1996, [9] who measured ERPs from participants viewing faces and other objects. They found that human faces and face parts (such as eyes) elicited different responses than other stimuli, including animal faces, body parts, and cars.

Earlier work performed by Botzel and Grusser and first reported in 1989 [10] also attempted to find a component of the ERP that corresponded to the processing of human faces. They showed observers line drawings (in one experiment) and black-and-white photographs (in two additional experiments) of faces, trees, and chairs. They found that, compared to the other stimulus classes, faces elicited a larger positive component approximately 150 ms after onset, which was maximal at central electrode sites (at the top of the head). The topography of this effect and lack of lateralization led to the conclusion that this face-specific potential did not arise in face-selective areas in the occipital-temporal region, but instead in the limbic system. Subsequent work referred to this component as the vertex positive potential (VPP). [11]

In an attempt to reconcile these two apparently conflicting results, Joyce and Rossion [12] recorded ERPs from 53 scalp electrodes while participants viewed faces and other visual stimuli. After recording, they re-referenced the data to several commonly used reference sites, including the nose and the mastoid processes. They found that the N170 and the VPP can be accounted for by the same dipole arrangement arising from the same neural generators, and therefore reflect the same underlying process.
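Re-referencing of the kind Joyce and Rossion performed amounts to subtracting the (averaged) signal of the chosen reference channel(s) from every channel. A minimal sketch, with illustrative channel labels and data (not the study's actual montage or recordings):

```python
import numpy as np

def rereference(data, ch_names, ref_chs):
    """Re-reference EEG by subtracting the mean of the chosen
    reference channel(s) from every channel.

    data     : (n_channels, n_samples) array of voltages
    ch_names : channel labels matching the rows of `data`
    ref_chs  : labels of the new reference, e.g. ["M1", "M2"] for
               linked mastoids or ["Nose"] for a nose reference
    """
    ref_idx = [ch_names.index(ch) for ch in ref_chs]
    ref = data[ref_idx].mean(axis=0)   # average of reference channels
    return data - ref                  # broadcasts across channels

# Toy data: 3 channels, 4 time samples
names = ["PO8", "Cz", "Nose"]
data = np.array([[1.0, 2.0, 3.0, 4.0],
                 [0.5, 0.5, 0.5, 0.5],
                 [1.0, 1.0, 1.0, 1.0]])
nose_ref = rereference(data, names, ["Nose"])
```

Because the same constant is removed from all channels at each time point, changing the reference redistributes voltage across the scalp without altering the underlying dipole configuration, which is why the N170 and VPP can arise from the same generators.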

Functional sensitivity

Three of the most studied attributes of the N170 include manipulations of face inversion, facial race, and emotional expressions.

It has been established that inverted faces (i.e., those presented upside-down) are more difficult to perceive [13] (the Thatcher effect is a good illustration of this). In their landmark study, Bentin et al. found that inverted faces increased the latency of the N170. [9] Jacques and colleagues further studied the time course of the face inversion effect (FIE) using an adaptation paradigm. [14] When the same stimulus is presented repeatedly, the neuronal response decreases over time; when a different stimulus is presented, the response recovers. The conditions under which this "release from adaptation" occurs therefore provide a way to measure stimulus similarity. In their experiment, Jacques et al. found that the release from adaptation was smaller and occurred 30 ms later for inverted faces, indicating that the neuronal population encoding face identity requires additional processing time to detect the identity of inverted faces.

In an experiment examining the effects of race on N170 amplitude, an "other-race effect" was found in conjunction with face inversion. Vizioli and colleagues examined how inversion impairs recognition while subjects process same-race (SR) or other-race (OR) faces. [15] They designed an N170 experiment on the premise that visual expertise plays a critical role in inversion, hypothesizing that viewers' greater expertise with SR faces (holistic processing) should elicit a stronger FIE than OR faces. EEGs were recorded from two separate groups, Western Caucasian and East Asian subjects, who were presented with pictures of Western Caucasian, East Asian, and African American faces in upright and inverted orientations. All facial stimuli were cropped to remove external features (i.e., hair, beards, hats, etc.). Both groups displayed a later, larger-amplitude N170 (over the right hemisphere) for inverted than for upright SR faces, but showed no inversion effect for either category of OR faces. Moreover, no race effects on N170 peak amplitude were observed for upright faces in either group, and there were no significant latency differences among the stimulus races, although inversion increased the N170's amplitude and delayed its onset. The authors conclude that subjects' lack of experience with inverted faces makes such stimuli more difficult to process than faces shown in their canonical orientation, regardless of the race of the stimulus.

Besides modulation by inversion and race, emotional expression has also been a focus of N170 face research. In an experiment by Righart and de Gelder, ERP results showed that early stages of face processing may be affected by emotional scenes when subjects categorize fearful and happy facial expressions. [16] In this paradigm, subjects viewed color pictures of happy or fearful faces centrally overlaid on pictures of natural scenes. To control for low-level features such as color, and for other items that could carry meaning, the scene pictures were scrambled by randomizing the position of pixels across the image. The results showed emotion effects on the N170: amplitude was larger (more negative) for faces appearing in a fearful context than in happy or neutral scenes. Left occipito-temporal N170 amplitudes were markedly increased for intact fearful faces appearing in a fearful scene, and less so when a fearful face was presented in a happy or neutral scene. Similar effects occurred for intact happy faces, but the amplitudes were not as large as those associated with fearful scenes or expressions. [17] Righart and de Gelder conclude that information from task-irrelevant scenes is rapidly combined with information from facial expressions, and that subjects use contextual information at an early stage of processing when they must discriminate or categorize facial expressions.

Results from a study conducted by Ghuman and colleagues using direct neural recordings from the fusiform face area using electrocorticography showed that while the N170 displays a very strong response to faces when compared to other visual images, the N170 is not sensitive to the identity of the face. [4] Instead, they showed that which face a person is viewing can be decoded from the activity between 250–500 ms, consistent with the hypothesis that identity processing begins with the N250. [18] These results suggest that the N170 is important for gist-level processing of faces and face detection, processes which may set the stage for later face individuation.

Generators

Given the ease and rapidity with which humans can recognize faces, a great deal of neuroscientific research has endeavored to understand how and where the brain processes them. Early research on prosopagnosia, or "face blindness", found that damage to the occipito-temporal region led to an impaired ability, or a complete inability, to recognize faces. Convergent evidence for the importance of this region in face processing came through the use of fMRI, which found that a region of the fusiform gyrus, the "fusiform face area", responds selectively to images of faces.

Intracranial recordings in humans using electrocorticography provide very strong evidence that the fusiform face area is one of the generators of the N170, [4] [5] though other regions of the face processing network may also contribute to the N170.

One investigation of the N170 [19] used ERP source-localization techniques to estimate the location of its neural generator, concluding that the N170 arises from the posterior superior temporal sulcus. However, such techniques are fraught with potential sources of error, and there is disagreement about the validity of inferences drawn from them. [20]

Faces or interstimulus variance

In 2007, Guillaume Thierry and colleagues [21] presented evidence that called into question the face-specificity of the N170. Most earlier experiments found an N170 when the response to frontal views of faces was compared with that of other objects, which could appear in more variable poses and configurations. In their study, they introduced a new factor: stimuli could be faces or non-faces, and either class could have high or low within-category similarity, measured by calculating the correlation between pixel values in pairs of same-category stimuli. When ERPs were compared across these conditions, they found a typical N170 effect in the low-similarity non-face vs. high-similarity face comparison. However, high-similarity non-faces also showed a significant N170, while low-similarity faces did not. These results led the authors to conclude that the N170 is actually a measure of stimulus similarity, not of face processing per se.
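The pixel-correlation similarity measure that Thierry et al. used can be sketched as follows: flatten each image in a category to a vector, correlate all pairs, and average. This is an illustrative reconstruction of the general idea, not the authors' actual code or stimulus set.

```python
import numpy as np

def pairwise_pixel_similarity(images):
    """Mean Pearson correlation between flattened pixel values of all
    image pairs in one stimulus category (higher = more homogeneous).

    images : (n_images, height, width) array of grayscale values
    """
    flat = images.reshape(len(images), -1).astype(float)
    corr = np.corrcoef(flat)                # n_images x n_images matrix
    iu = np.triu_indices(len(images), k=1)  # upper triangle, no diagonal
    return corr[iu].mean()

# Toy "high-similarity" category: three noisy copies of one base image
rng = np.random.default_rng(0)
base = rng.random((16, 16))
imgs = np.stack([base + 0.05 * rng.standard_normal((16, 16))
                 for _ in range(3)])
sim = pairwise_pixel_similarity(imgs)       # close to 1 for near-copies
```

A category of unrelated images would score near zero on this measure, which is what "low similarity" means in this design.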

In response to this, Rossion and Jacques [7] measured similarity as above for several object categories used in a previous study of the N170. They found that faces elicited a larger N170 than other classes of objects that had similar or higher similarity values, such as houses, cars, and shoes. While it remains uncertain why Thierry et al. observed an effect of similarity on the N170, Rossion and Jacques speculate that lower similarity leads to more variance in the latency of the response. Since ERP components are measured by averaging the results from many individual trials, high latency variance effectively “smears” the response, reducing the amplitude of the average. Rossion and Jacques also offer criticism of the methodology used by Thierry and colleagues, arguing that their failure to find a difference between high-similarity faces and high-similarity non-faces was due to a poor choice of electrode sites.
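The latency-smearing account offered by Rossion and Jacques can be demonstrated with a small simulation: averaging single trials whose deflection latency jitters from trial to trial yields a shallower average peak. All waveform and jitter parameters below are arbitrary illustrative choices, not values from either study.

```python
import numpy as np

def averaged_peak(jitter_sd, n_trials=200, seed=0):
    """Peak amplitude of an averaged ERP when single-trial latency
    jitters by a Gaussian of the given standard deviation (seconds)."""
    rng = np.random.default_rng(seed)
    times = np.arange(-0.1, 0.4, 0.001)
    trials = []
    for _ in range(n_trials):
        shift = rng.normal(0.0, jitter_sd)
        # single trial: -4 uV Gaussian deflection, 15 ms wide,
        # centered at 170 ms plus this trial's latency shift
        trials.append(-4.0 * np.exp(-((times - 0.170 - shift) ** 2)
                                    / (2 * 0.015 ** 2)))
    return np.mean(trials, axis=0).min()

low = averaged_peak(0.005)    # little jitter: average peak stays deep
high = averaged_peak(0.030)   # heavy jitter: average peak is "smeared"
```

With identical single-trial amplitudes, the high-jitter average comes out substantially less negative, which is the mechanism Rossion and Jacques invoke for low-similarity categories.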

References

  1. Rossion, Bruno; Joyce, Carrie A; Cottrell, Garrison W; Tarr, Michael J (November 2003). "Early lateralization and orientation tuning for face, word, and object processing in the visual cortex". NeuroImage. 20 (3): 1609–1624. doi:10.1016/j.neuroimage.2003.07.010. PMID   14642472. S2CID   6844697.
  2. Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W. (April 2017). "Temporal and spatial localization of prediction-error signals in the visual brain" (PDF). Biological Psychology. 125: 45–57. doi:10.1016/j.biopsycho.2017.02.004. PMID   28257807. S2CID   10308172.
  3. Robinson, Jonathan E.; Breakspear, Michael; Young, Andrew W.; Johnston, Patrick J. (16 April 2018). "Dose-dependent modulation of the visually evoked N1/N170 by perceptual surprise: a clear demonstration of prediction-error signalling" (PDF). European Journal of Neuroscience. 52 (11): 4442–4452. doi:10.1111/ejn.13920. PMID   29602233. S2CID   4887440.
  4. Ghuman, Avniel Singh; Brunet, Nicolas M.; Li, Yuanning; Konecky, Roma O.; Pyles, John A.; Walls, Shawn A.; Destefino, Vincent; Wang, Wei; Richardson, R. Mark (2014-01-01). "Dynamic encoding of face information in the human fusiform gyrus". Nature Communications. 5: 5672. Bibcode:2014NatCo...5.5672G. doi:10.1038/ncomms6672. ISSN 2041-1723. PMC 4339092. PMID 25482825.
  5. Allison, T.; Puce, A.; Spencer, D. D.; McCarthy, G. (1999-08-01). "Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli". Cerebral Cortex. 9 (5): 415–430. doi:10.1093/cercor/9.5.415. ISSN 1047-3211. PMID 10450888.
  6. Eimer, Martin (2011). "The Face-Sensitive N170 Component of the Event-Related Brain Potential". In Calder, Andrew J.; Rhodes, Gillian; Johnson, Mark H.; Haxby, James V. (eds.). Oxford Handbook of Face Perception. OUP Oxford. ISBN   9780191622144.
  7. Rossion, B. & Jacques, C. (2008). "Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170". NeuroImage. 39 (4): 1959–1979. doi:10.1016/j.neuroimage.2007.10.011. PMID 18055223. S2CID 15106925.
  8. Mattavelli, G.; Rosanova, M.; Casali, A. G.; Papagno, C.; Romero Lauro L. J. (2013). "Top-down interference and cortical responsiveness in face processing: A TMS-EEG study". NeuroImage. 76 (1): 24–32. doi:10.1016/j.neuroimage.2013.03.020. PMID   23523809. S2CID   13911132.
  9. Bentin, S.; McCarthy, G.; Perez, E.; Puce, A.; Allison, T. (1996). "Electrophysiological studies of face perception in humans". Journal of Cognitive Neuroscience. 8 (6): 551–565. doi:10.1162/jocn.1996.8.6.551. PMC 2927138. PMID 20740065.
  10. Botzel, K.; Grusser, O. J. (1989). "Electric brain potentials-evoked by pictures of faces and non-faces – a search for face-specific EEG-potentials". Experimental Brain Research. 77 (2): 349–360. doi:10.1007/BF00274992. PMID   2792281. S2CID   5373013.
  11. Jeffreys, D. A. (1989). "A face-responsive potential recorded from the human scalp". Experimental Brain Research. 78 (1): 193–202. doi:10.1007/BF00230699. PMID   2591512. S2CID   12084552.
  12. Joyce, C.A. & Rossion, B. (2005). "The face-sensitive N170 and VPP components manifest the same brain processes: The effect of reference electrode site". Clinical Neurophysiology. 116 (11): 2613–2631. doi:10.1016/j.clinph.2005.07.005. PMID   16214404. S2CID   34009900.
  13. Yin R. K. (1969). "Looking at upside-down faces". Journal of Experimental Psychology: Human Perception and Performance. 81 (1): 141–145. doi:10.1037/h0027474.
  14. Jacques, C.; d'Arripe, O.; Rossion, B. (2007). "The time course of the face inversion effect during individual face discrimination". Journal of Vision. 7 (3): 1–9. doi: 10.1167/7.8.3 . PMID   17685810.
  15. Vizioli, L.; Foreman, K.; Rousselet, G. A.; Caldara, R. (2010). "Inverting faces elicits sensitivity to race on the N170 component: a cross-cultural study". Journal of Vision. 10 (1): 1–23. doi: 10.1167/10.1.15 . PMID   20143908.
  16. Righart, R. & de Gelder, B. (2008). "Rapid influence of emotional scenes on encoding of facial expressions: an ERP study". Social Cognitive and Affective Neuroscience. 3 (3): 270–8. doi:10.1093/scan/nsn021. PMC 2566764. PMID 19015119.
  17. Blau, V. C.; Maurer, U.; Tottenham, N.; McCandliss, B. D. (2007). "The face-specific N170 component is modulated by emotional facial expression". Behavioral and Brain Functions. 3: 7. doi: 10.1186/1744-9081-3-7 . PMC   1794418 . PMID   17244356.
  18. Tanaka, James W.; Curran, Tim; Porterfield, Albert L.; Collins, Daniel (2006-09-01). "Activation of preexisting and acquired face representations: the N250 event-related potential as an index of face familiarity". Journal of Cognitive Neuroscience. 18 (9): 1488–1497. CiteSeerX   10.1.1.543.8563 . doi:10.1162/jocn.2006.18.9.1488. ISSN   0898-929X. PMID   16989550. S2CID   9793157.
  19. Itier, R. J.; Taylor, M. J. (2004). "Source analysis of the N170 to faces and objects". NeuroReport. 15 (8): 1261–1265. doi:10.1097/01.wnr.0000127827.73576.d8. PMID   15167545. S2CID   46705488.
  20. Luck, S. J. (2005). "ERP Localization". An Introduction to the Event-Related Potential Technique. Boston: MIT Press. pp. 267–301.
  21. Thierry, G.; Martin, C. D.; Downing, P.; Pegna, A. J. (2007). "Controlling for interstimulus perceptual variance abolishes N170 face selectivity". Nature Neuroscience. 10 (7): 505–511. doi:10.1038/nn1864. PMID   17334361. S2CID   21008862.