Face Recognition Grand Challenge

The Face Recognition Grand Challenge (FRGC) was conducted from May 2004 until March 2006 to promote and advance face recognition technology.

Overview

The Face Recognition Grand Challenge (FRGC) was a project that aimed to promote and advance face recognition technology to support existing face recognition efforts within the U.S. Government. The project ran from May 2004 to March 2006 and was open to face recognition researchers and developers in companies, academia, and research institutions. The FRGC developed new face recognition techniques and prototype systems that significantly improved performance.

The FRGC consisted of progressively more difficult challenge problems, each of which included a dataset of facial images and a defined set of experiments. The challenge problems were designed to remove one of the main impediments to improving face recognition: the lack of data.

There are three main areas for improving face recognition algorithms: high-resolution images, three-dimensional (3D) face recognition, and new pre-processing techniques. Current face recognition systems are designed to work with relatively small, static facial images. In the FRGC, high-resolution images have an average of 250 pixels between the centers of the eyes, significantly more than the 40 to 60 pixels typical of current images. The FRGC aims to foster the development of new algorithms that exploit the additional information present in high-resolution images.
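The interocular-distance criterion can be made concrete. A minimal sketch, assuming eye centers are given as (x, y) pixel coordinates; the function names and example coordinates are illustrative, and only the 250-pixel and 40-to-60-pixel figures come from the text:

```python
import math

def interocular_distance(left_eye, right_eye):
    """Pixel distance between the two eye centers."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.hypot(dx, dy)

def is_high_resolution(left_eye, right_eye, threshold=250):
    """FRGC-style check: ~250 px between eye centers counts as
    high resolution; legacy images fall around 40-60 px."""
    return interocular_distance(left_eye, right_eye) >= threshold
```

For example, eye centers 250 pixels apart pass the check, while centers 50 pixels apart (legacy image quality) do not.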

Three-dimensional face recognition algorithms identify faces based on the 3D shape of a person’s face. Unlike current face recognition systems that are affected by changes in lighting and pose, 3D face recognition has the potential to improve performance under these conditions, as the shape of faces remains unaffected.

In recent years, advances in computer graphics and computer vision have made it possible to model lighting and pose changes in facial imagery. These advances have led to new algorithms that automatically correct for lighting and pose before an image is processed by a face recognition system. The pre-processing aspect of the FRGC measures the impact of these new pre-processing algorithms on recognition performance.

Structure of the Face Recognition Grand Challenge

The FRGC is structured around challenge problems designed to push researchers to meet the FRGC performance goal.

Three aspects of the FRGC were new to the face recognition community. The first is its size: the FRGC dataset comprises 50,000 recordings. The second is its complexity: unlike previous face recognition datasets, which consisted of still images only, the FRGC encompasses three modes:

  1. High-resolution still images
  2. 3D images
  3. Multiple images of a person

The third new aspect is the infrastructure. The Biometric Experimentation Environment (BEE) provides the infrastructure for FRGC. BEE, an XML-based framework, describes and documents computational experiments. It enables experiment description, distribution, recording of raw results, analysis, presentation of results, and documentation in a common format. This marks the first time a computational-experimental environment has supported a challenge problem in face recognition or biometrics.

The FRGC Data Set

The FRGC data distribution consists of three parts. The first part is the FRGC dataset. The second part is the FRGC BEE. The BEE distribution includes all the datasets for performing and scoring the six experiments. The third part consists of baseline algorithms for experiments 1 through 4. With all three components, it is possible to run experiments 1 through 4, from processing raw images to producing Receiver Operating Characteristics (ROCs).

The FRGC data comprises 50,000 recordings divided into training and validation partitions. The training partition is for algorithm training, while the validation partition assesses approach performance in a laboratory setting. The validation partition includes data from 4,003 subject sessions. A subject session represents all images of a person taken during a biometric data collection, containing four controlled still images, two uncontrolled still images, and one three-dimensional image. The controlled images were taken in a studio setting, showing full frontal facial images under two lighting conditions and two facial expressions (smiling and neutral). Uncontrolled images were taken in varying illumination conditions, such as hallways, atriums, or outdoors. Each set of uncontrolled images contains two expressions: smiling and neutral. The 3D image was captured under controlled illumination conditions and includes both range and texture images. The 3D images were acquired using a Minolta Vivid 900/910 series sensor.
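The composition of a subject session described above maps naturally onto a small data structure. The sketch below follows the stated counts; the class and field names are hypothetical and are not BEE's actual schema:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class SubjectSession:
    """One FRGC subject session: four controlled stills, two
    uncontrolled stills, and one 3D (range + texture) image."""
    subject_id: str
    controlled: List[Any]    # studio stills: 2 lightings x 2 expressions
    uncontrolled: List[Any]  # stills in hallways, atriums, or outdoors
    three_d: Any             # range and texture images from the 3D sensor

    def is_complete(self) -> bool:
        """Check the session has the full complement of recordings."""
        return (len(self.controlled) == 4
                and len(self.uncontrolled) == 2
                and self.three_d is not None)
```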

The FRGC distribution consists of six experiments. In experiment 1, the gallery comprises a single controlled still image of a person, and each probe consists of a single controlled still image. Experiment 1 serves as the control experiment. Experiment 2 studies the effect of using multiple still images of a person on performance. In experiment 2, each biometric sample consists of the four controlled images of a person taken in a subject session. For example, the gallery consists of four images of each person, all taken in the same subject session. Similarly, a probe consists of four images of a person.

Experiment 3 measures the performance of 3D face recognition. In experiment 3, both the gallery and probe set consist of 3D images of a person. Experiment 4 assesses recognition performance using uncontrolled images. In experiment 4, the gallery contains a single controlled still image, and the probe set comprises a single uncontrolled still image.

Experiments 5 and 6 compare 3D and 2D images. In both experiments, the gallery consists of 3D images. In experiment 5, the probe set consists of a single controlled still image. In experiment 6, the probe set comprises a single uncontrolled still image.
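All six experiments are scored the same way: an algorithm produces similarity scores for gallery-probe pairs, and those raw results are reduced to an ROC. A minimal sketch of that reduction, with illustrative variable names (BEE's actual file formats are not shown):

```python
def roc_points(similarities, same_subject, thresholds):
    """Turn raw similarity scores into ROC points.

    similarities: scores for gallery-probe comparisons
    same_subject: parallel booleans, True for genuine pairs
    Returns one (false accept rate, verification rate) pair
    per decision threshold.
    """
    genuine = [s for s, g in zip(similarities, same_subject) if g]
    impostor = [s for s, g in zip(similarities, same_subject) if not g]
    points = []
    for t in thresholds:
        verification_rate = sum(s >= t for s in genuine) / len(genuine)
        false_accept_rate = sum(s >= t for s in impostor) / len(impostor)
        points.append((false_accept_rate, verification_rate))
    return points
```

Sweeping the threshold trades false accepts against missed verifications, which is what the ROC curves produced by the baseline algorithms summarize.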

Related Research Articles

Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the forms of decisions. Understanding in this context means the transformation of visual images into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.

Biometrics are body measurements and calculations related to human characteristics. Biometric authentication is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance.

Eigenface

An eigenface is the name given to a set of eigenvectors when used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.
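The construction described above can be sketched in a few lines of NumPy: center the vectorized face images, obtain the principal eigenvectors of their covariance (here via SVD, which yields the same basis), and represent each face by its coefficients in that basis. Array shapes and names are illustrative:

```python
import numpy as np

def compute_eigenfaces(faces, k):
    """Eigenfaces: eigenvectors of the covariance of vectorized
    face images, kept as a small basis for dimension reduction.
    `faces` is an (n_images, n_pixels) array."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data: rows of vt are the covariance
    # eigenvectors, ordered by decreasing variance explained.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]  # mean face and top-k eigenfaces

def project(face, mean, eigenfaces):
    """Represent a face by its coefficients in the eigenface basis;
    classification compares these small coefficient vectors."""
    return eigenfaces @ (face - mean)
```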

Iris recognition

Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of one or both of the irises of an individual's eyes, whose complex patterns are unique, stable, and can be seen from some distance. The discriminating power of any biometric technology depends on the amount of entropy it is able to encode and use in matching. Iris recognition is exceptional in this regard, enabling the avoidance of "collisions" even in cross-comparisons across massive populations. Its major limitation is that image acquisition from distances greater than a meter or two, or without cooperation, can be very difficult. However, the technology is in development, and iris recognition can be accomplished from up to 10 meters away or in a live camera feed.

Facial recognition system

A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.

Three-dimensional face recognition

Three-dimensional face recognition is a modality of facial recognition methods in which the three-dimensional geometry of the human face is used. It has been shown that 3D face recognition methods can achieve significantly higher accuracy than their 2D counterparts, rivaling fingerprint recognition.

The Facial Recognition Technology (FERET) database is a dataset used for facial recognition system evaluation as part of the Face Recognition Technology (FERET) program. It was first established in 1993 under a collaborative effort between Harry Wechsler at George Mason University and Jonathon Phillips at the Army Research Laboratory in Adelphi, Maryland. The FERET database serves as a standard database of facial images for researchers to use to develop various algorithms and report results. The use of a common database also allowed one to compare the effectiveness of different approaches in methodology and gauge their strengths and weaknesses.

Ioannis A. Kakadiaris is a Greek-born American computer scientist who has developed an identity verification system at the University of Houston. He is a Hugh Roy and Lillie Cranz Cullen University Professor of Computer Science, Electrical & Computer Engineering, and Biomedical Engineering at the University of Houston, a position to which he was appointed in 2011.

Private biometrics is a form of encrypted biometrics, also called privacy-preserving biometric authentication methods, in which the biometric payload is a one-way, homomorphically encrypted feature vector that is 0.05% the size of the original biometric template and can be searched with full accuracy, speed and privacy. The feature vector's homomorphic encryption allows search and match to be conducted in polynomial time on an encrypted dataset, and the search result is returned as an encrypted match. One or more computing devices may use an encrypted feature vector to verify an individual person or identify an individual in a datastore without storing, sending or receiving plaintext biometric data within or between computing devices or any other entity. The purpose of private biometrics is to allow a person to be identified or authenticated while guaranteeing individual privacy and fundamental human rights by operating only on biometric data in the encrypted space. Private biometrics include fingerprint authentication methods, face authentication methods, and identity-matching algorithms based on bodily features. Private biometrics continue to evolve with the changing nature of privacy needs, identity theft, and biotechnology.

The Facial Recognition Technology (FERET) program was a government-sponsored project that aimed to create a large, automatic face-recognition system for intelligence, security, and law enforcement purposes. The program began in 1993 under the combined leadership of Dr. Harry Wechsler at George Mason University (GMU) and Dr. Jonathon Phillips at the Army Research Laboratory (ARL) in Adelphi, Maryland and resulted in the development of the Facial Recognition Technology (FERET) database. The goal of the FERET program was to advance the field of face recognition technology by establishing a common database of facial imagery for researchers to use and setting a performance baseline for face-recognition algorithms.

Multiple Biometric Grand Challenge

Multiple Biometric Grand Challenge (MBGC) is a biometric project. Its primary goal is to improve performance of face and iris recognition technology on both still and video imagery with a series of challenge problems and evaluation.

Face Recognition Vendor Test

The Face Recognition Vendor Test (FRVT) was a series of large-scale independent evaluations of face recognition systems conducted by the National Institute of Standards and Technology in 2000, 2002, 2006, 2010, 2013 and 2017. Previous evaluations in the series were the Face Recognition Technology (FERET) evaluations in 1994, 1995 and 1996. The project is now ongoing, with periodic reports, and continues to grow in scope. It now includes tests for Face-in-Video-Evaluation (FIVE), facial morphing detection, and testing for demographic effects.

In order to identify a person, a security system has to compare personal characteristics with a database. A scan of a person's iris, fingerprint, face, or other distinguishing feature is created, and a series of biometric points are drawn at key locations in the scan. For example, in the case of a facial scan, biometric points might be placed at the tip of each ear lobe and in the corners of both eyes. Measurements taken between all the points of a scan are compiled into a numerical "score". This score is unique to each individual and can be quickly compared with the scores of the facial scans in the database to determine whether there is a match.
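The point-measurement scheme described above can be sketched directly: compile all pairwise distances between landmark points into a measurement vector, then match a probe against enrolled vectors by nearest distance. The landmark coordinates and names below are invented for illustration:

```python
import math

def feature_vector(points):
    """All pairwise distances between biometric landmark points,
    compiled into one measurement vector (a simplification of the
    'score' described above)."""
    n = len(points)
    return [math.dist(points[i], points[j])
            for i in range(n) for j in range(i + 1, n)]

def best_match(probe_points, database):
    """Return the enrolled identity whose measurements are closest
    to the probe's. `database` maps identity -> landmark points."""
    probe = feature_vector(probe_points)
    def distance(entry):
        return math.dist(probe, feature_vector(entry[1]))
    return min(database.items(), key=distance)[0]
```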

Identity-based security is a type of security that controls access to digital information or services based on the authenticated identity of an entity. It ensures that the users and services of these digital resources are entitled to what they receive. The most common form of identity-based security is account login with a username and password, but more recent technology has extended to fingerprint and facial recognition.

Biometric device

A biometric device is a security identification and authentication device. Such devices use automated methods of verifying or recognising the identity of a living person based on a physiological or behavioral characteristic. These characteristics include fingerprints, facial images, iris and voice recognition.

DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. The Facebook Research team has stated that the DeepFace method reaches an accuracy of 97.35% ± 0.25% on the Labeled Faces in the Wild (LFW) data set, where human beings score 97.53%; DeepFace is therefore sometimes more successful than humans. As a result of growing societal concerns, Meta announced that it plans to shut down the Facebook facial recognition system and delete the face scan data of more than one billion users, one of the largest shifts in facial recognition usage in the technology's history. Facebook planned to delete more than one billion facial recognition templates, which are digital scans of facial features, by December 2021. However, it did not plan to eliminate DeepFace, the software that powers the facial recognition system, and, according to a Meta spokesperson, the company has not ruled out incorporating facial recognition technology into future products.

Visage SDK

Visage SDK is a multi-platform software development kit (SDK) created by Visage Technologies AB. Visage SDK allows software programmers to build facial motion capture and eye tracking applications.

Emotion recognition is the process of identifying human emotion. People vary widely in their accuracy at recognizing the emotions of others. Use of technology to help people with emotion recognition is a relatively nascent research area. Generally, the technology works best if it uses multiple modalities in context. To date, the most work has been conducted on automating the recognition of facial expressions from video, spoken expressions from audio, written expressions from text, and physiology as measured by wearables.

DataWorks Plus LLC is a privately held biometrics systems integrator based in Greenville, South Carolina. The company started in 2000 and originally focused on mugshot management, adding facial recognition beginning in 2005. Brad Bylenga is the CEO, and Todd Pastorini is the EVP and GM. Usage of the technology by police departments has resulted in wrongful arrests.

Identity replacement technology is any technology used to cover up all or part of a person's identity, either in real life or virtually. It can include face masks, face authentication technology, and deepfakes on the Internet that spread faked videos and images. Face replacement and identity masking are used by criminals and law-abiding citizens alike. When operated by criminals, identity replacement technology can facilitate heists or robberies. Law-abiding citizens use it to prevent governments or other entities from tracking private information such as locations, social connections, and daily behaviors.

References

This article incorporates public domain material from the NIST Face Recognition Grand Challenge, National Institute of Standards and Technology.