DeepFace

DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. [1] [2] The Facebook Research team has stated that the DeepFace method reaches an accuracy of 97.35% ± 0.25% on the Labeled Faces in the Wild (LFW) data set, compared with 97.53% for human beings, meaning that DeepFace sometimes outperforms humans. [3]

In November 2021, citing growing societal concerns, Meta announced that it planned to shut down Facebook's facial recognition system and delete the face scan data of more than one billion users. [4] [5] The change represents one of the largest shifts in facial recognition usage in the technology's history. Facebook planned to delete, by December 2021, more than one billion facial recognition templates, which are digital scans of facial features. However, it did not plan to eliminate DeepFace itself, the software that powers the facial recognition system, and according to a Meta spokesperson, the company has not ruled out incorporating facial recognition technology into future products. [5]


Origin

DeepFace was produced by a group of scientists from Facebook's artificial intelligence research team, including Yaniv Taigman and research scientist Ming Yang. They were joined by Lior Wolf, a faculty member at Tel Aviv University. Taigman came to Facebook when it acquired Face.com in 2012.

Commercial rollout

Facebook started rolling out DeepFace to its users in early 2015 and has continuously expanded its use since. [6] DeepFace, according to the director of Facebook's artificial intelligence research, is not intended to invade individual privacy. Instead, DeepFace alerts individuals when their face appears in any photo posted on Facebook, at which point they have the option of removing their face from the photo. [6]

European Union

When the DeepFace technology was initially deployed, users had the option to turn it off, but they were not notified that it was on. [7] Partly because of this, DeepFace was not released in the European Union: Facebook's facial recognition was held not to comply with EU data protection laws, because users do not consent to all the uses of their biometric data. [8]

Accuracy

DeepFace can identify faces with about 97% accuracy, almost the same accuracy as a human performing the same task. Facebook's facial recognition is more effective than the FBI's technology, which has 85% accuracy. [9] Google's FaceNet is more successful than DeepFace on the same data sets, setting an accuracy record of 99.63%. FaceNet incorporates data from Google Photos. [10]

Applications

Facebook uses individual facial recognition templates to find photos that an individual appears in so they can review, engage with, or share the content. DeepFace also protects individuals from impersonation and identity theft. For example, if an individual uses someone else's profile photo as their own, Facebook can identify the misuse and alert the person whose information is being used. [11] To ensure that individuals have control over their facial recognition, Facebook does not share facial templates. Additionally, Facebook will remove images from facial recognition templates if someone has deleted their account or untagged themselves from a photo. Individuals can also turn facial recognition off on Facebook; if the feature is turned off, Facebook ceases facial recognition for that individual.

Following the release of DeepFace in 2015, its uses have remained fairly stable, but as more individuals have uploaded images to Facebook, the algorithm has become more accurate. Facebook's DeepFace relies on the largest facial recognition dataset that currently exists. Because of this, some argue that Facebook's face-ID database could be distributed to government agencies, [12] although such uses would be prohibited by most data privacy laws. In response to privacy concerns, Facebook removed its automatic facial recognition feature in 2019, allowing individuals to opt in to tagging through DeepFace.

Architecture

The DeepFace system consists of four modules: 2D alignment, 3D alignment, frontalization, and a neural network. An image of a face is passed through them in sequence, resulting in a 4096-dimensional feature vector representing the face. The feature vector can then be further processed for many different tasks. For example, to identify a face, one can compare its feature vector against a list of feature vectors of known faces and pick the known face whose vector is most similar.
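The identification step described above can be sketched as a nearest-neighbor search over feature vectors. The function, names, and threshold below are illustrative only: random vectors stand in for real DeepFace features, and the 0.5 cutoff is an arbitrary choice, not a value from the paper.

```python
import numpy as np

def identify(query, gallery, names, threshold=0.5):
    """Return the name of the enrolled face whose feature vector is most
    similar (by cosine similarity) to the query, or None if no match
    clears the threshold."""
    # Normalize so the dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to each known face
    best = int(np.argmax(sims))
    return names[best] if sims[best] >= threshold else None

# Toy example: random 4096-dim vectors stand in for DeepFace features.
rng = np.random.default_rng(0)
alice, bob = rng.normal(size=4096), rng.normal(size=4096)
gallery = np.stack([alice, bob])
query = alice + 0.1 * rng.normal(size=4096)   # a noisy view of "Alice"
print(identify(query, gallery, ["Alice", "Bob"]))  # prints Alice
```

In high dimensions a noisy copy of an enrolled vector stays close to it in cosine similarity, which is why a simple threshold suffices for this sketch.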

DeepFace uses fiducial point detectors based on existing databases to direct the alignment of faces. The facial alignment begins with a 2D alignment and then continues with 3D alignment and frontalization. In effect, DeepFace corrects the angles of an input image so that the face in the photo is looking forward; to accomplish this, it uses a 3D model of a face. [13]

2D alignment

The 2D alignment module detects six fiducial points on the detected face: the centers of the eyes, the tip of the nose, and the location of the mouth. The image is then warped so that these points move to fixed anchor locations. However, a 2D transformation cannot compensate for out-of-plane rotations.
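The warp from detected points to anchor locations can be illustrated with a least-squares 2D similarity transform (scale, rotation, and translation). The landmark coordinates below are made up for illustration; DeepFace's actual anchor template and fitting procedure are described in the paper [13].

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (scale + rotation + translation)
    mapping src (n,2) points onto dst (n,2) points; returns a 2x3 matrix.
    Parameterized as [[a, -b, tx], [b, a, ty]]."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, -y, 1, 0]); rhs.append(u)
        rows.append([y,  x, 0, 1]); rhs.append(v)
    a, b, tx, ty = np.linalg.lstsq(np.array(rows, float),
                                   np.array(rhs, float), rcond=None)[0]
    return np.array([[a, -b, tx], [b, a, ty]])

def warp_points(M, pts):
    """Apply the 2x3 transform M to an (n,2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]

# Hypothetical detected landmarks vs. canonical anchor positions.
detected = np.array([[30., 40.], [70., 42.], [50., 60.],
                     [35., 80.], [50., 82.], [65., 80.]])
anchors  = np.array([[25., 35.], [75., 35.], [50., 55.],
                     [32., 78.], [50., 80.], [68., 78.]])
M = fit_similarity(detected, anchors)
aligned = warp_points(M, detected)   # lands approximately on the template
```

Because a similarity transform has only four degrees of freedom, six point pairs overdetermine it, which is exactly why it cannot absorb out-of-plane head rotation.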

3D alignment

In order to align faces, DeepFace uses a generic 3D shape model onto which the 2D image is mapped. The 3D model defines 67 fiducial points. After the image has been warped, the 67 detected fiducial points are matched to 67 anchor points that were manually placed on the 3D model. A 3D-to-2D camera is then fitted to minimize the residual losses. This step is important because detected points on the contour of the face can otherwise be inaccurate.
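The camera-fitting step can be sketched as an ordinary least-squares problem: find a 2×4 affine camera matrix P that maps homogeneous 3D anchor points onto the detected 2D points. The synthetic points and camera below are illustrative, not values from DeepFace.

```python
import numpy as np

def fit_affine_camera(pts3d, pts2d):
    """Least-squares affine 3D-to-2D camera: a 2x4 matrix P such that
    [x, y] ~= P @ [X, Y, Z, 1]."""
    X = np.hstack([pts3d, np.ones((len(pts3d), 1))])  # homogeneous (n,4)
    # Solve X @ P.T ~= pts2d; each image coordinate is an independent
    # linear least-squares problem.
    P_T, *_ = np.linalg.lstsq(X, pts2d, rcond=None)
    return P_T.T

rng = np.random.default_rng(1)
model_pts = rng.uniform(-1, 1, size=(67, 3))   # stand-in 3D anchor points
P_true = rng.normal(size=(2, 4))               # a hypothetical camera
image_pts = np.hstack([model_pts, np.ones((67, 1))]) @ P_true.T
P = fit_affine_camera(model_pts, image_pts)    # recovers P_true on clean data
```

With 67 correspondences the system is heavily overdetermined, so inaccurate contour points are averaged out rather than dictating the fit.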

Frontalization

Because full perspective projections are not modeled, the fitted camera is only an approximation of the individual's actual face. To reduce the resulting errors, DeepFace aims to warp the 2D images with smaller distortions. Also, the camera P makes it possible to replace parts of the image and blend them with their symmetrical counterparts.

Neural network

The neural network is a sequence of layers, arranged as follows: convolutional layer - max pooling - convolutional layer - 3 locally connected layers - fully connected layer.

The input is an RGB image of the face, scaled to a resolution of 152×152 pixels, and the output is a real vector of dimension 4096: the feature vector of the face image.
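The spatial sizes of the early layers follow standard valid-convolution arithmetic. The filter and pooling sizes below are taken from the 2014 paper [13]; the ceil-rounded pooling formula is an assumption chosen to reproduce the sizes the paper reports.

```python
import math

def conv_out(size, kernel, stride=1):
    """Spatial output size of a valid (no-padding) convolution; locally
    connected layers follow the same rule."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel, stride):
    """Max-pooling output size, rounding up (ceil mode)."""
    return math.ceil((size - kernel) / stride) + 1

size = 152                       # input face crop is 152x152 RGB
size = conv_out(size, 11)        # conv, 11x11 filters  -> 142
size = pool_out(size, 3, 2)      # 3x3 max pool, stride 2 -> 71
size = conv_out(size, 9)         # conv, 9x9 filters    -> 63
# The three locally connected layers shrink the map further by the same
# arithmetic, and the final fully connected layer maps the flattened
# result to the 4096-dimensional feature vector.
```

The same two formulas determine every intermediate resolution in the stack, which is why changing the input size requires retracing the whole pipeline.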

In the 2014 paper, [13] an additional fully connected layer is added at the end to classify the face image as one of the 4030 people the network saw during training.

Reactions

Industry

AI researcher Ben Goertzel said Facebook had "pretty convincingly solved face recognition" with the project, but said it would be incorrect to conclude that deep learning is the entire solution to AI.

Neeraj Kumar, a researcher at the University of Washington, said that Facebook's DeepFace shows how large sets of outside data can result in a "higher capacity" model. Because of Facebook's wide access to images of individuals, its facial recognition software can perform better than software built on much smaller data sets. [14] [15]

Media

A Huffington Post piece called the technology "creepy", citing data privacy concerns, and noted that some European governments had already required Facebook to delete facial-recognition data. [16] According to Broadcasting & Cable, both Facebook and Google had been invited by the Center for Digital Democracy to attend a 2014 National Telecommunications and Information Administration "stakeholder meeting" to help develop a consumer privacy Bill of Rights, but both declined. Broadcasting & Cable also noted that Facebook had not released any press announcements concerning DeepFace, although its research paper had been published earlier in the month. Slate wrote that Facebook was not publicizing DeepFace because it was wary of another round of headlines decrying the technology's creepiness.

Users

Many individuals fear facial recognition technology. [17] [18] The technology's nearly perfect accuracy allows social media companies to create digital profiles of millions of Americans. [19] However, fear of facial recognition and other privacy concerns has not corresponded to a decrease in social media use; attitudes towards privacy and privacy settings do not have a large impact on an individual's intention to use Facebook apps. [20] [21] [22] Because Facebook is a social media site, individual fears about privacy are overruled by the desire to participate in social media. [23]

Privacy concerns

BIPA lawsuit

Facebook users filed a class action lawsuit against Facebook under the Illinois Biometric Information Privacy Act (BIPA). [24] Illinois has the most comprehensive biometric privacy legislation, regulating the collection of biometric information by commercial entities. [25] BIPA requires a corporation that obtains a person's biometric information to obtain a written release, provide notice that the information is being collected, and state how long the information will be retained. The lawsuit alleged that Facebook's collection of facial identification information for its tag suggestion tool violated BIPA, because Facebook neither gave notice to nor obtained consent from the individuals involved. [26] [27] The Ninth Circuit denied Facebook's motion to dismiss and ultimately certified the case as a class action. Facebook sought to appeal the certification decision, and the appeal was granted; Facebook argued that the case should not have been certified because the plaintiffs had not alleged any harm beyond the violation of BIPA itself. In response to the concerns raised in the lawsuit, Facebook removed its automatic facial recognition tagging feature in 2019. [28] Facebook proposed a $550 million settlement, which the court rejected; when Facebook increased the offer to $650 million, the court accepted it, and Facebook was ordered to pay in early March 2021. Under the settlement, 1.6 million Illinois residents will each receive at least $345. [29]

In July 2020, Facebook announced that it was building teams to look into racial bias in its algorithms. [30] These teams will work with Facebook's Responsible AI team to study bias in its systems. The implementation of these programs is recent, and it is still unclear what reforms will be made. [31]

Ten-year challenge

In 2019, a Facebook challenge went viral asking users to post a photo of themselves from 10 years ago alongside one from 2019. The challenge was coined the "10 Year Challenge." More than 5 million people participated, including many celebrities. Worry arose that the challenge was designed to train Facebook's facial recognition database; Kate O'Neill, a writer for Wired, wrote an op-ed raising this possibility. [32] Facebook denied playing any role in generating the challenge. [33] However, commentators have argued that the concerns underlying theories about the 10 Year Challenge echo broader concerns about Facebook and the right to privacy. [34]

Racism in facial identification technology

Facial recognition algorithms are not universally successful. [35] While the algorithms can classify faces with over 90% accuracy in some cases, accuracy is lower when they are applied to women, Black individuals, and young people. [36] Systems falsely identify Black and Asian faces 10 to 100 times more often than white faces. [37] Because the algorithms are trained primarily on images of white men, systems like DeepFace have more difficulty identifying everyone else. [38] It is projected that once facial recognition databases are trained on more diverse faces, exposing them to people of color, they will become more successful at identification. [39]

See also

Biometrics
Facial recognition system
Three-dimensional face recognition
Private biometrics
Next Generation Identification
Face Recognition Grand Challenge
Facial Profiler
Identity-based security
Biometric device
Visage SDK
FindFace
Biometric Information Privacy Act
Local differential privacy
Amazon Rekognition
Clearview AI
Adam Harvey (artist)
DataWorks Plus
Identity replacement technology
Fawkes (software)
PimEyes

References

  1. "Facebook creates software that matches faces almost as well as you do", Technology Review , Massachusetts Institute of Technology, March 17, 2014
  2. "Facebook's DeepFace shows serious facial recognition skills", CBS News, March 19, 2014
  3. "DeepFace: Closing the Gap to Human-Level Performance in Face Verification". Facebook Research. Retrieved 2019-07-25.
  4. Metz, Rachel (2 November 2021). "Facebook is shutting down its facial recognition software". CNN. Retrieved 2021-11-05.
  5. Hill, Kashmir; Mac, Ryan (2021-11-02). "Facebook, Citing Societal Concerns, Plans to Shut Down Facial Recognition System". The New York Times. ISSN 0362-4331. Retrieved 2021-11-05.
  6. Chowdhry, Amit. "Facebook's DeepFace Software Can Match Faces With 97.25% Accuracy". Forbes. Retrieved 2021-04-09.
  7. "Facebook settles facial recognition dispute". BBC News. 2020-01-30. Retrieved 2021-04-08.
  8. "Vol 23.1 – Winter 2017 | Journal of Science & Technology Law". www.bu.edu. Retrieved 2021-04-24.
  9. "Facial Recognition Technology: Ensuring Transparency in Government Use — FBI". www.fbi.gov. Retrieved 2021-04-09.
  10. "Facial recognition: top 7 trends (tech, vendors, markets, use cases & latest news)". Thales Group. Retrieved 2021-04-09.
  11. "What is the face recognition setting on Facebook and how does it work? | Facebook Help Center". www.facebook.com. Retrieved 2021-04-22.
  12. Glaser, April (2019-07-09). "Facebook's Face-ID Database Could Be the Biggest in the World. Yes, It Should Worry Us". Slate Magazine. Retrieved 2021-04-22.
  13. Taigman, Yaniv; Yang, Ming; Ranzato, Marc'Aurelio; Wolf, Lior (June 2014). "DeepFace: Closing the Gap to Human-Level Performance in Face Verification". 2014 IEEE Conference on Computer Vision and Pattern Recognition. IEEE. pp. 1701–1708. doi:10.1109/cvpr.2014.220. ISBN 978-1-4799-5118-5. S2CID 2814088.
  14. "Facebook Creates Software That Matches Faces Almost as Well as You Do". MIT Technology Review. Retrieved 2021-04-22.
  15. Rubinstein, Ira; Good, Nathan (2012). "Privacy by Design: A Counterfactual Analysis of Google and Facebook Privacy Incidents". SSRN Electronic Journal. doi:10.2139/ssrn.2128146. ISSN   1556-5068.
  16. Grandoni, Dino (2014-03-18). "Facebook's New 'DeepFace' Program Is Just As Creepy As It Sounds". HuffPost. Retrieved 2021-04-22.
  17. "Privacy and identity on Facebook", Discourse and Identity on Facebook, Bloomsbury Academic, 2017, doi:10.5040/9781474289153.0014, ISBN   978-1-4742-8912-2 , retrieved 2021-04-24
  18. Barrett, Lindsey (2020-07-24). "Ban Facial Recognition Technologies for Children—And for Everyone Else". Rochester, NY. SSRN 3660118.
  19. Huang, Michelle Yan. "Facial recognition is almost perfectly accurate — here's why that could be a problem". Business Insider. Retrieved 2021-04-22.
  20. Van Der Schyff, Karl; Flowerday, Stephen; Lowry, Paul Benjamin (2020-08-01). "Information privacy behavior in the use of Facebook apps: A personality-based vulnerability assessment". Heliyon. 6 (8): e04714. Bibcode:2020Heliy...604714V. doi: 10.1016/j.heliyon.2020.e04714 . ISSN   2405-8440. PMC   7452521 . PMID   32904276.
  21. Mathiyalakan, Sathasivam; Heilman, George; Ho, Kevin; Law, Wai (2018-01-01). "An Examination of the Impact of Gender and Culture on Facebook Privacy and Trust in Guam". Journal of International Technology and Information Management. 27 (1): 29–56. doi: 10.58729/1941-6679.1363 . ISSN   1941-6679. S2CID   159011924.
  22. "Facebook face recognition hits privacy protests". Biometric Technology Today. 2011 (7): 1. July 2011. doi:10.1016/s0969-4765(11)70120-5. ISSN   0969-4765.
  23. Rosenthal, Sonny; Wasenden, Ole-Christian; Gronnevet, Gorm-Andreas; Ling, Rich (2020-11-01). "A tripartite model of trust in Facebook: acceptance of information personalization, privacy concern, and privacy literacy". Media Psychology. 23 (6): 840–864. doi:10.1080/15213269.2019.1648218. hdl: 10356/145658 . ISSN   1521-3269. S2CID   201372342.
  24. "Power, Pervasiveness and Potential: The Brave New World of Facial Recognition Through a Criminal Law Lens (and Beyond)". nycbar.org. Retrieved 2021-03-31.
  25. "The rise and regulation of thermal facial recognition technology during the COVID-19 pandemic" - Google Search". www.google.com. Retrieved 2021-04-22.
  26. Electronic Privacy Information Center. "EPIC – Patel v. Facebook". epic.org. Retrieved 2021-04-22.
  27. "Social Network or Social Nightmare: How California Courts Can Prevent Facebook's Frightening Foray Into Facial Recognition Technology From Haunting Consumer Privacy Rights Forever". vLex. Retrieved 2021-04-24.
  28. "An Update About Face Recognition on Facebook". About Facebook. 2019-09-03. Retrieved 2021-04-22.
  29. "Facebook will pay $650 million to settle class action suit centered on Illinois privacy law". TechCrunch. Retrieved 2021-04-22.
  30. Heilweil, Rebecca (2020-07-22). "Facebook is taking a hard look at racial bias in its algorithms". Vox. Retrieved 2021-04-23.
  31. Trautman, Lawrence J. (2020-03-27). "Governance of the Facebook Privacy Crisis". Pittsburgh Journal of Technology Law & Policy. 20 (1). doi: 10.5195/tlp.2020.234 . ISSN   2164-800X.
  32. "Facebook's '10 Year Challenge' Is Just a Harmless Meme—Right?". Wired. ISSN   1059-1028 . Retrieved 2021-04-22.
  33. Facebook on Twitter. https://twitter.com/facebook/status/1085675097766031360 . Retrieved 2021-04-22.
  34. Slobom, Michael (2020-01-01). "Consent, Appropriation by Manipulation, and the 10-Year Challenge: How an Internet Meme Complicated Biometric Information Privacy". Mitchell Hamline Law Review. 46 (5).
  35. Becerra-Riera, Fabiola; Morales-González, Annette; Méndez-Vázquez, Heydi (2019-08-01). "A survey on facial soft biometrics for video surveillance and forensic applications". Artificial Intelligence Review. 52 (2): 1155–1187. doi:10.1007/s10462-019-09689-5. ISSN   0269-2821. S2CID   186207594.
  36. Buolamwini, Joy; Gebru, Timnit (2018-01-21). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification". Conference on Fairness, Accountability and Transparency. PMLR: 77–91.
  37. "govinfo". www.govinfo.gov. Retrieved 2021-04-23.
  38. "Bias in, Bias out: Why Legislation Placing Requirements on the Procurement of Commercialized Facial Recognition Technology Must Be Passed to Protect People of Color". www.americanbar.org. Retrieved 2021-04-23.
  39. Kane, Kane; Young, Amber; Majchrzak, Ann; Ransbotham, Sam (2021-03-01). "Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants". Management Information Systems Quarterly. 45 (1): 371–396. doi:10.25300/MISQ/2021/1578. ISSN   0276-7783. S2CID   232369411.

Further reading