Amazon Rekognition

Developer(s): Amazon, Amazon Web Services
Initial release: 30 November 2016 [1]
Type: Software as a service
Website: aws.amazon.com/rekognition

Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform that was launched in 2016. It has been sold to, and used by, a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and Orlando, Florida police, as well as private entities.

Capabilities

Rekognition provides a number of computer vision capabilities, which can be divided into two categories: algorithms that are pre-trained on data collected by Amazon or its partners, and algorithms that a user can train on a custom dataset.

As of July 2019, Rekognition provides the following computer vision capabilities. [1] [2]

Pre-trained algorithms

- Celebrity recognition in images [3] [4]
- Face detection and analysis [5] [6]
- People pathing in videos [7]
- Text detection in images [8] [9]
- Unsafe content detection [10]

Algorithms that a user can train on a custom dataset

- Searching for faces within a user-supplied collection of face images [11]
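As a service, these capabilities are exposed as API calls such as DetectLabels. The sketch below shows how a caller might request labels above a confidence floor. It is a minimal illustration, not production code: the response shape follows the DetectLabels API, but the client here is a stand-in class with invented values so the example runs without AWS credentials (the real call would use boto3's Rekognition client).

```python
# Label detection: the service returns labels, each with a confidence score.
# `client` is anything exposing detect_labels(); in production this would be
# boto3.client("rekognition").
def labels_above(client, image_bytes, min_confidence=80.0):
    """Return (name, confidence) pairs for labels the service detects."""
    resp = client.detect_labels(
        Image={"Bytes": image_bytes}, MinConfidence=min_confidence
    )
    return [(l["Name"], l["Confidence"]) for l in resp["Labels"]]

# Stand-in client with invented responses, so the sketch runs offline.
class FakeRekognition:
    def detect_labels(self, Image, MinConfidence):
        labels = [
            {"Name": "Car", "Confidence": 97.1},
            {"Name": "Person", "Confidence": 88.4},
        ]
        return {"Labels": [l for l in labels if l["Confidence"] >= MinConfidence]}

result = labels_above(FakeRekognition(), b"...image bytes...", min_confidence=90.0)
```

The confidence floor (`MinConfidence`) recurs throughout the controversies below: the same response yields different "detections" depending on where the caller sets it.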

History and use

2017

In late 2017, the Washington County, Oregon Sheriff's Office began using Rekognition to identify suspects' faces. Rekognition was marketed as a general-purpose computer vision tool, and an engineer working for Washington County decided to use it for facial analysis of suspects. [12] [13] Rekognition was offered to the department for free, [14] and Washington County became the first US law enforcement agency known to use it. In 2018, the agency logged over 1,000 facial searches; according to the Washington Post, by 2019 the county was paying about $7 a month for all of its searches. [15] The relationship was not known to the public until May 2018. [14] In 2018, Rekognition was also used to help identify celebrities during a royal wedding telecast. [16]

2018

In April 2018, it was reported that FamilySearch was using Rekognition to enable their users to "see which of their ancestors they most resemble based on family photographs". [17] In early 2018, the FBI also began using it as a pilot program for analyzing video surveillance. [16]

In May 2018, the ACLU reported that Orlando, Florida was running a pilot using Rekognition for facial analysis in law enforcement; [18] the pilot ended in July 2019. [19] Following the report, [20] [21] Gizmodo reported on June 22, 2018 that Amazon workers had written a letter to CEO Jeff Bezos requesting that he stop selling Rekognition to US law enforcement, particularly ICE and Homeland Security. [21] The ACLU also sent a letter to Bezos. [20] On June 26, 2018, it was reported that the Orlando police force had stopped using Rekognition after its trial contract expired, while reserving the right to use it in the future. [20] The Orlando Police Department said that it had "never gotten to the point to test images" due to old infrastructure and low bandwidth. [14]

In July 2018, the ACLU released a test showing that Rekognition had falsely matched 28 members of Congress with mugshot photos, disproportionately Congresspeople of color. Twenty-five House members afterwards sent a letter to Bezos expressing concern about Rekognition. [22] Amazon responded that the ACLU's test had used the default 80 percent confidence threshold, whereas Amazon recommends that law enforcement act only on matches rated at 99 percent confidence or higher. [23] According to the Washington Post, however, officers in Oregon instead pick a "best of five" result rather than adhering to that recommendation. [15]
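The dispute above turns on how a similarity threshold is applied to face-match results. A minimal sketch of the mechanism, using made-up scores rather than real Rekognition output:

```python
# Each candidate match carries a similarity score (0-100), as in a
# face-search response. Which matches survive depends entirely on the
# threshold the operator chooses. All scores below are invented.
def filter_matches(candidates, min_similarity):
    """Keep only candidates at or above the similarity threshold."""
    return [c for c in candidates if c["similarity"] >= min_similarity]

candidates = [
    {"name": "person_a", "similarity": 99.2},
    {"name": "person_b", "similarity": 91.5},
    {"name": "person_c", "similarity": 81.0},
]

# At the 80% default all three "match"; at the recommended 99% only one does.
loose = filter_matches(candidates, 80.0)
strict = filter_matches(candidates, 99.0)
```

At a loose threshold many weak matches surface, which is why a test run at the 80 percent default can produce false matches that the 99 percent setting would have suppressed.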

In September 2018, it was reported that Mapillary was using Rekognition to read the text on parking signs (e.g. no stopping, no parking, or specific parking hours) in cities. [9]

In October 2018, it was reported that Amazon had earlier that year pitched Rekognition to U.S. Immigration and Customs Enforcement agency. [22] [24] Amazon defended government use of Rekognition. [23]

On December 1, 2018, it was reported that 8 Democratic lawmakers had said in a letter that Amazon had "failed to provide sufficient answers" about Rekognition, writing that they had "serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans' willingness to exercise their First Amendment rights in public." [25]

2019

In January 2019, MIT researchers published a peer-reviewed study finding that Rekognition had more difficulty identifying dark-skinned women than comparable systems from IBM and Microsoft did. [16] In the study, Rekognition misclassified darker-skinned women as men 31% of the time, but made no such mistakes for light-skinned men. [14] Amazon said the report "misinterpreted results" of the research by using an improper "default confidence threshold." [16]

In January 2019, Amazon's shareholders "urged Amazon to stop selling Rekognition software to law enforcement agencies." Amazon in response defended its use of Rekognition, but supported new federal oversight and guidelines to "make sure facial recognition technology cannot be used to discriminate." [26] In February 2019, it was reported that Amazon was collaborating with the National Institute of Standards and Technology (NIST) on developing standardized tests to improve accuracy and remove bias with facial recognition. [27] [28]

In March 2019, an open letter criticizing the sale of Rekognition to law enforcement was sent to Amazon by a group of prominent AI researchers, [26] carrying around 50 signatures. [16]

In April 2019, the Securities and Exchange Commission told Amazon that it had to allow votes on two shareholder proposals seeking to limit Rekognition. Amazon had argued that the proposals concerned an "insignificant public policy issue for the Company" unrelated to Amazon's ordinary business, but its appeal was denied. [16] The vote was set for May. [15] [29] The first proposal was tabled by shareholders. [30] On May 24, 2019, a proposal to stop selling Rekognition to government agencies received 2.4% of the shareholder vote, while a second proposal calling for a study into Rekognition and civil rights received 27.5% support. [31]

In August 2019, the ACLU again used Rekognition on members of government, with 26 of 120 lawmakers in California flagged as matches to mugshots. Amazon stated the ACLU was "misusing" the software in the tests, by not dismissing results that did not meet Amazon's recommended accuracy threshold of 99%. [32] By August 2019, there had been protests against ICE's use of Rekognition to surveil immigrants. [33]

In March 2019, Amazon announced a Rekognition update that would improve emotional detection, [15] and in August 2019, "fear" was added to emotions that Rekognition could detect. [34] [35] [36]

2020

In June 2020, Amazon announced it was implementing a one-year moratorium on police use of Rekognition, in response to the George Floyd protests. [37]

2024

The Department of Justice disclosed that the FBI is initiating the use of Amazon Rekognition. [38] The DOJ's AI inventory revealed the FBI's "Project Tyr" aims to customize Rekognition to identify nudity, weapons, explosives, and other information from lawfully acquired media. [39]

Controversy regarding facial analysis

Racial and gender bias

In 2018, MIT researchers Joy Buolamwini and Timnit Gebru published a study called Gender Shades. [40] [41] In this study, a set of images was collected, and faces in the images were labeled with face position, gender, and skin tone information. The images were run through SaaS facial recognition platforms from Face++, IBM, and Microsoft. On all three platforms, the classifiers performed best on male faces (with error rates on female faces 8.1% to 20.6% higher than on male faces) and worst on dark-skinned female faces (with error rates ranging from 20.8% to 30.4%). The authors hypothesized that this discrepancy is due principally to Megvii (the developer of Face++), IBM, and Microsoft having more light-skinned males than dark-skinned females in their training data, i.e. dataset bias.

In January 2019, researchers Inioluwa Deborah Raji and Joy Buolamwini published a follow-up paper that ran the experiment again a year later, on the latest versions of the same three SaaS facial recognition platforms, plus two additional platforms: Kairos and Amazon Rekognition. [42] [43] While the systems' overall error rates improved over the previous year, all five systems again performed better on male faces than on dark-skinned female faces.
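The audit methodology in these studies reduces to computing a misclassification rate per demographic subgroup and comparing the rates. A small sketch of that computation, with invented predictions (not the Gender Shades data):

```python
from collections import defaultdict

def subgroup_error_rates(samples):
    """samples: (subgroup, true_label, predicted_label) triples.
    Returns {subgroup: misclassification rate}."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in samples:
        totals[group] += 1
        if pred != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented audit data: a classifier that is perfect on one subgroup
# and wrong one time in three on another.
samples = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "female", "female"),
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
]
rates = subgroup_error_rates(samples)
```

A gap between the per-subgroup rates, rather than the overall rate alone, is what the audits above report.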

See also

- Facial recognition system
- Automatic number-plate recognition
- Ring (company)
- IDEMIA
- Predictive policing in the United States
- Mass surveillance in China
- Identity-based security
- DeepFace
- FindFace
- Algorithmic bias
- Joy Buolamwini
- Timnit Gebru
- Police surveillance in New York City
- Andrea Frome
- Clearview AI
- DataWorks Plus
- Coded Bias
- Hyper-surveillance
- Algorithmic Justice League
- Deborah Raji

References

  1. Lardinois, Frederic (2016-11-30). "Amazon launches Amazon AI to bring its machine learning smarts to developers". TechCrunch. Retrieved 2019-07-21.
  2. "What Is Amazon Rekognition?". Amazon Rekognition Developer Guide. Retrieved 2019-07-21.
  3. "What is the Celebrity Recognition API? Is that the same or different than doing a face search?". AWS. Retrieved 2019-07-21.
  4. Lardinois, Frederic (2016-06-08). "Amazon Rekognition can now recognize celebrities". TechCrunch. Retrieved 2019-07-21.
  5. "Detecting Faces in an Image". Amazon Rekognition Developer Guide. Retrieved 2019-07-21.
  6. "Amazon Rekognition launches enhanced face analysis". Planet Biometrics. 2019-03-19. Retrieved 2019-07-21.
  7. "People Pathing". Amazon Rekognition Developer Guide. Retrieved 2019-07-21.
  8. "Detecting Text". Amazon Rekognition Developer Guide. Retrieved 2019-07-21.
  9. O'Brien, Chris (2018-09-13). "Mapillary will use Amazon Rekognition in effort to ease urban parking crunch". VentureBeat. Retrieved 2019-07-21.
  10. "Detecting Unsafe Content". Amazon Rekognition Developer Guide. Retrieved 2019-07-21.
  11. "Searching Faces in a Collection". Amazon Rekognition Developer Guide. Retrieved 2019-07-21.
  12. "Amazon's facial-recognition technology is supercharging Washington County police". Oregon Live. Retrieved 2019-07-21.
  13. "Amazon Rekognition Customers". AWS. Retrieved 2019-07-21.
  14. Glaser, April (July 19, 2019). "How to Not Build a Panopticon". Slate. Retrieved August 27, 2019.
  15. Harwell, Drew (April 30, 2019). "Oregon became a testing ground for Amazon's facial-recognition policing. But what if Rekognition gets it wrong?". The Washington Post. Retrieved August 27, 2019.
  16. Pasternack, Alex (April 4, 2019). "Amazon says face recognition fears are "insignificant." The SEC disagrees". Fast Company. Retrieved August 27, 2019.
  17. "Amazon Rekognition Improves Accuracy of Real-Time Face Recognition and Verification". AWS. 2018-04-02. Retrieved 2019-07-21.
  18. Brandom, Russell (2018-05-22). "Amazon is selling police departments a real-time facial recognition system". The Verge. Retrieved 2019-07-21.
  19. Statt, Nick (2019-07-18). "Orlando police once again ditch Amazon's facial recognition software". The Verge. Retrieved 2019-07-21.
  20. Zhou, Marrian (June 26, 2018). "Orlando stops using Amazon's controversial facial recognition tech". CNET. Retrieved August 27, 2019.
  21. Keane, Sean (June 22, 2018). "Amazon employees protest sale of face recognition software to police". CNET. Retrieved August 27, 2019.
  22. Singh Guliani, Neema (October 24, 2018). "Amazon Met With ICE Officials to Market Its Facial Recognition Product". ACLU. Retrieved August 27, 2019.
  23. Statt, Nick (November 8, 2018). "Amazon told employees it would continue to sell facial recognition software to law enforcement". The Verge. Retrieved August 27, 2019.
  24. Day, Matt (October 23, 2018). "Amazon Officials Pitched Their Facial Recognition Software to ICE". The Seattle Times. Retrieved August 27, 2019.
  25. Boyce, Jasmin (December 1, 2018). "Lawmakers demand answers from Amazon on facial recognition tech". NBC News. Retrieved August 27, 2019.
  26. Crist, Ry (March 19, 2019). "Amazon's Rekognition software lets cops track faces: Here's what you need to know". CNET. Retrieved August 27, 2019.
  27. Lacy, Lisa (February 19, 2019). "Amazon Rekognition May Finally Be Audited and Ranked Alongside Other Vendors". Adweek. Retrieved August 27, 2019.
  28. Hale, Kori (March 12, 2019). "Auditing Amazon's 'Rekognition' A.I. Could Remove Bias". Forbes. Retrieved August 27, 2019.
  29. Singer, Natasha (May 5, 2019). "Amazon Faces Investor Pressure Over Facial Recognition". The New York Times. Retrieved August 27, 2019.
  30. Whittaker, Zack (May 20, 2019). "Amazon under greater shareholder pressure to limit sale of facial recognition tech to the government". TechCrunch. Retrieved August 27, 2019.
  31. Dastin, Jeffrey (May 24, 2019). "Amazon facial recognition ban won just 2% of shareholder vote". Reuters. Retrieved August 27, 2019.
  32. Wehner, Mike (August 14, 2019). "Amazon's facial recognition system flags dozens of California lawmakers as criminals". BGR. Retrieved August 27, 2019.
  33. Protalinski, Emil (August 16, 2019). "ProBeat: Breakthrough or BS, Amazon's Rekognition is dangerous". VentureBeat. Retrieved August 27, 2019.
  34. Menegus, Bryan (August 13, 2019). "Amazon Rekognition Can Now Identify the Emotion It Provokes in Rational People". Gizmodo. Retrieved August 27, 2019.
  35. Crowe, Michael (August 15, 2019). "Amazon says facial recognition can detect fear, raising concern for some privacy advocates". King5. Retrieved August 27, 2019.
  36. Mihalcik, Carrie (August 15, 2019). "Amazon's Rekognition software can now spot fear". CNET. Retrieved August 27, 2019.
  37. "We are implementing a one-year moratorium on police use of Rekognition". June 10, 2020. Retrieved June 19, 2020.
  38. mbracken (2024-01-25). "Justice Department discloses FBI project with Amazon Rekognition tool". FedScoop. Retrieved 2024-04-12.
  39. "Artificial intelligence | Digital Watch Observatory". Retrieved 2024-04-17.
  40. Buolamwini, Joy; Gebru, Timnit (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" (PDF). Proceedings of Machine Learning Research.
  41. Quach, Katyanna (2018-02-13). "Facial recognition software easily IDs white men, but error rates soar for black women". The Register. Retrieved 2019-07-21.
  42. Raji, Inioluwa Deborah; Buolamwini, Joy (2019-01-27). "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products" (PDF). AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.
  43. Wiggers, Kyle (2019-01-24). "MIT researchers: Amazon's Rekognition shows gender and ethnic bias (updated)". VentureBeat. Retrieved 2019-07-21.