| Developer(s) | Amazon, Amazon Web Services |
| --- | --- |
| Initial release | 30 November 2016 [1] |
| Type | Software as a service |
| Website | aws |
Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform launched in 2016. It has been sold to, and used by, a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and the Orlando, Florida, police department, as well as private entities.
Rekognition provides a number of computer vision capabilities, which can be divided into two categories: algorithms that are pre-trained on data collected by Amazon or its partners, and algorithms that a user can train on a custom dataset. As of July 2019, Rekognition's computer vision capabilities included face detection and analysis, face search and comparison, text detection, and celebrity recognition. [1] [2]
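As an illustration of the pre-trained side of the API, the sketch below calls Rekognition's label-detection operation through the boto3 SDK. This is a minimal sketch, assuming AWS credentials are already configured; the bucket and image names are hypothetical placeholders.

```python
import boto3

# Create a Rekognition client (assumes AWS credentials are configured,
# e.g. via environment variables or ~/.aws/credentials).
client = boto3.client("rekognition", region_name="us-east-1")

# Detect labels (objects, scenes) in an image stored in S3.
# "my-example-bucket" and "street-scene.jpg" are hypothetical placeholders.
response = client.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "street-scene.jpg"}},
    MaxLabels=10,        # return at most 10 labels
    MinConfidence=80.0,  # drop labels below 80% confidence
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```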
In late 2017, the Washington County, Oregon Sheriff's Office began using Rekognition to identify suspects' faces. Rekognition was marketed as a general-purpose computer vision tool, and an engineer working for Washington County decided to use the tool for facial analysis of suspects. [12] [13] Rekognition was offered to the department for free, [14] and Washington County became the first US law enforcement agency known to use Rekognition. In 2018, the agency logged over 1,000 facial searches. According to the Washington Post, by 2019 the county was paying about $7 a month for all of its searches. [15] The relationship was unknown to the public until May 2018. [14] In 2018, Rekognition was also used to help identify celebrities during a royal wedding telecast. [16]
In April 2018, it was reported that FamilySearch was using Rekognition to enable its users to "see which of their ancestors they most resemble based on family photographs". [17] In early 2018, the FBI also began using it in a pilot program for analyzing video surveillance. [16]
In May 2018, the ACLU reported that Orlando, Florida was running a pilot using Rekognition for facial analysis in law enforcement; [18] that pilot ended in July 2019. [19] Following the report, [20] [21] Gizmodo reported on June 22, 2018, that Amazon workers had written a letter to CEO Jeff Bezos requesting that he stop selling Rekognition to US law enforcement, particularly ICE and Homeland Security. [21] The ACLU also sent a letter to Bezos. [20] On June 26, 2018, it was reported that the Orlando police force had stopped using Rekognition after its trial contract expired, while reserving the right to use it in the future. [20] The Orlando Police Department said that it had "never gotten to the point to test images" due to old infrastructure and low bandwidth. [14]
In July 2018, the ACLU released a test showing that Rekognition had falsely matched 28 members of Congress with mugshot photos, with the false matches disproportionately involving members of color. Twenty-five House members afterwards sent a letter to Bezos expressing concern about Rekognition. [22] Amazon responded that the test had been run at Rekognition's default 80 percent confidence threshold, while it recommended that law enforcement only use matches rated at 99 percent confidence. [23] The Washington Post reported that the Oregon agency instead has officers pick a "best of five" result rather than adhering to that recommendation. [15]
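The 80 percent versus 99 percent dispute comes down to a single parameter in the face-search API. The sketch below shows how a caller would enforce Amazon's recommended threshold via boto3; the collection ID and S3 object names are hypothetical placeholders.

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Search an indexed face collection for matches to the face in a probe
# image. "suspects-collection" and the S3 object are hypothetical.
response = client.search_faces_by_image(
    CollectionId="suspects-collection",
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "probe.jpg"}},
    # Amazon's guidance for law enforcement: only act on matches at 99%
    # similarity or higher. The service default is 80%.
    FaceMatchThreshold=99.0,
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    print(match["Face"]["FaceId"], f'{match["Similarity"]:.2f}%')
```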
In September 2018, it was reported that Mapillary was using Rekognition to read the text on parking signs (e.g. no stopping, no parking, or specific parking hours) in cities. [9]
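Reading sign text of this kind maps onto Rekognition's text-detection operation. A minimal sketch, with a hypothetical S3 object name:

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Detect text (e.g. parking-sign wording) in an image. The S3 object
# below is a hypothetical placeholder.
response = client.detect_text(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "parking-sign.jpg"}}
)

# Rekognition returns both LINE and WORD detections; keep whole lines.
for detection in response["TextDetections"]:
    if detection["Type"] == "LINE":
        print(detection["DetectedText"], f'({detection["Confidence"]:.1f}%)')
```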
In October 2018, it was reported that Amazon had, earlier that year, pitched Rekognition to U.S. Immigration and Customs Enforcement. [22] [24] Amazon defended government use of Rekognition. [23]
On December 1, 2018, it was reported that 8 Democratic lawmakers had said in a letter that Amazon had "failed to provide sufficient answers" about Rekognition, writing that they had "serious concerns that this type of product has significant accuracy issues, places disproportionate burdens on communities of color, and could stifle Americans' willingness to exercise their First Amendment rights in public." [25]
In January 2019, MIT researchers published a peer-reviewed study finding that Rekognition had more difficulty identifying dark-skinned women than competing services from IBM and Microsoft did. [16] In the study, Rekognition misidentified darker-skinned women as men 31% of the time, but made no such mistakes for light-skinned men. [14] Amazon said the report rested on "misinterpreted results" obtained with an improper "default confidence threshold." [16]
In January 2019, Amazon's shareholders "urged Amazon to stop selling Rekognition software to law enforcement agencies." Amazon responded by defending its sale of Rekognition, but supported new federal oversight and guidelines to "make sure facial recognition technology cannot be used to discriminate." [26] In February 2019, it was reported that Amazon was collaborating with the National Institute of Standards and Technology (NIST) on developing standardized tests to improve accuracy and reduce bias in facial recognition. [27] [28]
In March 2019, a group of prominent AI researchers sent an open letter with around 50 signatures to Amazon, criticizing the sale of Rekognition to law enforcement. [16] [26]
In April 2019, the Securities and Exchange Commission told Amazon that it had to let shareholders vote on two proposals seeking to limit Rekognition. Amazon had argued that the proposals concerned an "insignificant public policy issue for the Company" unrelated to Amazon's ordinary business, but its appeal was denied. [16] The vote was set for May. [15] [29] The first proposal had been put forward by shareholders. [30] On May 24, 2019, 2.4% of shareholders voted to stop selling Rekognition to government agencies, while a second proposal calling for a study into Rekognition and civil rights received 27.5% support. [31]
In August 2019, the ACLU again tested Rekognition on members of government, with 26 of 120 lawmakers in California flagged as matches to mugshots. Amazon stated that the ACLU was "misusing" the software by not discarding results that fell below Amazon's recommended confidence threshold of 99%. [32] By August 2019, there had been protests against ICE's use of Rekognition to surveil immigrants. [33]
In March 2019, Amazon announced a Rekognition update that would improve emotion detection, [15] and in August 2019, "fear" was added to the emotions that Rekognition could detect. [34] [35] [36]
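Emotion predictions are returned as part of Rekognition's face-analysis operation. The sketch below requests the full attribute set and prints each face's emotion scores, which include the "FEAR" type added in August 2019; the S3 object name is a hypothetical placeholder.

```python
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

# Analyze faces, requesting the full attribute set (which includes
# emotion predictions). The S3 object is a hypothetical placeholder.
response = client.detect_faces(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "portrait.jpg"}},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    # Each face carries a list of emotion types ("HAPPY", "FEAR", ...)
    # with confidence scores; sort to show the strongest first.
    emotions = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)
    for emotion in emotions:
        print(f'{emotion["Type"]}: {emotion["Confidence"]:.1f}%')
```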
In June 2020, Amazon announced it was implementing a one-year moratorium on police use of Rekognition, in response to the George Floyd protests. [37]
The Department of Justice later disclosed that the FBI was initiating use of Amazon Rekognition. [38] The DOJ's AI inventory revealed that the FBI's "Project Tyr" aims to customize Rekognition to identify nudity, weapons, explosives, and other information in lawfully acquired media. [39]
In 2018, MIT researchers Joy Buolamwini and Timnit Gebru published a study called Gender Shades. [40] [41] In this study, a set of images was collected, and the faces in the images were labeled with face position, gender, and skin tone information. The images were run through SaaS facial recognition platforms from Face++ (developed by Megvii), IBM, and Microsoft. On all three platforms, the classifiers performed best on male faces (with error rates on female faces 8.1% to 20.6% higher than on male faces) and worst on dark-skinned female faces (with error rates ranging from 20.8% to 34.7%). The authors hypothesized that this discrepancy is due principally to Megvii, IBM, and Microsoft having more light-skinned males than dark-skinned females in their training data, i.e. dataset bias.
In January 2019, researchers Inioluwa Deborah Raji and Joy Buolamwini published a follow-up paper that repeated the experiment a year later on the latest versions of the same three SaaS facial recognition platforms, plus two additional platforms: Kairos and Amazon Rekognition. [42] [43] While the systems' overall error rates had improved over the previous year, all five systems again performed better on male faces than on dark-skinned female faces.
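The core of the Gender Shades methodology is disaggregated evaluation: computing error rates separately for each demographic subgroup rather than a single aggregate score. Below is a minimal, self-contained sketch of that bookkeeping; the records are illustrative placeholders, not the study's data.

```python
from collections import defaultdict

# Each record: (subgroup label, predicted gender, true gender).
# These records are illustrative placeholders, not study data.
records = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

errors = defaultdict(int)
totals = defaultdict(int)

for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# Report the error rate for each subgroup separately; this is what
# exposes gaps that a single aggregate accuracy number would hide.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.1%} error rate ({totals[group]} samples)")
```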
A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.
Automatic number-plate recognition is a technology that uses optical character recognition on images to read vehicle registration plates to create vehicle location data. It can use existing closed-circuit television, road-rule enforcement cameras, or cameras specifically designed for the task. ANPR is used by police forces around the world for law enforcement purposes, including checking if a vehicle is registered or licensed. It is also used for electronic toll collection on pay-per-use roads and as a method of cataloguing the movements of traffic, for example by highways agencies.
Ring LLC is a manufacturer of home security and smart home devices owned by Amazon. It manufactures a titular line of smart doorbells, home security cameras, and alarm systems. It also operates Neighbors, a social network that allows users to discuss local safety and security issues, and share footage captured with Ring products. Via Neighbors, Ring may also provide footage and data to law enforcement agencies to assist in investigations.
IDEMIA is a multinational technology company headquartered in Courbevoie, France. It provides identity-related security services, and sells facial recognition and other biometric identification products and software to private companies and governments.
In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois. Predictive policing refers to the usage of mathematical, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime.
Mass surveillance in the People's Republic of China (PRC) is the network of monitoring systems used by the Chinese central government to monitor Chinese citizens. It is primarily conducted through the government, although corporate surveillance in connection with the Chinese government has been reported to occur. China monitors its citizens through Internet surveillance, camera surveillance, and through other digital technologies. It has become increasingly widespread and grown in sophistication under General Secretary of the Chinese Communist Party (CCP) Xi Jinping's administration.
Identity-based security is a type of security that focuses on access to digital information or services based on the authenticated identity of an entity. It ensures that the users and services of these digital resources are entitled to what they receive. The most common form of identity-based security involves the login of an account with a username and password. However, more recent technologies include fingerprint and facial recognition.
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. The Facebook Research team has stated that the DeepFace method reaches an accuracy of 97.35% ± 0.25% on the Labeled Faces in the Wild (LFW) data set, where human beings score 97.53%, meaning that DeepFace sometimes outperforms humans. As a result of growing societal concerns, Meta announced plans to shut down the Facebook facial recognition system and delete the face scan data of more than one billion users, one of the largest shifts in facial recognition usage in the technology's history. Facebook planned to delete more than one billion facial recognition templates, which are digital scans of facial features, by December 2021. However, it did not plan to eliminate DeepFace, the software that powers the facial recognition system, and according to a Meta spokesperson the company has not ruled out incorporating facial recognition technology into future products.
FindFace is a face recognition technology developed by the Russian company NtechLab that specializes in neural network tools. The company provides a line of services for the state and various business sectors based on FindFace algorithm. Previously, the technology was used as a web service that helped to find people on the VK social network using their photos.
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Joy Adowaa Buolamwini is a Canadian-American computer scientist and digital activist based at the MIT Media Lab. She founded the Algorithmic Justice League (AJL), an organization that works to challenge bias in decision-making software, using art, advocacy, and research to highlight the social implications and harms of artificial intelligence (AI).
Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining. She is an advocate for diversity in technology and co-founder of Black in AI, a community of Black researchers working in AI. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).
The New York City Police Department (NYPD) actively monitors public activity in New York City, New York, United States. Historically, the NYPD has used surveillance for a range of purposes, from crime prevention and counter-terrorism to nefarious or controversial uses such as monitoring political demonstrations, activities, and protests, and even entire ethnic and religious groups.
Andrea Frome is an American computer scientist who works in computer vision and machine learning.
Clearview AI is an American facial recognition company, providing software to law enforcement and government agencies and other organizations. The company's algorithm matches faces to a database of more than 20 billion images collected from the Internet, including social media applications. Founded by Hoan Ton-That and Richard Schwartz, the company maintained a low profile until late 2019, when its usage by law enforcement was reported. U.S. police have used the software to apprehend suspected criminals. Clearview's practices have led to fines by EU nations for violating privacy laws and investigations in the U.S. and other countries as well.
DataWorks Plus LLC is a privately held biometrics systems integrator based in Greenville, South Carolina. The company started in 2000 and originally focused on mugshot management, adding facial recognition beginning in 2005. Brad Bylenga is the CEO, and Todd Pastorini is the EVP and GM. Usage of the technology by police departments has resulted in wrongful arrests.
Coded Bias is an American documentary film directed by Shalini Kantayya that premiered at the 2020 Sundance Film Festival. The film includes contributions from researchers Joy Buolamwini, Deborah Raji, Meredith Broussard, Cathy O'Neil, Zeynep Tufekci, Safiya Noble, Timnit Gebru, Virginia Eubanks, and Silkie Carlo, among others.
Hyper-surveillance is the intricate surveillance of an entire population, or a substantial fraction of one, that uses technology and security breaches to access information about the monitored group. As reliance on the internet economy grows, smarter technology with greater surveillance and snooping capabilities means workers face increased surveillance in the workplace. Hyper-surveillance involves highly targeted and intricate observation and monitoring of an individual, a group of people, or a faction.
The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness regarding the use of artificial intelligence (AI) and the harms and biases it can pose to society. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action to mitigate the creation and deployment of biased AI systems. In 2021, Fast Company named AJL one of the 10 most innovative AI companies in the world.
Inioluwa Deborah Raji is a Nigerian-Canadian computer scientist and activist who works on algorithmic bias, AI accountability, and algorithmic auditing. Raji has previously worked with Joy Buolamwini, Timnit Gebru, and the Algorithmic Justice League on researching gender and racial bias in facial recognition technology. She has also worked with Google's Ethical AI team and has been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on how to operationalize ethical considerations in machine learning engineering practice. A current Mozilla fellow, she has been recognized by MIT Technology Review and Forbes as one of the world's top young innovators.