Company type | Private |
---|---|
Industry | Facial recognition, software |
Founded | 2017[1] |
Founders | Hoan Ton-That, Richard Schwartz |
Headquarters | Manhattan, New York City, United States |
Areas served | Global (excluding EU, UK, New Zealand, Canada, Australia) |
Products | Clearview AI Software, Clearview AI Search Engine |
Website | clearview |
Clearview AI, Inc. is an American facial recognition company that provides software primarily to law enforcement and other government agencies.[2] The company's algorithm matches faces against a database of more than 20 billion images collected from the Internet, including from social media applications.[1] Founded by Hoan Ton-That and Richard Schwartz, the company maintained a low profile until late 2019, when its use by law enforcement was first reported.[3]
Use of the facial recognition tool has been controversial. Several U.S. senators have expressed concern about privacy rights, and the American Civil Liberties Union (ACLU) has sued the company on several occasions for violating privacy laws. U.S. police have used the software to apprehend suspected criminals.[4][5][6] Clearview's practices have led to fines and bans by EU nations for violating privacy laws, and to investigations in the U.S. and other countries.[7][8][9] In 2022, Clearview reached a settlement with the ACLU in which it agreed to restrict U.S. sales of its facial recognition services to government entities.
Clearview AI was the victim of a data breach in 2020 that exposed its customer list, revealing that 2,200 organizations in 27 countries had accounts with facial recognition searches.[10]
Clearview AI was founded in 2017 by Hoan Ton-That and Richard Schwartz, who transferred to it the assets of SmartCheckr, a company the pair had founded earlier that year alongside Charles C. Johnson.[11][3] The company was established in Manhattan after the founders met at the Manhattan Institute.[1] It initially raised $8.4 million from investors including Kirenaga Partners and Peter Thiel.[12] A second fundraising round in 2020 collected $8.625 million in exchange for equity; the company did not disclose the investors in that round. In 2021, another round raised $30 million.[13] Potential investors in the Series A round were given early access to Clearview's app. Billionaire John Catsimatidis used it to identify someone his daughter dated and piloted it at one of his Gristedes grocery markets in New York City to identify shoplifters.[14][15]
In October 2020, a company spokesperson claimed that Clearview AI's valuation was more than $100 million. [16] The company announced its first chief strategy officer, chief revenue officer, and chief marketing officer in May 2021. Devesh Ashra, a former deputy assistant secretary with the United States Department of the Treasury, became its chief strategy officer. Chris Metaxas, a former executive at LexisNexis Risk Solutions, became its chief revenue officer. Susan Crandall, a former marketing executive at LexisNexis Risk Solutions and Motorola Solutions, became its chief marketing officer. [17] Devesh Ashra and Chris Metaxas left the company in 2021. [13] In August 2021, Clearview AI announced the formation of an advisory board including Raymond Kelly, Richard A. Clarke, Rudy Washington, Floyd Abrams, Lee S. Wolosky, and Owen West. [18] The company claimed to have scraped more than 10 billion images as of October 2021. [19] In May 2022, Clearview AI announced that it would be expanding sales of its facial recognition software to schools and lending platforms outside the U.S. [20]
Clearview AI hired a notable legal team to defend the company against several lawsuits that threatened its business model. Its legal staff includes Tor Ekeland, Lee S. Wolosky, Paul Clement, Floyd Abrams, and Jack Mulcaire.[21][1][22] Abrams stated that the issue of privacy rights versus First Amendment free speech could reach the Supreme Court.[21]
Clearview AI provides facial recognition software with which users can upload an image of a face and match it against the company's database.[23] The software then supplies links to where the "match" can be found online.[24] The company operated in near secrecy until The New York Times published an investigative report titled "The Secretive Company That Might End Privacy as We Know It" in January 2020. It had maintained this secrecy by publishing false information about the company's location and employees and by erasing the founders' social media presence.[3][1][25] Citing the article, over 40 tech and civil rights organizations sent a letter to the Privacy and Civil Liberties Oversight Board (PCLOB) and four congressional committees, outlining their concerns with facial recognition and Clearview and asking the PCLOB to suspend use of facial recognition.[26][27][28][1]
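In general terms, a face search engine of this kind reduces each indexed image to a numeric embedding vector and returns the stored images whose embeddings lie closest to the query's. The following toy sketch illustrates that nearest-neighbor lookup using random vectors in place of real face embeddings and hypothetical `example.com` URLs; it is an illustration of the general technique, not Clearview's actual implementation.

```python
import numpy as np

# Toy illustration: each indexed web image is represented by an embedding
# vector; a search finds the stored embeddings most similar to the query
# and returns the source URLs associated with them.
rng = np.random.default_rng(0)

# Hypothetical database: one 128-dimensional embedding per indexed image.
db_embeddings = rng.normal(size=(1000, 128))
db_urls = [f"https://example.com/photo/{i}" for i in range(1000)]

def search(query, top_k=3):
    """Return (url, score) pairs for the embeddings most similar to the query."""
    # Cosine similarity = dot product of L2-normalised vectors.
    db = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = db @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [(db_urls[i], float(scores[i])) for i in best]

# A query very close to entry 42 retrieves entry 42 as the top match.
query = db_embeddings[42] + rng.normal(scale=0.01, size=128)
results = search(query)
```

Real systems replace the random vectors with embeddings from a trained face recognition network and use approximate nearest-neighbor indexes to scale to billions of images.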
Clearview served to accelerate a global debate on the regulation of facial recognition technology by governments and law enforcement.[29][30] Law enforcement officers have stated that Clearview's facial recognition is far superior to previously used technology at identifying perpetrators from any angle.[31] After discovering that Clearview AI was scraping images from its site, Twitter sent a cease-and-desist letter insisting that Clearview remove all images, as scraping violates Twitter's policies.[32] On February 5 and 6, 2020, Google, YouTube, Facebook, and Venmo sent cease-and-desist letters citing violations of their own policies.[33][34] Ton-That responded in an interview that there is a First Amendment right to access public data. He later stated that Clearview has scraped over 50 billion images from across the web.[29][35][36]
The New Zealand Police used the software in a trial after being approached by Clearview's Marko Jukic in January 2020. Jukic said it would have helped identify the Christchurch mosque shooter had the technology been available. The use of Clearview's software raised strong objections once exposed, as neither the users' supervisors nor the Privacy Commissioner was aware of or had approved it. After it was revealed by RNZ, Justice Minister Andrew Little stated, "It clearly wasn't endorsed, from the senior police hierarchy, and it clearly didn't get the endorsement from the [Police] Minister... that is a matter of concern."[37][38]
Clearview's technology was used to identify an individual at a May 30, 2020 George Floyd police violence protest in Miami, Florida. Miami's WTVJ confirmed its use; the arrest report said only that she was "identified through investigative means", and even the defendant's attorney did not know Clearview had been involved. Ton-That confirmed its use, noting that it was not being used for surveillance but only to investigate a crime.[39]
In December 2020, the ACLU of Washington sent a letter to Seattle mayor Jenny Durkan, asking her to ban the Seattle Police Department from using Clearview AI. [40] The letter cited public records retrieved by a local blogger, which showed one officer signing up for and repeatedly logging into the service, as well as corresponding with a company representative. While the ACLU letter raised concerns that the officer's usage violated the Seattle Surveillance Ordinance, an auditor at the City of Seattle Office of the Inspector General argued that the ordinance was designed to address the usage of surveillance technologies by the Department itself, not by an officer without the Department's knowledge. [41]
After the January 6 riot at the United States Capitol, the Oxford Police Department in Alabama used Clearview's software on images that the Federal Bureau of Investigation had posted in its public request for suspect information, generating leads on people present during the riot. Photo matches and information were sent to the FBI, which declined to comment on its techniques.[5]
In March 2022, Ukraine's Ministry of Defence began using Clearview AI's facial recognition technology "to uncover Russian assailants, combat misinformation and identify the dead". Ton-That also claimed that Ukraine's MoD has "more than 2 billion images from the Russian social media service VKontakte at its disposal". [42] Ukrainian government agencies used Clearview over 5,000 times as of April 2022. [43] [44] The company provided these accounts and searches for free. [45]
In a Florida case, Clearview's technology was used by defense attorneys to successfully locate a witness, resulting in the dismissal of vehicular homicide charges against the defendant. [46]
Law enforcement use of the facial recognition software grew rapidly in the United States. In 2022 more than one million searches were conducted. In 2023, this usage doubled. [36]
Clearview AI encouraged user adoption by offering free trials to individual law enforcement officers rather than to departments as a whole. The company also used its significant connections to the Republican Party to reach police departments.[1][47] In onboarding emails, new users were encouraged to go beyond running one or two searches and to "[s]ee if you can reach 100 searches".[48] During 2020, Clearview sold its facial recognition software for one-tenth the cost of competitors.[3]
Clearview's marketing claimed its facial recognition led to a terrorist arrest; the identification was submitted to the New York Police Department tip line.[49] Clearview also claims to have solved two other New York cases and 40 cold cases, later stating it submitted them to tip lines. The NYPD stated that it has no institutional relationship with Clearview, but its policies do not ban use by individual officers. In 2020, thirty NYPD officers were confirmed to have Clearview accounts.[3] In April 2021, documents obtained by the Legal Aid Society under New York's Freedom of Information Law demonstrated that Clearview had collaborated with the NYPD for years, contrary to past NYPD denials.[50] Clearview met with senior NYPD leadership and entered into a vendor contract with the department.[48] Clearview came under renewed scrutiny for enabling officers to conduct large numbers of searches without formal oversight or approval.[50][48]
The company was sent a cease-and-desist letter from the office of New Jersey Attorney General Gurbir Grewal after it included images of Grewal in a promotional video on its website.[51] Clearview had claimed that its app played a role in a New Jersey police sting. Grewal confirmed the software was used to identify a child predator, but he also banned the use of Clearview in New Jersey. Tor Ekeland, a lawyer for Clearview, confirmed the marketing video was taken down the same day.[4][52]
In March 2020, Clearview pitched its technology to states for use in contact tracing during the COVID-19 pandemic.[53][54] A reporter found that Clearview's search could identify him even while he covered his nose and mouth as a COVID mask would.[45] The idea drew criticism from US senators and other commentators, who argued the crisis was being used to push unreliable tools that violate personal privacy.[55][56]
Contrary to Clearview's initial claims that its service was sold only to law enforcement, a data breach in early 2020 revealed that numerous commercial organizations were on Clearview's customer list. For example, Clearview marketed to private security firms and to casinos. [57] Additionally, Clearview planned expansion to many countries, including authoritarian regimes. [58]
Senator Edward J. Markey wrote to Clearview and Ton-That, stating, "Widespread use of your technology could facilitate dangerous behavior and could effectively destroy individuals' ability to go about their daily lives anonymously." Markey asked Clearview to detail aspects of its business in order to understand these privacy, bias, and security concerns.[32][59] Clearview responded through an attorney, declining to reveal information.[60] In response, Markey wrote a second letter, saying the company's reply was unacceptable and contained dubious claims, and that he was concerned about Clearview "selling its technology to authoritarian regimes" and possibly violating COPPA.[8][61] Markey later wrote a third letter with further concerns, stating "this health crisis cannot justify using unreliable surveillance tools that could undermine our privacy rights." He asked a series of questions about which government entities Clearview had been talking with, in addition to the still-unanswered privacy concerns.[55]
Senator Ron Wyden voiced concerns about Clearview and had meetings with Ton-That cancelled on three occasions. [62] [8]
In April 2021, Time magazine listed Clearview AI as one of the 100 most influential companies of the year. [63]
In October 2021, Clearview submitted its algorithm to one of two facial recognition accuracy tests conducted every few months by the National Institute of Standards and Technology (NIST). Clearview ranked among the top 10 of 300 facial recognition algorithms in a test of accuracy in matching two different photos of the same person. Clearview did not submit to the NIST test for matching an unknown face against a 10 billion image database, which more closely matches the algorithm's intended purpose. This was the first third-party test of the software.[19]
Clearview, at various times throughout 2020, claimed 98.6%, 99.6%, or 100% accuracy. However, these results come from tests conducted by people affiliated with the company, which did not use representative samples of the population.[29][64][65]
In 2021, Clearview announced that it was developing "deblur" and "mask removal" tools to sharpen blurred images and predict the covered part of an individual's face. These tools would be implemented using machine learning models that fill in missing details based on statistical patterns found in other images. Clearview acknowledged that deblurring an image or removing a mask could make errors more frequent, and said the tools would only be used to generate leads for police investigations.[35]
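The underlying idea of such tools is that hidden or degraded pixels are replaced with values predicted from patterns learned on other face images. The toy sketch below makes that idea concrete in the crudest possible way, predicting each masked pixel from the per-pixel average of a hypothetical reference set; real systems use learned generative models, and this is an illustration of the concept, not Clearview's method.

```python
import numpy as np

# Toy sketch: "fill in" pixels hidden by a mask using statistics gathered
# from other images. The per-pixel mean of a reference set stands in for
# a trained generative model.
rng = np.random.default_rng(1)

reference_faces = rng.uniform(size=(500, 32, 32))  # hypothetical reference set
pixel_prior = reference_faces.mean(axis=0)         # per-pixel average "prior"

def fill_masked(image, mask):
    """Replace pixels where mask is True with the prior's prediction."""
    restored = image.copy()
    restored[mask] = pixel_prior[mask]
    return restored

face = rng.uniform(size=(32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[20:, :] = True  # lower half of the face is covered, as by a mask
restored = fill_masked(face, mask)
```

Because the filled-in region is a statistical guess rather than an observation, any identification based on it inherits that uncertainty, which is why such output is suited at most to generating investigative leads.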
Assistant Chief of Police of Miami, Armando Aguilar, said in 2023 that Clearview's AI tool had contributed to the resolution of several murder cases, and that his team had used the technology around 450 times a year. Aguilar emphasized that they do not make arrests based on Clearview's matches alone, and instead use the data as a lead and then proceed via conventional methods of case investigation. [24]
Several cases of mistaken identity using Clearview facial recognition have been documented, but "the lack of data and transparency around police use means the true figure is likely far higher." Ton-That claims the technology has approximately 100% accuracy, and attributes mistakes to potential poor policing practices. Ton-That's claimed accuracy level is based on mugshots and would be affected by the quality of the image uploaded. [24]
Clearview AI experienced a data breach in February 2020 that exposed its list of customers. Clearview's attorney, Tor Ekeland, stated that the security flaw had been corrected.[66] In response to the leaks, the United States House Committee on Science, Space, and Technology sent a letter to the company requesting further insight into its biometric and security practices.[67]
While Clearview's app is supposed to be accessible only to customers, the Android application package and iOS applications were found in unsecured Amazon S3 buckets.[68] The accompanying instructions showed how to load an enterprise (developer) certificate so the app could be installed without being published on the App Store. Clearview's access was suspended for violating Apple's terms of service for developers, and as a result the app was disabled.[69] In addition to application tracking (Google Analytics, Crashlytics), examination of the Android version's source code found references to Google Play Services, requests for precise phone location data, voice search, sharing a free demo account with other users, augmented reality integration with Vuzix, and sending gallery photos or taking photos from the app itself. There were also references to scanning barcodes on a driver's license and to RealWear.[70]
In April 2020, Mossab Hussein of the security firm SpiderSilk discovered that Clearview's source code repositories were exposed due to misconfigured user security settings. The exposure included secret keys and credentials, among them cloud storage and Slack tokens. The compiled and pre-release apps were also accessible, allowing Hussein to run the macOS and iOS apps against Clearview's services. Hussein reported the breach to Clearview but refused to sign the non-disclosure agreement required for Clearview's bug bounty program. Ton-That reacted by calling Hussein's disclosure of the bug an act of extortion. Hussein also found 70,000 videos in one storage bucket, recorded at the entrance of a Rudin Management apartment building.[71]
Clearview also operates a secondary business, Insight Camera, which provides AI-enabled security cameras targeted at "retail, banking and residential buildings". Two customers, the United Federation of Teachers and Rudin Management, have used the technology.[72][73] The website for Insight Camera was taken down following BuzzFeed's investigation into the connection between Clearview AI and Insight Camera.[74]
Following a data leak of Clearview's customer list, BuzzFeed confirmed that 2,200 organizations in 27 countries had accounts with activity. BuzzFeed has the exclusive right to publish this list and has chosen not to publish it in its entirety.[10] Clearview AI claims that at least 600 of these users are police departments, primarily in the U.S. and Canada, though Clearview has expanded to other countries as well.[3] Although the company claims its services are for law enforcement, it has had contracts with Bank of America, Kohl's, and Macy's. Several universities and high schools have run trials with Clearview.[10] The list below highlights particularly notable users.
Clearview AI has had its business model challenged by several lawsuits in multiple jurisdictions. It responded by defending itself, settling in some cases, and exiting several markets.
The company's claim of a First Amendment right to public information has been disputed by privacy lawyers such as Scott Skinner-Thompson and Margot Kaminski, who highlight the problems and precedents surrounding persistent surveillance and anonymity.[34][89] Bill Bratton, former New York City Police Commissioner and executive chairman of Teneo Risk, challenged the privacy concerns and recommended strict procedures for law enforcement usage in an op-ed in the New York Daily News.[90]
After the release of The New York Times' January 2020 article, lawsuits citing violations of privacy and safety laws were filed in Illinois, California, Virginia, and New York.[91] Most of the lawsuits were transferred to New York's Southern District.[92] Two lawsuits were filed in state courts: in Vermont by the attorney general, and in Illinois on behalf of the American Civil Liberties Union (ACLU), which cited a statute that forbids the corporate use of residents' faceprints without explicit consent. Clearview countered that an Illinois law does not apply to a company based in New York.[21]
In response to a class action lawsuit filed in Illinois for violating the Biometric Information Privacy Act (BIPA), Clearview stated in May 2020 that it had instituted a policy to stop working with non-government entities and to remove any photos geolocated in Illinois.[93][94][75] On May 28, 2020, the ACLU and Edelson filed a new suit against Clearview in Illinois under the BIPA.[95][96] Clearview agreed to a settlement in June 2024, offering 23% of the company (valued at $52 million at the time) rather than a cash settlement, which would likely have bankrupted the company.[97]
In May 2022, Clearview agreed to settle the 2020 lawsuit from the ACLU. The settlement prohibited the sale of its facial recognition database to private individuals and businesses. [98]
In the Vermont case, Clearview AI invoked Section 230 immunity. The court denied the use of Section 230 immunity in this case because Vermont's claims were "based on the means by which Clearview acquired the photographs" rather than third party content. [99]
In July 2020, Clearview AI announced that it was exiting the Canadian market amidst joint investigations into the company and the use of its product by police forces.[100] Daniel Therrien, the Privacy Commissioner of Canada, condemned Clearview AI's use of scraped biometric data: "What Clearview does is mass surveillance and it is illegal. It is completely unacceptable for millions of people who will never be implicated in any crime to find themselves continually in a police lineup."[101] In June 2021, Therrien found that the Royal Canadian Mounted Police had broken Canadian privacy law through hundreds of illegal searches using Clearview AI.[102]
In January 2021, Clearview AI's biometric photo database was deemed illegal in the European Union (EU) by the Hamburg Data Protection Authority (DPA), which ordered the deletion of an affected person's biometric data. The authority stated that the General Data Protection Regulation (GDPR) applies even though Clearview AI has no European branch.[103] In March 2020, the authority had requested Clearview AI's customer list, as data protection obligations would also apply to the customers.[104] The data protection advocacy organization NOYB criticized the DPA's decision because it protected only the individual complainant rather than banning the collection of any European resident's photos.[105]
In May 2021, numerous legal complaints were filed against the company in Austria, France, Greece, Italy, and the United Kingdom for violating European privacy laws in its methods of documenting and collecting Internet data.[106] In November 2021, Clearview received a provisional notice from the UK's Information Commissioner's Office (ICO) to stop processing UK residents' data, citing a range of alleged breaches. The company was also notified of a potential fine of approximately $22.6 million. Clearview claimed that the ICO's allegations were factually inaccurate because the company "does not do business in the UK, and does not have any UK customers at this time". The BBC reported on 23 May that the company had been fined "more than £7.5m by the UK's privacy watchdog and told to delete the data of UK residents".[107] Clearview was also ordered to delete all facial recognition data of UK residents. This fine was the fourth of its type placed on Clearview, after similar orders and fines issued by Australia, France, and Italy.[9] However, in October 2023, the fine was overturned on appeal on the grounds that the ICO lacked jurisdiction over acts of foreign governments.[108]
In September 2024, Clearview AI was fined €30.5 million by the Dutch Data Protection Authority (DPA) for constructing what the agency described as an illegal database. [109] The DPA's ruling highlighted that Clearview AI unlawfully collected facial images, including those of Dutch citizens, without obtaining their consent. This practice constitutes a significant violation of the EU's GDPR due to the intrusive nature of facial recognition technology and the lack of transparency regarding the use of individuals' biometric data. [110]
Surveillance is the monitoring of behavior, many activities, or information for the purpose of information gathering, influencing, managing, or directing. This can include observation from a distance by means of electronic equipment, such as closed-circuit television (CCTV), or interception of electronically transmitted information like Internet traffic. It can also include simple technical methods, such as human intelligence gathering and postal interception.
A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.
Center for Democracy & Technology (CDT) is a Washington, D.C.-based 501(c)(3) nonprofit organisation that advocates for digital rights and freedom of expression. CDT seeks to promote legislation that enables individuals to use the internet for purposes of well-intent, while at the same time reducing its potential for harm. It advocates for transparency, accountability, and limiting the collection of personal information.
Automatic number-plate recognition is a technology that uses optical character recognition on images to read vehicle registration plates to create vehicle location data. It can use existing closed-circuit television, road-rule enforcement cameras, or cameras specifically designed for the task. ANPR is used by police forces around the world for law enforcement purposes, including checking if a vehicle is registered or licensed. It is also used for electronic toll collection on pay-per-use roads and as a method of cataloguing the movements of traffic, for example by highways agencies.
Banjo is a Utah-based surveillance software company that claimed to use AI to identify events for public safety agencies. It was founded in 2010 by Damien Patton. The company gained notoriety in 2020 when the State of Utah signed a $20 million contract for their "panopticon" software. In May, the company experienced backlash and suspending of contracts after Patton's membership in the Ku Klux Klan and participation in a drive-by terrorist attack on a synagogue was revealed.
Ring LLC is a manufacturer of home security and smart home devices owned by Amazon. It manufactures a titular line of smart doorbells, home security cameras, and alarm systems. It also operates Neighbors, a social network that allows users to discuss local safety and security issues, and share footage captured with Ring products. Via Neighbors, Ring could also provide footage and data to law enforcement agencies to assist in investigations.
In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois. Predictive policing refers to the usage of mathematical, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime.
Mass surveillance in the People's Republic of China (PRC) is the network of monitoring systems used by the Chinese central government to monitor Chinese citizens. It is primarily conducted through the government, although corporate surveillance in connection with the Chinese government has been reported to occur. China monitors its citizens through Internet surveillance, camera surveillance, and through other digital technologies. It has become increasingly widespread and grown in sophistication under General Secretary of the Chinese Communist Party (CCP) Xi Jinping's administration.
DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. The program employs a nine-layer neural network with over 120 million connection weights and was trained on four million images uploaded by Facebook users. The Facebook Research team has stated that the DeepFace method reaches an accuracy of 97.35% ± 0.25% on Labeled Faces in the Wild (LFW) data set where human beings have 97.53%. This means that DeepFace is sometimes more successful than human beings. As a result of growing societal concerns Meta announced that it plans to shut down Facebook facial recognition system, deleting the face scan data of more than one billion users. This change will represent one of the largest shifts in facial recognition usage in the technology's history. Facebook planned to delete by December 2021 more than one billion facial recognition templates, which are digital scans of facial features. However, it did not plan to eliminate DeepFace which is the software that powers the facial recognition system. The company has also not ruled out incorporating facial recognition technology into future products, according to Meta spokesperson.
Visual Analytics for Sense-making in Criminal Intelligence Analysis (VALCRI) is a software tool that helps investigators to find related or relevant information in several criminal databases. The software uses big data processes to aggregate information from a wide array of different sources and formats and compiles it into visual and readable arrangements for users. It is used by various law enforcement agencies and aims to allow officials to utilize statistical information in their operations and strategy. The project is funded by the European Commission and is led by Professor William Wong at Middlesex University.
The New York City Police Department (NYPD) actively monitors public activity in New York City, New York, United States. Historically, surveillance has been used by the NYPD for a range of purposes, including against crime, counter-terrorism, and also for nefarious or controversial subjects such as monitoring political demonstrations, activities, and protests, and even entire ethnic and religious groups.
Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform that was launched in 2016. It has been sold to, and used by, a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and Orlando, Florida police, as well as private entities.
Hoan Ton-That is an Australian entrepreneur. He is the co-founder and chief executive officer of Clearview AI, a United States-based technology company that creates facial recognition software.
DataWorks Plus LLC is a privately held biometrics systems integrator based in Greenville, South Carolina. The company started in 2000 and originally focused on mugshot management, adding facial recognition in 2005. Brad Bylenga is the CEO, and Todd Pastorini is the EVP and GM. Usage of the technology by police departments has resulted in wrongful arrests.
Meta Platforms Inc., or Meta for short, has faced a number of privacy concerns. These stem partly from the company's revenue model that involves selling information collected about its users for many things including advertisement targeting. Meta Platforms Inc. has also been a part of many data breaches that have occurred within the company. These issues and others are further described including user data concerns, vulnerabilities in the company's platform, investigations by pressure groups and government agencies, and even issues with students. In addition, employers and other organizations/individuals have been known to use Meta Platforms Inc. for their own purposes. As a result, individuals’ identities and private information have sometimes been compromised without their permission. In response to these growing privacy concerns, some pressure groups and government agencies have increasingly asserted the users’ right to privacy and to be able to control their personal data.
Hyper-surveillance is the intricate surveillance of an entire or a substantial fraction of a population in order to monitor that group of citizens that specifically utilizes technology and security breaches to access information. As the reliance on the internet economy grows, smarter technology with higher surveillance concerns and snooping means workers to have increased surveillance at their workplace. Hyper surveillance is highly targeted and intricate observation and monitoring among an individual, group of people, or faction.
The Georgetown Center on Privacy and Technology is a think tank at Georgetown University in Washington, DC dedicated to the study of privacy and technology. Established in 2014, it is housed within the Georgetown University Law Center. The goal of the Center is to conduct research and empower legal and legislative advocacy around issues of privacy and surveillance, with a focus on how such issues affect groups of different social classes and races. In May 2022, the Center's founding director Alvaro Bedoya was confirmed as a commissioner of the United States Federal Trade Commission.
Fawkes is a facial image cloaking software created by the SAND Laboratory of the University of Chicago. It is a free tool that is available as a standalone executable. The software uses artificial intelligence to make small alterations to images that prevent them from being recognized and matched by facial recognition software. The goal of the Fawkes program is to enable individuals to protect their own privacy from large-scale data collection. As of May 2022, Fawkes v1.0 had surpassed 840,000 downloads. Eventually, the SAND Laboratory hopes to implement the software on a larger scale to combat unwarranted facial recognition.
Mass surveillance in Iran looks into Iranian government surveillance of its citizens.
An anti-facial recognition mask is a mask which can be worn to confuse facial recognition software. This type of mask is designed to thwart the surveillance of people by confusing the biometric data. There are many different types of masks which are used to trick facial recognition technology.