Algorithmic Justice League

Abbreviation: AJL
Formation: 2016
Founder: Joy Buolamwini
Purpose: AI activism
Location: Cambridge, Massachusetts
Website: www.ajl.org

The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to raise awareness of how artificial intelligence (AI) is used in society and of the harms and biases that AI systems can introduce. [1] The AJL has run open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action against the creation and deployment of biased AI systems. In 2021, Fast Company named AJL one of the 10 most innovative AI companies in the world. [2] [3]


History

Buolamwini founded the Algorithmic Justice League in 2016 as a graduate student in the MIT Media Lab. While experimenting with facial detection software in her research, she found that the software could not detect her "highly melanated" face until she donned a white mask. [4] After this incident, Buolamwini was inspired to found the AJL to draw public attention to the existence of bias in artificial intelligence and the threats it can pose to civil rights. [4] Early AJL campaigns focused primarily on bias in facial recognition software; recent campaigns have dealt more broadly with questions of equitability and accountability in AI, including algorithmic bias, algorithmic decision-making, algorithmic governance, and algorithmic auditing.

The AJL is part of a wider community of organizations working toward similar goals, including Data & Society, Data for Black Lives, the Distributed Artificial Intelligence Research Institute (DAIR), and Fight for the Future. [5] [6] [7]

Notable work

Facial recognition

AJL founder Buolamwini collaborated with AI ethicist Timnit Gebru to release a 2018 study on racial and gender bias in facial recognition algorithms used by commercial systems from Microsoft, IBM, and Face++. Their research, entitled "Gender Shades", determined that machine learning models released by IBM and Microsoft were less accurate when analyzing dark-skinned and feminine faces compared to performance on light-skinned and masculine faces. [8] [9] [10] The "Gender Shades" paper was accompanied by the launch of the Safe Face Pledge, an initiative designed with the Georgetown Center on Privacy & Technology that urged technology organizations and governments to prohibit lethal use of facial recognition technologies. [11] The Gender Shades project and subsequent advocacy undertaken by AJL and similar groups led multiple tech companies, including Amazon and IBM, to address biases in the development of their algorithms and even temporarily ban the use of their products by police in 2020. [12] [13]
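The audit methodology behind studies like "Gender Shades" is, at its core, disaggregated evaluation: a model's accuracy is computed separately for each demographic subgroup rather than as a single aggregate number, which is what exposes performance gaps. A minimal sketch of that idea, using hypothetical subgroup labels and records rather than the study's actual data:

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute per-subgroup accuracy from (subgroup, predicted, actual) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit records: (subgroup, predicted label, true label)
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassification
    ("darker-skinned female", "female", "female"),
]
print(disaggregated_accuracy(records))
# → {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

An aggregate accuracy over these four records would read 75% and hide the disparity; reporting per-subgroup numbers is what made the gaps in the commercial classifiers visible.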

Buolamwini and AJL were featured in the 2020 Netflix documentary Coded Bias, which premiered at the Sundance Film Festival. [2] [14] [15] This documentary focused on the AJL's research and advocacy efforts to spread awareness of algorithmic bias in facial recognition systems. [4] [15]

A research collaboration involving AJL released a white paper in May 2020 calling for the creation of a new United States federal government office to regulate the development and deployment of facial recognition technologies. [16] The white paper proposed that creating a new federal government office for this area would help reduce the risks of mass surveillance and bias posed by facial recognition technologies towards vulnerable populations. [17]

Bias in speech recognition

The AJL has run initiatives to increase public awareness of algorithmic bias and inequities in the performance of AI systems for speech and language modeling across gender and racial populations. The AJL's work in this space centers on highlighting gender and racial disparities in the performance of commercial speech recognition and natural language processing systems, which have been shown to underperform for racial minorities and to reinforce gender stereotypes. [18] [19] [20]

In March 2020, the AJL released Voicing Erasure, a spoken-word artistic piece that drew public attention to racial bias in automatic speech recognition (ASR) systems. [21] [22] The piece was performed by numerous female and non-binary researchers in the field, including Ruha Benjamin, Sasha Costanza-Chock, Safiya Noble, and Kimberlé Crenshaw. [22] [21] The AJL based Voicing Erasure on a 2020 PNAS paper, "Racial disparities in automated speech recognition", which identified racial disparities in the performance of five commercial ASR systems. [20]

Algorithmic governance

In 2019, Buolamwini represented the AJL at a congressional hearing of the US House Committee on Science, Space, and Technology to discuss the commercial and governmental applications of facial recognition technologies. [23] [24] Buolamwini served as a witness at the hearing, spoke on the underperformance of facial recognition technologies in identifying people with darker skin and feminine features, and supported her position with research from the AJL project "Gender Shades". [24] [25] [26]

In January 2022, the AJL collaborated with Fight for the Future and the Electronic Privacy Information Center to release an online petition called DumpID.me, calling on the IRS to halt its use of ID.me, a facial recognition service the agency required users to pass through when logging in. [7] The AJL and other organizations sent letters to legislators urging them to press the IRS to stop the program. In February 2022, the IRS agreed to halt the program and stop using facial recognition technology. [27] The AJL has since shifted its efforts to convincing other government agencies to stop using facial recognition technology; as of March 2022, the DumpID.me petition had pivoted to stopping the use of ID.me across all government agencies. [28]

Olay Decode the Bias campaign

In September 2021, Olay collaborated with the AJL and O'Neil Risk Consulting & Algorithmic Auditing (ORCAA) on the Decode the Bias campaign, which included an audit exploring whether the Olay Skin Advisor (OSA) system was biased against women of color. [29] The AJL chose to collaborate with Olay because of Olay's commitment to obtaining customer consent before using their selfies and skin data in the audit. [30] The AJL and ORCAA audit revealed that the OSA system's performance varied with participants' skin color and age. [30] The system was more accurate for participants with lighter skin tones, as measured on the Fitzpatrick Skin Type and individual typology angle skin classification scales, and for participants aged 30–39. [31] Olay has since taken steps to internally audit and mitigate the bias of the OSA system. [30] Olay has also funded 1,000 girls to attend the Black Girls Code camp, to encourage African-American girls to pursue STEM careers. [30]

CRASH project

In July 2020, the AJL launched the Community Reporting of Algorithmic System Harms (CRASH) Project. [32] The project began in 2019, when Buolamwini and digital security researcher Camille François met at the Bellagio Center Residency Program hosted by The Rockefeller Foundation; it has since also been co-led by MIT professor and AJL research director Sasha Costanza-Chock. The CRASH project focuses on creating a framework for bug-bounty programs (BBPs) that would incentivize individuals to uncover and report instances of algorithmic bias in AI technologies. [32] [33] After conducting interviews with BBP participants and a case study of Twitter's BBP, [34] AJL researchers developed and proposed a conceptual framework for designing BBPs that compensate and encourage individuals to locate and disclose bias in AI systems. [35] The AJL intends for the CRASH framework to give individuals, especially those who have traditionally been excluded from the design of AI technologies, the ability to report algorithmic harms and stimulate change in the AI systems that companies deploy. [36] [37]

Support and media appearances

AJL initiatives have been funded by the Ford Foundation, the MacArthur Foundation, the Alfred P. Sloan Foundation, the Rockefeller Foundation, the Mozilla Foundation, and individual private donors. [36] [38] Fast Company recognized the AJL as one of the 10 most innovative AI companies of 2021. [2] Additionally, outlets such as Time magazine, The New York Times, NPR, and CNN have featured Buolamwini's work with the AJL in interviews and articles. [7] [24] [39]


References

  1. "Learn More". The Algorithmic Justice League. Archived from the original on March 29, 2022. Retrieved April 7, 2022.
  2. "The 10 most innovative companies in artificial intelligence". Fast Company. March 9, 2021. Archived from the original on April 7, 2022. Retrieved April 7, 2022.
  3. Villoro, Elías (February 16, 2023). "Coded Bias and the Algorithm Justice League". Boing Boing.
  4. "Documentary 'Coded Bias' Unmasks The Racism Of Artificial Intelligence". WBUR-FM. November 18, 2020. Archived from the original on January 4, 2022. Retrieved April 7, 2022.
  5. "Google fired its star AI researcher one year ago. Now she's launching her own institute". The Washington Post. ISSN 0190-8286. Archived from the original on December 2, 2021. Retrieved April 7, 2022.
  6. "DAIR". Distributed AI Research Institute. Archived from the original on April 7, 2022. Retrieved April 7, 2022.
  7. Metz, Rachel (March 7, 2022). "Activists pushed the IRS to drop facial recognition. They won, but they're not done yet". CNN. Archived from the original on March 31, 2022. Retrieved April 7, 2022.
  8. Buolamwini, Joy; Gebru, Timnit (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" (PDF). Proceedings of the 1st Conference on Fairness, Accountability and Transparency. 81: 77–91. Archived (PDF) from the original on December 12, 2020. Retrieved December 12, 2020.
  9. "Gender Shades". gendershades.org. Archived from the original on May 29, 2022. Retrieved April 7, 2022.
  10. Buell, Spencer (February 23, 2018). "MIT Researcher: AI Has a Race Problem, and We Need to Fix It". Boston. Archived from the original on April 7, 2022. Retrieved April 7, 2022.
  11. "Announcement - Safe Face Pledge". January 20, 2021. Archived from the original on January 20, 2021. Retrieved April 7, 2022.
  12. "The two-year fight to stop Amazon from selling face recognition to the police". MIT Technology Review. Archived from the original on April 7, 2022. Retrieved April 7, 2022.
  13. "IBM pulls out of facial recognition, fearing racial profiling and mass surveillance". Fortune. Archived from the original on April 7, 2022. Retrieved April 7, 2022.
  14. Lee, Jennifer 8. (February 8, 2020). "When Bias Is Coded Into Our Technology". NPR. Archived from the original on March 26, 2022. Retrieved April 7, 2022.
  15. "Watch Coded Bias | Netflix". www.netflix.com. Archived from the original on March 24, 2022. Retrieved April 8, 2022.
  16. Burt, Chris (June 8, 2020). "Biometrics experts call for creation of FDA-style government body to regulate facial recognition | Biometric Update". www.biometricupdate.com. Archived from the original on April 7, 2022. Retrieved April 7, 2022.
  17. Learned-Miller, Erik; Ordóñez, Vicente; Morgenstern, Jamie; Buolamwini, Joy (2020). "Facial Recognition Technologies in the Wild: A Call for a Federal Office" (PDF). White Paper: 3–49. Archived (PDF) from the original on January 21, 2022. Retrieved April 8, 2022.
  18. Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (March 3, 2021). "On the Dangers of Stochastic Parrots: Can Language Models be Too Big?". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21. Virtual Event Canada: ACM. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7. S2CID 232040593.
  19. Kiritchenko, Svetlana; Mohammad, Saif M. (2018). "Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems" (PDF). Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. pp. 43–53. arXiv:1805.04508. doi:10.18653/v1/S18-2005. S2CID 21670658. Archived (PDF) from the original on March 8, 2022. Retrieved April 8, 2022.
  20. Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey, Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky, Dan; Goel, Sharad (April 7, 2020). "Racial disparities in automated speech recognition". Proceedings of the National Academy of Sciences. 117 (14): 7684–7689. Bibcode:2020PNAS..117.7684K. doi:10.1073/pnas.1915768117. ISSN 0027-8424. PMC 7149386. PMID 32205437.
  21. "Voicing Erasure". www.ajl.org. Archived from the original on April 11, 2022. Retrieved April 7, 2022.
  22. "Algorithmic Justice League protests bias in voice AI and media coverage". VentureBeat. April 1, 2020. Archived from the original on March 31, 2022. Retrieved April 7, 2022.
  23. Quach, Katyanna (May 22, 2019). "We listened to more than 3 hours of US Congress testimony on facial recognition so you didn't have to go through it". The Register. Archived from the original on January 21, 2022. Retrieved April 8, 2022.
  24. "Artificial Intelligence: Societal and Ethical Implications". House Committee on Science, Space and Technology. June 26, 2019. Archived from the original on March 15, 2022. Retrieved April 8, 2022.
  25. Rodrigo, Chris Mills (July 2, 2020). "Dozens of advocacy groups push for Congress to ban facial recognition technology". The Hill. Archived from the original on April 8, 2022. Retrieved April 8, 2022.
  26. "U.S. government study finds racial bias in facial recognition tools". Reuters. December 19, 2019. Archived from the original on April 8, 2022. Retrieved April 8, 2022.
  27. Rachel Metz (February 7, 2022). "IRS halts plan to require facial recognition for logging in to user accounts". CNN. Archived from the original on April 8, 2022. Retrieved April 8, 2022.
  28. "Demand All Government Agencies Drop ID.me". Fight for the Future. Archived from the original on April 26, 2022. Retrieved April 8, 2022.
  29. "Decode the Bias & Face Anything | Women in STEM | OLAY". www.olay.com. Archived from the original on April 11, 2022. Retrieved April 8, 2022.
  30. Shacknai, Gabby (September 14, 2021). "Olay Teams Up With Algorithmic Justice Pioneer Joy Buolamwini To #DecodetheBias In Beauty". Forbes. Archived from the original on March 28, 2022. Retrieved April 8, 2022.
  31. "ORCAA's Report". www.olay.com. Archived from the original on April 11, 2022. Retrieved April 8, 2022.
  32. "Algorithmic Vulnerability Bounty Project (AVBP)". www.ajl.org. Archived from the original on March 18, 2022. Retrieved April 8, 2022.
  33. Laas, Molly (January 27, 2022). "Bug Bounties For Algorithmic Harms? | Algorithmic Justice League". MediaWell. Archived from the original on January 18, 2023. Retrieved April 8, 2022.
  34. Vincent, James (August 10, 2021). "Twitter's photo-cropping algorithm prefers young, beautiful, and light-skinned faces". The Verge. Archived from the original on April 8, 2022. Retrieved April 8, 2022.
  35. "AJL Bug Bounties Report.pdf". Google Docs. Archived from the original on January 31, 2022. Retrieved April 8, 2022.
  36. Algorithmic Justice League (August 4, 2021). "Happy Hacker Summer Camp Season!". Medium. Archived from the original on November 16, 2021. Retrieved April 8, 2022.
  37. Ellis, Ryan; Stevens, Yuan (January 2022). "Bounty Everything: Hackers and the Making of the Global Bug Marketplace" (PDF). Data & Society: 3–86. Archived (PDF) from the original on February 24, 2022. Retrieved April 8, 2022.
  38. "AJL Bug Bounties Report.pdf". Google Docs. Archived from the original on April 8, 2022. Retrieved April 8, 2022.
  39. "Joy Buolamwini: How Do Biased Algorithms Damage Marginalized Communities?". NPR. Archived from the original on April 3, 2022. Retrieved April 8, 2022.