Deborah Raji

Inioluwa Deborah Raji
Born: 1995/1996 (age 27–28)
Nationality: Canadian
Alma mater: University of Toronto
Known for: Algorithmic bias; Fairness (machine learning); Algorithmic auditing and evaluation
Scientific career
Fields: Computer science
Institutions: Mozilla Foundation; Partnership on AI; AI Now Institute; Google; MIT Media Lab

Inioluwa Deborah Raji (born 1995/1996)[1] is a Nigerian-Canadian computer scientist and activist who works on algorithmic bias, AI accountability, and algorithmic auditing. Raji has worked with Joy Buolamwini, Timnit Gebru, and the Algorithmic Justice League on research into gender and racial bias in facial recognition technology.[2] She has also worked with Google's Ethical AI team and was a research fellow at the Partnership on AI and the AI Now Institute at New York University, where she studied how to operationalize ethical considerations in machine learning engineering practice.[3] A current Mozilla fellow, she has been recognized by MIT Technology Review and Forbes as one of the world's top young innovators.[4][5]

Early life and education

Raji was born in Port Harcourt, Nigeria, and moved to Mississauga, Ontario, when she was four years old. Her family later moved to Ottawa.[4] She studied Engineering Science at the University of Toronto, graduating in 2019.[6][7] In 2015, she founded Project Include, a nonprofit that expands student access to engineering education, mentorship, and resources in low-income and immigrant communities in the Greater Toronto Area.[8]

Career and research

Raji worked with Joy Buolamwini at the MIT Media Lab and the Algorithmic Justice League, where she audited commercial facial recognition technologies from Microsoft, Amazon, IBM, Face++, and Kairos.[9] They found that these technologies were significantly less accurate for darker-skinned women than for white men.[6][10] Backed by other leading AI researchers and growing public pressure and campaigning, their work led IBM and Amazon to agree to support facial recognition regulation and later to halt sales of their products to police for at least a year.[5][11][12][13] Raji also interned at the machine learning startup Clarifai, where she worked on a computer vision model for flagging images.[1]

She participated in a research mentorship program at Google and worked with its Ethical AI team on creating model cards, a documentation framework for more transparent machine learning model reporting. She also co-led the development of internal auditing practices at Google.[1] Her contributions at Google were presented and published at the AAAI/ACM Conference on AI, Ethics, and Society and the ACM Conference on Fairness, Accountability, and Transparency.[14][15][16]

In 2019, Raji was a summer research fellow at the Partnership on AI, working on industry standards for machine learning transparency and benchmarking norms.[17][18] As a Tech Fellow at the AI Now Institute, she worked on algorithmic and AI auditing. She is currently a fellow at the Mozilla Foundation, researching algorithmic auditing and evaluation.[3]

Raji's work on bias in facial recognition systems was highlighted in the 2020 documentary Coded Bias, directed by Shalini Kantayya.[19]

Selected awards


References

  1. Hao, Karen (2020-06-17). "Inioluwa Deborah Raji". MIT Technology Review. Retrieved 2021-02-27.
  2. Schwab, Katharine (2021-02-26). "'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement". Fast Company. Archived from the original on 2021-02-26.
  3. "Mozilla Welcomes Two New Fellows in Trustworthy AI". Mozilla Foundation. 2020-10-16. Retrieved 2021-02-27.
  4. "Inioluwa Deborah Raji | Innovators Under 35". www.innovatorsunder35.com. Retrieved 2021-02-26.
  5. "Inioluwa Deborah Raji - Forbes 30 Under 30". Forbes. Archived from the original on 2020-12-01. Retrieved 2021-02-27.
  6. "This U of T Engineering student is holding companies accountable for biased AI facial technology". U of T Engineering News. 2019-02-11. Retrieved 2021-02-26.
  7. "U of T Engineering alumna Inioluwa Deborah Raji named to MIT Technology Review's Top Innovators Under 35". U of T Engineering News. 2020-06-23. Retrieved 2021-02-27.
  8. "Deborah Raji of Mozilla on Forbes 30 under 30, Mentorship in AI & more". RE•WORK Blog - AI & Deep Learning News. 2021-02-03. Retrieved 2021-02-27.
  9. "Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products" (PDF). Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: 429–435. January 27, 2019.
  10. Singer, Natasha (2019-01-25). "Amazon Is Pushing Facial Technology That a Study Says Could Be Biased". The New York Times. ISSN 0362-4331. Retrieved 2021-02-27.
  11. Heilweil, Rebecca (2020-06-10). "Why it matters that IBM is getting out of the facial recognition business". Vox. Retrieved 2021-02-27.
  12. "IBM walked away from facial recognition. What about Amazon and Microsoft?". VentureBeat. 2020-06-10. Retrieved 2021-02-27.
  13. "The two-year fight to stop Amazon from selling face recognition to the police". MIT Technology Review. Retrieved 2021-02-27.
  14. Raji, Inioluwa Deborah; Gebru, Timnit; Mitchell, Margaret; Buolamwini, Joy; Lee, Joonseok; Denton, Emily (2020-01-03). "Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing". arXiv:2001.00964 [cs.CY].
  15. Raji, Inioluwa Deborah; Smart, Andrew; White, Rebecca N.; Mitchell, Margaret; Gebru, Timnit; Hutchinson, Ben; Smith-Loud, Jamila; Theron, Daniel; Barnes, Parker (2020-01-03). "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing". arXiv:2001.00973 [cs.CY].
  16. Mitchell, Margaret; Wu, Simone; Zaldivar, Andrew; Barnes, Parker; Vasserman, Lucy; Hutchinson, Ben; Spitzer, Elena; Raji, Inioluwa Deborah; Gebru, Timnit (2019-01-29). "Model Cards for Model Reporting". Proceedings of the Conference on Fairness, Accountability, and Transparency. pp. 220–229. arXiv:1810.03993. doi:10.1145/3287560.3287596. ISBN 9781450361255. S2CID 52946140.
  17. Xiang, Alice; Raji, Inioluwa Deborah (2019-11-25). "On the Legal Compatibility of Fairness Definitions". arXiv:1912.00761 [cs.CY].
  18. "About Face: A Survey of Facial Recognition Evaluation". DeepAI. 2021-02-01. Retrieved 2021-02-26.
  19. "Coded Bias: Director Shalini Kantayya on Solving Facial Recognition's Serious Flaws". Stanford HAI. Retrieved 2021-03-15.
  20. "AI innovation winners announced in San Francisco". Innovation Matrix. 2019-07-12. Archived from the original on 2020-12-09. Retrieved 2021-02-27.
  21. "Pioneer Award Ceremony 2020". Electronic Frontier Foundation. 2020-08-24. Retrieved 2021-02-27.
  22. "Hall of Fame". 100 Brilliant Women in AI Ethics™. Retrieved 2021-02-27.
  23. Shaw, Simmone (September 7, 2023). "Inioluwa Deborah Raji". Time. Retrieved October 3, 2023.