Margaret Mitchell (scientist)

Margaret Mitchell
Mitchell (2022)
Born: United States
Other names: Shmargaret Shmitchell [1]
Alma mater: University of Aberdeen (PhD in Computer Science); University of Washington (MSc in Computational Linguistics)
Known for: Algorithmic bias; fairness in machine learning; computer vision; natural language processing
Scientific career
Fields: Computer science
Institutions: Google; Microsoft Research; Johns Hopkins University
Thesis: Generating Reference to Visible Objects (2012)

Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is best known for her work on automatically removing undesired biases concerning demographic groups from machine learning models, [2] as well as more transparent reporting of their intended use. [3]

Education

Mitchell obtained a bachelor's degree in linguistics from Reed College, Portland, Oregon, in 2005. After working as a research assistant at the OGI School of Science and Engineering for two years, she obtained a master's degree in computational linguistics from the University of Washington in 2009. She then enrolled in a PhD program at the University of Aberdeen, where she wrote a doctoral thesis on the topic of Generating Reference to Visible Objects, [4] graduating in 2013.

Career and research

Mitchell is best known for her work on fairness in machine learning and methods for mitigating algorithmic bias. This includes introducing the concept of 'Model Cards' for more transparent model reporting, [3] and methods for debiasing machine learning models using adversarial learning. [2] In that framework, a predictor is trained on its main task while an adversary tries to recover a variable for the group of interest (a protected attribute) from the predictor's output; penalizing the predictor whenever the adversary succeeds pushes the model toward predictions that carry no information about the protected group, [5] as sketched below.
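The following is a minimal sketch of this adversarial setup in PyTorch. It is not the authors' released code: the toy data, network sizes, and the trade-off weight alpha are illustrative assumptions, and where the paper's update rule projects the predictor's gradient away from directions that help the adversary, this sketch uses the simpler device of subtracting the adversary's loss as a penalty.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data (assumed for illustration): 8 features, a binary task label y,
# and a binary protected attribute z that is correlated with y.
n = 512
x = torch.randn(n, 8)
z = (torch.rand(n, 1) > 0.5).float()        # variable for the group of interest
y = ((x[:, :1] + 0.5 * z) > 0).float()      # task label, deliberately correlated with z

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # fairness/accuracy trade-off weight (illustrative value)

for step in range(200):
    # Adversary update: try to predict the protected attribute z from the
    # predictor's output logit (detached, so only the adversary learns here).
    a_loss = bce(adversary(predictor(x).detach()), z)
    opt_a.zero_grad()
    a_loss.backward()
    opt_a.step()

    # Predictor update: do well on the task while making its output
    # uninformative about z, i.e. maximize the adversary's loss.
    y_logit = predictor(x)
    p_loss = bce(y_logit, y) - alpha * bce(adversary(y_logit), z)
    opt_p.zero_grad()
    p_loss.backward()
    opt_p.step()

print("final task loss:", bce(predictor(x), y).item())

As training proceeds, the predictor trades a small amount of task accuracy for outputs from which the adversary can no longer recover the protected attribute.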

In 2012, Mitchell joined the Human Language Technology Center of Excellence at Johns Hopkins University as a postdoctoral researcher, before taking up a position at Microsoft Research in 2013. [6] At Microsoft, Mitchell was the research lead of the Seeing AI project, an app that supports visually impaired users by narrating text and describing images. [7]

In November 2016, she became a senior research scientist at Google Research and Machine Intelligence. While at Google, she founded and co-led the Ethical Artificial Intelligence team together with Timnit Gebru. In May 2018, she represented Google in the Partnership on AI.

In February 2018, she gave a TED talk on 'How we can build AI to help humans, not hurt us'. [8]

In January 2021, after Timnit Gebru's termination from Google, Mitchell reportedly used a script to search through her corporate account and download emails that allegedly documented discriminatory incidents involving Gebru. An automated system locked Mitchell's account as a result. Amid the ensuing media attention, Google claimed that she "exfiltrated thousands of files and shared them with multiple external accounts". [9] [10] [11] After a five-week investigation, Mitchell was fired. [12] [13] [14] Prior to her dismissal, Mitchell had been a vocal advocate for diversity at Google and had voiced concerns about research censorship at the company. [15] [9]

In late 2021, she joined AI start-up Hugging Face. [16]

Leadership

Mitchell was a co-founder of Widening NLP, a group seeking to increase the proportion of women and minorities working in natural language processing, [17] and a special interest group within the Association for Computational Linguistics.

References

  1. Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-01). "On the Dangers of Stochastic Parrots". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21. New York, NY, USA: Association for Computing Machinery. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7. S2CID 232040593.
  2. Zhang, Brian Hu; Lemoine, Blake; Mitchell, Margaret (2018-12-01). "Mitigating Unwanted Biases with Adversarial Learning". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 220–229. arXiv:1801.07593. doi:10.1145/3278721.3278779.
  3. Mitchell, Margaret; Wu, Simone; Zaldivar, Andrew; Barnes, Parker; Vasserman, Lucy; Hutchinson, Ben; Spitzer, Elena; Raji, Inioluwa Deborah; Gebru, Timnit (2019-01-29). "Model Cards for Model Reporting". Proceedings of the Conference on Fairness, Accountability, and Transparency. arXiv:1810.03993. doi:10.1145/3287560.3287596.
  4. Mitchell, Margaret (2013). Generating Reference to Visible Objects (PDF) (PhD). University of Aberdeen.
  5. Zhang, Brian Hu; Lemoine, Blake; Mitchell, Margaret (2018-12-27). "Mitigating Unwanted Biases with Adversarial Learning". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. AIES '18. New Orleans, LA, USA: ACM. pp. 335–340. arXiv:1801.07593. doi:10.1145/3278721.3278779. ISBN 978-1-4503-6012-8. S2CID 9424845.
  6. Mitchell, Margaret (February 14, 2017). "Algorithmic Bias in Artificial Intelligence: The Seen and Unseen Factors Influencing Machine Perception of Images and Language". Johns Hopkins University. Retrieved November 9, 2021.
  7. "Seeing AI in New Languages". Microsoft. Retrieved February 20, 2021.
  8. "Margaret Mitchell's TED talk". TED. February 2018. Retrieved February 20, 2021.
  9. Murphy, Margi (February 20, 2021). "Google sacks second ethical AI researcher amid censorship storm". The Daily Telegraph. Retrieved April 2, 2023.
  10. Fried, Ina (2021-01-20). "Scoop: Google is investigating the actions of another top AI ethicist". Axios. Retrieved 2023-04-02.
  11. Simonite, Tom. "What Really Happened When Google Ousted Timnit Gebru". Wired. ISSN 1059-1028. Retrieved 2023-04-02.
  12. "Google fires Margaret Mitchell, another top researcher on its AI ethics team". The Guardian. February 20, 2021. Retrieved February 20, 2021.
  13. "Margaret Mitchell: Google fires AI ethics founder". BBC. February 20, 2021. Retrieved February 20, 2021.
  14. "Google fires Ethical AI lead Margaret Mitchell". VentureBeat. February 20, 2021. Retrieved February 20, 2021.
  15. Osborne, Charlie. "Google fires top ethical AI expert Margaret Mitchell". ZDNet. Retrieved 2021-03-22.
  16. "Fired from Google After Critical Work, AI Researcher Mitchell to Join Startup". Bloomberg.com. August 24, 2021.
  17. Johnson, Khari. "Black and Queer AI Groups Say They'll Spurn Google Funding". Wired. ISSN 1059-1028. Retrieved 2023-04-02.