Meredith Whittaker

Whittaker in 2023
Alma mater: University of California, Berkeley
Employer: Signal Foundation

Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. [1] [2] [3] She was formerly the Minderoo Research Professor at New York University (NYU) and the co-founder and faculty director of the AI Now Institute. She also served as a senior advisor on AI to Chair Lina Khan at the Federal Trade Commission. [4] Whittaker worked at Google for 13 years, where she founded Google's Open Research group [5] [6] [7] and co-founded M-Lab. [8] [9] In 2018, she was a core organizer of the Google walkouts, and she resigned from the company in July 2019. [10] [11]

Early life and education

Whittaker completed her bachelor's degree in rhetoric and English literature at the University of California, Berkeley. [12] [13] [9]

Research and career

Whittaker is the president of the Signal Foundation and serves on its board of directors. She was formerly the Minderoo Research Professor at NYU and the faculty director of NYU's AI Now Institute. [14]

She joined Google in 2006. [12] She founded Google Open Research, [15] which collaborated with the open source and academic communities on issues related to net neutrality measurement, privacy, security, and the social consequences of artificial intelligence. [16] Whittaker was a speaker at the 2018 World Summit on AI. [17] She has written for the American Civil Liberties Union. [18]

Whittaker co-founded M-Lab, a globally distributed network measurement system that provides the world’s largest source of open data on Internet performance. She has also worked extensively on issues of data validation, privacy, the social implications of artificial intelligence, the political economy of tech, and labor movements in the context of tech and the tech industry. [19] She has spoken out about the need for privacy and against weakening encryption. [20] She has advised the White House, the FCC, the FTC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, Internet policy, measurement, privacy, and security. [21]

AI Now

Whittaker is the co-founder and former faculty director of the AI Now Institute at NYU, a university research institute dedicated to studying the social implications of artificial intelligence and related technologies, which she started with Kate Crawford in 2017 after a symposium hosted by the White House. [22] [23] [24] AI Now is partnered with the New York University Tandon School of Engineering, the New York University Center for Data Science, and the Partnership on AI. [25] The institute has produced annual reports that examine the social implications of artificial intelligence, including bias, rights, and liberties. [26] [27]

Congressional testimony

Whittaker testifies before the House Science, Space, and Technology Committee in 2019

Whittaker has testified before Congress, including testimony to the U.S. House Committee on Science, Space & Technology on "Artificial Intelligence: Societal and Ethical Implications" in June 2019. [28] In her testimony, Whittaker pointed to research and cases showing that AI systems can entrench bias and replicate harmful patterns. She called for whistleblower protections for tech workers, arguing that the centrality of tech to core social institutions, and the opacity of tech deployment, made such disclosures crucial to the public interest. [29]

She testified to the House Oversight Committee on "Facial Recognition Technology: Ensuring Commercial Transparency & Accuracy" in January 2020. [30] She highlighted structural issues with facial recognition and the political economy of the industry, in which these technologies are used by powerful actors on less powerful ones in ways that can entrench marginalization. She argued that 'bias' was not the core concern and warned against overreliance on technical audits, which could be used to justify deploying such systems without tackling structural issues such as the opacity of facial recognition systems and the power dynamics that attend their use. Her testimony also pointed to the lack of sound scientific support for some of the claims made by private vendors, and she called for a halt to the use of these technologies. [31] [32]

Federal Trade Commission

In November 2021, FTC Chair Lina Khan announced that Whittaker had joined the United States Federal Trade Commission as a senior advisor on artificial intelligence to the chair. [4] When she was announced as Signal's president in early September 2022, she said that her term at the FTC had ended. [1]

Signal

On September 6, 2022, Whittaker announced that she would be starting as Signal's president on September 12. Signal described the role as "a new position created in collaboration with Signal’s leadership". [3]

Activism

In 2018, Whittaker was one of the core organizers of the Google walkouts, in which over 20,000 Google employees walked out internationally to protest the company's handling of sexual misconduct claims and its involvement in citizen surveillance. The organizers released a series of demands, some of which were met by Google. [33] [34]

The walkout was prompted by Google's reported $90 million payout to vice president Andy Rubin, who had been accused of sexual misconduct, and by the company's involvement with Project Maven, [33] [35] against which more than three thousand Google employees signed a petition. The project was established by a contract between the US military and Google, under which Google was to develop machine vision technologies for the US drone program. Following the protests, Google did not renew the Maven contract. [36]

Whittaker was part of the movement that called for Google to rethink its AI ethics council after the appointment of Kay Coles James, the president of The Heritage Foundation, who has fought against LGBT protections and advocated for Donald Trump's proposed border wall. [37] Whittaker claimed that she faced retaliation from Google, writing in an open letter that the company had told her to "abandon her work" on enforcing ethics in technology at the AI Now Institute. [33] [38] [39] [40]

In a note shared internally following her resignation, Whittaker called for tech workers to "unionize in a way that works, protect conscientious objectors and whistleblowers, demand to know what you’re working on and how it’s used, and to build solidarity with other tech workers beyond your company." [11]

Whittaker promotes organizing within Silicon Valley and tackling sexual harassment, gender inequality and racism in tech. [41]

Selected publications

Related Research Articles

<span class="mw-page-title-main">Kate Crawford</span> Australian writer, composer, and academic

Kate Crawford is a researcher, writer, composer, producer and academic, who studies the social and political implications of artificial intelligence. She is based in New York and works as a principal researcher at Microsoft Research, the co-founder and former director of research at the AI Now Institute at NYU, a visiting professor at the MIT Center for Civic Media, a senior fellow at the Information Law Institute at NYU, and an associate professor in the Journalism and Media Research Centre at the University of New South Wales. She is also a member of the WEF's Global Agenda Council on Data-Driven Development.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

<span class="mw-page-title-main">Amy Webb</span> American futurist and journalist

Amy Lynn Webb is an American futurist, author and founder and CEO of the Future Today Institute. She is an adjunct assistant professor at New York University's Stern School of Business, a nonresident senior fellow at Atlantic Council, and was a 2014–15 Visiting Nieman Fellow at Harvard University.

<span class="mw-page-title-main">Fei-Fei Li</span> Chinese-American computer scientist (born 1976)

Fei-Fei Li is a Chinese-American computer scientist, known for establishing ImageNet, the dataset that enabled rapid advances in computer vision in the 2010s. She is the Sequoia Capital professor of computer science at Stanford University and former board director at Twitter. Li is a co-director of the Stanford Institute for Human-Centered Artificial Intelligence and a co-director of the Stanford Vision and Learning Lab. She served as the director of the Stanford Artificial Intelligence Laboratory from 2013 to 2018.

<span class="mw-page-title-main">Maja Pantić</span> Artificial intelligence and robotics researcher

Maja Pantić is a Professor of Affective and Behavioural Computing at Imperial College London and an AI Scientific Research Lead in Facebook London. She was previously Professor of Affective and Behavioural Computing University of Twente and Research Director of the Samsung AI lab in Cambridge, UK. She is an expert in machine understanding of human behaviour including vision-based detection and tracking of human behavioural cues like facial expressions and body gestures, and multimodal analysis of human behaviours like laughter, social signals and affective states.

<span class="mw-page-title-main">Meredith Broussard</span> Data journalism professor

Meredith Broussard is a data journalism professor at the Arthur L. Carter Journalism Institute at New York University. Her research focuses on the role of artificial intelligence in journalism.

<span class="mw-page-title-main">Joy Buolamwini</span> Computer scientist and digital activist

Joy Adowaa Buolamwini is a Canadian-American computer scientist and digital activist formerly based at the MIT Media Lab. She founded the Algorithmic Justice League (AJL), an organization that works to challenge bias in decision-making software, using art, advocacy, and research to highlight the social implications and harms of artificial intelligence (AI).

Project Maven is a Pentagon project involving using machine learning and engineering talent to distinguish people and objects in drone videos, apparently giving the government real-time battlefield command and control, and the ability to track, tag and spy on targets without human involvement. Initially the effort was led by Robert O. Work who was concerned about China's military use of the emerging technology. Reportedly, Pentagon development stops short of acting as an AI weapons system capable of firing on self-designated targets. The project was established in a memo by the U.S. Deputy Secretary of Defense on 26 April 2017. Also known as the Algorithmic Warfare Cross Functional Team, it is, according to Lt. Gen. of the United States Air Force Jack Shanahan in November 2017, a project "designed to be that pilot project, that pathfinder, that spark that kindles the flame front of artificial intelligence across the rest of the [Defense] Department". Its chief, U.S. Marine Corps Col. Drew Cukor, said: "People and computers will work symbiotically to increase the ability of weapon systems to detect objects." Project Maven has been noted by allies, such as Australia's Ian Langford, for the ability to identify adversaries by harvesting data from sensors on UAVs and satellite. At the second Defense One Tech Summit in July 2017, Cukor also said that the investment in a "deliberate workflow process" was funded by the Department [of Defense] through its "rapid acquisition authorities" for about "the next 36 months".

<span class="mw-page-title-main">Safiya Noble</span> American professor and author

Safiya Umoja Noble is the David O. Sears Presidential Endowed Chair of Social Sciences and Professor of Gender Studies, African American Studies, and Information Studies at the University of California, Los Angeles. She is the director of the UCLA Center on Race & Digital Justice and co-director of the Minderoo Initiative on Tech & Power at the UCLA Center for Critical Internet Inquiry (C2i2). She serves as interim director of the UCLA DataX Initiative, leading work in critical data studies.

The AI Now Institute is an American research institute studying the social implications of artificial intelligence and policy research that addresses the concentration of power in the tech industry. AI Now has partnered with organizations such as the Distributed AI Research Institute (DAIR), Data & Society, Ada Lovelace Institute, New York University Tandon School of Engineering, New York University Center for Data Science, Partnership on AI, and the ACLU. AI Now has produced annual reports that examine the social implications of artificial intelligence. In 2021-2, AI Now’s leadership served as a Senior Advisors on AI to Chair Lina Khan at the Federal Trade Commission. Its executive director is Amba Kak.

The 2018 Google walkouts occurred on November 1, 2018 at approximately 11 am. The walkout had a large number of participants. The employees demanded five concrete changes from the company: an end to forced arbitration; a commitment to end pay inequality; a transparent sexual harassment report; an inclusive process for reporting sexual misconduct; and elevate the Chief of Diversity to answer directly to the CEO and create an Employee Representative. A majority of the known organizers have left the company since the walkout and many continue to voice their concerns. Google agreed to end forced arbitration and create a private report of sexual assault, but has not provided any further details about the other demands.

<span class="mw-page-title-main">Timnit Gebru</span> Computer scientist

Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining. She is a co-founder of Black in AI, an advocacy group that has pushed for more Black roles in AI development and research. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

<span class="mw-page-title-main">Veena Dubal</span> American law professor

Veena B. Dubal is a Professor of Law at the University of California, Irvine School of Law. Her research focuses on the intersection of law, technology, and precarious work. Dubal's scholarship on gig work has been widely cited.

Claire Stapleton is an American writer and marketer known for her involvement in the 2018 Google Walkout for Real Change. She is the author of the newsletter Tech Support.

<span class="mw-page-title-main">Lauren Wilcox</span> American professor and researcher

Lauren G. Wilcox is an American professor and researcher in responsible AI, human–computer interaction, and health informatics, known for research on enabling community participation in technology design and development and her prior contributions to health informatics systems.

<span class="mw-page-title-main">Algorithmic Justice League</span> Digital advocacy non-profit organization

The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness regarding the use of artificial intelligence (AI) in society and the harms and biases that AI can pose to society. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and promote industry and government action to mitigate against the creation and deployment of biased AI systems. In 2021, Fast Company named AJL as one of the 10 most innovative AI companies in the world.

<span class="mw-page-title-main">Rashida Richardson</span> American attorney and scholar

Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law and an attorney advisor to the Federal Trade Commission. She is also an assistant professor of law and political science at the Northeastern University School of Law and the Northeastern University Department of Political Science in the College of Social Sciences and Humanities.

<span class="mw-page-title-main">Black in AI</span> Technology research organization

Black in AI, formally called the Black in AI Workshop, is a technology research organization and affinity group, founded by computer scientists Timnit Gebru and Rediet Abebe in 2017. It started as a conference workshop, later pivoting into an organization. Black in AI increases the presence and inclusion of Black people in the field of artificial intelligence (AI) by creating space for sharing ideas, fostering collaborations, mentorship, and advocacy.

<span class="mw-page-title-main">HireVue</span> American human resources technology company

HireVue is an artificial intelligence (AI) and human resources management company headquartered in South Jordan, Utah. Founded in 2004, the company allows its clients to conduct digital interviews during the hiring process, where the job candidate interacts with a computer instead of a human interviewer.

Amba Kak is a US-based technology policy expert with over a decade of experience across roles in government, industry, academia, and civil society – and in multiple jurisdictions, including the US, India,and the EU. She is the current Co-Executive Director of AI Now Institute, a US-based research institute where she leads an agenda oriented at actionable policy recommendations to tackle concerns with AI and concentrated power in the tech industry.

References

  1. 1 2 "Meredith Whittaker: "We won't participate in the surveillance business model"". DER STANDARD (in Austrian German). Retrieved September 6, 2022.
  2. "Meredith Whittaker | NYU Tandon School of Engineering". engineering.nyu.edu. Retrieved November 21, 2021.
  3. 1 2 "A Message from Signal's New President". Signal Messenger. Retrieved September 6, 2022.
  4. 1 2 "FTC Chair Lina M. Khan Announces New Appointments in Agency Leadership Positions". Federal Trade Commission. November 19, 2021. Retrieved May 16, 2022.
  5. "Google Walkout Is Just the Latest Sign of Tech Worker Unrest". Wired. ISSN   1059-1028 . Retrieved December 17, 2019.
  6. "A Googler who brought down Google's AI ethics board says she's now facing retaliation". MIT Technology Review. Retrieved December 17, 2019.
  7. "Meredith Whittaker, AI researcher and an organizer of last year's Google walkout, is leaving the company". TechCrunch. July 16, 2019. Retrieved December 17, 2019.
  8. "Who we are – M-Lab". www.measurementlab.net. Retrieved December 17, 2019.
  9. 1 2 Greenberg, Andy (August 28, 2024). "Signal Is More Than Encrypted Messaging. Under Meredith Whittaker, It's Out to Prove Surveillance Capitalism Wrong". Wired. Retrieved September 9, 2024.
  10. "Google Protest Leader Leaves, Warns of Company's Unchecked Power". July 18, 2019. Retrieved July 18, 2019.
  11. 1 2 change, Google Walkout For Real (July 16, 2019). "Onward! Another #GoogleWalkout Goodbye". Medium. Retrieved August 10, 2019.
  12. 1 2 "Scientifically Verifiable Broadband Policy | Berkman Klein Center". cyber.harvard.edu. Retrieved July 7, 2018.
  13. "Meredith Whittaker". opentech. Retrieved July 18, 2019.
  14. "Faculty Director". ainowinstitute.org. Retrieved May 16, 2022.
  15. Gershgorn, Dave. "The field of AI research is about to get way bigger than code". Quartz. Retrieved July 7, 2018.
  16. "Most Google Walkout Organizers Left Company". WIRED.
  17. "World Summit AI | Meet 140 of the world's brightest AI brains". World Summit AI Amsterdam. Retrieved July 7, 2018.
  18. "Algorithms Are Making Government Decisions. The Public Needs to Have a Say". American Civil Liberties Union. Retrieved July 7, 2018.
  19. Vgontzas, Nantina; Whittaker, Meredith (January 29, 2021). "These Machines Won't Kill Fascism: Toward a Militant Progressive Vision for Tech". The Nation. ISSN   0027-8378 . Retrieved May 16, 2022.
  20. "Wanting It Bad Enough Won't Make It Work: Why Adding Backdoors and Weakening Encryption Threatens the Internet". HuffPost. December 16, 2015. Retrieved May 16, 2022.
  21. "Co-founder, Co-director". ainowinstitute.org. Retrieved August 10, 2019.
  22. "Biased AI Is A Threat To Civil Liberties. The ACLU Has A Plan To Fix It". Fast Company. July 25, 2017. Retrieved July 7, 2018.
  23. "The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term - Future of Life Institute". Future of Life Institute. Retrieved July 7, 2018.
  24. "About". ainowinstitute.org. Retrieved July 7, 2018.
  25. "Apply - Interfolio". apply.interfolio.com. Retrieved July 7, 2018.
  26. "Research". ainowinstitute.org. Retrieved July 7, 2018.
  27. "Research". ainowinstitute.org. Retrieved July 7, 2018.
  28. "Artificial Intelligence: Societal and Ethical Implications | House Committee on Science, Space and Technology". science.house.gov. Retrieved August 10, 2019.
  29. "Congressional Testimony on Societal Impact of AI". www.macfound.org. Retrieved August 10, 2019.
  30. Whittaker, Meredith (January 15, 2020). ""Facial Recognition Technology (Part III): Ensuring Commercial Transparency & Accuracy"" (PDF).
  31. "US lawmakers concerned by accuracy of facial recognition". BBC News. January 16, 2020. Retrieved May 19, 2022.
  32. "Facial Recognition Technology | C-SPAN.org". www.c-span.org. Retrieved May 19, 2022.
  33. 1 2 3 "How Google treats Meredith Whittaker is important to potential AI whistleblowers". VentureBeat. April 24, 2019. Retrieved April 27, 2019.
  34. Gaber, Claire Stapleton, Tanuja Gupta, Meredith Whittaker, Celie O'Neil-Hart, Stephanie Parker, Erica Anderson, Amr (November 1, 2018). "We're the Organizers of the Google Walkout. Here Are Our Demands". The Cut. Retrieved August 10, 2019.{{cite web}}: CS1 maint: multiple names: authors list (link)
  35. Bird, Interviews by Cameron; Captain, Sean; Craig, Elise; Gilliland, Haley Cohen; Shan, Joy (March 29, 2019). "Silicon Valley revolt: meet the tech workers fighting their bosses over Ice, censorship and racism". The Guardian. ISSN   0261-3077 . Retrieved April 27, 2019.
  36. R, Bhagyashree (October 9, 2018). "Google opts out of Pentagon's $10 billion JEDI cloud computing contract, as it doesn't align with its ethical use of AI principles". Packt Hub. Retrieved April 27, 2019.
  37. Levin, Sam (April 1, 2019). "Google employees call for removal of rightwing thinktank leader from AI council". The Guardian. ISSN   0261-3077 . Retrieved April 27, 2019.
  38. Wong, Julia Carrie (April 22, 2019). "Demoted and sidelined: Google walkout organizers say company retaliated". The Guardian. ISSN   0261-3077 . Retrieved April 27, 2019.
  39. Wong, Julia Carrie (April 27, 2019). "Google worker activists accuse company of retaliation at 'town hall'". The Guardian. ISSN   0261-3077 . Retrieved April 27, 2019.
  40. Sugandha Lahoti (April 27, 2019). "#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google". Packt Hub. Retrieved April 27, 2019.
  41. "Claire Stapleton & Meredith Whittaker on the Google Walkout". Amanpour & Company. Retrieved August 10, 2019.