| Meredith Whittaker | |
|---|---|
| Alma mater | University of California, Berkeley |
| Employer | Signal Foundation |
Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. [1] [2] [3] She was formerly the Minderoo Research Professor at New York University (NYU), and the co-founder and faculty director of the AI Now Institute. She also served as a senior advisor on AI to Chair Lina Khan at the Federal Trade Commission. [4] Whittaker worked at Google for 13 years, where she founded Google's Open Research group [5] [6] [7] and co-founded M-Lab. [8] [9] She was a core organizer of the 2018 Google walkouts and resigned from the company in July 2019. [10] [11]
Whittaker completed her bachelor's degree in rhetoric and English literature at the University of California, Berkeley. [12] [13] [9]
Whittaker is the president of the Signal Foundation and serves on its board of directors. She was formerly the Minderoo Research Professor at NYU, and the faculty director of NYU's AI Now Institute. [14]
She joined Google in 2006. [12] She founded Google Open Research [15] which collaborated with the open source and academic communities on issues related to net neutrality measurement, privacy, security, and the social consequences of artificial intelligence. [16] Whittaker was a speaker at the 2018 World Summit on AI. [17] She has written for the American Civil Liberties Union. [18]
Whittaker co-founded M-Lab, a globally distributed network measurement system that provides the world’s largest source of open data on Internet performance. She has also worked extensively on issues of data validation, privacy, the social implications of artificial intelligence, the political economy of tech, and labor movements in the context of tech and the tech industry. [19] She has spoken out about the need for privacy and against weakening encryption. [20] She has advised the White House, the FCC, the FTC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, Internet policy, measurement, privacy, and security. [21]
Whittaker is the co-founder and former faculty director of the AI Now Institute at NYU, a leading university institute dedicated to researching the social implications of artificial intelligence and related technologies. She started the institute with Kate Crawford in 2017, following a symposium hosted by the White House. [22] [23] [24] AI Now is partnered with the New York University Tandon School of Engineering, the New York University Center for Data Science, and the Partnership on AI. [25] It has produced annual reports examining the social implications of artificial intelligence, including bias, rights, and liberties. [26] [27]
Whittaker has testified before Congress, including testimony to the U.S. House Committee on Science, Space & Technology on "Artificial Intelligence: Societal and Ethical Implications" in June 2019. [28] In her testimony, Whittaker pointed to research and cases showing that AI systems can entrench bias and replicate harmful patterns. She called for whistleblower protections for tech workers, arguing that the centrality of tech to core social institutions, and the opacity of tech deployment, made such disclosures crucial to the public interest. [29]
She testified to the House Oversight Committee on "Facial Recognition Technology: Ensuring Commercial Transparency & Accuracy" in January 2020. [30] She highlighted structural issues with facial recognition and the political economy of the industry, where these technologies are used by powerful actors on less powerful ones in ways that can entrench marginalization. She argued that "bias" was not the core concern, and warned against over-reliance on technical audits, which could be used to justify deploying systems without addressing structural issues such as the opacity of facial recognition systems and the power dynamics that attend their use. Her testimony also pointed to the lack of sound scientific support for some claims made by private vendors, and called for a halt to the use of these technologies. [31] [32]
In November 2021, Lina Khan confirmed that Whittaker had joined the United States Federal Trade Commission as a senior advisor on artificial intelligence to the chair. [4] When her appointment as Signal's president was announced in early September 2022, she confirmed that her term at the FTC had ended. [1]
On September 6, 2022, Whittaker announced that she would be starting as Signal's president on September 12. Signal described the role as "a new position created in collaboration with Signal’s leadership". [3]
In 2018, Whittaker was one of the core organizers of the Google walkouts, in which over 20,000 Google employees walked out internationally to protest the company's handling of sexual misconduct claims and its involvement in surveillance. The organizers released a series of demands, some of which Google met. [33] [34]
The walkout was prompted by Google's reported $90 million payout to vice president Andy Rubin, who had been accused of sexual misconduct, and the company's involvement with Project Maven, [33] [35] against which more than three thousand Google employees signed a petition. The project was established by a contract between the US military and Google, under which Google was to develop machine vision technologies for the US drone program. Following the protests, Google did not renew the Maven contract. [36]
Whittaker was part of the movement that called for Google to rethink its AI ethics council after the appointment of Kay Coles James, the president of The Heritage Foundation, who had fought against LGBT protections and advocated for Donald Trump's proposed border wall. [37] Whittaker claimed that she faced retaliation from Google, writing in an open letter that the company had told her to "abandon her work" on enforcing ethics in technology at the AI Now Institute. [33] [38] [39] [40]
In a note shared internally following her resignation, Whittaker called for tech workers to "unionize in a way that works, protect conscientious objectors and whistleblowers, demand to know what you’re working on and how it’s used, and to build solidarity with other tech workers beyond your company." [11]
Whittaker promotes organizing within Silicon Valley and tackling sexual harassment, gender inequality and racism in tech. [41]