| Rashida Richardson | |
| --- | --- |
| Occupation(s) | Scholar, assistant professor, attorney |
| **Academic background** | |
| Education | Wesleyan University (BA) |
| Alma mater | Northeastern University School of Law (JD) |
| **Academic work** | |
| Discipline | Law and technology policy |
| Website | https://www.rashidarichardson.com/ |
Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law [1] and an attorney advisor to the Federal Trade Commission. She is also an assistant professor of law and political science at the Northeastern University School of Law and the Northeastern University Department of Political Science in the College of Social Sciences and Humanities.
Richardson was previously the director of policy research at the AI Now Institute, [2] where she designed, implemented, and coordinated research strategies and initiatives on law, policy, and civil rights. [3] During her career as an attorney, researcher, and scholar, Richardson has engaged in science communication and public advocacy.
Richardson earned a BA with Honors from the College of Social Studies at Wesleyan University and a JD from Northeastern University School of Law. She interned for Judge Charles R. Breyer of the US District Court for the Northern District of California, at the law firm of Cowan, DeBaets, Abrahams & Sheppard, and at the Legal Aid Society. [4]
Before joining the AI Now Institute, Richardson served as Legislative Counsel at the New York Civil Liberties Union [5] [6] and worked as a staff attorney for the Center for HIV Law and Policy. She previously worked at Facebook and HIP Investor in San Francisco.
After her senior fellowship for digital innovation and democracy at the German Marshall Fund, she became a senior policy adviser for data and democracy at the Office of Science and Technology Policy in July 2021. [7] [8] That same month, she joined the faculty of Northeastern University as an assistant professor of law and political science, with appointments in the School of Law and the Department of Political Science in the College of Social Sciences and Humanities. [9] [8] In 2022, she began work as an attorney advisor for the Federal Trade Commission. [10]
In March 2020, she joined the advisory board of the Electronic Privacy Information Center (EPIC). [11] In March 2021, she joined the board of Lacuna Technologies to provide guidance on equity and data privacy issues. [12]
In 2018, as the director of policy research for the AI Now Institute, Richardson spoke at length with The Christian Science Monitor about the impacts and challenges of artificial intelligence, including a lack of transparency with the public about how the technology is used and municipalities' lack of technical expertise in how the technology works or whether its results are biased or flawed. [13] Richardson discussed similar concerns about facial recognition technology with NBC News in 2018 [14] and CBS News in 2020. [15] In 2019, Richardson spoke with the Detroit Free Press about the increasing use of artificial intelligence systems by governments across the United States, [16] and extended her warnings to Canada when speaking with The Canadian Press. [17] In 2019, Richardson spoke with Reuters about ethics and artificial intelligence, and expressed concerns about the priorities of Amazon.com, Facebook, Microsoft, and others. [18]
In 2019, Richardson testified before the U.S. Senate Subcommittee on Communications, Technology, Innovation, and the Internet in a hearing titled "Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms." [19] [20] In advance, she told Politico, "Government intervention is urgently needed to ensure consumers - particularly women, gender minorities and communities of color - are protected from discrimination and bias at the hands of AI systems." [21]
In 2019, Karen Hao at MIT Technology Review profiled a study led by Richardson at the AI Now Institute that, according to Hao, "has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system." [22] In 2020, Richardson spoke with Hao about the use of predictive analytics in child welfare. [23] Richardson also spoke with Will Douglas Heaven at MIT Technology Review for articles published in 2020 and 2021 about algorithmic bias problems in predictive policing programs, including her view that "political will" is needed to address the issues. [24] [25]
In 2020, as a visiting scholar at Rutgers Law School and senior fellow in the Digital Innovation and Democracy Initiative at the German Marshall Fund, Richardson spoke with The New York Times about American police departments' resistance to sharing details about the technologies they use and about the limited regulation of those technologies, stating, "The only thing that can improve this black box of predictive policing is the proliferation of transparency laws." [26] [27]
In 2020, Richardson was featured in the documentary film "The Social Dilemma," directed by Jeff Orlowski and distributed by Netflix, which focuses on social media and algorithmic manipulation. [28] [29]
In 2021, she spoke with MIT Technology Review about algorithmic bias and issues related to predictive policing technology. Richardson explained that both arrest data and victim reports can skew results, noting that with regard to victim reports, "if you are in a community with a historically corrupt or notoriously racially biased police department, that will affect how and whether people report crime." [30]