| Rashida Richardson | |
| --- | --- |
| Occupation(s) | Scholar, assistant professor, attorney |
| Academic background | |
| Education | Wesleyan University (BA) |
| Alma mater | Northeastern University School of Law (JD) |
| Academic work | |
| Discipline | Law and technology policy |
| Website | https://www.rashidarichardson.com/ |
Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law [1] and an attorney advisor to the Federal Trade Commission. She is also an assistant professor of law and political science at the Northeastern University School of Law and the Northeastern University Department of Political Science in the College of Social Sciences and Humanities.
Richardson was previously the director of policy research at the AI Now Institute, [2] where she designed, implemented, and coordinated research strategies and initiatives on law, policy, and civil rights. [3] During her career as an attorney, researcher, and scholar, Richardson has engaged in science communication and public advocacy.
Richardson earned a BA with Honors from the College of Social Studies at Wesleyan University, and a JD from Northeastern University School of Law. She was an intern with Judge Charles R. Breyer of the US District Court for the Northern District of California, the law firm of Cowan, DeBeats, Abraham & Sheppard, and the Legal Aid Society. [4]
Before joining the AI Now Institute, Richardson served as Legislative Counsel at the New York Civil Liberties Union [5] [6] and worked as a staff attorney for the Center for HIV Law and Policy. She previously worked at Facebook and HIP Investor in San Francisco.
After her senior fellowship for digital innovation and democracy at the German Marshall Fund, she became a senior policy adviser for data and democracy at the Office of Science and Technology Policy in July 2021. [7] [8] She also joined the faculty at Northeastern Law as an assistant professor of law and political science with the School of Law and the Department of Political Science in the College of Social Sciences and Humanities in July 2021. [9] [8] In 2022, she began work as an attorney advisor for the Federal Trade Commission. [10]
In March 2020, she joined the advisory board of the Electronic Privacy Information Center (EPIC). [11] In March 2021, she joined the board of Lacuna Technologies to provide guidance on equity and data privacy issues. [12]
In 2018, as the director of policy research for the AI Now Institute, Richardson spoke at length with The Christian Science Monitor about the impacts and challenges of artificial intelligence, including a lack of transparency with the public about how the technology is used and municipalities' lack of technical expertise about how the technology works or whether its results are biased or flawed. [13] Richardson discussed similar concerns about facial recognition technology with NBC News in 2018 [14] and CBS News in 2020. [15] In 2019, Richardson spoke with the Detroit Free Press about the increasing use of artificial intelligence systems by governments across the United States, [16] and extended her warnings to Canada when speaking with The Canadian Press. [17] In 2019, Richardson spoke with Reuters about ethics and artificial intelligence, and expressed concerns about the priorities of Amazon.com, Facebook, Microsoft, and others. [18]
In 2019, Richardson testified before the U.S. Senate Subcommittee on Communications, Technology, Innovation, and the Internet in a hearing titled "Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms." [19] [20] In advance, she told Politico, "Government intervention is urgently needed to ensure consumers, particularly women, gender minorities and communities of color, are protected from discrimination and bias at the hands of AI systems." [21]
In 2019, Karen Hao at MIT Technology Review profiled a study led by Richardson at the AI Now Institute that, according to Hao, "has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system." [22] In 2020, Richardson spoke with Hao about the use of predictive analytics applied to child welfare. [23] Richardson also spoke with Will Douglas Heaven at MIT Technology Review for articles published in 2020 and 2021 about algorithmic bias problems in predictive policing programs, including her perspective that "political will" is needed to address the issues. [24] [25]
In 2020, as a visiting scholar at Rutgers Law School and senior fellow in the Digital Innovation and Democracy Initiative at the German Marshall Fund, Richardson spoke with The New York Times about resistance from American police departments to sharing details about the technologies they use, and the limited regulation of the technology, stating, "The only thing that can improve this black box of predictive policing is the proliferation of transparency laws." [26] [27]
In 2020, Richardson was featured in the documentary film "The Social Dilemma," directed by Jeff Orlowski and distributed by Netflix, which focuses on social media and algorithmic manipulation. [28] [29]
In 2021, she spoke with MIT Technology Review about algorithmic bias and issues related to predictive policing technology. Richardson explained that both arrest data and victim reports can skew results, noting that, with regard to victim reports, "if you are in a community with a historically corrupt or notoriously racially biased police department, that will affect how and whether people report crime." [30]