AI Now Institute

Founded: November 15, 2017
Founders: Kate Crawford, Meredith Whittaker
Type: 501(c)(3) nonprofit organization
Coordinates: 40°44′06″N 73°59′41″W (40.7350, −73.9948)
Website: www.ainowinstitute.org

The AI Now Institute (AI Now) is an American research institute that studies the social implications of artificial intelligence and produces policy research addressing the concentration of power in the tech industry. [2] [3] [4] AI Now has partnered with organizations such as the Distributed AI Research Institute (DAIR), Data & Society, the Ada Lovelace Institute, the New York University Tandon School of Engineering, the New York University Center for Data Science, the Partnership on AI, and the ACLU. AI Now produces annual reports examining the social implications of artificial intelligence. In 2021–2022, AI Now's leadership served as senior advisors on AI to Chair Lina Khan at the Federal Trade Commission. [5] Its executive director is Amba Kak. [6] [7]

Founding and mission

AI Now grew out of a 2016 symposium spearheaded by the Obama White House Office of Science and Technology Policy. The event was led by Meredith Whittaker, the founder of Google's Open Research Group, and Kate Crawford, a principal researcher at Microsoft Research. [8] It focused on the near-term implications of AI in social domains: inequality, labor, ethics, and healthcare. [9]

In November 2017, AI Now held a second symposium on AI and social issues, and publicly launched the AI Now Institute in partnership with New York University. [8] It is claimed to be the first university research institute focused on the social implications of AI, and the first AI institute founded and led by women. [1] It is now a fully independent institute.

In an interview with NPR, Crawford stated that the motivation for founding AI Now was that the application of AI to social domains such as health care, education, and criminal justice was being treated as a purely technical problem. The goal of AI Now's research is to treat these as social problems first, and to bring in domain experts in areas like sociology, law, and history to study the implications of AI. [10]

Research

AI Now publishes an annual report on the state of AI and its integration into society. Its 2017 report stated that "current framings of AI ethics are failing" and provided ten strategic recommendations for the field, including pre-release trials of AI systems and increased research into bias and diversity in the field. The report was noted for calling for an end to "black box" systems in core social domains, such as those responsible for criminal justice, healthcare, welfare, and education. [11] [12] [13]

In April 2018, AI Now released a framework for algorithmic impact assessments (AIAs), a way for governments to assess the use of AI in public agencies. According to AI Now, an AIA would be similar to an environmental impact assessment in that it would require public disclosure and give external experts access to evaluate the effects of an AI system and any unintended consequences. This would allow systems to be vetted for issues like biased outcomes or skewed training data, which researchers have already identified in algorithmic systems deployed across the country. [14] [15] [16]
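For illustration only: the kind of vetting for biased outcomes described above can start with something as simple as comparing a system's favorable-decision rates across groups. The Python sketch below is a minimal example of such a check; the group labels, decision log, and four-fifths threshold are assumptions made for the example, not part of AI Now's framework.

    # Illustrative sketch of a disparity check an algorithmic impact
    # assessment might call for. The data and the 0.8 ("four-fifths")
    # threshold are hypothetical, not AI Now's method.
    from collections import defaultdict

    def favorable_rates(records):
        """Rate of favorable decisions (1 vs. 0) per group."""
        totals = defaultdict(int)
        favorable = defaultdict(int)
        for group, decision in records:
            totals[group] += 1
            favorable[group] += decision
        return {g: favorable[g] / totals[g] for g in totals}

    def disparity_flags(rates, threshold=0.8):
        """Flag groups whose favorable rate falls below `threshold`
        times the best-treated group's rate."""
        best = max(rates.values())
        return {g: (r / best) < threshold for g, r in rates.items()}

    # Hypothetical audit log of (group, decision) pairs.
    log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    rates = favorable_rates(log)
    print(rates)                   # {'A': 0.667, 'B': 0.333} (approx.)
    print(disparity_flags(rates))  # {'A': False, 'B': True}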

Its 2023 Report [17] argued that meaningful reform of the tech sector must focus on addressing concentrated power in the tech industry. [18]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Weak artificial intelligence is artificial intelligence that implements a limited part of the mind or, as narrow AI, is focused on one narrow task.

Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

Machine ethics is the part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. It should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with technology's grander social effects.

Artificial intelligence in healthcare

Artificial intelligence in healthcare is the application of artificial intelligence (AI) to emulate human cognition in the analysis, presentation, and understanding of complex medical and health care data. It can also augment and exceed human capabilities by providing faster or new ways to diagnose, treat, or prevent disease. Using AI in healthcare has the potential to improve the prediction, diagnosis, and treatment of disease. Through machine learning algorithms and deep learning, AI can analyze large sets of clinical data and electronic health records and can help to diagnose diseases more quickly and precisely.

Algorithmic bias

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Joy Buolamwini

Joy Adowaa Buolamwini is a Canadian-American computer scientist and digital activist formerly based at the MIT Media Lab. She founded the Algorithmic Justice League (AJL), an organization that works to challenge bias in decision-making software, using art, advocacy, and research to highlight the social implications and harms of artificial intelligence (AI).

Meredith Whittaker

Meredith Whittaker is the president of the Signal Foundation and serves on its board of directors. She was formerly the Minderoo Research Professor at New York University (NYU) and the co-founder and faculty director of the AI Now Institute. She also served as a senior advisor on AI to Chair Lina Khan at the Federal Trade Commission. Whittaker was employed at Google for 13 years, where she founded Google's Open Research group and co-founded the M-Lab. In 2018, she was a core organizer of the Google Walkout, and she resigned from the company in July 2019.

Rumman Chowdhury

Rumman Chowdhury is a Bangladeshi American data scientist, a business founder, and former responsible artificial intelligence lead at Accenture. She was born in Rockland County, New York. She is recognized for her contributions to the field of data science.

Timnit Gebru

Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias, and data mining. She is a co-founder of Black in AI, an advocacy group that has pushed for greater Black representation in AI development and research. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

ACM Conference on Fairness, Accountability, and Transparency is a peer-reviewed academic conference series about ethics and computing systems. Sponsored by the Association for Computing Machinery, this conference focuses on issues such as algorithmic transparency, fairness in machine learning, bias, and ethics from a multi-disciplinary perspective. The conference community includes computer scientists, statisticians, social scientists, scholars of law, and others.

Amazon Rekognition is a cloud-based software as a service (SaaS) computer vision platform that was launched in 2016. It has been sold to, and used by, a number of United States government agencies, including U.S. Immigration and Customs Enforcement (ICE) and Orlando, Florida police, as well as private entities.

Sandra Wachter

Sandra Wachter is a professor and senior researcher in data ethics, artificial intelligence, robotics, algorithms and regulation at the Oxford Internet Institute. She is a former Fellow of The Alan Turing Institute.

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules, and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage its associated risks, but it remains challenging. Another emerging topic is the regulation of blockchain algorithms, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting into the realm of AI algorithms as technology progresses.

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

Algorithmic Justice League

The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to raise societal awareness of the use of artificial intelligence (AI) and the harms and biases it can pose. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and to promote industry and government action to mitigate the creation and deployment of biased AI systems. In 2021, Fast Company named the AJL one of the 10 most innovative AI companies in the world.

Rashida Richardson

Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law and an attorney advisor to the Federal Trade Commission. She is also an assistant professor of law and political science at the Northeastern University School of Law and the Northeastern University Department of Political Science in the College of Social Sciences and Humanities.

Black in AI

Black in AI, formerly called the Black in AI Workshop, is a technology research organization and affinity group founded by computer scientists Timnit Gebru and Rediet Abebe in 2017. It started as a conference workshop, later pivoting into an organization. Black in AI increases the presence and inclusion of Black people in the field of artificial intelligence (AI) by creating space for sharing ideas, fostering collaborations, and providing mentorship and advocacy.

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.

References

  1. "New Artificial Intelligence Research Institute Launches". NYU Tandon News. 2017-11-25. Retrieved 2018-07-07.
  2. "About Us". AI Now Institute. Retrieved 2023-05-12.
  3. "The field of AI research is about to get way bigger than code". Quartz. 2017-11-15. Retrieved 2018-07-09.
  4. "Biased AI Is A Threat To Civil Liberties. The ACLU Has A Plan To Fix It". Fast Company. 2017-07-25. Retrieved 2018-07-07.
  5. "FTC Chair Lina M. Khan Announces New Appointments in Agency Leadership Positions". Federal Trade Commission. 2021-11-19. Retrieved 2023-05-12.
  6. "Amazon, Google, Meta, Microsoft and other tech firms agree to AI safeguards set by the White House". AP News. 2023-07-21. Retrieved 2023-07-21.
  7. "People". AI Now Institute. Retrieved 2023-07-21.
  8. Ahmed, Salmana. "In Pursuit of Fair and Accountable AI". Omidyar. Retrieved 2018-07-19.
  9. "2016 Symposium". ainowinstitute.org. Archived from the original on 2018-07-20. Retrieved 2018-07-09.
  10. "Studying Artificial Intelligence At New York University". NPR. Retrieved 2018-07-18.
  11. "AI Now 2017 Report". AI Now. 2017-10-18. Retrieved 2018-07-19.
  12. Simonite, Tom (2017-10-18). "AI Experts Want to End 'Black Box' Algorithms in Government". Wired. Retrieved 2018-07-19.
  13. Rosenberg, Scott (2017-11-01). "Why AI is Still Waiting For Its Ethics Transplant". Wired. Retrieved 2018-07-19.
  14. Gershgorn, Dave (2018-04-09). "AI experts want government algorithms to be studied like environmental hazards". Quartz. Retrieved 2018-07-19.
  15. "AI Now AIA Report" (PDF). AI Now. Archived from the original (PDF) on 2020-06-14. Retrieved 2018-07-19.
  16. Reisman, Dillon (2018-04-16). "Algorithms Are Making Government Decisions. The Public Needs to Have a Say". Medium. ACLU. Retrieved 2018-07-19.
  17. "2023 Landscape". AI Now Institute. Retrieved 2023-05-16.
  18. Samuel, Sigal (2023-04-12). "Finally, a realistic roadmap for getting AI companies in check". Vox. Retrieved 2023-05-16.
