ACM Conference on Fairness, Accountability, and Transparency


ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT, formerly known as ACM FAT*) is a peer-reviewed academic conference series about ethics and computing systems.[1] Sponsored by the Association for Computing Machinery, this conference focuses on issues such as algorithmic transparency, fairness in machine learning, bias, and ethics from a multi-disciplinary perspective.[2] The conference community includes computer scientists, statisticians, social scientists, scholars of law, and others.[3]


The conference is sponsored by Big Tech companies such as Facebook, Twitter, and Google, and large foundations such as the Rockefeller Foundation, Ford Foundation, MacArthur Foundation, and Luminate.[4] Sponsors contribute to a general fund (no "earmarked" contributions are allowed) and have no say in the selection, substance, or structure of the conference.[5]

FATE Overview

The acronym FATE refers to Fairness, Accountability, Transparency, and Ethics in sociotechnical systems. FATE has attracted rising interest as the societal and ethical implications of complex systems such as artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) grow. The topic poses an interdisciplinary challenge: bridging the transparency gap between technical and non-technical academics and policymakers to ensure the safety and equity of algorithmic systems as they advance at a rapid rate.[6] Techniques developed in response include explainable artificial intelligence (XAI).

Recent adoptions of AI in both the public and private sectors include the predictive recidivism algorithm COMPAS, deployed in US courts, and Amazon's AI-powered recruitment tool, later shown to favor male over female applicants. Further, AI-based decision support (ADS) powered by machine learning techniques is increasingly being integrated across fields including criminal justice, education, and benefits provision.[7] FATE work examines such algorithms more closely to raise awareness and work toward solutions. Companies such as Microsoft have created research teams specifically devoted to the topic.[8]

The 2024 FAccT conference solicited articles specifically within the following areas: Audits and Evaluation Practices, System Development and Deployment, Experiences and Interactions, Critical Studies, Law and Policy, and Philosophy.

For further reading on areas relevant to FATE, see:

Algorithmic bias

Artificial intelligence art

Artificial intelligence marketing

Ethics of artificial intelligence

List of conferences

Past and future FAccT conferences include:

Year | Location | Date | Keynote/Invited speakers
2024 | Rio de Janeiro, Brazil | June 3–6 | TBD
2023 | Chicago, Illinois and online | June 12–15 | Payal Arora, Charlotte Burrows, Alex Hanna, Moritz Hardt, Alondra Nelson, Ziad Obermeyer
2022 | Seoul, South Korea and online | June 21–24 | Cha Meeyoung, Pascale Fung, Mariano-Florentino Cuéllar, André Brock
2021 | Online | March 3–10 | Yeshimabeit Milner, Katrina Ligett, Julia Angwin
2020 | Barcelona, Spain | January 27–30 | Ayanna Howard, Yochai Benkler, Nani Jansen Reventlow
2019 | Atlanta, Georgia | January 29–31 | Jon Kleinberg, Deirdre Mulligan
2018 | New York, New York | February 23–24 | Latanya Sweeney, Deborah Hellman

Related Research Articles

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or other animals. It is a field of study in computer science that develops and studies intelligent machines. Such machines may be called AIs.

The Association for Computing Machinery (ACM) is a US-based international learned society for computing. It was founded in 1947 and is the world's largest scientific and educational computing society. The ACM is a non-profit professional membership group, reporting nearly 110,000 student and professional members as of 2022. Its headquarters are in New York City.

The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.

Eric Horvitz: American computer scientist and Technical Fellow at Microsoft

Eric Joel Horvitz is an American computer scientist, and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.

Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificial intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.

Partnership on AI: Nonprofit coalition

Partnership on Artificial Intelligence to Benefit People and Society, otherwise known as Partnership on AI (PAI), is a nonprofit coalition committed to the responsible use of artificial intelligence. Founded in September 2016, PAI brought together members from over 90 companies and non-profits to explore best-practice recommendations for the tech community. Since its founding, Partnership on AI's principles and mission have evolved considerably, and its relevance has continued to grow.

Explainable AI (XAI), often overlapping with Interpretable AI, or Explainable Machine Learning (XML), either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this. The main focus is usually on the reasoning behind the decisions or predictions made by the AI which are made more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
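One widely used family of model-agnostic XAI methods is permutation importance: probe a black-box predictor by shuffling one input feature at a time and measuring how much predictive accuracy drops; larger drops suggest the feature mattered more to the model's decisions. The sketch below is purely illustrative (the toy `predict` model and data are hypothetical, not from any system named in this article):

```python
# Illustrative sketch of permutation importance, a simple model-agnostic
# explanation technique. The "black-box" model and data here are made up.
import random

def predict(row):
    # Toy black-box model: outputs 1 when feature 0 exceeds a threshold.
    # Feature 1 is ignored, so permuting it should not hurt accuracy.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    shuffled_col = [r[feature] for r in rows]
    rng.shuffle(shuffled_col)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, shuffled_col):
        r[feature] = v
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [(0.9, 3), (0.8, 1), (0.2, 4), (0.1, 2)]
labels = [1, 1, 0, 0]
print(permutation_importance(rows, labels, feature=0))
print(permutation_importance(rows, labels, feature=1))  # 0.0: ignored feature
```

Because the toy model never reads feature 1, its importance is exactly zero, while feature 0's importance depends on the random shuffle; real XAI toolkits average over many shuffles for stability.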

Algorithmic bias: Technological phenomenon with social implications

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Fairness in machine learning refers to the various attempts at correcting algorithmic bias in automated decision processes based on machine learning models. Decisions made by computers after a machine-learning process may be considered unfair if they were based on variables considered sensitive. Examples of these kinds of variables include gender, ethnicity, sexual orientation, disability, and more. As is the case with many ethical concepts, definitions of fairness and bias remain controversial. In general, fairness and bias are considered relevant when the decision process impacts people's lives. In machine learning, the problem of algorithmic bias is well known and well studied. Outcomes may be skewed by a range of factors and thus might be considered unfair with respect to certain groups or individuals. An example would be the way social media sites deliver personalized news to consumers.
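One common way to quantify group-level unfairness of this kind is demographic parity: comparing the rate of positive outcomes across groups defined by a sensitive variable. The minimal sketch below uses made-up decision data (an assumption for illustration, not from the article):

```python
# Illustrative sketch: demographic parity difference, a simple group-fairness
# metric for binary decisions. The decision data below is hypothetical.

def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between the
    two groups present in `groups` (1 = positive decision)."""
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical loan decisions (1 = approved) for applicants in groups "A"/"B":
# group A is approved 75% of the time, group B only 25% of the time.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

A value of 0 would indicate both groups receive positive decisions at the same rate; which of the many competing fairness definitions is appropriate depends on context, which is precisely the kind of question FAccT research debates.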

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but it is challenging. Another emerging topic is the regulation of blockchain algorithms, often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting into the realm of AI algorithms due to technological progress.

Algorithmic Justice League: Digital advocacy non-profit organization

The Algorithmic Justice League (AJL) is a digital advocacy non-profit organization based in Cambridge, Massachusetts. Founded in 2016 by computer scientist Joy Buolamwini, the AJL uses research, artwork, and policy advocacy to increase societal awareness regarding the use of artificial intelligence (AI) in society and the harms and biases that AI can pose to society. The AJL has engaged in a variety of open online seminars, media appearances, and tech advocacy initiatives to communicate information about bias in AI systems and promote industry and government action to mitigate against the creation and deployment of biased AI systems. In 2021, Fast Company named AJL as one of the 10 most innovative AI companies in the world.

Hanna Wallach: Computational social scientist

Hanna Wallach is a computational social scientist and partner research manager at Microsoft Research. Her work makes use of machine learning models to study the dynamics of social processes. Her current research focuses on issues of fairness, accountability, transparency, and ethics as they relate to AI and machine learning.

Black in AI: Technology research organization

Black in AI, formerly called the Black in AI Workshop, is a technology research organization and affinity group, founded by computer scientists Timnit Gebru and Rediet Abebe in 2017. It started as a conference workshop, later pivoting into an organization. Black in AI increases the presence and inclusion of Black people in the field of artificial intelligence (AI) by creating space for sharing ideas, fostering collaborations, mentorship, and advocacy.

Margaret Mitchell (scientist): U.S. computer scientist

Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is most well known for her work on automatically removing undesired biases concerning demographic groups from machine learning models, as well as more transparent reporting of their intended use.

Deborah Raji: Nigerian-Canadian computer scientist and activist

Inioluwa Deborah Raji is a Nigerian-Canadian computer scientist and activist who works on algorithmic bias, AI accountability, and algorithmic auditing. Raji has previously worked with Joy Buolamwini, Timnit Gebru, and the Algorithmic Justice League on researching gender and racial bias in facial recognition technology. She has also worked with Google’s Ethical AI team and been a research fellow at the Partnership on AI and AI Now Institute at New York University working on how to operationalize ethical considerations in machine learning engineering practice. A current Mozilla fellow, she has been recognized by MIT Technology Review and Forbes as one of the world's top young innovators.

Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at the Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named as one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.

Jennifer Wortman Vaughan: American computer scientist

Jennifer (Jenn) Wortman Vaughan is an American computer scientist and Senior Principal Researcher at Microsoft Research, focusing mainly on building responsible artificial intelligence (AI) systems as part of Microsoft's Fairness, Accountability, Transparency, and Ethics in AI (FATE) initiative. She co-chairs Microsoft's Aether group on transparency, which works to operationalize responsible AI across Microsoft by making recommendations on responsible-AI issues, technologies, processes, and best practices. Active in the research community, she served as workshops chair and program co-chair of the Conference on Neural Information Processing Systems (NeurIPS) in 2019 and 2021, respectively, and currently serves as a steering committee member of the ACM Conference on Fairness, Accountability and Transparency. She is also a senior advisor to Women in Machine Learning (WiML), an initiative she co-founded in 2006 to enhance the experience of women in machine learning.

The AAAI/ACM Conference on AI, Ethics, and Society (AIES) is a peer-reviewed academic conference series focused on societal and ethical aspects of artificial intelligence. The conference is jointly organized by the Association for Computing Machinery, namely the Special Interest Group on Artificial Intelligence (SIGAI), and the Association for the Advancement of Artificial Intelligence. The conference community includes lawyers, practitioners, and academics in computer science, philosophy, public policy, economics, human-computer interaction, and more.

Resisting AI: An Anti-fascist Approach to Artificial Intelligence is a book on artificial intelligence (AI) by Dan McQuillan, published in 2022 by Bristol University Press.

References

  1. "Association for Computing Machinery Conferences". Retrieved 2019-03-27.
  2. Laufer, Benjamin; Jain, Sameer; Cooper, A. Feder; Kleinberg, Jon; Heidari, Hoda (2022-06-20). "Four Years of FAccT: A Reflexive, Mixed-Methods Analysis of Research Contributions, Shortcomings, and Future Prospects". Proceedings of the 2022 Conference on Fairness, Accountability, and Transparency. FAccT '22. Seoul, Korea: Association for Computing Machinery. pp. 401–426. doi:10.1145/3531146.3533107. ISBN 978-1-4503-9352-2. S2CID 249642305.
  3. "2019 ACM FAT conference". www.acm.org. Retrieved 2019-02-01.
  4. "ACM FAccT 2020 Sponsors". Retrieved 2019-02-19.
  5. "ACM FAccT Sponsorship Policy". Retrieved 2019-02-19.
  6. Memarian, Bahar; Doleck, Tenzin (2023-01-01). "Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education: A systematic review". Computers and Education: Artificial Intelligence. 5: 100152. doi:10.1016/j.caeai.2023.100152. ISSN 2666-920X.
  7. Levy, Karen; Chasalow, Kyla E.; Riley, Sarah (2021-10-13). "Algorithms and Decision-Making in the Public Sector". Annual Review of Law and Social Science. 17 (1): 309–334. arXiv:2106.03673. doi:10.1146/annurev-lawsocsci-041221-023808. ISSN 1550-3585.
  8. "FATE: Fairness, Accountability, Transparency & Ethics in AI". Microsoft Research. Retrieved 2023-11-19.