AI literacy, or artificial intelligence literacy, is the ability to understand, use, monitor, and critically reflect on AI applications. [1] The term usually refers to teaching these skills and knowledge to the general public, rather than to people who are already adept in AI. [1]
AI literacy is necessary for school and college students. [1] [2] AI is employed in a variety of applications, from self-driving automobiles to virtual assistants, and users of these tools should be able to make informed judgments about them. AI literacy is also likely to affect students' future employment prospects. [1]
One of the common and early definitions for AI literacy was that it is "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace." [3]
Later definitions describe it as the ability to understand, use, monitor, and critically reflect on AI applications, [1] or as the ability to understand, use, evaluate, and ethically navigate AI. [2]
AI literacy is linked to other forms of literacy. AI literacy requires digital literacy, whereas scientific and computational literacy may inform it, and data literacy has a significant overlap with it. [3]
AI literacy encompasses multiple categories, including theoretical understanding of how artificial intelligence works, the usage of artificial intelligence technologies, the critical appraisal of artificial intelligence, and its ethics. [2]
Knowledge and understanding of AI refers to a basic understanding of what artificial intelligence is and how it works. This includes familiarity with machine learning algorithms and the limitations and biases present in AI systems. [2] Users who know and understand AI should be familiar with various technologies that use artificial intelligence, including cognitive systems, robotics and machine learning. [3]
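A core idea behind such understanding is that machine learning systems learn from labelled examples rather than from explicit rules. The sketch below is a hypothetical classroom-style illustration (not drawn from any cited curriculum): a minimal one-nearest-neighbour classifier, with invented fruit data, showing how a prediction can emerge from training examples alone.

```python
# Minimal 1-nearest-neighbour classifier: a toy illustration of
# supervised machine learning (predicting from labelled examples).
# The fruit measurements below are invented for demonstration.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to the query."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda item: distance(item[0], query))
    return label

# Each training example: ((weight_g, diameter_cm), label)
train = [
    ((150, 7), "apple"),
    ((170, 8), "apple"),
    ((120, 6), "lemon"),
    ((110, 5), "lemon"),
]

print(nearest_neighbour(train, (160, 7)))  # closest to an apple example
print(nearest_neighbour(train, (115, 5)))  # closest to a lemon example
```

The same example also makes the limitations concrete: the classifier is only as good as its training data, so skewed or unrepresentative examples produce biased predictions.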
Using and applying AI refers to the ability to use AI tools to solve problems and perform tasks such as programming and analyzing big data. [2]
Evaluation and creation refers to the ability to critically evaluate the quality and reliability of AI systems. It also refers to designing and building fair and ethical AI systems. [2] To evaluate correctly, users should also learn in which areas AI is strong, and in which areas it is weak. [3]
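One simple, teachable form of such evaluation is comparing a model's accuracy across subgroups to spot possible bias. The sketch below is a hypothetical illustration with invented predictions and labels, not a method prescribed by the cited sources.

```python
# Toy illustration of critically evaluating an AI system: compare a
# model's accuracy across two subgroups to look for disparities.
# All predictions and labels here are invented for demonstration.

def accuracy(pairs):
    """Fraction of (prediction, true_label) pairs that match."""
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

# (prediction, true_label) per example, split by a demographic group
group_a = [(1, 1), (0, 0), (1, 1), (1, 1)]   # all four correct
group_b = [(1, 0), (0, 0), (0, 1), (1, 1)]   # two of four correct

print(f"group A accuracy: {accuracy(group_a):.2f}")
print(f"group B accuracy: {accuracy(group_b):.2f}")
# A large accuracy gap between groups is a warning sign that the
# system may perform unfairly and deserves closer scrutiny.
```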
AI ethics refers to understanding the moral implications of AI and making informed decisions about the use of AI tools. [2] This area encompasses a range of ethical considerations.
Developing associated knowledge and skills, such as programming and statistics, also supports AI literacy. [2]
Several governments have recognized the need to promote AI literacy, including among adults. Such programs have been published in the United States, China, Germany and Finland. [1] Programs intended for the general public usually consist of short, easy-to-understand online study units. Programs intended for children are usually project-based. Programs for students at colleges and universities often address the specific professional needs of the student, depending on their field of study. [1] Beyond the education system, AI literacy can also be developed in the community, for example in museums. [6]
Schools use diverse pedagogies to promote AI literacy. [7]
Artificial intelligence curricula can improve students' understanding of topics such as machine learning, neural networks, and deep learning. [8]
The DAILy (Developing AI Literacy) program was developed by MIT and Boston University with the goal of increasing AI literacy among middle school students. The program is structured as a 30-hour workshop covering an introduction to artificial intelligence, logical systems (decision trees), supervised learning, neural networks, computational learning, deepfakes, and natural language generators. For each topic, students examine its moral and social implications, as well as its occupational implications. [9]
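To give a flavour of the "logical systems (decision trees)" topic, the sketch below shows the kind of hand-built decision tree often used to introduce rule-based classification; the animal-guessing rules are invented for illustration and are not taken from the DAILy materials themselves.

```python
# Toy hand-built decision tree of the kind used to introduce
# "logical systems" in introductory AI lessons. The rules are
# invented for illustration, not from the DAILy curriculum.

def classify_animal(has_feathers, can_swim):
    """Walk a tiny decision tree and return a guessed animal."""
    if has_feathers:
        if can_swim:
            return "duck"
        return "sparrow"
    if can_swim:
        return "fish"
    return "cat"

print(classify_animal(has_feathers=True, can_swim=True))    # duck
print(classify_animal(has_feathers=False, can_swim=False))  # cat
```

Exercises like this let students see every decision the "model" makes, before moving on to learned models whose rules are harder to inspect.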
Before the second decade of the 21st century, artificial intelligence was studied mainly in STEM courses. Later, projects emerged to expand artificial intelligence education, specifically to promote AI literacy. [2] Most courses start with one or more study units that deal with basic questions such as what artificial intelligence is, where it comes from, and what it can and cannot do. Most courses also cover machine learning and deep learning, and some deal with moral issues in artificial intelligence. [1]
At the University of Florida, a comprehensive effort was made to infuse artificial intelligence into the curriculum across all disciplines. The goal was to equip university students with the skills needed for the 21st-century job market. [2] As part of the project, over 100 new faculty members were recruited. Each student was expected to complete a foundational artificial intelligence course as well as a course on ethics, information, and technology. Each student also chose an additional course from a variety of academic areas, including medicine and business. Students who successfully completed all three courses earned an official certificate. [2]
The transition was accompanied by an increase in hands-on learning at the university. Courses were held in collaboration with industry, where students and industry professionals tried to solve real-world problems together, with the help of AI tools. [2]
To supervise the program, a team was formed to analyze existing and new courses and map the literacy areas covered in each. Each course was identified by the areas of literacy to which it related, allowing students to select courses that suited them and administrators to detect gaps or deficits in certain areas. [2]