Artificial intelligence in education (often abbreviated as AIEd) is a subfield of educational technology that studies how to use artificial intelligence, such as generative AI chatbots, to create learning environments. [1]
The field considers the ramifications and impacts of AI on existing educational infrastructure, as well as future possibilities and innovations. Considerations in the field include data-driven decision-making, AI ethics, data privacy and AI literacy. [2]
Use of artificial intelligence in education has raised concerns such as the environmental impact and existential risk of AI, as well as the potential for classroom misuse, challenges to learner agency and autonomy, and the perpetuation of misinformation and bias. [3]
Efforts to integrate AI into educational contexts have often followed technological advancement in the history of artificial intelligence.
In the 1960s, educators and researchers began developing computer-based instruction systems, such as PLATO, developed by the University of Illinois. [4]
In the 1970s and 1980s, intelligent tutoring systems (ITS) were being adapted for classroom instruction.
The International Artificial Intelligence in Education Society was founded in 1993. [5]
In the late 2010s and 2020s, large language models (LLMs) and other generative AI technologies have become a central focus of AIEd discussion. During this time, AI content detectors have been developed and deployed to detect, and in some cases penalize, unsanctioned AI use in educational contexts, although their accuracy is limited. Some schools banned LLMs, but many bans were later lifted. [6]
AIEd applies theory from education studies, machine learning, and related fields.
One posited model suggests the following three paradigms for AI in education, which follow roughly from least to most learner-centered and from requiring least to most technical complexity from the AI systems:
AI-Directed, Learner-as-recipient: AIEd systems present a pre-set curriculum based on statistical patterns and do not adjust to learners' feedback.
AI-Supported, Learner-as-collaborator: Systems incorporate responsiveness to learners' feedback through, for example, natural language processing, so that AI can support knowledge construction.
AI-Empowered, Learner-as-leader: This model positions AI as a supplement to human intelligence, wherein learners take agency and AI provides consistent, actionable feedback. [7]
Some scholars frame AI in education within the concept of the socio-technical imaginary, defined as collective visions and aspirations that shape societal transformations and governance through the interplay of technology and social norms. [8]
This framing positions AI in the history of “emerging technologies” that have transformed and will continue to transform education, such as computing, the internet, and social media. [9]
AI has been employed in educational settings through a wide range of tools.
Intelligent tutors or intelligent tutoring systems (ITS), such as the SCHOLAR system of the 1970s, are designed to simulate the interaction between a student and a teacher. [10]
ITS have also been considered for accessibility purposes like supporting students in larger classes who may not be able to get direct attention from human instructors. [10]
Personalized AI platforms can tailor instructional environments to students' needs, using algorithms to predict students' patterns and habits and to make recommendations that improve performance. [11] Many such platforms are app-based; for example, Photomath purports to help students solve and understand equations. [12]
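One widely studied approach to modeling a student's evolving mastery from their response history is Bayesian Knowledge Tracing (BKT), long used in intelligent tutoring research. The sketch below is a minimal illustration of a single BKT update; the parameter values are illustrative defaults, not taken from any cited platform:

```python
def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update the estimated
    probability that a student has mastered a skill, given one response.

    p_slip  -- chance a mastered student still answers incorrectly
    p_guess -- chance an unmastered student answers correctly
    p_learn -- chance the skill is learned during this step
    """
    if correct:
        # Bayes rule: P(mastered | correct answer)
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        # Bayes rule: P(mastered | incorrect answer)
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # Account for the chance the student learned the skill on this step.
    return posterior + (1 - posterior) * p_learn

# A run of mostly correct answers drives the mastery estimate upward.
p = 0.3  # prior probability of mastery
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
```

A platform can then recommend practice items for skills whose mastery estimate stays below a chosen threshold, which is one simple way "predicting students' patterns" translates into recommendations.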
Automation in student assessment and feedback could save time for educators. Such systems grade performance against combinations of rubrics; they require human oversight to prevent scoring bias, and they raise concerns about labor equity and job replacement. [11]
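Rubric-based automated scoring typically aggregates per-criterion scores under rubric weights. The toy example below (the criterion names and weights are invented for illustration, not drawn from any cited system) shows the basic aggregation step, which is also where human review of individual criterion scores can catch biased grading:

```python
# Hypothetical rubric: each criterion has a weight; scores run 0-4.
RUBRIC_WEIGHTS = {"thesis": 0.3, "evidence": 0.4, "mechanics": 0.3}

def weighted_rubric_score(criterion_scores, weights=RUBRIC_WEIGHTS, max_points=4):
    """Aggregate per-criterion scores (0..max_points) into a 0-100 grade."""
    if set(criterion_scores) != set(weights):
        raise ValueError("scores must cover exactly the rubric criteria")
    total = sum(weights[c] * criterion_scores[c] / max_points for c in weights)
    return round(100 * total, 1)

grade = weighted_rubric_score({"thesis": 3, "evidence": 4, "mechanics": 2})
```

In a real pipeline the per-criterion scores would come from a trained model; the oversight concern in the text applies to those model outputs, not to the arithmetic.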
Large language models, when designed or employed for educational contexts, represent a significant area of interest for the AIEd field. Many of the above systems operate using natural language processing and the transformer architecture of generative AI platforms such as OpenAI's ChatGPT and Grok. [13]
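As background on the transformer architecture mentioned above, its core operation is scaled dot-product attention, in which each token's query vector is compared against all key vectors to produce a weighted mix of value vectors. The sketch below uses random toy vectors purely for illustration, not any production model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each query attends over all keys,
    producing a weighted combination of the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of queries to keys
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))  # 3 tokens, 8-dimensional embeddings
K = rng.standard_normal((3, 8))
V = rng.standard_normal((3, 8))
out = scaled_dot_product_attention(Q, K, V)
```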
Educational uses of generative AI include assessment and feedback, instant machine translation, on-demand proofreading and copy editing, and intelligent tutoring or virtual assistants. [14]
The AI in education community has grown rapidly in the global north, driven by venture capital, big tech, and open educationalists. [14] In the 2020s, companies that create AI services have targeted students and educational institutions as consumers and enterprise partners. Similarly, pre-AI-boom educational companies are expanding their AI integrations or AI-powered services. [15] These commercial incentives for AIEd innovation may be related to a potential AI bubble. In the U.S., bipartisan support for AI development in K-12 education has been expressed, but specific implementations and best practices remain contentious. [16]
Many higher-education institutions in the 2020s have needed to develop guidelines and policies to account for AI. [17] Governmental and non-governmental organizations such as UNESCO and the U.S. Department of Education have published reports advocating specific AIEd approaches, and Article 4 of the European Union's AI Act addresses AI literacy. [18] [19] [20]
Some educators and school administrators have found AI to improve the efficiency of their work, while others are concerned about job replacement. [21] [22] Some teachers distrust or hold negative attitudes toward the use of AI in education, particularly due to student misuse or over-reliance. [23]
Some educators advocate for the integration of AI across the curriculum and the need to create curricula that develop AI literacy. [24] Research and reporting from 2024 onward suggest that the number of higher education instructors using LLMs for grading, research, and/or curricular design has increased. [25]
Reporting has indicated that students' use of AI in higher education has been increasing since 2022 and is relatively commonplace. The evidence suggests students believe their college education has been changed rather than "ruined" by AI and that they want instructors and themselves to have ongoing AI guidance. [26]
Some studies have found students receptive to features such as personalized feedback and self-paced learning, while still perceiving reliability, privacy, and fairness as concerns. [27] [28]
In September 2025, The Atlantic published an op-ed from a high school senior arguing that the normalization of AI cheating was eroding critical thinking, academic integrity, creativity, and the shared student experience. [29]
The advancement and adoption of AI in education have drawn criticism and raised ethical challenges.
Some critics believe that reliance on the technology could lead students to develop less creativity, critical thinking, or problem-solving ability. Reliance on generative artificial intelligence has been linked with reduced academic self-esteem and performance, and heightened learned helplessness. [30] Algorithmic errors and hallucinations are common flaws of AI agents, making them less trustworthy and reliable. [3] These limitations underscore concerns about academic integrity, skill development, and information accuracy in academic uses of AI. [31]
While AIEd technologies may improve an individual user's access to education by serving as an assistive technology, the proliferation of, and perceived need for, AI in education continues to raise concerns about equal access to technology. [32] For example, lower-income or rural areas may have more limited access to the computing hardware or paid software subscriptions needed to use AIEd platforms. [33] This might widen the digital divide or create further gaps in access to education. Some AIEd practitioners believe that global efforts should be made toward increasing accessibility and training educators to serve underprivileged areas. [3] [34]
AI agents might be trained on biased data sets and thus continue to perpetuate societal biases. Since LLMs were created to produce human-like text, algorithmic bias can easily and unintentionally be introduced and reproduced. [35] Some critics also argue that AI's data processing and monitoring reinforce neoliberal approaches to education rather than addressing inequalities. [36] [37]
Data privacy and intellectual property are further ethical concerns of AIEd. [38] [39] [40] Contemporary LLMs are trained on datasets that are often proprietary and may contain copyrighted or theoretically private materials (e.g. personal emails). Further, many LLMs are regularly trained on data from end users. [10] [41]