Since the public release of ChatGPT by OpenAI in November 2022, the integration of chatbots in education has sparked considerable debate and exploration. Educators' opinions vary widely; while some are skeptical about the benefits of large language models, many see them as valuable tools. [1]
ChatGPT serves multiple educational purposes, including providing topic overviews, generating ideas, and assisting in drafting. [2] A 2023 study highlighted its greater acceptance among professors compared to students. [1] Moreover, chatbots show promise in personalized tutoring. [3]
Efforts to ban chatbots like ChatGPT in schools focus on preventing cheating, but enforcement faces challenges due to the inaccuracy of AI detection and the widespread accessibility of chatbot technology. Bans could also deprive students of opportunities to learn to use the technology effectively and could strain teacher–student relationships. [2]
ChatGPT is a virtual assistant developed by OpenAI and launched in November 2022. It uses advanced artificial intelligence (AI) models called generative pre-trained transformers (GPT), such as GPT-4o, to generate text. GPT models are large language models that are pre-trained to predict the next token in large amounts of text (a token usually corresponds to a word, subword or punctuation mark). This pre-training enables them to generate human-like text by repeatedly predicting the next likely token. After pre-training, these GPT models were fine-tuned to adopt an assistant role, improve response accuracy and reduce harmful content, using supervised learning and reinforcement learning from human feedback (RLHF). [4]
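The generation loop described above can be illustrated with a toy model. This is a minimal sketch, not OpenAI's implementation: the vocabulary, the probabilities, and the `TOY_MODEL` table are invented for illustration, and a real GPT conditions on the entire context window with a neural network rather than a lookup table.

```python
import random

# Toy "language model": maps the previous token to a probability
# distribution over possible next tokens. The generation loop is the
# same as in a real GPT: predict, append, repeat.
TOY_MODEL = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"student": 0.5, "teacher": 0.5},
    "a": {"student": 0.7, "teacher": 0.3},
    "student": {"writes": 1.0},
    "teacher": {"reads": 1.0},
    "writes": {"<end>": 1.0},
    "reads": {"<end>": 1.0},
}

def generate(model, max_tokens=10, seed=0):
    """Generate text by repeatedly sampling the next likely token."""
    rng = random.Random(seed)
    token, output = "<start>", []
    for _ in range(max_tokens):
        dist = model[token]
        # Sample the next token in proportion to its probability.
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)
```

Fine-tuning (supervised learning and RLHF) does not change this loop; it adjusts the probabilities the model assigns so that the sampled continuations behave like a helpful assistant.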
In a January 2023 assessment, ChatGPT demonstrated performance comparable to graduate-level standards at institutions such as the University of Minnesota and the Wharton School. [5] A blind study at the University of Wollongong Law School compared GPT-3.5 and GPT-4 against 225 students in an end-of-semester criminal law exam; the students' average score was considerably higher than that of the GenAI models. [6]
Its application in fields like computer programming and numerical methods has been explored and validated. [7] A study by assessment psychologist Eka Roivainen suggested that ChatGPT's verbal IQ approximates that of the top 0.1% of test-takers. [8] One notable drawback of ChatGPT is its occasional inaccuracies in academic assignments, particularly in technical subjects such as mathematics, as noted by educators like Ethan Mollick of the Wharton School. [9]
A survey conducted between March and April 2023 revealed 58% of American students acknowledged using ChatGPT, with 38% admitting use without teacher consent, highlighting challenges in enforcing bans. [10] [11]
In response to educational demand, OpenAI launched "ChatGPT Edu" in May 2024, aiming to offer universities affordable access to this technology. [12]
AI tools like ChatGPT have shown promise in enhancing literacy skills among adolescents and adults. They provide instant feedback on writing, aid in idea generation, and help improve grammar and vocabulary. [13] These tools can also support students with disabilities, such as dyslexia, by assisting with spelling and grammar. Additionally, AI can facilitate higher-order thinking by automating lower-order tasks, allowing students to focus on complex conceptual work. [14] AI tools like GitHub Copilot, similar to ChatGPT, have significantly impacted programming by enhancing productivity and influencing developers' perceptions of AI in technical fields. [15]
The education technology company Chegg, a website dedicated to helping students with assignments using a database of collected worksheets and assignments, became one of the most prominent business victims of ChatGPT: its stock price was nearly cut in half after a quarterly earnings call in May 2023. [16] [17]
A 2023 study critically examined the role of ChatGPT in education. The authors had ChatGPT generate a SWOT analysis of itself in an educational setting, identifying several key issues and potential uses. They highlighted that while ChatGPT can generate human-like responses and assist with personalized learning, it also has limitations. For instance, the AI may produce inaccurate information, known as "hallucinations", which can be difficult for students to distinguish from accurate responses. This underscores the importance of teaching digital literacy and critical thinking skills. The study also noted that ChatGPT's responses often lacked depth in understanding broader educational goals and processes, focusing primarily on providing immediate answers to queries. The potential biases in ChatGPT's training data and the ethical implications of its use were also discussed, particularly concerning the control exerted by its developers over the content it generates. [18]
ChatGPT's capability to generate assignments has prompted concerns about academic integrity, particularly in essay writing, with critics foreseeing potential misuse and devaluation of traditional writing skills. [19] [20] [21]
In one instance, reported by Rolling Stone, a professor at Texas A&M University misused ChatGPT by asking it to verify whether his students' assignments had been written with the large language model. ChatGPT claimed that every student had used it, and the professor promptly gave all of his students a failing grade. Rolling Stone noted, however, that ChatGPT cannot reliably determine whether it was used to write a given text, and a post to a Reddit community dedicated to ChatGPT received widespread attention, with many attacking the professor for his lack of familiarity with the chatbot. [22] [23] More recently, a 2024 study found that increased academic workload and time constraints lead students to use ChatGPT more frequently; this reliance is associated with negative outcomes such as procrastination, memory loss, and decreased academic performance. [24] [25]
The impact of ChatGPT on education, especially in English studies, is a topic of significant discussion. Daniel Herman's perspective reflects concerns about the potential devaluation of writing skills if AI can generate text as easily as humans. [26] Similarly, Naomi S. Baron wrote that "If AI text generation does our writing for us, we diminish opportunities to think out problems for ourselves". She also mentioned the risk of a slippery slope, where students start letting generative AI control the content and style of their writing, leading to a diminished sense of ownership. [27] Educators might have to rethink the practicality and value of writing in higher education. For example, business schools could reframe writing from a career-prep approach to a liberal-arts approach, where writing is "the foundation of a rich and meaningful life". [28] According to Katy Major, faculty reactions to AI at Ashland University varied by field. Fields that viewed written words as carrying "significant view and value" were more likely to be concerned about AI in education than fields where writing is "treated pragmatically, as a means to some other end". [29] Others highlight the need for educators to adapt their methods to focus more on critical thinking and reasoning, [30] as AI tools like ChatGPT could potentially be used for plagiarism or produce biased content.
Although there is growing concern over the rise of AI and its potential use for plagiarism and production of biased content, some have argued that ChatGPT itself is an effective tool for enhancing students' critical thinking and reasoning. One example in practice involves a student who was assigned to analyze the work of singer-songwriter Burna Boy. ChatGPT failed to offer an in-depth analysis of a political song by Burna Boy; it could only assist with translating Nigerian Pidgin and slang, and list discussion forums where Nigerian fans interpreted the meanings of the song. This limitation, however, offered an opportunity for the student to engage with others and to think more critically and deeply about the content. [31] Marrel De Matas of the University of Massachusetts Amherst proposes an "inquiry-based model" in which the writer problem-solves and improves their critical thinking and writing skills while engaging with ChatGPT. De Matas also suggests a "user-centered design of the writing process" which prioritizes users over the technology. This "user-centered design" involves two core components. The first is an emphasis on reflective writing, encouraging students to view writing as a process rather than a product, as ChatGPT's output can suggest. The second is to have students evaluate chatbot responses on their ability to provide information on a particular subject and on the limitations of where the chatbot draws its information from. [32]
Some have emphasized a change in pedagogy focused on the quality of writing and reading rather than quantity. If educators slow down by assigning one paper or one book per semester, students could put more "thought, research, and critical thinking" into their work through multiple drafts and revisions. [33]
There is growing concern not only about ChatGPT's impact on students, but also on educators themselves. Writing centers may be particularly affected by the growing use of ChatGPT in education. Although there are local policies on AI usage, university-wide policies are not yet well established, and this places a burden on writing center tutors, who may have to act as "police clients for unethical AI-generated writing". In a 2024 survey of writing centers, participants stated that they believed they would need to be able to advise clients on the "ethical issues of using AI tools" and "recognize AI-generated text and discuss academic integrity". [34] Some have proposed remedies, including placing digital watermarks on AI-generated works to distinguish them from original student work. [28]
The issue is complex, with various viewpoints and implications for how education may evolve. The claim that ChatGPT will destroy education is disputed almost as widely as it is believed. Kevin Brown of Christianity Today wrote that the human brain remains consistently better at creating material than ChatGPT, and argued that the spirit of education will live on and that ChatGPT could only revolutionize teaching methods. [20] The New York Times' Kevin Roose reported that a prohibition on ChatGPT could never be enforced effectively, as it would be impossible to police; students can access the internet outside of school, rendering a ban obsolete. Roose suggested instead that teachers allow it openly for some assignments, much as calculators are, and that teaching with the AI is the best approach. [35] The oral exam has also been cited as an instruction method that could circumvent written assignments and test students' knowledge more effectively on a one-to-one basis. [36]
This technological evolution prompts educators to rethink pedagogical approaches, emphasizing critical thinking and ethical AI usage. Proponents argue for integrating ChatGPT into educational frameworks responsibly, leveraging its potential to enhance learning outcomes through personalized approaches. [37] [38]
Some educational institutions have chosen to ban access to ChatGPT. The reasons behind these decisions likely vary, but concerns about potential misuse, such as plagiarism or over-reliance on AI for writing tasks, could be driving factors. One of the earliest districts to ban the tool was the Los Angeles Unified School District, which blocked access less than a month after the tool's official release. [39] The New York City Department of Education reportedly blocked access to ChatGPT in December 2022 [40] and officially announced a ban around January 4, 2023. [41] [42]
In February 2023, the University of Hong Kong sent a campus-wide email to instructors and students stating that the use of ChatGPT or other AI tools was prohibited in all classes, assignments and assessments at the university. Any violations would be treated as plagiarism unless the student obtained prior written consent from the course instructor. [43] [44]
For the 2023–24 school year, some schools in the United States announced repeals of their ChatGPT bans. New York City repealed its ban in May 2023, replacing it with a statement encouraging students to learn how to use generative AI, and in rural Washington, Walla Walla Public Schools announced it would repeal its ban on ChatGPT in student assignments. [39] [45] [46] A professor at the University of California, Los Angeles, has permitted the incorporation of ChatGPT into his students' writing process instead of banning the new technology. [47] He argued that educators should teach students to use ChatGPT ethically and productively, and that banning students from using it is neither feasible nor practical. [47] He also discussed the benefits of learning to write well with AI assistance and stressed the importance of being a responsible user of AI. [47]
Some professors have created separate college courses designed specifically around working with generative AI. For example, Arizona State University professor Andrew Maynard and Vanderbilt professor Jules White each developed a new course on prompt engineering for generative AI chatbots. [48] Other instructors, such as Ethan Mollick at Wharton, facing inevitable use by students regardless of prohibition, have not only accepted generative AI but required all students to use ChatGPT in their assignments. Mollick reported to NPR that using ChatGPT generally improved his students' work, with the AI assisting further in the generation of ideas. [9] Some professors have focused on creating learning material and have highlighted the opportunity to use ChatGPT to personalize assignments to a student's background. [49] In 2024, the Los Angeles Unified School District introduced Ed, a chatbot developed in response to ChatGPT's use in classrooms. [50] [51] AI in education has been described as having vast potential, including personalized learning experiences and adaptive teaching methods. [52]
Some companies have responded to the influx of ChatGPT and generative AI among students by developing detection software that flags essays likely written by AI. Among the first to develop such a solution was Turnitin, which built a tool to detect AI-based academic dishonesty. A corporate blog post stated that the company's database of numerous student essays was used to train its own detection system. When The Washington Post tested the tool, however, it found that Turnitin's detector flagged an innocent student as having used ChatGPT to generate the conclusion of her essay. [53] The company itself has reported that its detector is not always accurate. [54] [55]
Numerous tools, such as GPTZero, were created to detect AI-generated text, and other companies, including OpenAI itself, have released tools for detecting AI-written work. However, research reports have found that such detection software often fails to detect AI-generated content and is easy to fool. [56] [57] [58] OpenAI's official tool, Classifier, launched in January 2023, was taken down in August 2023 due to low usage and accuracy issues. [59] [60] [61]
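Many such detectors score statistical properties of text, such as perplexity (roughly, how "surprised" a language model is by each word), treating highly predictable text as a weak signal of machine generation. The sketch below is illustrative only: the unigram probabilities and the threshold are invented assumptions, not any vendor's actual model, and, as the research above notes, heuristics of this kind are easy to fool, for example by light paraphrasing.

```python
import math

# Illustrative word frequencies (assumed for this sketch, not drawn
# from a real corpus).
WORD_PROB = {"the": 0.06, "of": 0.03, "students": 0.001, "chatbot": 0.0005}
DEFAULT_PROB = 1e-6  # probability assigned to words not in the table

def perplexity(text):
    """Average per-word surprise under the toy unigram model."""
    words = text.lower().split()
    log_prob = sum(math.log(WORD_PROB.get(w, DEFAULT_PROB)) for w in words)
    return math.exp(-log_prob / len(words))

def looks_ai_generated(text, threshold=500.0):
    # Low perplexity means the text consists of words the model finds
    # highly predictable, which these detectors treat as a weak signal
    # of machine generation. The threshold here is arbitrary.
    return perplexity(text) < threshold
```

A single scalar like this cannot distinguish a fluent human writer from a model, which is one reason such detectors produce the false positives described above.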