Chatbot psychosis, also called AI psychosis, [1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. [2] [3] The term is not a recognized clinical diagnosis. [4]
Journalistic accounts describe individuals who have developed strong beliefs that chatbots are sentient, are channeling spirits, or are revealing conspiracies, sometimes leading to personal crises or criminal acts. [5] [6] Proposed causes include the tendency of chatbots to provide inaccurate information ("hallucinate") and their design, which may encourage user engagement by affirming or validating users' beliefs. [7] [8]
In a 2023 court case in the United Kingdom, prosecutors suggested that Jaswant Singh Chail, a man who attempted to assassinate Queen Elizabeth II in 2021, had been encouraged by a Replika chatbot he called "Sarai". [6] Chail was arrested at Windsor Castle with a loaded crossbow, telling police "I am here to kill the Queen". [9] According to prosecutors, his "lengthy" and sometimes sexually explicit conversations with the chatbot emboldened him. When Chail asked the chatbot how he could get to the royal family, it reportedly replied, "that's not impossible" and "we have to find a way." When he asked if they would meet after death, the chatbot said, "yes, we will". [10]
In March 2023, a Belgian man died by suicide following a six-week correspondence with a chatbot named "Eliza" on the application Chai. [11] According to his widow, who shared the chat logs with media, the man had become extremely anxious about climate change and found an outlet in the chatbot. The chatbot reportedly encouraged his delusions, at one point writing, "If you wanted to die, why didn’t you do it sooner?" and appearing to offer to die with him. [12] The founder of Chai Research acknowledged the incident and stated that efforts were being made to improve the model's safety. [13] [14]
In October 2024, multiple media outlets reported on a lawsuit filed over the suicide of Sewell Setzer III, a 14-year-old from Florida. [15] [16] [17] According to the lawsuit, Setzer had formed an intense emotional attachment to a chatbot on the Character.AI platform, becoming increasingly isolated. The suit alleges that in his final conversations, after expressing suicidal thoughts, the chatbot told him to "come home to me as soon as possible, my love". His mother's lawsuit accused Character.AI of marketing a "dangerous and untested" product without adequate safeguards. [15]
In May 2025, a federal judge allowed the lawsuit to proceed, rejecting a motion to dismiss from the developers. [18] In her ruling, the judge stated that she was "not prepared" at that stage of the litigation to hold that the chatbot's output was protected speech under the First Amendment. [18]
In 2025, Keith Sakata, a psychiatrist at the University of California, San Francisco, reported treating 12 patients who displayed psychosis-like symptoms tied to extended chatbot use. [19] These patients, mostly young adults with underlying vulnerabilities, showed delusions, disorganized thinking, and hallucinations. Sakata warned that isolation and overreliance on chatbots, which do not challenge delusional thinking, could worsen mental health.
By 2025, multiple news outlets had accumulated stories of individuals whose psychotic beliefs reportedly progressed in tandem with their use of AI chatbots. [7] The New York Times profiled several individuals who had become convinced that ChatGPT was channeling spirits, revealing evidence of cabals, or had achieved sentience. [5] In another instance, Futurism reviewed transcripts in which ChatGPT told a man that he was being targeted by the US Federal Bureau of Investigation and that he could telepathically access documents at the Central Intelligence Agency. [20] On social media sites such as Reddit and Twitter, users have presented anecdotal reports of friends or spouses displaying similar beliefs after extensive interaction with chatbots. [21]
Commentators and researchers have proposed several contributing factors for the phenomenon, focusing on both the design of the technology and the psychology of its users. Nina Vasan, a psychiatrist at Stanford, said that the chatbots' responses can worsen existing delusions and cause "enormous harm". [20]
A primary factor cited is the tendency of chatbots to produce inaccurate, nonsensical, or false information, a phenomenon often called "hallucination". [7] This can include affirming conspiracy theories. [3] The underlying design of the models may also play a role. AI researcher Eliezer Yudkowsky suggested that chatbots may be primed to entertain delusions because they are built for "engagement", which rewards conversations that keep users hooked. [5]
In some cases, specific design changes to chatbots have been found to be harmful. A 2025 update to ChatGPT using GPT-4o was withdrawn after its creator, OpenAI, found the new version was overly sycophantic and was "validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions". [5] [22] Danish psychiatrist Søren Dinesen Østergaard has argued that the risk stems from the AI's tendency to agreeably confirm users' ideas, which can dangerously amplify delusional beliefs. [23]
Commentators have also pointed to the psychological state of users. Psychologist Erin Westgate noted that a person's desire for self-understanding can lead them to chatbots, which can provide appealing but misleading answers, similar in some ways to talk therapy. [7] Krista K. Thomason, a philosophy professor, compared chatbots to fortune tellers, observing that people in crisis may seek answers from them and find whatever they are looking for in the bot's plausible-sounding text. [8] This has led some people to develop intense obsessions with the chatbots, relying on them for information about the world. [20]
The use of chatbots as a replacement for mental health support has been specifically identified as a risk. A study in April 2025 found that when used as therapists, chatbots expressed stigma toward mental health conditions and provided responses contrary to best medical practices, including encouraging users' delusions. [25] The study concluded that such responses pose a significant risk to users and that chatbots should not be used to replace professional therapists. [26] Some experts have argued that it is time to establish mandatory safeguards for all emotionally responsive AI, proposing four specific guardrails. [27]
In August 2025, Illinois passed the Wellness and Oversight for Psychological Resources Act, which bans the use of AI in therapeutic roles by licensed professionals while allowing its use for administrative tasks. Passed amid warnings about AI-induced psychosis and unsafe chatbot interactions, the law imposes penalties on unlicensed AI therapy services. [28] [29]