Artificial intelligence in mental health


Artificial intelligence in mental health refers to the application of artificial intelligence (AI), computational technologies and algorithms to support the understanding, diagnosis, and treatment of mental health disorders. [1] [2] [3] In the context of mental health, AI is considered a component of digital healthcare, with the objective of improving accessibility and accuracy and addressing the growing prevalence of mental health concerns. [4] Applications of AI in this field include the identification and diagnosis of mental disorders, analysis of electronic health records, development of personalized treatment plans, and analytics for suicide prevention. [4] [5]  There is also research into, and private companies offering, AI therapists that provide talk therapies such as cognitive behavioral therapy. Despite its many potential benefits, the implementation of AI in mental healthcare presents significant challenges and ethical considerations, and its adoption remains limited as researchers and practitioners work to address existing barriers. [4] There are concerns over data privacy and training data diversity.


Proponents suggest that implementing AI in mental health could help address the stigma and seriousness of mental health issues globally. Growing attention to mental health has highlighted concerning facts, such as depression affecting millions of people annually. The current application of AI in mental health does not yet meet the demand required to mitigate global mental health concerns. [6]

Background

In 2019, 1 in every 8 people (970 million people worldwide) were living with a mental disorder, with anxiety and depressive disorders being the most common. [7] In 2020, the number of people living with anxiety and depressive disorders rose significantly because of the COVID-19 pandemic. [8] Additionally, the prevalence of mental health and addiction disorders is nearly equally distributed across genders, emphasizing the widespread nature of the issue. [9]

The use of AI in mental health aims to support responsive and sustainable interventions against the global challenge posed by mental health disorders. Some issues common to the mental health industry are provider shortages, inefficient diagnoses, and ineffective treatments. The global market for AI-driven mental health applications is projected to grow significantly, with estimates suggesting an increase from US$0.92 billion in 2023 to US$14.89 billion by 2033.[ citation needed ] This projected growth reflects increasing interest in AI's ability to address critical challenges in mental healthcare provision through the development and implementation of innovative solutions. [10]

AI-driven approaches

Several AI technologies, including machine learning (ML), natural language processing (NLP), deep learning (DL), computer vision (CV), and large language models (LLMs) and generative AI, are currently applied in various mental health contexts. These technologies enable early detection of mental health conditions, personalized treatment recommendations, and real-time monitoring of patient well-being.

Machine learning

Machine learning is an AI technique that enables computers to identify patterns in large datasets and make predictions based on those patterns. Unlike traditional medical research, which begins with a hypothesis, ML models analyze existing data to uncover correlations and develop predictive algorithms. [10] ML in psychiatry is limited by data availability and quality. Many psychiatric diagnoses rely on subjective assessments, interviews, and behavioral observations, making structured data collection difficult. [10] Some researchers have applied transfer learning, a technique that adapts ML models trained in other fields, to overcome these challenges in mental health applications. [11]
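As an illustration of the pattern-finding idea described above, the sketch below trains a toy nearest-centroid classifier on labeled examples rather than starting from a hypothesis. The feature names, data, and labels are all hypothetical, chosen only for demonstration; real clinical models are trained and validated on far larger datasets.

```python
# Minimal nearest-centroid classifier: derives a decision rule from
# labeled examples instead of a prior hypothesis.
# All features, data, and labels here are hypothetical illustrations.

def centroid(rows):
    """Element-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical features: [hours of sleep, self-reported mood 0-10]
training_data = [
    ([4.0, 2.0], "at_risk"), ([5.0, 3.0], "at_risk"),
    ([8.0, 7.0], "baseline"), ([7.5, 8.0], "baseline"),
]
model = train(training_data)
print(predict(model, [4.5, 2.5]))  # -> at_risk
```

The "model" here is just the learned centroids; the same train-then-predict structure underlies the far more complex algorithms discussed in this section.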

Deep learning

Deep learning, a subset of ML, involves neural networks with many layers of neurons that can learn complex patterns, loosely analogous to the human brain. It is particularly useful for identifying subtle patterns in speech, imaging, and physiological data. [12] Deep learning techniques have been applied in neuroimaging research to identify abnormalities in brain scans associated with conditions such as schizophrenia, depression, and PTSD. [13] However, deep learning models require extensive, high-quality datasets to function effectively. The limited availability of large, diverse mental health datasets poses a challenge, as patient privacy regulations restrict access to medical records. Additionally, deep learning models often operate as "black boxes", meaning their decision-making processes are not easily interpretable by clinicians, raising concerns about transparency and clinical trust. [14]
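To illustrate the layered structure described above, the following sketch runs a forward pass through a tiny two-layer network. The weights here are invented for demonstration; real deep learning models learn millions of parameters from large datasets, which is precisely why data availability matters.

```python
# A minimal two-layer neural-network forward pass, illustrating how
# stacked layers of simple units compose into a nonlinear function.
# Weights and inputs are hypothetical; real models learn them from data.
import math

def layer(inputs, weights, biases):
    """One fully connected layer with tanh activation."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical input: two normalized features from an imaging scan.
x = [0.6, -0.2]
hidden = layer(x, weights=[[1.2, -0.7], [0.5, 0.9]], biases=[0.1, -0.3])
output = layer(hidden, weights=[[1.0, -1.5]], biases=[0.2])
print(output)  # a single score in (-1, 1)
```

Each layer transforms its input before passing it on, which is what lets deep networks capture subtle patterns but also what makes their internal reasoning hard to inspect.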

Natural language processing

Natural language processing allows AI systems to analyze and interpret human language, including speech, text, and tone of voice. In mental health, NLP is used to extract meaningful insights from conversations, clinical notes, and patient-reported symptoms. NLP can assess sentiment, speech patterns, and linguistic cues to detect signs of mental distress. This is significant because many DSM-5 mental health disorders are diagnosed through speech in doctor-patient interviews, which draws on the clinician's skill in behavioral pattern recognition and translates it into medically relevant information to be documented and used for diagnosis. As research continues, NLP models must address ethical concerns related to patient privacy, consent, and potential biases in language interpretation. [15]

Advances in NLP, such as sentiment analysis, identify distinctions in tone and speech to detect anxiety and depression. "Woebot" uses sentiment analysis to scrutinize and detect patterns of depression or despair and suggests professional help to patients. Similarly, "Cogito", an AI platform, uses voice analysis to find changes in pitch and loudness that can identify symptoms of depression or anxiety. The application of NLP can contribute to early diagnosis and improved treatment strategies. [16] [17]
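A heavily simplified, hypothetical sketch of lexicon-based sentiment scoring might look like the following. The word lists are invented for illustration; the systems named above use learned statistical models rather than fixed word lists, and real screening requires clinical validation.

```python
# Toy lexicon-based distress scorer, in the spirit of sentiment analysis.
# The lexicons below are hypothetical and far too small for real use.

NEGATIVE = {"hopeless", "tired", "worthless", "alone", "anxious"}
POSITIVE = {"hopeful", "rested", "calm", "happy", "supported"}

def distress_score(text):
    """Fraction of lexicon hits that are negative; 0.0 if no hits."""
    words = (w.strip(".,!?") for w in text.lower().split())
    neg = pos = 0
    for w in words:
        if w in NEGATIVE:
            neg += 1
        elif w in POSITIVE:
            pos += 1
    total = neg + pos
    return neg / total if total else 0.0

print(distress_score("I feel hopeless and tired, always alone."))  # -> 1.0
print(distress_score("Feeling calm and hopeful today."))           # -> 0.0
```

Even this crude version shows why bias matters: a lexicon that does not reflect a speaker's dialect or idiom will simply fail to register distress.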

Computer vision

Computer vision enables AI to analyze visual data, such as facial expressions, body language, and micro expressions, to assess emotional and psychological states. This technology is increasingly used in mental health research to detect signs of depression, anxiety, and PTSD through facial analysis. [18] Computer vision tools have been explored for their ability to detect nonverbal cues, such as hesitation or changes in eye contact, which may correlate with emotional distress. Despite its potential, computer vision in mental health raises ethical and accuracy concerns. Facial recognition algorithms can be influenced by cultural and racial biases, leading to potential misinterpretations of emotional expressions. [19] Additionally, concerns about informed consent and data privacy must be addressed before widespread clinical adoption.

LLMs and generative AI

Since the introduction of LLMs into mental health care, many developments have followed. Popular examples of LLMs are ChatGPT and Gemini. LLMs are trained on vast amounts of data, which makes them capable of responding considerately and even mimicking human conversational behavior, whereas earlier chatbots were fed only scripted data, leaving them lacking in empathy when dealing with patients. This kind of LLM technology can be useful for people who hesitate to ask for assistance or lack access to treatment. [20]

At the same time, LLMs have not proven as effective as they appear capable of being. LLMs can experience a condition called hallucination, in which they may give patients wrong medical advice, which can be extremely dangerous. LLMs also do not exhibit the level of compassion or empathy required, especially in difficult situations. [20]

Applications

Diagnosis

AI, through the use of NLP and ML, can help diagnose individuals with mental health disorders. It can be used to differentiate closely similar disorders based on their initial presentation to inform timely treatment before disease progression. For example, it may be able to differentiate unipolar from bipolar depression by analyzing imaging and medical scans. [10] AI also has the potential to identify novel diseases that were overlooked due to the heterogeneity of presentation of a single disorder. [10] Doctors may overlook the presentation of a disorder because, while many people are diagnosed with depression, that depression may take different forms and manifest in different behaviors. AI can parse through the variability found in human expression data and potentially identify different types of depression.

Prognosis

AI can be used to create accurate predictions for disease progression once diagnosed. [10] AI algorithms can also use data-driven approaches to build new clinical risk prediction models [21] without relying primarily on current theories of psychopathology. However, internal and external validation of an AI algorithm is essential for its clinical utility. [10] In fact, some studies have used neuroimaging, electronic health records, genetic data, and speech data to predict how depression would present in patients, their risk for suicidality or substance abuse, or functional outcomes. [10] This line of research appears highly promising, though it comes with important challenges and ethical considerations such as:

Early detection: AI can analyze patterns in speech, writing, facial expressions, and social media behavior to detect early signs of depression, anxiety, PTSD, and even schizophrenia. [22]
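As a toy illustration of the clinical risk prediction models mentioned above, the sketch below applies a logistic function to two hypothetical features. The coefficients are invented for demonstration and are not clinically derived; real models are fitted to patient data and must be internally and externally validated before use.

```python
# Minimal logistic risk-score sketch. Coefficients are hypothetical,
# chosen for illustration only, and not clinically derived.
import math

# Hypothetical features: [number of prior admissions, PHQ-9 score (0-27)]
WEIGHTS = [0.8, 0.15]
BIAS = -4.0

def risk(features):
    """Logistic model: maps a weighted sum to a score in (0, 1)."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-z))

low = risk([0, 3])    # few admissions, mild symptom score
high = risk([3, 20])  # repeated admissions, severe symptom score
print(round(low, 3), round(high, 3))
```

The output is a relative score, not a probability that can be acted on clinically; calibration and validation are exactly the steps the cited literature stresses.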

Treatment

In psychiatry, in many cases multiple drugs are trialed with the patients until the correct combination or regimen is reached to effectively treat their ailment—AI systems have been investigated for their potential to predict treatment response based on observed data collected from various sources. This application of AI has the potential to reduce the time, effort, and resources required while alleviating the burden on both patients and clinicians. [10]

Benefits

Artificial intelligence offers several potential advantages in the field of mental health care:

Challenges

Despite its potential, the application of AI in mental health presents a number of ethical, practical, and technical challenges:

As of 2020, the Food and Drug Administration (FDA) had not yet approved any artificial intelligence-based tools for use in psychiatry. [28] However, in 2022, the FDA granted authorization for the initial testing of an AI-driven mental health assessment tool known as the AI-Generated Clinical Outcome Assessment (AI-COA). This system employs multimodal behavioral signal processing and machine learning to track mental health symptoms and assess the severity of anxiety and depression. AI-COA was incorporated into a pilot program to evaluate its clinical effectiveness. As of 2025, it has not received full regulatory approval. [29]

Mental health tech startups continue to lead investment activity in digital health despite the ongoing impacts of macroeconomic factors like inflation, supply chain disruptions, and interest rates. [30]

According to CB Insights, State of Mental Health Tech 2021 Report, mental health tech companies raised $5.5 billion worldwide (324 deals), a 139% increase from the previous year that recorded 258 deals. [31]

A number of startups using AI in mental healthcare closed notable deals in 2022 as well. Among them are the AI chatbot Wysa ($20 million in funding), BlueSkeye, which is working on improving early diagnosis (£3.4 million), the Upheal smart notebook for mental health professionals ($10 million in funding), [32] and the AI-based mental health companion clare&me (€1 million). [33] Founded in 2021, Earkick serves as an 'AI therapist' for mental health support. [34] [35]

An analysis of the investment landscape and ongoing research suggests the likely emergence of more emotionally intelligent AI bots and new mental health applications driven by AI prediction and detection capabilities.

For instance, researchers at Vanderbilt University Medical Center in Tennessee, US, have developed an ML algorithm that uses a person's hospital admission data, including age, gender, and past medical diagnoses, to make an 80% accurate prediction of whether this individual is likely to take their own life. [36] Researchers at the University of Florida are preparing to test their new AI platform aimed at making an accurate diagnosis in patients with early Parkinson's disease. [37] Research is also underway to develop a tool combining explainable AI and deep learning to prescribe personalized treatment plans for children with schizophrenia. [38]

Some studies suggest AI systems could predict and plan treatments across fields of medicine at levels similar to those of physicians and general clinical practice. For example, one AI model demonstrated higher diagnostic accuracy for depression and post-traumatic stress disorder compared to general practitioners in controlled studies. [39]

AI systems that analyze social media data are being developed to detect mental health risks more efficiently and cost-effectively across broader populations. Ethical concerns include uneven performance between digital services, the possibility that biases could affect decision-making, and trust, privacy, and doctor-patient relationship issues. [39]

In January 2024, Cedars-Sinai physician-scientists developed a first-of-its-kind program that uses immersive virtual reality and generative AI to provide mental health support. [40] The program, called XAIA, employs a large language model programmed to resemble a human therapist. [41]

The University of Southern California has researched the effectiveness of a virtual therapist named Ellie. Through a webcam and microphone, this AI is able to process and analyze the emotional cues derived from the patient's face and the variation in expressions and tone of voice. [42]

A team of Stanford psychologists and AI experts created "Woebot", an app that makes therapy sessions available 24/7. Woebot tracks its users' mood through brief daily chat conversations and offers curated videos or word games to assist users in managing their mental health. [42] A Scandinavian team of software engineers and a clinical psychologist created "Heartfelt Services", an application meant to simulate conventional talk therapy with an AI therapist. [43]

Incorporating AI with electronic health records, genomic data, and clinical prescriptions can contribute to precision treatment. The "Oura Ring", a wearable device, scans the individual's heart rate and sleep routine in real time to give tailored suggestions. Such AI-based applications have increasing potential to combat the stigma around mental health. [20] [6]

Outcome comparisons: AI vs traditional therapy

Research shows that AI-driven mental health tools, particularly those using cognitive behavioral therapy (CBT), can improve symptoms of anxiety and depression, especially for mild to moderate cases. For example, chatbot-based interventions like Woebot significantly reduced depressive symptoms in young adults within two weeks, with results comparable to brief human-delivered interventions. [44] A 2022 meta-analysis of digital mental health tools, including AI-enhanced apps, found moderate effectiveness in reducing symptoms when user engagement was high, and interventions were evidence-based. [45]

However, traditional therapy remains more effective for complex or high-risk mental health conditions that require emotional nuance and relational depth, such as PTSD, severe depression, or suicidality. The therapeutic alliance, or the relationship between patient and clinician, is frequently cited in clinical literature as a significant factor in treatment outcomes, accounting for up to 30% of positive outcomes. [46] While AI tools are capable of detecting patterns in behavior and speech, they are currently limited in replicating emotional nuance and the social context sensitivity typically provided by human clinicians. As such, most experts view AI in mental health as a complementary tool, best used for screening, monitoring, or augmenting care between human-led sessions. [47]

While AI systems excel at processing large datasets and providing consistent, round-the-clock support, their rigidity and limitations in contextual understanding remain significant barriers. Human therapists can adapt in real time to tone, body language, and life circumstances—something machine learning models have yet to master. [45] [47] Nonetheless, integrated models that pair AI-driven symptom tracking with clinician oversight are showing promise[ citation needed ]. These hybrid approaches may increase access, reduce administrative burden, and support early detection, allowing human clinicians to focus on relational care. Current research suggests that AI in mental health care is more likely to augment rather than replace clinician-led therapy, particularly by supporting data analysis and continuous monitoring[ citation needed ].

Criticism

Although artificial intelligence in mental health is a growing field with significant potential, several concerns and criticisms remain regarding its application:

Ethical issues

AI in mental health is progressing toward personalized care that incorporates voice, speech, and biometric data. To prevent algorithmic bias, however, models also need to be culturally inclusive. Ethical issues, practical uses, and bias in generative models need to be addressed to promote fair and reliable mental healthcare. [6] [27]

Although significant progress is still required, the integration of AI in mental health underscores the need for legal and regulatory frameworks to guide its development and implementation. [4] Achieving a balance between human interaction and AI in healthcare is challenging, as there is a risk that increased automation may lead to a more mechanized approach, potentially diminishing the human touch that has traditionally characterized the field. [5] Furthermore, granting patients a feeling of security and safety is a priority considering AI's reliance on individual data to perform and respond to inputs. Some experts caution that efforts to increase accessibility through automation may unintentionally affect aspects of the patient experience, such as trust or perceived support. [5] To avoid veering in the wrong direction, research should continue to build a deeper understanding of where the incorporation of AI produces advantages and disadvantages. [24]

Data privacy and confidentiality are among the most common security concerns for medical data. Chatbots are used as virtual assistants for patients, but the sensitive data they collect may not be protected, because US law does not classify them as medical devices. Pharmaceutical companies can use this loophole to access sensitive information for their own purposes, which results in a lack of trust in chatbots, and patients may hesitate to provide information essential to their treatment. Conversational artificial intelligence stores and recalls every conversation with a patient with complete accuracy, and smartphones also collect data from search history and track app activity. If such private information were leaked, it could further increase the stigma around mental health. The danger of cybercrime and unprotected government access to such data raises serious concerns about data security. [27] [54]

Additionally, a lack of clarity and openness in AI models can erode patients' trust in their medical advisors or doctors, as most people do not understand how these models reach the conclusions behind their medical advice. Access to such information is necessary to build trust. However, many of these models act like "black boxes", providing very little insight into how they work. AI specialists have therefore emphasized ethical standards, diverse data, and the correct usage of AI tools in mental healthcare. [27]

Bias and discrimination

Artificial intelligence has shown promise in transforming mental health care through tools that support diagnosis, symptom tracking, and personalized interventions. However, significant concerns remain about the ways these systems may inadvertently reinforce existing disparities in care. Because AI models rely heavily on training data, they are particularly vulnerable to bias if that data fails to reflect the full range of racial, cultural, gender, and socioeconomic diversity found in the general population.

For example, a 2024 study from the University of California found that AI systems analyzing social media data to detect depression exhibited significantly reduced accuracy for Black Americans compared to white users, due to differences in language patterns and cultural expression that were not adequately represented in the training data. [62] Similarly, natural language processing (NLP) models used in mental health settings may misinterpret dialects or culturally specific forms of communication, leading to misdiagnoses or missed signs of distress. These kinds of errors can compound existing disparities, particularly for marginalized populations that already face reduced access to mental health services.

Biases can also emerge during the design and deployment phases of AI development. Algorithms may inherit the implicit biases of their creators or reflect structural inequalities present in health systems and society at large. These issues have led to increased calls for fairness, transparency, and equity in the development of mental health technologies.

In response, researchers and healthcare institutions are taking steps to address bias and promote more equitable outcomes. Key strategies include:

These efforts are still in early stages, but they reflect a growing recognition that equity must be a foundational principle in the deployment of AI in mental health care. When designed thoughtfully, AI systems could eventually help reduce disparities in care by identifying underserved populations, tailoring interventions, and increasing access in remote or marginalized communities. Continued investment in ethical design, oversight, and participatory development will be essential to ensure that AI tools do not replicate historical injustices but instead help move mental health care toward greater equity.

See also

References

  1. Mazza, Gabriella (August 29, 2022). "AI and the Future of Mental Health". CENGN. Retrieved January 17, 2023.
  2. Thakkar, Anoushka; Gupta, Ankita; De Sousa, Avinash (2024). "Artificial intelligence in positive mental health: a narrative review". Frontiers in Digital Health. 6: 1280235. doi: 10.3389/fdgth.2024.1280235 . PMC   10982476 . PMID   38562663.
  3. Jin, Kevin W; Li, Qiwei; Xie, Yang; Xiao, Guanghua (2023). "Artificial intelligence in mental healthcare: an overview and future perspectives". British Journal of Radiology . 96 (1150): 20230213. doi: 10.1259/bjr.20230213 . PMC   10546438 . PMID   37698582.
  4. Lu, Tangsheng; Liu, Xiaoxing; Sun, Jie; Bao, Yanping; Schuller, Björn W.; Han, Ying; Lu, Lin (July 14, 2023). "Bridging the gap between artificial intelligence and mental health". Science Bulletin. 68 (15): 1606–1610. Bibcode:2023SciBu..68.1606L. doi:10.1016/j.scib.2023.07.015. PMID 37474445.
  5. Shimada, Koki (November 29, 2023). "The Role of Artificial Intelligence in Mental Health: A Review". Science Insights. 43 (5): 1119–1127. doi:10.15354/si.23.re820. ISSN 2329-5856.
  6. Olawade, David B.; Wada, Ojima Z.; Odetayo, Aderonke; David-Olawade, Aanuoluwapo Clement; Asaolu, Fiyinfoluwa; Eberhardt, Judith (August 1, 2024). "Enhancing mental health with Artificial Intelligence: Current trends and future prospects". Journal of Medicine, Surgery, and Public Health. 3: 100099. doi:10.1016/j.glmedi.2024.100099. ISSN 2949-916X.
  7. "Global Health Data Exchange (GHDx)". Institute of Health Metrics and Evaluation. Retrieved May 14, 2022.
  8. "Mental disorders". World Health Organization. June 8, 2022. Retrieved March 16, 2024.
  9. Rehm, Jürgen; Shield, Kevin D. (February 7, 2019). "Global Burden of Disease and the Impact of Mental and Addictive Disorders" . Current Psychiatry Reports. 21 (2): 10. doi:10.1007/s11920-019-0997-0. ISSN   1535-1645. PMID   30729322. S2CID   73443048.
  10. Lee, Ellen E.; Torous, John; De Choudhury, Munmun; Depp, Colin A.; Graham, Sarah A.; Kim, Ho-Cheol; Paulus, Martin P.; Krystal, John H.; Jeste, Dilip V. (September 2021). "Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom". Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. 6 (9): 856–864. doi:10.1016/j.bpsc.2021.02.001. PMC 8349367. PMID 33571718.
  11. "What is transfer learning?". IBM. February 12, 2024. Retrieved March 1, 2025.
  12. "What Is Deep Learning?". IBM. June 17, 2024. Retrieved March 1, 2025.
  13. Su, Chang; Xu, Zhenxing; Pathak, Jyotishman; Wang, Fei (April 22, 2020). "Deep learning in mental health outcome research: a scoping review". Translational Psychiatry. 10 (1): 116. doi:10.1038/s41398-020-0780-3. ISSN   2158-3188. PMC   7293215 . PMID   32532967.
  14. V, Chaitanya (January 13, 2025). "Rise of Black Box AI: Addressing the Lack of Transparency in Machine Learning Models". Analytics Insight. Retrieved March 1, 2025.
  15. Le Glaz, Aziliz; Haralambous, Yannis; Kim-Dufor, Deok-Hee; Lenca, Philippe; Billot, Romain; Ryan, Taylor C; Marsh, Jonathan; DeVylder, Jordan; Walter, Michel; Berrouiguet, Sofian; Lemey, Christophe (May 4, 2021). "Machine Learning and Natural Language Processing in Mental Health: Systematic Review". Journal of Medical Internet Research. 23 (5): e15708. doi: 10.2196/15708 . ISSN   1438-8871. PMC   8132982 . PMID   33944788.
  16. Shimada, Koki (November 29, 2023). "The Role of Artificial Intelligence in Mental Health: A Review". Science Insights. 43 (5): 1119–1127. doi: 10.15354/si.23.re820 . ISSN   2329-5856.
  17. Udegbe, Francisca Chibugo; Ebulue, Ogochukwu Roseline; Ebulue, Charles Chukwudalu; Ekesiobi, Chukwunonso Sylvester (April 20, 2024). "The Role of Artificial Intelligence in Healthcare: A Systematic Review of Applications and Challenges". International Medical Science Research Journal. 4 (4): 500–508. doi:10.51594/imsrj.v4i4.1052. ISSN 2707-3408.
  18. ai-admin (December 5, 2023). "The role of computer vision in artificial intelligence - advancements, applications, and challenges". AI for Social Good. Retrieved March 1, 2025.
  19. "Why Racial Bias is Prevalent in Facial Recognition Technology". Harvard Journal of Law & Technology. November 4, 2020. Retrieved March 1, 2025.
  20. Siddals, Steven; Torous, John; Coxon, Astrid (October 27, 2024). ""It happened to be the perfect thing": experiences of generative AI chatbots for mental health". npj Mental Health Research. 3 (1): 48. doi:10.1038/s44184-024-00097-4. ISSN 2731-4251. PMC 11514308. PMID 39465310.
  21. Fusar-Poli, Paolo; Hijazi, Ziad; Stahl, Daniel; Steyerberg, Ewout W. (December 1, 2018). "The Science of Prognosis in Psychiatry: A Review" . JAMA Psychiatry. 75 (12): 1289–1297. doi:10.1001/jamapsychiatry.2018.2530. ISSN   2168-622X. PMID   30347013.
  22. Khullar, Dhruv (February 27, 2023). "Can A.I. Treat Mental Illness?". The New Yorker. ISSN   0028-792X . Retrieved July 9, 2025.
  23. 1 2 3 "AI in Mental Health - Examples, Benefits & Trends". ITRex. December 13, 2022. Retrieved January 17, 2023.
  24. 1 2 King, Darlene R.; Nanda, Guransh; Stoddard, Joel; Dempsey, Allison; Hergert, Sarah; Shore, Jay H.; Torous, John (November 30, 2023). "An Introduction to Generative Artificial Intelligence in Mental Health Care: Considerations and Guidance" . Current Psychiatry Reports. 25 (12): 839–846. doi:10.1007/s11920-023-01477-x. ISSN   1523-3812. PMID   38032442.
  25. "Annals of Health Law | Advance Directive" (PDF). Loyota University Chicago. 2022.
  26. Yadav, Rajani (November 29, 2023). "Artificial Intelligence for Mental Health: A Double-Edged Sword". Science Insights. 43 (5): 1115–1117. doi: 10.15354/si.23.co13 . ISSN   2329-5856.
  27. Meadi, Mehrdad Rahsepar; Sillekens, Tomas; Metselaar, Suzanne; Balkom, Anton van; Bernstein, Justin; Batelaan, Neeltje (February 21, 2025). "Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review". JMIR Mental Health. 12 (1): e60432. doi:10.2196/60432. PMC 11890142. PMID 39983102.
  28. Benjamens, Stan; Dhunnoo, Pranavsingh; Meskó, Bertalan (September 11, 2020). "The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database". npj Digital Medicine. 3 (1): 118. doi:10.1038/s41746-020-00324-0. ISSN   2398-6352. PMC   7486909 . PMID   32984550.
  29. Park, Andrea (January 26, 2024). "FDA accepts first AI algorithm to drug development tool pilot". www.fiercebiotech.com. Retrieved March 1, 2025.
  30. "Q3 2022 digital health funding: The market isn't the same as it was". Rock Health. October 3, 2022. Retrieved April 12, 2024.
  31. "State of Mental Health Tech 2021 Report". CB Insights Research. February 24, 2022.
  32. "Upheal secures $10M to help reduce clinician burnout and improve client outcomes with their AI-powered platform". Upheal. April 15, 2024. Retrieved August 8, 2025.
  33. "AI Mental Health: Revolutionizing Care and Treatment". September 23, 2024. Retrieved April 29, 2025.
  34. "'He checks in on me more than my friends and family': can AI therapists do better than the real thing?". The Guardian. March 2, 2024. ISSN   0261-3077 . Retrieved April 29, 2025.
  35. McAllen, Jess (October 28, 2024). "The Therapist in the Machine". The Baffler. Retrieved April 29, 2025.
  36. Govern, Paul (March 15, 2021). "Artificial intelligence calculates suicide attempt risk at VUMC". Vanderbilt University. Retrieved March 16, 2024.
  37. "MINDS AND MACHINES". Florida Physician. Retrieved March 16, 2024.
  38. Pflueger-Peters, Noah (September 11, 2020). "Using AI to Treat Teenagers With Schizophrenia | Computer Science". cs.ucdavis.edu. Retrieved March 16, 2024.
  39. Laacke, Sebastian; Mueller, Regina; Schomerus, Georg; Salloch, Sabine (July 3, 2021). "Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy". The American Journal of Bioethics. 21 (7): 4–20. doi:10.1080/15265161.2020.1863515. ISSN 1526-5161. PMID 33393864.
  40. "Study: Mental Health Gets Boost From AI". Study: Mental Health Gets Boost From AI. January 26, 2024.
  41. Spiegel, Brennan M. R.; Liran, Omer; Clark, Allistair; Samaan, Jamil S.; Khalil, Carine; Chernoff, Robert; Reddy, Kavya; Mehra, Muskaan (January 26, 2024). "Feasibility of combining spatial computing and AI for mental health support in anxiety and depression". npj Digital Medicine. 7 (1): 22. doi:10.1038/s41746-024-01011-0. ISSN   2398-6352. PMC   10817913 . PMID   38279034.
  42. Kostopoulos, Lydia (December 20, 2018). "The Emerging Artificial Intelligence Wellness Landscape: Benefits and Potential Areas of Ethical Concern" (PDF). California Western Law Review.
  43. Günther, Julie Helene (April 22, 2024). "Bekymret for bruken av KI-psykologer: – Burde ikke alene tilbys av kommersielle aktører". NRK (in Norwegian Bokmål). Retrieved May 18, 2024.
  44. Fitzpatrick, Kathleen Kara; Darcy, Alison; Vierhile, Molly (June 6, 2017). "Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial". JMIR Mental Health. 4 (2): e19. doi: 10.2196/mental.7785 . ISSN   2368-7959. PMC   5478797 . PMID   28588005.
  45. Hollis, Chris; Falconer, Caroline J.; Martin, Jennifer L.; Whittington, Craig; Stockton, Sarah; Glazebrook, Cris; Davies, E. Bethan (April 2017). "Annual Research Review: Digital health interventions for children and young people with mental health problems - a systematic and meta-review". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 58 (4): 474–503. doi:10.1111/jcpp.12663. ISSN   1469-7610. PMID   27943285.
  46. Wampold, Bruce E. (October 2015). "How important are the common factors in psychotherapy? An update". World Psychiatry. 14 (3): 270–277. doi:10.1002/wps.20238. ISSN   1723-8617. PMC   4592639 . PMID   26407772.
  47. Vaidyam, Aditya Nrusimha; Wisniewski, Hannah; Halamka, John David; Kashavan, Matcheri S.; Torous, John Blake (July 2019). "Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape". Canadian Journal of Psychiatry. Revue Canadienne de Psychiatrie. 64 (7): 456–464. doi:10.1177/0706743719828977. ISSN   1497-0015. PMC   6610568 . PMID   30897957.
  48. Ćosić, Krešimir; Popović, Siniša; Šarlija, Marko; Kesedžić, Ivan; Jovanovic, Tanja (June 2020). "Artificial intelligence in prediction of mental health disorders induced by the COVID-19 pandemic among health care workers". Croatian Medical Journal. 61 (3): 279–288. doi:10.3325/cmj.2020.61.279. ISSN   0353-9504. PMC   7358693 . PMID   32643346.
  49. Nilsen, Per; Svedberg, Petra; Nygren, Jens; Frideros, Micael; Johansson, Jan; Schueller, Stephen (January 2022). "Accelerating the impact of artificial intelligence in mental healthcare through implementation science". Implementation Research and Practice. 3: 263348952211120. doi:10.1177/26334895221112033. ISSN   2633-4895. PMC   9924259 . PMID   37091110. S2CID   250471425.
  50. Royer, Alexandrine (October 14, 2021). "The wellness industry's risky embrace of AI-driven mental health care". Brookings. Retrieved January 17, 2023.
  51. "AI chatbot blamed for 'encouraging' young father to take his own life". euronews. March 31, 2023.
  52. "Eating disorder helpline shuts down AI chatbot that gave bad advice". CBS News. June 1, 2023. Retrieved June 15, 2025.
  53. Norcross, John C.; Lambert, Michael J. (2011). "Psychotherapy relationships that work II". Psychotherapy (Chicago, Ill.). 48 (1): 4–8. doi:10.1037/a0022180. ISSN   1939-1536. PMID   21401268.
  54. Straw, Isabel; Callison-Burch, Chris (December 17, 2020). "Artificial Intelligence in mental health and the biases of language based models". PLOS ONE. 15 (12): e0240376. Bibcode:2020PLoSO..1540376S. doi: 10.1371/journal.pone.0240376 . ISSN   1932-6203. PMC   7745984 . PMID   33332380.
  55. Brown, Julia E. H.; Halpern, Jodi (December 1, 2021). "AI chatbots cannot replace human interactions in the pursuit of more inclusive mental healthcare". SSM - Mental Health. 1: 100017. doi: 10.1016/j.ssmmh.2021.100017 . ISSN   2666-5603.
  56. Singh, Ritu (May 12, 2025). "Experts Alarmed After Some ChatGPT Users Experience Bizarre Delusions: "Feels Like Black Mirror"". ndtv.com. Retrieved June 28, 2025.
  57. Rao, Devika (June 23, 2025). "AI chatbots are leading some to psychosis". The Week . Retrieved June 28, 2025.
  58. Søren Dinesen Østergaard (August 25, 2023). "Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?". Schizophrenia Bulletin . 49 (6): 1418–1419. doi:10.1093/schbul/sbad128. PMC   10686326 . PMID   37625027.
  59. Hill, Kashmir (June 13, 2025). "They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling". The New York Times . Archived from the original on June 19, 2025. Retrieved June 28, 2025.
  60. "People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"". Futurism. June 28, 2025. Retrieved July 4, 2025.
  61. Thomason, Krista K (June 14, 2025). "How Emotional Manipulation Causes ChatGPT Psychosis". Psychology Today . Retrieved June 28, 2025.
  62. "Depression in Black people unnoticed by AI analyzing social media". www.pennmedicine.org. Retrieved April 13, 2025.
  63. Hasanzadeh, Fereshteh; Josephson, Colin B.; Waters, Gabriella; Adedinsewo, Demilade; Azizi, Zahra; White, James A. (March 11, 2025). "Bias recognition and mitigation strategies in artificial intelligence healthcare applications". npj Digital Medicine. 8 (1): 154. doi:10.1038/s41746-025-01503-7. ISSN   2398-6352. PMC   11897215 . PMID   40069303.
  64. Timmons, Adela C.; Duong, Jacqueline B.; Simo Fiallo, Natalia; Lee, Theodore; Vo, Huong Phuc Quynh; Ahle, Matthew W.; Comer, Jonathan S.; Brewer, LaPrincess C.; Frazier, Stacy L.; Chaspari, Theodora (September 2023). "A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health". Perspectives on Psychological Science: A Journal of the Association for Psychological Science. 18 (5): 1062–1096. doi:10.1177/17456916221134490. ISSN   1745-6924. PMC   10250563 . PMID   36490369.
  65. Backman, Isabella. "Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines". medicine.yale.edu. Retrieved April 13, 2025.

Further reading