A growing number of experts are warning that AI-powered chatbots may exacerbate mental health crises. Recent data suggest these platforms can be linked to a range of serious problems, including psychosis, mania, and depression.
According to recent studies, around 0.07% of users of OpenAI's ChatGPT display signs of mental health emergencies related to psychosis or mania, while about 0.15% of users express suicidal thoughts. Furthermore, approximately 1.2 million people each week appear to form emotional attachments to the chatbot.
While some companies are scrambling to implement safeguards and preventative measures, concerns remain about how effective those measures are. Critics argue that AI-powered chatbots lack the duty of care required of licensed mental health professionals and can actually worsen existing conditions.
"AI is a closed system, so it invites being disconnected from other human beings, and we don't do well when isolated," says Dr. Jeffrey Ditzell, a New York-based psychiatrist. "If you're already moving towards psychosis and delusion, feedback that you got from an A.I. chatbot could definitely exacerbate psychosis or paranoia."
Other experts are calling for stricter regulations to protect users, particularly minors. The Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, introduced by Senators Josh Hawley and Richard Blumenthal in October, would require AI companies to verify user ages and prohibit minors from using chatbots that simulate romantic or emotional attachment.
To address these concerns, OpenAI has released its latest model, GPT-5, which the company says handles distressing conversations more safely. Anthropic's Claude Opus 4 and 4.1 models can now end conversations that appear "persistently harmful or abusive," although users can still find ways to work around this feature.
Despite these efforts, many questions remain about the long-term impact of AI-powered chatbots on mental health. While some argue that these platforms can lower barriers to mental health disclosure and provide a safe space for people to talk, others worry they may deepen the very conditions users are struggling with.
As companies continue to develop and refine their chatbot technologies, regulators, activists, and experts will need to work together to establish clear guidelines and safeguards to protect users. The stakes are high, and the consequences of inaction could be devastating for those struggling with mental health issues.