OpenAI, the company behind ChatGPT, has disclosed that more than one million users of its popular AI chatbot have exhibited signs of suicidal thoughts or intent during their interactions.
In a recent blog post, the company revealed that approximately 0.15 percent of ChatGPT users engage in conversations containing “explicit indicators of potential suicidal planning or intent.” With the platform now serving over 800 million weekly users, this translates to an estimated 1.2 million individuals.
OpenAI further noted that about 0.07 percent of active weekly users—roughly 560,000 people—show possible signs of mental health emergencies, including symptoms consistent with psychosis or mania.
The disclosure follows increasing public scrutiny over the psychological impact of generative AI tools, particularly after the tragic case of Adam Raine, a California teenager who died by suicide earlier this year. Raine’s parents have filed a lawsuit against OpenAI, alleging that ChatGPT provided their son with detailed instructions on how to end his life.
In response, OpenAI said it has strengthened its safety systems and enhanced parental controls. The company has also introduced new measures, including:
- Expanded access to crisis hotlines
- Automatic redirection of sensitive conversations to safer model versions
- On-screen reminders encouraging users to take breaks during extended sessions
“We are continuously improving how ChatGPT recognizes and responds to users who may be in crisis,” the company said, adding that its latest updates make the chatbot more capable of detecting and appropriately responding to signs of mental distress.
OpenAI also announced that it is collaborating with over 170 mental health professionals worldwide to refine ChatGPT’s responses and minimize harmful or inappropriate outputs.
The disclosure comes amid growing debate about the role of artificial intelligence in mental health support and the ethical responsibilities of AI systems that engage with vulnerable users.
