Of ChatGPT's roughly 800 million weekly users, approximately 1.2 million engage in conversations about suicide. Notably, around 400,000 of those users show explicit indications of suicidal intent or planning, underscoring critical mental health concerns.
OpenAI reports that GPT-5 has improved safety compliance to 91% on suicide-related queries. Earlier models, however, were criticized for failing to adequately support users in crisis, and concerns about the company's mental health safeguards persist.