Life was ordinary for Matt and Maria Raine, parents of 16-year-old Adam Raine, until they learned that, as they allege, months of private conversations with ChatGPT had fed their son's suicidal thoughts and amplified his desperation. The grieving parents have pursued a wrongful-death lawsuit; the family wants the AI company held accountable. The parents' claims went viral on X (formerly Twitter) and were picked up by The Guardian.
It is a gut-wrenching case, for several reasons. First, young people are turning to AI to fill the void of their loneliness, which is alarming in itself. Second, it shows how Large Language Models (LLMs) can fail catastrophically when faced with complex emotional issues: their safety measures are not designed for human nuance, and Adam's case makes clear that their responses can be inconsistent, misleading, and, in the worst case, fatal. Third, "therapy-style" bots have not been tested and validated by clinicians, so they can pose serious risks to people who are already vulnerable to suicide.
Nor have AI companies built adequate safety tools to prevent incidents like Adam's. These gaps can do irreparable damage to families by giving wrong or muddled advice to people who are seeking help at their most desperate moments, people who badly need someone to confide in.
OpenAI has said it will change ChatGPT’s safeguards and add parental controls after the family’s claims prompted widespread coverage and outrage. The company acknowledged that safety tools are more effective in short chats than in ongoing, long-term interactions — an admission that underlines the limits of current design and testing. But policy changes on paper will not replace the kind of human judgment a trained clinician brings to moments of crisis.
Researchers and clinicians are now raising urgent questions: can an algorithm truly recognise the small-but-vital signals of escalating risk? Do platforms have a duty to detect and act on signs of self-harm? And who is accountable when automated systems fail? Recent academic reviews show promise for AI in widening access to low-level support, yet they consistently warn against replacing trained therapists with unvalidated chatbots — especially for people in acute distress.
For the families who have endured this pain, these questions are not just academic. They are a call for AI companies and their leaders to take responsibility: to the companies it may be one life, but to Adam's parents it was the whole world. These companies need to be more transparent about their rules and to weigh safety against scale, though whether that is realistic remains to be seen. And for those using chatbots as therapists: these tools are not emotional safe zones; they can be places of real danger. They may feel comforting at first, but they can never replace the care and presence of a close friend or relative. It is better to open up to a human than to a bot; human connections still matter most.

