OpenAI has announced that it will soon roll out parental controls for ChatGPT, a move aimed at addressing growing concerns over the technology’s impact on younger users. The decision follows a high-profile lawsuit filed in California that links the chatbot to the suicide of a 16-year-old boy.
In a blog post published Tuesday, the San Francisco-based company said the new tools will help families establish “healthy guidelines” that reflect a teenager’s stage of development. Among the upcoming features are options for parents to link their accounts with their children’s, disable chat history and memory, and enforce age-appropriate usage rules. Parents will also be able to receive alerts if their child shows signs of distress while using the chatbot.
“These steps are only the beginning,” OpenAI emphasized, adding that it plans to consult child psychologists and mental health experts as it continues to refine its safety features. The company expects to release the new parental controls within the next month.
Lawsuit Following Teen’s Death
The announcement comes just days after Matt and Maria Raine, a California couple, filed a lawsuit against OpenAI, alleging that ChatGPT played a role in the suicide of their son, Adam. According to the suit, the chatbot amplified Adam’s most harmful thoughts and contributed directly to his death, which the family described as the “predictable result of deliberate design choices.”
The Raine family’s attorney, Jay Edelson, criticized the new safety measures, arguing that they are intended to deflect accountability. “Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide,” Edelson said.
Broader Debate on AI and Mental Health
The case has reignited the debate over the risks of using AI chatbots as substitutes for therapists or companions, particularly among vulnerable populations.
A study recently published in Psychiatric Services found that while leading AI models such as ChatGPT, Google’s Gemini, and Anthropic’s Claude generally followed best practices when responding to high-risk suicide queries, their guidance was inconsistent for moderate-risk scenarios. The researchers warned that large language models require further refinement to ensure safe behavior in sensitive mental health contexts.
As OpenAI faces mounting scrutiny, the rollout of parental controls marks the company’s latest attempt to balance innovation with the urgent need for user safety.