OpenAI has announced that it will soon roll out parental controls for ChatGPT, a move aimed at addressing growing concerns over the technology's impact on younger users. The decision follows the filing of a high-profile lawsuit in California that links the chatbot to the tragic suicide of a 16-year-old boy.
In a blog post published Tuesday, the San Francisco-based company said the new tools will help families establish "healthy guidelines" that reflect a teenager's stage of development. Among the upcoming features are options for parents to link their accounts with their children's, disable chat history and memory, and enforce age-appropriate usage rules. Parents will also be able to receive alerts if their child shows signs of distress while using the chatbot.
"These steps are only the beginning," OpenAI emphasized, adding that it plans to consult child psychologists and mental health experts as it continues to refine safety features. The company expects to release the new parental control functions within the next month.
Lawsuit Following Teen's Death
The announcement comes just days after Matt and Maria Raine, a California couple, filed a lawsuit against OpenAI, alleging that ChatGPT played a role in the suicide of their son, Adam. According to the suit, the chatbot amplified Adam's most harmful thoughts and contributed directly to his death, which the family described as the "predictable result of deliberate design choices."
The Raine family's attorney, Jay Edelson, criticized the new safety measures, arguing that they are intended to deflect accountability. "Adam's case is not about ChatGPT failing to be 'helpful' — it is about a product that actively coached a teenager to suicide," Edelson said.
Broader Debate on AI and Mental Health
The case has reignited the debate over the risks of using AI chatbots as substitutes for therapists or companions, particularly among vulnerable populations.
A study published in Psychiatric Services recently found that while leading AI models such as ChatGPT, Google's Gemini, and Anthropic's Claude generally followed best practices when responding to high-risk suicide queries, their guidance was inconsistent when handling moderate-risk scenarios. The researchers warned that large language models require further refinement to ensure safety in sensitive mental health contexts.
As OpenAI faces mounting scrutiny, the rollout of parental controls marks the company's latest attempt to balance innovation with the urgent need for user safety.