OpenAI CEO Sam Altman has issued a cautionary note about the growing reliance on ChatGPT for emotional support and mental health advice, warning users that the AI chatbot is not a substitute for professional help. Speaking on the popular podcast This Past Weekend with Theo Von, Altman emphasized that conversations with ChatGPT do not carry the legal confidentiality protections that apply to licensed therapists, doctors, or lawyers, and should not replace their advice.
Altman expressed concern that many users, particularly teenagers and young adults, are turning to ChatGPT for guidance on deeply personal matters and mental health issues without understanding the privacy implications. “People talk about the most personal stuff in their lives to ChatGPT,” he noted, adding that OpenAI has not yet implemented robust privacy protections for such sensitive interactions.
As AI continues to integrate into daily life, Altman stressed the importance of seeking real human expertise on health, legal matters, and emotional well-being. “ChatGPT is not therapy,” he said bluntly, highlighting the limits of AI in handling complex human emotions.
In parallel, Google is taking steps to regulate how its AI services interact with younger users. According to The New York Times, the company will soon impose an age restriction on its Gemini chatbot, preventing children under 13 from using it. The update will be rolled out next week through Google’s Family Link service.
Family Link is a parental control tool that allows guardians to manage and monitor their children’s access to digital platforms. With the upcoming changes, parents will be able to decide whether their children can interact with Gemini, helping ensure safer AI use for minors.

These developments underscore a growing recognition among tech leaders of the need for ethical boundaries and safeguards as AI becomes increasingly integrated into personal and emotional aspects of users’ lives.

