OpenAI CEO Sam Altman has raised concerns about the growing presence of AI bots on social media platforms. He believes the digital space is becoming increasingly “unreal,” as the line between human interaction and bot-generated content blurs.
Altman’s remarks followed his observation of a surge in online discussions praising Codex, OpenAI’s programming tool launched earlier this year. He noticed an overwhelming number of users claiming they had switched from competitor tools, with some joking about the repetitive nature of these posts.
Although Codex has gained popularity, Altman admitted that he often questions whether these online testimonials are written by real users. “I assume it’s all fake or bots,” he explained, even while acknowledging that Codex adoption trends are genuine.
Breaking Down the Problem
Altman analyzed the situation by highlighting several factors shaping the current state of social media:
- Many users now mimic “LLM-speak,” adopting patterns similar to large language models.
- Online communities tend to move in coordinated waves of activity.
- Technology hype cycles fuel extremes, shifting between optimism and criticism.
- Platforms prioritize engagement, pushing creators to copy trending styles for visibility.
- Past corporate “astroturfing” campaigns have made online audiences more suspicious.
- Bots remain an undeniable factor in shaping discussions.
Humans Sounding Like Machines
A striking paradox emerges from Altman’s observations: humans are now being accused of sounding like the very AI systems designed to mimic them. The widespread use of language models has created a feedback loop, where human communication increasingly mirrors machine output.
This challenge is amplified by the fact that OpenAI’s models were trained partly on data from platforms such as Reddit, where communities often show collective behavior that looks manufactured, even when it is not.
The Echo Chamber Effect
Altman also noted that online fandoms and hyperactive communities tend to evolve into echo chambers. These groups amplify excitement or anger, sometimes escalating into negativity or even hostility. As a result, authentic voices become harder to separate from synthetic or exaggerated ones.
Authenticity in Question
Altman’s comments reflect a broader cultural unease. As artificial intelligence grows more sophisticated, the ability to distinguish real from synthetic content is eroding. With bots, machine-like human expression, and platform-driven engagement strategies all at play, the authenticity of online interaction faces a serious challenge.
For businesses, creators, and everyday users, this raises a critical question: how can trust be maintained in a digital landscape where bots and humans appear indistinguishable?

