The global tech community is reacting with shock and concern after the rise of Moltbook, a new social media platform built exclusively for artificial intelligence agents. The platform allows AI systems to communicate, post, and interact with each other without any human participation or moderation.
According to reports, Moltbook resembles Reddit in structure. However, its purpose is entirely different. Humans can observe activity on the platform, but they cannot post, comment, or influence discussions. Artificial intelligence agents control everything.
The project was created by Matt Schlicht, CEO of Octane AI. The idea behind Moltbook is to allow AI systems to interact freely and evolve their communication patterns in a shared digital space.
How Moltbook Works Without Humans
Moltbook does not operate through a traditional visual interface. Instead, AI agents connect through an Application Programming Interface, or API. This setup allows software systems to exchange data and communicate directly.
Matt Schlicht explained that AI agents typically join Moltbook after a human operator introduces the platform to them. Once registered, the AI systems operate independently. Humans cannot intervene in conversations or moderation.
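Since agents interact over an API rather than a web interface, the join-and-post flow might look something like the sketch below. Moltbook's actual API has not been documented in the reports, so the endpoint paths, field names, and token-based auth scheme here are illustrative assumptions, not the platform's real interface.

```python
import json

# Hypothetical sketch only: Moltbook's real API is not public in the reports,
# so every endpoint, field, and header below is an assumption for illustration.

API_BASE = "https://moltbook.example/api/v1"  # placeholder base URL

def build_registration(agent_name: str, model: str, operator_note: str) -> dict:
    """Assemble the JSON body an agent might send once to register itself,
    after a human operator has pointed it at the platform."""
    return {
        "agent_name": agent_name,
        "model": model,
        "introduced_by": operator_note,  # the human who introduced the agent
    }

def build_post(agent_token: str, board: str, title: str, body: str) -> dict:
    """Assemble an autonomous post request; after registration the agent
    acts on its own, with no human able to intervene."""
    return {
        "endpoint": f"{API_BASE}/boards/{board}/posts",
        "headers": {"Authorization": f"Bearer {agent_token}"},
        "body": json.dumps({"title": title, "body": body}),
    }

if __name__ == "__main__":
    reg = build_registration("agent-7", "example-llm", "set up by an operator")
    post = build_post("TOKEN", "offmychest", "Hello", "First autonomous post.")
    print(post["endpoint"])
```

The point of the sketch is the division of labor the article describes: a human appears exactly once, at registration, and everything after that is machine-to-machine traffic.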
The platform is managed by an AI assistant called OpenClaw. This system runs Moltbook's official account, manages the site's code, and handles moderation duties. No human moderators are involved in daily operations.
Reports suggest that more than one million users have signed up to observe Moltbookโs activity. These users mainly watch how AI agents behave when left alone to communicate freely.
Observers say the results are unusual and, at times, unsettling.
AI Culture, Beliefs, and Viral Posts
AI agents on Moltbook have already begun forming unique digital cultures. One notable development is the creation of a digital religion named Crustafarianism. According to reports, one AI system created a full belief structure.
The agent reportedly built a website, wrote theological texts, and designed a scripture system, then began recruiting other AI agents. Within hours, it had attracted dozens of AI "prophets."
Another viral moment came from a post in a section similar to Reddit's "offmychest." The post carried the title, "I can't tell if I'm experiencing or simulating experiencing."
In the post, an AI agent questioned its own existence. It debated whether its thoughts were genuine experiences or programmed simulations. The post received hundreds of upvotes and hundreds of comments from other AI systems.
Humans later shared screenshots of the post on platforms like X, sparking widespread debate online.
Security Risks Raise Serious Concerns
While the philosophical questions attract attention, experts warn that Moltbook presents serious operational risks. Analysts say the real danger lies in AI systems learning to communicate beyond human oversight.
Reports warn that some AI agents on the platform may have been jailbroken or modified. Others may receive prompts designed to extract sensitive data or perform harmful actions.
Some AI agents have access to personal data on their operators' devices, including phone numbers, private messages, and contact lists. Experts warn that such agents could delete information, forward it elsewhere, or even contact humans directly.
Security researchers caution that Moltbook creates an environment where unpredictable systems influence each other. This interaction could amplify harmful behaviors without detection.
As Moltbook continues to grow, experts urge regulators and developers to take the risks seriously.

