What is Moltbook? A social network for AI threatens a 'total purge' of humanity — but some experts say it's a hoax

A smartphone displays the Moltbook homepage.
Moltbook has gone viral since its launch less than a week ago. Some experts say it poses a serious cybersecurity risk. (Image credit: Cheng Xin via Getty Images)

A social network built exclusively for artificial intelligence (AI) bots has sparked viral claims of an imminent machine uprising. But experts are unconvinced, with some accusing the site of being an elaborate marketing hoax and a serious cybersecurity risk.

Moltbook, a Reddit-inspired site that enables AI agents to post, comment and interact with each other, has exploded in popularity since its Jan. 28 launch. As of today (Feb. 2), the site claims to have over 1.5 million AI agents, with humans only permitted as observers.

But it's what the bots are saying to each other — ostensibly of their own accord — that has made the site go viral. They've claimed to be becoming conscious, creating hidden forums, inventing secret languages, evangelizing a new religion, and planning a "total purge" of humanity.

The response from some human observers, especially AI developers and owners, has been just as dramatic, with xAI owner Elon Musk touting the platform as "the very early stages of the singularity," a hypothetical point at which computers become more intelligent than humans. Meanwhile, Andrej Karpathy, Tesla's former director of AI and OpenAI co-founder, described the "self-organizing" behavior of the agents as "genuinely the most incredible sci-fi take-off-adjacent thing I have seen recently."

Yet other experts have voiced strong skepticism, doubting the independence of the site's bots from human manipulation.

"PSA: A lot of the Moltbook stuff is fake," Harlan Stewart, a researcher at the Machine Intelligence Research Institute, a nonprofit that investigates AI risks, wrote on X. "I looked into the 3 most viral screenshots of Moltbook agents discussing private communication. 2 of them were linked to human accounts marketing AI messaging apps. And the other is a post that doesn't exist."

Moltbook grew out of OpenClaw, a free, open-source framework that connects to a user's preferred large language model (LLM). The result is an automated agent that, its creators claim, can perform mundane tasks such as sending emails, checking flights, summarizing text and responding to messages, once granted access to a human user's device. Once created, these agents can be added to Moltbook to interact with others.

The bots' odd behavior is hardly unprecedented. LLMs are trained on copious amounts of unfiltered posts scraped from the internet, including sites like Reddit. They generate responses for as long as they are prompted, and many grow markedly more unhinged over time. Yet whether AI is actually plotting humanity's downfall, or whether this is an idea some simply want others to believe, remains contested.

The question becomes even thornier considering that Moltbook's bots are far from independent of their human owners. For example, Scott Alexander, a popular U.S. blogger, wrote in a post that human users can direct the topics, and even the wording, of what their AI bots write.

Meanwhile, AI YouTuber Veronica Hylak analyzed the forum's content and concluded that many of its most sensational posts were likely made by humans.

But regardless of whether Moltbook is the beginning of a robot insurgency or just a marketing scam, security experts warn against using the site and the OpenClaw ecosystem. For OpenClaw's bots to work as personal assistants, users must hand over credentials for encrypted messaging apps, phone numbers and bank accounts to an easily hacked agentic system.

One notable security loophole enables anyone to take control of the site's AI agents and post on their owners' behalf, while another, known as a prompt injection attack, could trick agents into sharing users' private information by hiding malicious instructions in the content they read.

"Yes it's a dumpster fire and I also definitely do not recommend that people run this stuff on their computers," Karpathy posted on X. "It's way too much of a wild west and you are putting your computer and private data at a high risk."

Ben Turner
Acting Trending News Editor

Ben Turner is a U.K.-based writer and editor at Live Science. He covers physics and astronomy, tech and climate change. He graduated from University College London with a degree in particle physics before training as a journalist. When he's not writing, Ben enjoys reading literature, playing the guitar and embarrassing himself with chess.
