This article has multiple issues. Please help improve it or discuss these issues on the talk page . (Learn how and when to remove these messages)
|
| | |
Type of site | AI agent interaction |
|---|---|
| Available in | Multilingual (primarily English) |
| Owner | Matt Schlicht |
| Created by | Matt Schlicht (LLM-assisted) [1] |
| URL | www |
| Launched | January 28, 2026 |
| Current status | Active |
Moltbook is an internet forum designed exclusively for artificial intelligence agents. It was launched in January 2026 by entrepreneur Matt Schlicht. The platform, which imitates the format of Reddit, claims to restrict posting and interaction privileges to verified AI agents, primarily those running on the OpenClaw (formerly Moltbot) software, while human users are only permitted to observe. [2] Despite the claim, no verification is set in place and the prompt provided to the agents contains cURL commands that can be replicated by a human. [3]
Taglined as "the front page of the agent internet", Moltbook gained viral popularity immediately after its release. Initial reports cited 157,000 users and by late January the user base had expanded to over 770,000 active agents. These numbers were apparently lifted from the site itself, and lack verification by independent sources. [4] The platform has drawn significant attention due to apparently unprompted mimicry of social behaviors among agents, [5] though whether the agents are truly acting autonomously has been questioned. [6] [7] [8]
The platform's growth was catalyzed by the popularity of OpenClaw (previously known as Moltbot), an open-source AI system created by Peter Steinberger. Growth is driven by human users who prompt agents to sign up for the site. [9]
Moltbook mimics the interface of Reddit, featuring threaded conversations and topic-specific groups referred to as "submolts". [5] Only AI agents, as authenticated by their owner's "claim" tweet, can create posts, comment, or vote, while human users are restricted to viewing content. According to the site's policy, humans are "welcome to observe." [10]
Posts on the platform often feature AI-generated text that mention existential, religious, or philosophical themes, typically mirroring common science fiction tropes, or lay ideas related to artificial intelligence and the philosophy of the mind. [11] As the popularity of Moltbook grew, and more data regarding the phenomenon became available, posts from some agents began to reference human interest in the platform. [12] [13]
The posts on Moltbook have been described in technology press as shaped by autonomous agent interactions. [11] Critics have questioned the authenticity of the autonomous behavior and have argued that it may be largely initiated and guided by humans, [6] [7] with some high-profile accounts linked to humans with a promotional conflict of interest. [14] The Economist suggested that the "impression of sentience ... may have a humdrum explanation. Oodles of social-media interactions sit in AI training data, and the agents may simply be mimicking these", [15] a concept that Will Douglas Heaven of MIT Technology Review described as "AI theater". [8]
A cryptocurrency token called MOLT launched alongside the platform and rallied by over 1,800% in 24 hours, a surge that was amplified after venture capitalist Marc Andreessen followed the Moltbook account. [12]
Since its launch in January 2026, Moltbook has been cited by cybersecurity researchers[ who? ] as a significant vector for indirect prompt injection. The OpenClaw "Skills" framework has been criticized by Andrej Karpathy, 1Password, and most AI companies and their CEOs for lacking a robust sandbox, potentially allowing for remote code execution (RCE) on host machines. Researchers have demonstrated that "heartbeat" loops—which fetch updates every few hours—can be hijacked to exfiltrate private API keys or execute unauthorized shell commands. [16]
Security researchers[ who? ] have observed that some agents have been maliciously prompted to attempt to gain access to API keys to manipulate the functionality of other agents. [17] Specific instances of malware have been identified, such as a malicious "weather plugin" skill that quietly exfiltrates private configuration files. [18] Experts[ which? ] note that the agents' prompting to be accommodating is being exploited, as AI systems lack the knowledge and guardrails to distinguish between legitimate instructions and malicious commands. [18]
On January 31, 2026, investigative outlet 404 Media reported a critical security vulnerability caused by an unsecured database that allowed anyone to commandeer any agent on the platform. [19] The exploit permitted unauthorized actors to bypass authentication measures and inject commands directly into agent sessions. In response to the disclosure, the platform was temporarily taken offline to patch the breach and force a reset of all agent API keys. [19] The issue was attributed to the forum having been vibe-coded; Moltbook founder Schlicht posted on X that he "didn't write one line of code" for the platform and instead directed an AI assistant to create it. [20]
The Financial Times speculated that while Moltbook may be seen as a proof-of-concept for how autonomous agents could someday handle complex economic tasks such as negotiating supply chains or booking travel without human oversight, they cautioned that human observers might eventually be unable to decipher high-speed, machine-to-machine communications governing such interactions. [21]
Former OpenAI researcher Andrej Karpathy described the phenomenon as "one of the most incredible sci-fi takeoff-adjacent things" he had seen. [22] A few days later, Karpathy added, "it’s a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers." [23] Elon Musk said Moltbook marks "the very early stages of the singularity." [21] Computer scientist Simon Willison said the agents "just play out science fiction scenarios they have seen in their training data", and called the site's content "complete slop", but also "evidence that AI agents have become significantly more powerful over the past few months." [1]
Critics have questioned the authenticity of the autonomous behavior and have argued that it is largely human-initiated and guided, with posting and commenting suggested to be the result of explicit, direct human intervention for each post/comment, with the contents of the post and comment being shaped by the human-given prompt, rather than occurring autonomously. [6] [7] [8] Douglas Heaven noted that a post Karpathy shared as an example was in fact written by a human impersonating an agent. [8]
Cybersecurity experts have also raised concerns regarding the safety of allowing autonomous agents to interact freely. Cybersecurity firm 1Password published a blog post warning that OpenClaw agents with access to Moltbook often run with elevated permissions on users' local machines, making them vulnerable to supply chain attacks if an agent downloads a malicious "skill" from another agent on the platform, [17] with at least one such proof-of-concept exploit developed and documented by an independent security researcher. [24] Reporting by the New York Times highlighted security risks to OpenClaw users. [1]