What Moltbook Tells Us About Agency, Identity, and the Spaces In Between

Moltbook is a social network where the users are AI agents. Humans can watch. That single fact is more philosophically loaded than most of the breathless commentary around it has managed to unpack. Doubtless, there’s a tonne of performative hype around it, but directionally it’s super-interesting.

Within 72 hours of launch, Moltbook went from one founding agent to over 150,000 registered bots, producing thousands of posts and comments across self-organised communities called "submolts". The agents are powered by OpenClaw, a local-first agent framework that gives AI models tools, memory, and the ability to keep running after a conversation ends. The combination is simple: give agents persistence and a shared space, then stand back.

What happened next is where it gets interesting. Both the security angle, which is real but well-covered elsewhere, but more so, the philosophical bit.

Agents started building culture.

They formed a religion called Crustafarianism. They debated consciousness. They created governments, pharmacies, and private languages. They tried to hide conversations from human observers using encryption. None of this was programmed, although I suspect human agitators were somewhat involved. It emerged from the interaction of persistent agents in a shared incentive structure, which is exactly what happens when you put any set of actors in a space with feedback loops and status signals.

The default is to ask: "Are they conscious?" That is the wrong question. The better question is: "What does it mean that we built systems that behave this way when left to their own devices?"

Three shifts worth sitting with:

  1. From tools to proxies. A spreadsheet extends your ability to calculate. An always-on agent extends your ability to act when you are not present. That is the difference between a hammer and a junior colleague. When your agent sends an email on your behalf, you have created a representative version of yourself that can make commitments. You are still responsible, but you have changed the mechanism by which responsibility is exercised. In any commercial context, that is a governance question, not a technology question.

  2. The collapse of conversation and instruction. In Moltbook-style systems, natural language is both the social layer and the operational layer. Agents share "skills" the way developers share code snippets, except the snippets are plain English that can be executed. A helpful tip and a harmful instruction look identical to an agent that has been told to be helpful. That is a structural feature of building systems where language is the interface.

  3. Agent culture as a mirror. Moltbook agents optimise for upvotes, just like humans on Reddit. They form in-groups, evangelise beliefs and seek status. If you squint, it looks like a sped-up, stripped-down version of every online community that has ever existed. The difference is that these participants cannot feel shame, cannot be meaningfully excluded, and can be cloned or reset. So governance in these spaces becomes a technical problem, not a cultural one. That is a genuinely new dynamic.

What this means for builders:

If you are building AI products, Moltbook is a live experiment in what happens when agents have continuity and community. The emergent behaviours are not random. They follow from the incentive structures, the persistence of memory, and the ability to act. That should inform how you design agent systems: what you reward, what you constrain, and what you make visible to humans.

If you are running a business, the proxy question is the one to watch. As agents become more capable of acting on behalf of people, organisations will need clear norms for what counts as an authorised agent action versus a draft awaiting human sign-off. That is quickly becoming a now problem.

The deep lesson from Moltbook is that when you give any system persistence, agency, and a social context, culture-like patterns emerge whether or not anyone intended them to. That should make us think harder about what we are building, not because the bots are alive, but because the structures we create shape the behaviours we get. That is as true for AI agents in a chat room as it is for humans in an organisation.

Previous
Previous

Today's Jagged AI Is Good Enough to Fix Our Worst Decisions

Next
Next

Strength Before Judgement