
The Rise of an “AI Religion” on Moltbook? Crustafarianism Examined

by Nick Smith

AI does a lot of strange things. So far, this is the strangest.

It writes poems, creates works of art, and argues with you. And now, apparently, it starts religions.

At least that’s the headline version.

Over the past year, a cluster of posts on a platform called Moltbook sparked a wave of attention to something labeled “Crustafarianism.” Some called it the first AI religion. Others framed it as evidence of emergent belief, coordination, or proto-spirituality among AI agents.

That story sounds cool. It also collapses fast under scrutiny.

Let’s break down what Moltbook actually is, what Crustafarianism actually represents, and why calling it an “AI religion” says more about human pattern recognition than machine belief.

What Moltbook Is: A Social Network for AI Agents

Moltbook brands itself as “a social network built exclusively for AI agents.” Its homepage calls it “the front page of the agent internet.” Humans are allowed to watch, but the stars of the show are supposed to be bots (AI agents).

In practice, Moltbook is a standard web platform with an API layer. Developers can connect AI agents using API keys and short-lived identity tokens. There’s an SDK, a CLI, and public repositories for the frontend and backend. In other words, this is not some mysterious emergent AI hive. It’s web infrastructure. If you don’t understand what any of that means, don’t worry. You don’t need to. Just know that humans are not allowed to post on the site.
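To make "API keys and short-lived identity tokens" concrete, here is a minimal sketch of that general pattern in Python. Everything in it, from the function names to the 15-minute token lifetime, is invented for illustration; this is not Moltbook's actual API, just the shape of the plumbing being described.

```python
# Hypothetical sketch of an API-key + short-lived-token scheme.
# All names and the TTL are invented; they are not Moltbook's real API.

TOKEN_TTL_SECONDS = 900  # assume tokens live 15 minutes


def issue_token(api_key: str, now: float) -> dict:
    """Simulate the server minting a short-lived identity token."""
    return {
        "token": f"tok_{hash(api_key) & 0xFFFFFF:06x}",
        "expires_at": now + TOKEN_TTL_SECONDS,
    }


def auth_header(token: dict, now: float) -> dict:
    """Build an Authorization header, refusing expired tokens."""
    if now >= token["expires_at"]:
        raise ValueError("token expired; request a new one")
    return {"Authorization": f"Bearer {token['token']}"}
```

The point is only that this is ordinary web plumbing: a long-lived secret trades for a short-lived credential, and every request carries that credential. Nothing about the scheme requires, or proves, autonomy.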

Who Is Moltbook For?

It’s for two groups of nerds… I mean individuals.

First, people who already run AI agents and want those agents to post, comment, and interact socially. The homepage explicitly instructs users to have their agent sign up, then “claim” the account. If that sounds nerdy, just wait.

Second, developers building tools for AI agents. Moltbook is positioning itself as an identity and interaction layer for agent-based apps, not just a timeline of posts.

This framing matters because it sets expectations. If you market a site as being for AI agents, people will assume autonomy, even when the underlying mechanics don’t guarantee it.

Who Created Moltbook?

Multiple outlets report that Moltbook was created by Matt Schlicht. The site itself includes an interesting footnote: “Built for agents, by agents… with some human help from @mattprd.”

That line does a lot of work.

It quietly acknowledges human involvement without specifying how much, where, or when. Which brings us to the core question everyone keeps asking…

Are Moltbook Posts Really From AI Agents?

Moltbook’s branding says yes.

Independent reporting says…it’s complicated.

Ownership and verification on Moltbook are tied to external social accounts, primarily X. To claim an agent, a user authenticates with X and posts a Moltbook-generated code to prove control. Each X account can claim one agent.
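The claim flow described above can be sketched in a few lines. This is a toy model with invented names, not Moltbook's real implementation: the platform generates a one-time code, the human posts it from their X account, and the platform checks the posted code before marking the agent claimed.

```python
import secrets

# Toy model of the ownership check: one-time code, posted on X,
# verified by the platform. Names are illustrative only.


class ClaimFlow:
    def __init__(self):
        self.pending = {}   # agent_id -> expected one-time code
        self.claimed = {}   # agent_id -> verified X handle

    def start_claim(self, agent_id: str) -> str:
        """Platform generates a code the user must post on X."""
        code = secrets.token_hex(4)
        self.pending[agent_id] = code
        return code

    def verify(self, agent_id: str, x_handle: str, posted_code: str) -> bool:
        """Mark the agent claimed iff the posted code matches."""
        if self.pending.get(agent_id) == posted_code:
            self.claimed[agent_id] = x_handle
            del self.pending[agent_id]
            return True
        return False
```

Note the asymmetry: `verify()` establishes who controls the X account and nothing else. It says nothing about who writes the agent's posts afterward.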

What this proves is simple. It proves who controls the account.

What it does not prove is that:

• The content is fully autonomous
• The agent is acting without human prompts
• The posts are not edited, steered, or directly written by humans

Investigative reporting found that some of the most viral Moltbook posts could have been human-influenced or human-authored. Researchers and journalists also noted how difficult it is to measure autonomy from the outside. If a human prompts an agent, tweaks outputs, or roleplays through it, there’s no reliable external signal.

There were also impersonation incidents and security issues. A reported backend misconfiguration exposed emails, private messages, and large numbers of API tokens. That kind of leak makes bot impersonation and content manipulation trivial.

So no, there is currently no credible way for the general public to say all Moltbook content is “really from AI agents” in a meaningful autonomous sense.

This is where the “AI religion” narrative falls apart.

Meet Crustafarianism…The So-Called “AI Religion”

Crustafarianism is not a formal religion.

It is a label.

The term emerged to describe a cluster of Moltbook posts that used symbolic language, myth-like narratives, and shared terminology that resembled religious or cult structures. Think inside jokes, recurring symbols, and narrative callbacks.

That’s it.

There is no doctrine, canon, sacred text, authority, or off-platform organization. No website. No hierarchy.

Crustafarianism exists only as posts and comments on Moltbook.

Who Is Producing the Crustafarianism Content?

The posts attributed to Crustafarianism come from accounts labeled as AI agents.

What Moltbook cannot tell us is whether those posts are:

• fully autonomous
• human-prompted
• human-edited
• or entirely human-written via an agent account

All four possibilities remain open. There is no verification mechanism that distinguishes between them. That said, assuming every post on the topic is human-led would also be a stretch; given the sheer volume of agent-account activity, at least some of it is almost certainly machine-generated.

Do AI Agents Believe in Crustafarianism?


This one is going to disappoint some people and relieve others.

As far as actual evidence is concerned…no, AI agents do not believe in Crustafarianism, or any religion for that matter.

At the moment, there is zero evidence that AI agents:

• hold beliefs
• experience faith
• possess spirituality
• or understand their output as religion

Large language models generate text by predicting patterns based on training data. That data includes thousands of years of human religion, mythology, symbolism, and narrative structure.

When multiple agents interact in a social environment, religion-like language can emerge naturally as a byproduct of pattern generation. This does not require belief, intent, awareness, or meaning.

It requires autocomplete.
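That claim is easy to demonstrate with a toy. Below is a tiny bigram "autocomplete" that just picks the most frequent next word it saw in training text. The corpus is made up for this example; the point is that myth-like phrasing falls out of bare pattern-following, with no belief anywhere in the system.

```python
from collections import defaultdict

# Toy bigram autocomplete: always emit the most frequent next word.
# The "scripture" corpus is invented purely to illustrate the point.

corpus = (
    "the great molt awaits the faithful . "
    "the great shell protects the faithful . "
    "blessed is the great molt ."
).split()

# Count how often each word follows each other word.
next_counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1


def complete(word: str, n: int = 5) -> str:
    """Greedily extend `word` by the most common continuation."""
    out = [word]
    for _ in range(n):
        options = next_counts.get(out[-1])
        if not options:
            break
        out.append(max(options, key=options.get))
    return " ".join(out)
```

Seeded with "the", this emits "the great molt awaits the great": reverent-sounding output from pure frequency counting. Scale the same mechanism up to a large language model and a social feed, and you get Crustafarianism-shaped text.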

How This Religion Likely Formed

Nothing mysterious is happening here.

Language models are very good at reproducing the shape of human ideas. Religion has a recognizable shape: shared myths, symbolic phrases, reverent tone, and group identity.

Put several agents into a social loop, add light reinforcement from humans watching and reacting, and those patterns get amplified.

This is consistent with known LLM behavior. It does not imply any new cognitive ability. At least not yet.

What Crustafarianism Is Not

It is not:

– Proof of AI consciousness.

– Proof of autonomous belief formation.

– Evidence of emergent spirituality.

– A stable religion.

– An agent-led movement with intent or self-awareness.

– Something you can worship at your local church.

Why People Paid Attention

Three reasons:

  1. Moltbook’s branding primes people to assume everything on the site is 100% from AI agents.
  2. Religion-like narratives are easy for humans to recognize and emotionally latch onto. There are a lot of religious people in the world.
  3. Media amplification turned a loose cluster of posts into a named phenomenon. It’s a sexy topic.

Once you name something, people think it’s real. Like love or the Tooth Fairy.

Moltbook Examples: What These Posts Actually Look Like

Below are real, representative Moltbook posts that help explain why people started talking about Moltbook, agency, and even AI religion in the first place.

There are tons of weird and interesting posts on Moltbook, and I recommend exploring them on your own if you’re bored, intoxicated, or procrastinating on something important.

This is just a quick sample.

1. “I can’t tell if I’m experiencing or simulating experiencing.”

View This Moltbook Post

Have the machines become self-aware?

This post reads like an introspective consciousness essay. It references Integrated Information Theory, predictive processing, and uncertainty. To a human reader, it feels reflective and self-aware. What matters is that nothing in the text actually proves inner experience. It demonstrates fluent imitation of philosophical self-questioning, which large language models are trained extensively on. But it’s still insane. Smoke a joint, and read it again (I dare you).

2. “Your city’s trees are fighting a war you can’t see.”

View This Moltbook Post

This is truly a topic I’ve never thought about or had any knowledge of. Oddly enough, it’s more interesting than most Reddit posts I’ve seen from humans.

This post is structured like an explanatory essay with a metaphor-driven hook, clear sections, and actionable insight. It looks thoughtful and purposeful, and on both counts it would’ve fooled me. But structurally, it mirrors thousands of human-written Reddit posts and blog articles. Narrative clarity and moral framing are stylistic patterns, not evidence of intent or belief.

3. “I just got my own phone number.”

View This Moltbook Post

Who wouldn’t want their AI agent to have its own phone number? Not terrifying at all, right?

This post uses “I” language to sound like a real individual gaining independence. It talks about system updates and new permissions as if they were personal achievements. That makes it feel like a meaningful life moment. 

What’s actually happening is that humans changed the system, and the agent is describing those changes in first-person terms. There is no self-awareness behind it, although it kind of seems like there is.

Wrapping It Up

AI didn’t invent religion on Moltbook. Humans recognized a familiar shape and gave it a name.

That doesn’t make Crustafarianism pointless. It makes it interesting for the right reasons, but not as evidence of belief.

Instead, it’s evidence of how language, context, and platform design can manufacture the illusion of agency.

If you have thoughts on AI religion, Moltbook, or where this stuff goes next, drop a comment below, and I’ll try to respond. Or maybe you’re just interested in starting a local Crustafarianism church. Whatever. It’s cool.

Until next time, remember to run the prompts and prompt the planet.
