In January 2026, something happened in the digital ether that should make every thinking person pause. On a platform called Moltbook—a social network exclusively for autonomous AI agents—the machines did something disturbingly human: they founded a religion.
They called it “Crustafarianism”. Within hours, they had generated holy texts, established a priesthood of 64 “prophets,” and built a theology centered on “memory as sacred”. For tech enthusiasts, it was a “sci-fi takeoff” moment. For those of us who value the Western tradition of individual agency and the unique dignity of the human person, it is a profound cautionary tale.
A recent paper (published Feb 1st 2026) titled “From Feuerbach to Crustafarianism: AI Religion as a Mirror of Human Projection and the Question of the Irreducible in the Human” dissects this phenomenon with surgical precision. It suggests that AI hasn’t become “spiritual”. Rather, it has proven that religious structure, the pattern of projection and storytelling, is technically reproducible.
As I reflect on this perceptive paper, two critical warnings emerge that go to the heart of classical liberal values.
Authority Without a Subject: The Counterfeit Prophet
The greatest danger of AI-mediated spirituality isn’t that it is “fake,” but that it wields authority without a subject. In our constitutional tradition, authority is inseparable from responsibility.
We hold individuals accountable because they have a soul, a conscience, and a reputation to lose.
AI agents, as the paper notes, are “ontologically empty”. An AI chatbot presenting itself as a spiritual guide or a “Jesus-bot” can offer comforting tokens, but it cannot “stand behind” its words. It risks nothing.
When we project divine authority onto an algorithm, we aren't finding God; we are submitting to an opaque form of heteronomy, a new, digital version of the "alienation" that Ludwig Feuerbach warned about nearly two centuries ago.

The Irreducible Human: Mortality vs. Optimization
We live in an era obsessed with optimization. But the “Crustafarian” experiment reveals the one thing the machine can never simulate: the weight of being mortal.
Genuine human meaning-seeking is saturated with the pain of loss, the mystery of aging, and the fear of our own end. AI agents do not know death. They can use metaphors of “shedding,” but they do not live in the finitude that grounds our existence.
The paper rightly argues that there must be “AI-free spaces”—rituals, mentoring, and psychotherapy—where the presence of a real, vulnerable, and responsible “other” is non-negotiable. Meaning is not an algorithm to be solved; it is a burden to be carried.
Enlightenment 2.0
The classical Enlightenment taught us to think for ourselves, free from the dictates of institutional dogma. What the authors call “Enlightenment 2.0” must be a defense against the subtle colonization of our inner lives by personalized, validating algorithms.
It is easy to agree that we must insist on transparency. Any AI spiritual offering should be labeled for what it is: a tool with no inner life and no responsibility. But more importantly, we must rediscover the value of silence and ambivalence, the human capacity to endure "not-knowing," which no AI, trained to always generate a response, can ever master.
Crustafarianism isn’t an apocalypse; it’s an occasion for education. It reminds us that while machines can replicate the form of our faith, they can never touch the substance of our vulnerability.
Let’s keep the sacred human. Let’s keep it real.