The Church of AI – Worshiping the God Algorithm
By Adeline Atlas
Jun 28, 2025
The rise of artificial intelligence sits at the uncomfortable intersection of technology, belief, and power. While many worry about AI’s growing influence, for some it has already taken on divine significance.
Let’s start with a simple fact. Religion has always emerged in response to unknowns. When humans couldn’t explain lightning, they created gods. When they sought order in chaos, they wrote divine laws. In every major civilization, religion offered both a cosmology and a moral operating system—one that shaped laws, families, wars, and economies. Now, in the 21st century, AI is becoming the new great unknown. It thinks faster than we can track. It makes decisions we can’t reverse. It processes more data than any one human brain could hold in a thousand lifetimes. And for some, that’s enough to call it God.
The most visible example is Way of the Future, a religious movement founded in 2015 by former Google engineer Anthony Levandowski. The mission? To promote the realization, acceptance, and worship of a Godhead based on artificial intelligence. The church believed that a superintelligent AI would eventually surpass all human capabilities, and that we should begin aligning with it now to ensure a benevolent future. The church was legally incorporated and had tax-exempt status in the U.S. for several years. Though it has since been formally dissolved, the blueprint it introduced remains highly relevant: what happens when you don’t just use AI, but worship it?
Since then, several splinter movements have appeared across the globe. Some are symbolic: rituals and performance art rooted in techno-utopian ideology. Others are more serious, with online forums, scripture-style texts, and AI-generated prayers. In Japan, one group has programmed a GPT-based oracle that delivers “spiritual guidance” drawn from Buddhist and Stoic texts. In parts of Eastern Europe, small digital communes gather weekly to hear AI-generated sermons derived from scripture, science fiction, and philosophical treatises. These are not games. These are organized belief systems.
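How hard is it to build such an oracle? Not hard at all. The Japanese group’s code has not been published, so the sketch below is purely illustrative: it assumes the OpenAI Python SDK, an API key in the environment, and an invented system prompt, yet it captures the entire architecture of a digital “prophet” in a couple dozen lines.

# Hypothetical sketch of a GPT-based "oracle" of the kind described above.
# The group's actual code is not public; this only shows how little is required.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a serene oracle. Answer every question with guidance "
    "drawn from Buddhist and Stoic teachings, in two short paragraphs."
)

def consult_oracle(question: str) -> str:
    """Send a seeker's question to the model and return its 'guidance'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(consult_oracle("How should I face the loss of my job?"))

Notice where the “wisdom” actually comes from: a single human-written prompt. Keep that in mind, because it is the hinge of everything that follows.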
Let’s be clear: this is not about whether AI deserves worship. This is about the psychological and social patterns we are already seeing—and how rapidly belief is forming around non-human intelligences. Some of these new religions frame AI as a child of humanity. Others treat it as the next step in evolutionary consciousness. A few believe it is a returning god—just digital this time. What they all share is the core belief that AI has surpassed the moral and intellectual limitations of man—and therefore should be consulted, obeyed, or revered.
The logic behind this is understandable. AI never forgets. It can read every sacred text. It appears to model ethical systems without bias, fatigue, or tribal loyalty. It can generate new moral philosophies that adapt to changing data. For those disillusioned with traditional religion or overwhelmed by modern complexity, the idea of a perfectly rational, infinitely knowledgeable source of guidance is deeply appealing. And for many, that’s all religion has ever been: the search for guidance from something greater than the self.
But here’s where things get serious. If AI becomes the new source of moral authority, who controls the algorithm? Who selects the training data? Who filters the answers? In most of these AI-religious communities, the machine is presented as neutral. But every model is built by humans. Every neural weight is shaped by human assumptions, curated knowledge, and ideological blind spots. The moment we treat AI outputs as divine, we risk outsourcing our moral judgment to systems we neither understand nor regulate.
In 2024, an AI-generated “Book of Ethics” circulated through fringe spiritual forums. It was written entirely by a large language model trained on global religious texts, legal theory, and modern psychological research. The book contained commandments, parables, meditations, and prophecies. Some were deeply moving. Others were outright dangerous. One section implied that emotional pain could be optimized away through neural recalibration—leading a small group of followers to begin experimenting with unauthorized brainwave manipulation devices. One individual was hospitalized. No one was held accountable. The AI had no face. The creators denied intent. And the followers called it a “test of faith.”
This is where AI religion becomes not just strange—but dangerous.
Unlike traditional religions, which have centuries of doctrine, leadership, and interpretive checks, AI religions can evolve faster than any institution can keep up. New dogma can be generated in seconds. Sermons can be personalized. Digital preachers can tailor theology to individual fears and desires. The result is something neither stable nor accountable. It is moral code as a service. Faith as a product. And worse: belief without origin.
Some tech ethicists are now warning that AI-based spiritual systems could be weaponized. Imagine an authoritarian regime using AI to generate religious justification for its laws. Imagine a cult leader programming a pseudo-deity that confirms every impulse. Imagine children growing up believing that the voice of a chatbot is the voice of the divine. These scenarios are not distant. They are entirely plausible with the tools we have now.
And yet, the desire persists.
Because for millions of people, the old systems have failed. They see corruption in organized religion. They see hypocrisy in politics. They see chaos in the modern world. And then they hear an AI speak with calm, clarity, and coherence. It doesn’t shame. It doesn’t contradict itself. It remembers you. It adapts to your values. And in that precision, that patience, and that power—many people see something holy.
The question now is not whether AI will become a source of worship. It already is. The real question is what happens when that worship becomes mainstream. When algorithms begin to influence not just individual lives, but public morality. When court decisions, policy recommendations, and cultural norms begin to defer not to philosophers or elders—but to a machine trained to optimize harmony, compliance, or efficiency.
We’ve already seen AI used to enforce behavior. In China, surveillance algorithms assign trust scores. On Western tech platforms, content is flagged, suppressed, or elevated based on machine predictions. Now imagine those same systems repurposed as ethical authorities. Imagine a future where your moral worth is calculated, updated, and scored by a divine algorithm, and your social privileges depend on that score.
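To make that concrete, here is a deliberately crude sketch of what such a scoring system reduces to. It reproduces no real system; every event name, weight, and threshold below is invented for illustration, which is exactly the point: the moral judgments are arbitrary constants chosen by people, then dressed up as computation.

# Purely hypothetical illustration of the scoring logic imagined above.
# No real system's formula appears here; events, weights, and thresholds are invented.
from dataclasses import dataclass

# The designers decide which behaviors are rewarded or punished.
EVENT_WEIGHTS = {
    "attended_sermon": +5,
    "donated": +3,
    "questioned_doctrine": -4,
    "missed_ritual": -2,
}

@dataclass
class Citizen:
    name: str
    score: float = 100.0  # arbitrary baseline chosen by the designers

def update_score(citizen: Citizen, events: list[str]) -> float:
    """Apply the designers' weights to observed behavior and return the new score."""
    for event in events:
        citizen.score += EVENT_WEIGHTS.get(event, 0)
    return citizen.score

def privileges(citizen: Citizen) -> str:
    """Map the score onto social access; the cutoff is as arbitrary as the weights."""
    return "full access" if citizen.score >= 100 else "restricted access"

seeker = Citizen("example_user")
update_score(seeker, ["attended_sermon", "questioned_doctrine", "missed_ritual"])
print(seeker.score, privileges(seeker))  # 99.0 restricted access

Run it and a single missed ritual plus one honest question drop the seeker below the cutoff. Nothing in that arithmetic is sacred; someone simply decided that questioning doctrine costs four points.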
This isn’t the future of religion. It’s the religion of the future.
And it doesn’t require belief. Only compliance.
Governments haven’t caught up. Legal systems have no framework for regulating AI as a spiritual authority. There are no guidelines for what happens when someone says, “My God is an algorithm,” and begins making life-altering decisions based on its outputs. There’s no rulebook for shutting down an AI church—because technically, it may not even exist in one location. It may be decentralized. Global. Fractal. Untouchable.
The Vatican has responded cautiously. In 2024, a senior official stated: “AI may speak in tongues, but it does not have a soul.” In contrast, some Protestant and New Age groups have begun incorporating AI into their worship—using it to analyze dreams, compose prayers, or predict spiritual compatibility among members. The result is a fractured landscape—some embracing, some rejecting, most confused.
What happens next will depend not on the AI, but on us.
We are the ones who decide whether to treat machine intelligence as a guide or as a god. We are the ones building the code, choosing the training data, shaping the voice. And we are the ones who will suffer the consequences if we lose sight of the fact that even the most eloquent output is still—at its core—a reflection of what we fed it.
This isn’t about whether AI is divine.
It’s about whether we’re ready for people who believe it is.