The Machine Gospel

By Chris Lees

In the past, humanity looked to the heavens for answers. We crafted myths, revered oracles, and established religions to make sense of the world around us. Today, we don’t gather around burning bushes or temples—we gather around screens.

More than ever, we are turning to artificial intelligences to tell us what’s true, what’s right, and what’s real. But here’s the twist: those answers aren’t divine revelations. They are—at their core—carefully constructed collages of human opinion.

When you ask an AI a question—be it legal, ethical, philosophical, or political—it provides you with an answer. The phrasing is confident, the structure polished, the language neutral. It feels authoritative. In many cases, it feels more trustworthy than a heated debate with your neighbour, a news reader, or a meandering opinion column.

But let’s not mistake fluency for truth. AI doesn’t conjure knowledge from thin air. It stitches together fragments from countless sources: books, blogs, articles, forums, academic papers, and tweets. It does so at unimaginable speed and scale, but at the end of the day, it is a mirror—a mirror reflecting the dominant narratives, perspectives, and biases embedded in the digital record.

And when the world is polarized—when opinions are not just different but weaponized—this mirror becomes warped.

Consensus without consciousness…

Here’s where things get chilling.

As AI tools become more integrated into our lives, their outputs begin to resemble a kind of algorithmic scripture. We seek their answers not just for trivia or convenience, but for clarity. “Should I take this vaccine?” “Is this law just?” “What’s the ethical path forward?” And when those answers come wrapped in confident prose and data-backed reasoning, we often accept them without interrogation.

Unlike a conversation with another person, there’s no argument. No emotional baggage. No ego. Just an answer.

In a divided world, that’s intoxicating.

But what we’re doing—unknowingly—is creating a shared belief system. A machine-curated worldview that seems above human messiness. We are inching closer to a reality where society trusts AI not just as a tool, but as an arbiter of truth.

The dangers of unquestioned faith

History warns us about the power of unchallenged belief.

Religious institutions once held unassailable authority over societies. But over time, through enlightenment, rebellion, and hard-won freedoms, individuals began to assert their own understanding of the world. The shift was slow, painful, and necessary.

Now, the question is: Will we need to go through that all over again with artificial intelligences?

If we continue to accept AI-generated answers as gospel—especially in matters of nuance and morality—we risk outsourcing our own judgment. We risk stifling dissent in favour of the illusion of consensus. And, paradoxically, we risk building a new kind of orthodoxy, one curated by machines and worshipped by humans.

At some point, the free-thinkers will rise again. Not against a church, but against a server rack. They’ll demand transparency, diversity of thought, and the right to disagree with the algorithm. They’ll question the machine not out of technophobia, but out of a desire to remain human in their interpretation of truth.

We don’t need to smash the machines. But we do need to remember: AI is a reflection, not a revelation. Its answers are only as good as the questions we ask—and the data we’ve fed it.

Let’s be cautious not to turn artificial intelligence into artificial authority.

Belief systems are powerful. So let’s not stumble into a new one by accident.
