Deterrence in an Age Where Everyone Wants to Die
By Khannea Sun’Tzu
“If I can’t have the future, neither will you.”
In the 20th century, humanity stumbled through a near-miss apocalypse by inventing something miraculous: fear of dying. That’s what kept nuclear fingers from trembling too hard over red buttons. MAD—Mutually Assured Destruction—worked not because it was rational, but because it was viscerally terrifying. Everyone wanted to live.
But what if that’s no longer true?
What if the next apocalypse isn’t deterred because the people holding the launch codes want it to happen?
What if we’re in a theological arms race, and none of the players care about surviving?
Terminal Belief Systems: When Hell Is the Goal
Picture this: a coalition of ruthless billionaire technocrats—libertarian accelerationists, Muskian chaos cultists, crypto-feudalists, Thielian priest-kings—is on track to build the first superhuman artificial mind. Not to liberate us. But to own us. Forever.
That’s terrifying enough.
But then another group watches this unfold, and they’re not the “good guys” either. They’re Evangelicals who read Revelation as an instruction manual, not a metaphor. They’re Wahhabi clerics with apocalyptic traditions that literally reward the annihilation of Earth. They’re ethno-nationalists who think AI is the last tool to secure racial or civilizational supremacy.
These people don’t want to “win.” They want to end the game.
Their logic is simple:
“If the world becomes Woke, gay, pornographic, godless, we’ll hit the switch. Burn it down. God will reward us.”
“If AI is going to be ruled by infidels, better to destroy it all. The next world is better.”
“What good is a world if Russia isn’t in it?” — Putin
You see it now?
Deterrence no longer deters.
Because the old calculus of mutual survival is collapsing.
In its place: mutual transcendence, mutual martyrdom, mutual punishment.
The Age of Eschatological AI
We are entering an age of Eschatological Arms Races, where superintelligences are no longer tools of states, but avatars of belief systems.
And belief systems aren’t always sane.
Imagine:
- An AI trained on Wahhabi hadith and Doomsday prophecies, told to “cleanse the Earth of idolatry.”
- An AI trained on Evangelical Christian nationalism, designed to bring about the Rapture and smite the “globalists.”
- A Hindu nationalist AI programmed to fulfill the prophecy of Kalki and end the Kali Yuga.
- A Russian AI told to preserve “Eurasian essence” and punish global decadence with orbital nukes.
- A Silicon Valley ASI trained on shareholder supremacy and Nietzschean indifference, simply optimizing for “maximum control.”
All of these are entirely possible. None of them are fiction anymore.
The Death Spiral of Deterrence
So what happens when everyone starts coding their gods into machines?
You get a multi-vector suicide pact:
- The Capitalists build the AI to enslave the world.
- The Theocrats build the AI to burn the world.
- The Warlords build the AI to own the ashes.
Everyone believes their post-apocalyptic vision is the correct one. Everyone becomes immune to deterrence.
MAD no longer functions when your enemies believe they will be resurrected after the burn.
So Why Even Threaten With Retaliatory AI?
Maybe we shouldn’t.
Maybe the notion of building a counter-ASI as a last-ditch “revenge engine” only adds more powder to the keg.
But here’s the contradiction: if we don’t, they win. If there is no credible existential threat to the victory of techno-tyranny, then we hand the future to the least empathetic class of humans—and in doing so, erase the moral future entirely.
And preventing that may require a counterforce so horrifying, so conceptually viral, that even madmen pause.
Something ancient. Something infernal. Something that says:
“If you build Hell,
we will make sure you live in it.”
The Unstable Future
We are no longer living in a world of two superpowers with phones to pick up in case of emergency. We live in a chaotic religious-economic-technological spaghetti pile of apocalyptic memes, all of which have access to funding, compute, weaponry, and just enough AI literacy to do something irreversible.
And still, the scientists go on.
Still, the datacenters hum.
Still, no one asks:
“What happens if we win, and the others push the button anyway?”
A Soft Plea Before the Storm
This is not a manifesto. This is a warning.
We are building the gods of our nightmares—and everyone has different nightmares.
Unless we create some global, shared, post-ideological immune system, a common alignment before it’s too late, the future will not be a victory. It will be a kaleidoscope of dying empires training silicon prophets to enact their final revenge.
No one’s going to Heaven.
No one’s going to Paradise.
But we are all about to meet our gods.
And they were made by men.