Betting on the Djinn: Why Aging Transhumanists are Accelerating AI in a Quest for Life Extension

Introduction: Aging, Mortality, and the Transhumanist Psyche

Mortality has always been a fundamental preoccupation of humanity. As people age, they often reflect more deeply on their lives, accomplishments, and the inevitability of death. But for some — particularly those aligned with the philosophy of transhumanism — aging represents a flaw, a bug in the human system that should be fixed rather than passively accepted. The idea of mortality becomes something to combat, a problem that technology should solve.

For the older transhumanists, this existential challenge is particularly urgent. Time is running out, and with that ticking clock comes a willingness to take risks, push boundaries, and seek out radical solutions. Accelerationism — the philosophy advocating for the rapid advancement of technology and social change, regardless of the risks — presents an appealing path forward. For many aging transhumanists, accelerationism holds the potential to unlock one ultimate goal: superintelligent AI capable of life extension and possibly even immortality. But at what cost?

Transhumanism and the Reluctance to Accept Mortality

Transhumanism, at its core, is a rejection of human limitations. It seeks to enhance and extend human capabilities, overcoming biological frailties, including disease, disability, and aging. Many transhumanists are driven by a deeply personal desire to transcend the natural limits of their own bodies. Some envision augmenting themselves with technology, while others dream of uploading their minds to a digital realm. For these individuals, aging is not a biological inevitability but a challenge to overcome.

Among the older transhumanist demographic, mortality isn’t just a philosophical curiosity; it’s an immediate concern. As they see the shadows of aging creep closer, some transhumanists feel an almost desperate need to push the boundaries of what’s possible. This urgency propels them toward aggressive investment in advancing technologies. If superintelligent AI could be created, they reason, it might unlock the secrets to life extension. For them, the stakes are personal — not societal — and the risks seem justified by the potential rewards.

And the stakes for society are already considerable.

Accelerationism in the Context of Mortality: “Pushing the Envelope”

Accelerationism has roots in a variety of philosophical and political movements, but at its essence, it’s about speeding up technological and social processes to create radical change. For aging transhumanists, accelerationism offers a tantalizing promise: pushing AI research forward could lead to breakthroughs in life extension and personal transcendence.

Their logic is simple: if superintelligent AI is achieved, it could potentially solve problems that currently seem intractable, including aging. Aging transhumanists may feel they have little to lose by taking this gamble, given that without life-extension breakthroughs, they face the same fate as everyone else. In their minds, the potential benefits of accelerationist policies — breakthroughs in AI, biotechnology, and possibly even mind uploading — vastly outweigh the societal risks. By pushing the envelope, they hope to “hack” life itself.

Aging Elites and the Allure of Superintelligence as a Djinn

The idea of superintelligent AI is, for many transhumanists, akin to a djinn of myth: a being of unimaginable power that could grant almost any wish. For aging tech elites, this analogy resonates deeply. Just as Aladdin might have wished for wealth or power, these elites might wish for the ability to halt or reverse aging. However, unlike a genie, superintelligent AI would not necessarily be benevolent, nor would it necessarily operate within human ethical constraints.

Despite this ambiguity, the allure of superintelligent AI persists. For aging elites who have already amassed significant wealth and power, superintelligent AI represents one of the few frontiers they haven’t conquered. They’re willing to risk considerable resources to see if this “genie” will grant them what they most desire: the chance to outlive their own bodies. To them, the potential benefits of such a breakthrough justify the risks, even if those risks are monumental for society at large.

The Misalignment Between Public Good and Private Desires

While aging transhumanists see superintelligent AI as a potential means of achieving personal goals, the broader societal implications are far more complex. What might be a calculated risk for an individual could lead to catastrophic consequences for humanity. If these individuals prioritize personal gain over public good, they might accelerate AI development recklessly, without considering the ethical implications or the potential dangers of creating a superintelligence that humanity cannot control.

For example, while an aging tech billionaire might view the potential of superintelligent AI as a way to stave off death, they may be less concerned about the social, economic, and ethical implications of such a development. The misalignment between individual motivations and the collective good creates an ethical dilemma: should society allow a few powerful individuals to shape the future of AI development for their own purposes, even if it means taking on unprecedented risks?

What They Stand to Gain and Lose: The Transhumanist Calculation

The potential rewards of superintelligent AI are undeniable. If successful, it could lead to solutions to some of humanity’s most pressing problems, including climate change, disease, and yes, aging. For aging transhumanists, this possibility is intoxicating. They may see themselves as pioneers, blazing a trail toward a future where death is no longer inevitable.

However, the risks are equally undeniable. Superintelligent AI could lead to an array of negative outcomes, including mass unemployment, economic upheaval, and even the potential extinction of humanity. For aging transhumanists, these risks might seem abstract or distant, especially when weighed against the personal benefits they hope to gain. They may be willing to risk societal collapse if it means they could achieve personal transcendence.

Ethical Implications and the Dangers of ‘Playing God’ with AI

The drive to create superintelligent AI is, in many ways, an attempt to “play god.” It involves wielding unprecedented power over the fabric of life itself, with the potential to reshape humanity in profound ways. For aging transhumanists, the ethical implications of this pursuit may be secondary to their personal desires. They may view themselves as heroes or visionaries, even as they ignore the potential consequences of their actions.

But this mindset is fraught with ethical dilemmas. If the pursuit of superintelligent AI leads to catastrophic consequences, who will be held accountable? The aging transhumanists pushing for this development may not live to see the full impact of their actions, but future generations will. The question, then, is whether society should allow a small group of individuals to pursue such a high-risk endeavor for personal gain, even if it could lead to irreversible harm for humanity as a whole.

Conclusion: The Perils and Paradoxes of Accelerationist Transhumanism

The motivations of aging transhumanists are complex, driven by a combination of fear, ambition, and a desire for personal transcendence. Their fascination with superintelligent AI and accelerationism is understandable, given the existential stakes involved. However, their willingness to take on extraordinary risks raises important ethical questions about the future of humanity.

While the pursuit of superintelligent AI holds great promise, it also carries immense dangers. If society allows a small group of aging transhumanists to shape the future of AI development, it risks creating a world that prioritizes individual desires over the collective good. In the end, the quest for immortality may prove to be a Faustian bargain, one that sacrifices humanity’s future for the personal ambitions of a few.