“Effective altruism” — it sounds like a modest miracle of modern ethics, doesn’t it? Finally, a way to make doing good scientific. Effective! Altruistic! Who could object?
But let’s set down the Kool-Aid and pick up a scalpel. Imagine calling it instead: Altruism Triage.
Triage: the grim battlefield practice of deciding who gets treatment, who waits, and who quietly dies — not from cruelty, but from brutal necessity. That’s closer to the core. Effective altruists aren’t just sprinkling charity around with a tear and a smile; they’re slicing with cold, quantitative knives. Dollar for dollar, life for life, they choose winners. They have to — they say — because sentimentality is unaffordable. Empathy is inefficient.
Maybe you’re thinking: “Well, maybe that’s fine. Maybe we should be selective in who and what we support.” A fair point. But look closer at who is doing the selecting.
A surprising number of effective altruists are — how to put this politely — right-wing libertarians. Not the friendly, tie-dye libertarians of old. We’re talking about the pro-market, pro-capitalism, anti-tax, anti-welfare, anti-feminist, anti-‘woke’, small-government, privatize-everything crowd.
Scratch the surface, and effective altruism isn’t just about doing good more smartly. It’s about doing good in a way that leaves capitalism untouched, power structures unchallenged, and the idea of collective obligation suspiciously absent. Helping malaria victims? Great. Questioning why those victims live in extractive postcolonial poverty? Don’t hold your breath.
In this light, effective altruism begins to look less like radical compassion and more like a moral laundering service for the wealthy and powerful. Spend a few million on global health interventions while quietly helping prop up the systems that cause those very problems. Make altruism just another market — competitive, efficient, and above all, private.
Maybe that’s the real triage here: saving the world just enough to save the system.
Big Names in Effective Altruism — and the “Other Things” They Say
- William MacAskill
Philosopher. Oxford don. Public face of EA.
MacAskill popularized the core tenets: maximize impact, focus on neglected causes, take utilitarian math seriously. He sounds gentle — and, to his credit, earnest — but here’s the quiet part, whispered to a more select audience: longtermism.
Longtermism: the notion that the real moral imperative is protecting future generations — trillions of hypothetical future people. It’s EA’s pivot from mosquito nets to preventing AI apocalypse, even if that future is centuries away. Saving lives today? Maybe a rounding error compared to saving 10^30 future minds.
Conveniently, this also shifts focus away from messy issues like inequality, racism, and capitalism’s real-world wreckage.
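To see why longtermist arithmetic swamps everything else, here is a back-of-envelope sketch of the expected-value reasoning the essay describes. All figures are hypothetical placeholders (only the 10^30 future minds comes from the text above), not numbers any EA organization actually uses:

```python
# Back-of-envelope illustration of longtermist expected-value math.
# All numbers are hypothetical placeholders for illustration only.

FUTURE_MINDS = 10**30          # hypothetical future people (figure from the essay)
RISK_REDUCTION = 1e-12         # assumed tiny cut in extinction probability
LIVES_SAVED_TODAY = 1_000_000  # a very large present-day intervention

# Expected future lives saved by the tiny risk reduction:
expected_future_lives = FUTURE_MINDS * RISK_REDUCTION

# Even a vanishingly small risk reduction dwarfs present-day lives saved:
ratio = expected_future_lives / LIVES_SAVED_TODAY
print(f"{expected_future_lives:.0e} expected future lives "
      f"vs {LIVES_SAVED_TODAY:,} saved today (ratio {ratio:.0e})")
```

On these made-up numbers, a one-in-a-trillion reduction in extinction risk "saves" a trillion times more lives than the largest present-day intervention — which is exactly how saving lives today becomes a rounding error.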
- Nick Bostrom
Futurist. Oxford philosopher. Co-founder of the Future of Humanity Institute.
Bostrom is another architect of longtermism and the existential-risk obsession. He’s known for serious work — but also for cringe-inducing resurfaced comments about race and intelligence in some old writings. (Let’s be transparent: for decades I have consistently simped for Nick. I am convinced his writings might literally be what saves humanity. However, every silver chalice has a black lining, so to speak...) EA proponents hurried to distance themselves from some of Nick’s ideas, but there’s a pattern: a tendency to treat human beings as statistical abstractions, and a blindness to how ugly that calculus can get when you assume all variables are neutral.
Let’s just say, Bostrom’s flirtations with genetic IQ theories didn’t come from nowhere.
- Eliezer Yudkowsky
AI alarmist. Rationalist community guru.
Yudkowsky is big on AI doom — like, Skynet-is-coming-and-you’re-all-fools doom. EA circles have adored this angle because it positions existential AI risk as the ultimate cause. Meanwhile, poverty, war, and climate change fade into the background noise. Bonus points: if you believe unaligned AI is about to vaporize us, you don’t have to engage with any leftist critique about the economic systems right now.
Also, his Rationalist cultish tendencies (“LessWrong,” “Sequences,” “Bayesian reasoning as salvation”) carry that tech bro libertarian smell: anti-democracy, pro-epistocracy, vaguely eugenics-curious at times.
- Peter Thiel (Peripheral influence)
Tech billionaire. VC god-king. Political chaos agent.
Thiel isn’t EA per se, but he’s loomed near it. He bankrolls adjacent movements, idolizes rationalist culture, and pours cash into AI safety via OpenAI (early days) and other initiatives. Thiel is famously anti-democracy (“I no longer believe that freedom and democracy are compatible”), pro-seasteading, pro-libertarian paradise.
Why build a better world when you can build a fortified island?
EA and Thiel aren’t twins, but they’re kissing cousins on the ideological family tree. And really, you should hear this charmer on feminism and the alleged “rights” of women.
- Sam Bankman-Fried
Crypto criminal. Fallen EA superstar.
Bankman-Fried once styled himself as the patron saint of effective altruism, pledging billions to “high-impact” causes. And then — oops — he lost it all in a massive, shady crypto collapse (FTX scandal).
His defense? He was “playing utilitarianism” — making as much money as possible, by any means, to later give it all away.
What did he say to a journalist before it all collapsed? “All the dumb shit people do is strictly dominated by just, like, making more money.”
Ethics as speedrun capitalism, essentially. Just keep hitting the “profit” button until you die or crash.
- Toby Ord
Philosopher. Oxford. Founder of Giving What We Can.
Ord’s a cornerstone of EA and a key pusher of longtermism — with a side hustle in existential risk studies. His book The Precipice makes the case that nothing matters more than preventing existential catastrophes; poverty, inequality, and political instability only register insofar as they might end all future human life. A very neat way of quietly sidelining messy, political causes in favor of speculative sci-fi disasters.
- Holden Karnofsky
Co-founder of GiveWell and Open Philanthropy.
GiveWell is the prototypical EA organization: spreadsheets, “evidence-based charity,” Return on Investment for human lives. Holden eventually shifted toward Open Philanthropy, which now funds — surprise — a lot of longtermist, AI-alignment, and biosecurity projects. Holden’s model? High-net-worth individuals privately deciding what matters most, with minimal messy democracy involved.
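The spreadsheet logic described above boils down to a single ranking operation. A minimal sketch, with entirely invented charity names and figures (these are not GiveWell's data or methodology, which is far more involved):

```python
# Sketch of "dollars per life saved" ranking, the core move of
# cost-effectiveness charity evaluation. All names and numbers are invented.

charities = {
    "Bednet Distribution": {"cost": 4_500_000, "lives_saved": 1_000},
    "Deworming Program":   {"cost": 1_200_000, "lives_saved": 150},
    "Local Food Bank":     {"cost": 2_000_000, "lives_saved": 40},
}

# Rank by cost per life saved — lower means "more effective".
ranked = sorted(charities.items(),
                key=lambda kv: kv[1]["cost"] / kv[1]["lives_saved"])

for name, d in ranked:
    print(f"{name}: ${d['cost'] / d['lives_saved']:,.0f} per life saved")
```

Note what the ranking does: the food bank lands dead last not because it does no good, but because its good resists conversion into the chosen unit. That is the triage table in miniature.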
- Rob Wiblin
Research director at 80,000 Hours.
Wiblin’s all about steering ambitious young minds into “high impact” careers — often meaning Wall Street, tech, or AI safety — where you can “earn to give.” The subtext? It’s more effective to become a hedge fund quant than to become a social worker or a public defender. Scale over solidarity, profit over protest.
- Nick Beckstead
Philosopher. Longtermist advocate. Fund manager at FTX Foundation (oops).
Beckstead’s key claim to fame: arguing that people in rich countries are morally obligated to donate to the most efficient global charities — and that investing in future humanity is more important than addressing present injustices. He was also an advisor to Sam Bankman-Fried during his rise and fall, proving that even the best-laid triage plans can end up drenched in scandal and fraud.
- Alexander Berger
Co-CEO of Open Philanthropy.
Another EA powerbroker controlling how hundreds of millions of dollars get allocated, often by funding quiet, “neutral” projects like AI alignment, pandemic prevention, or criminal justice reform (the “respectable” face). But again — this is private money deciding public priorities. You don’t vote for this. You don’t deliberate. It’s philanthrocapitalism behind a polite Oxford accent.
The Broader Pattern
It’s not just that they’re white guys. It’s that they’re white guys from the same cultural hegemony:
- Elite academia (Oxford, Harvard, Stanford).
- STEM fields and analytic philosophy.
- Rationalist culture and techno-futurism.
- Libertarian or market-sympathetic ideologies.
- Quiet disdain for “messy” human politics, especially social justice movements.
- A preference for hypothetical future people over real, suffering ones now.
And almost all of them believe — with dead seriousness — that private money and elite calculation will save the world better than messy, democratic politics ever could.
What These People Tend to Also Say
- Markets are efficient, and interfering with them is dangerous.
- Redistribution sounds nice but is inefficient and maybe immoral.
- Large-scale social programs? Bureaucratic bloat.
- Feminism? Critical race theory? Social justice? Nice luxuries for rich countries, irrelevant to the grand calculus.
- Climate change? Maybe an existential threat, but maybe not as bad as rogue AI or engineered pandemics.
- Government regulation? A blunt instrument compared to the elegance of private innovation and targeted philanthropy.
- Collective political action? Distracts from the individual moral responsibility to earn to give — become a tech mogul, hedge fund shark, or crypto bro, and then “give effectively.”
The Black Lining on the Silver Chalice
- Who Gets to Decide?
The philosopher-kings of EA aren’t elected, accountable, or particularly diverse. They’re self-anointed custodians of human welfare, claiming authority because they did the math. You don’t get a vote. There’s no referendum on what matters most — just closed-door decisions by people who think they’re smarter than democracy.
- Hypocrisy of Means and Ends
EA is obsessed with ends — maximal lives saved, maximal future flourishing. But the means? Murky.
SBF (Sam Bankman-Fried) exemplified this: he explicitly adopted a Machiavellian “ends justify the means” attitude. Fraud, deception — if it gets you a bigger pot to donate from later, what’s the problem?
Effective altruism quietly smiles and looks away when its golden boys cut corners in the name of the greater good.
- Whiteness and Western Chauvinism
Effective Altruism often claims global impartiality — a “view from nowhere.” But the philosophical underpinnings are suspiciously Western, white, male, and elite.
Few EA priorities focus on indigenous rights, postcolonial justice, or structural reform — messy, political causes. Helping “the global poor” is framed as saving them, not empowering them. A faint whiff of the missionary complex lingers in the air.
- The Specter of Eugenics and Social Engineering
Longtermism — the darling philosophy of EA — flirts dangerously with eugenic thinking.
If future lives matter more than current ones, and future societies will presumably be smarter, healthier, and better, then doesn’t it make sense to optimize which humans exist?
Some EA thinkers have already been caught quietly wondering if it’s better to ensure “high IQ” future populations. Old ghosts stirring.
- Market Fetishism
EA rarely, if ever, questions the sacred cow of free markets.
Why did the Global South end up impoverished? Why does climate change disproportionately affect the poor? Systemic critiques are inefficient. EA is engineered not to dismantle markets or capitalism, but to work within them. Like a band-aid factory attached to a chainsaw production line. -
Moral Narcissism
Underneath all the utilitarian calculus is a deep, unspoken self-regard: We are the ones smart enough, rational enough, enlightened enough to fix the world.
Not by solidarity or humility — but by superior cognition. It’s not the white man’s burden anymore; it’s the rationalist’s burden.
The Grand Irony
Effective altruism might well save lives. It might well forestall disasters. But it also perfects the art of moral deflection — making sure the people who broke the world stay rich, stay powerful, stay in charge of deciding how to fix it. In the end, the black lining isn’t just a blemish on the chalice — it’s structural. It’s forged right into the silver.
The Pattern Emerges
Effective altruism — under the microscope — starts looking like an ethical spin on libertarian neoliberalism. Don’t fix the system. Don’t redistribute power. Don’t talk about historical injustice or collective action. Just earn more, give more, and trust the market-like structure of moral triage to sort out the mess. A world where a small intellectual aristocracy calculates which 1% of the world gets the help it can statistically justify — while the other 99% remains a spreadsheet error.
Conclusion: Charity Is Good — Let’s Not Get Lost in the Weeds
Charity, at its core, is unambiguously good. It’s the act of recognizing suffering and reaching out — not because you have to, not because it’s profitable, but because it’s right. Whether it’s a few dollars to a food bank, a hand to a neighbor, or millions for a clinic in a forgotten part of the world, charity is one of the clearest expressions of human solidarity.
Everyone, if they can, should give. Everyone should be free to give as they choose, to the causes that move them — whether that’s mosquito nets, soup kitchens, libraries, or art programs for underprivileged kids. The impulse to care is not the problem. Gatekeeping charity with “effectiveness” arguments is just arguing against certain kinds of charity, with extra steps.
In fact, in a better world — a truly just world — we might not even need charity. Systems would be fair enough, institutions strong enough, safety nets woven thick enough, that private generosity would be a luxury, not a lifeline. But let’s be honest: that world, if it comes at all, is centuries away.
The problem begins when a certain cadre — the “I Am Very Intelligent” crowd — starts to explain why some charities are “less desirable,” “less rational,” or “wasteful.” When doing good becomes a game of spreadsheets and triage tables, when empathy is replaced by cold calculation, when giving is transformed into a private market for moral efficiency, something vital is lost.
Charity stops being about love, solidarity, or simple human decency — and starts being about optimization. About control. About power.
It’s good to want to be effective. It’s good to think about impact.
But it’s dangerous — even obscene — to let the perfect become the enemy of the good, to argue that only certain types of suffering are worthy of alleviation, or that only the smartest elites can decide what charity should really mean.
Charity is not a math problem. Charity is a human act. And that, ultimately, is why it matters.
Altruism Triage, indeed.