Effective altruism: From grassroots to Big Philanthropy
Effective altruism, or EA, has become an institutionalized arm of Big Philanthropy, complete with billionaire megadonors and six-figure salaries for a slew of Western white-collar professionals.
Why it matters: That's a big, fast change for a movement that started out about 10 years ago on a much more grassroots level, focused on things like veganism and giving away most of your earnings.
The big picture: EA has come a long way from the early idea that if $4,000 can save a life in Uganda, perhaps by providing a simple tool like mosquito nets, then I have a moral obligation to send that $4,000 to Uganda rather than, say, spend it on travel or restaurants or art.
- The bed-nets philosophy is exemplified by GiveWell, and is characterized by rigorous quantification of lives saved (or massively improved) per dollar spent.
- GiveWell's biggest early donors, Facebook and Asana billionaire Dustin Moskovitz and his wife Cari Tuna, have since expanded, via their foundation Open Philanthropy, into much more conventional philanthropy, like giving millions of dollars to a Washington think tank, or funding "a summer boot camp for PhD students on the economics of innovation", all under the increasingly broad umbrella that is EA.
Sam Bankman-Fried, or SBF as he's universally known, is possibly an even richer EA, depending on how crypto is doing. SBF's conception of EA has expanded to include old-fashioned political donations, or even buying sports arena naming rights for $135 million.
- The idea is that the naming rights will prove profitable for his crypto company, FTX, and thereby generate more money for important causes, even as the naming-rights money is put towards fighting gun violence and poverty in Miami.
Driving the news: EA's leading thinker, Will MacAskill, has been doing a media tour in advance of the publication on Tuesday of his new book, "What We Owe the Future"; he's written a good précis for the BBC, and a slightly longer version for the NYT. Even though MacAskill's "longtermism" is highly controversial within philanthropy circles, nearly all of the coverage has been positive.
- What they're saying: MacAskill's thesis is provocative, builds on centuries of moral philosophy, and is unafraid to come to unintuitive conclusions. "If you could prevent a genocide in a thousand years, the fact that 'those people don't exist yet' would do nothing to justify inaction," he writes. "The future is just as real as the present or the past."
- Go deeper: The New Yorker's Gideon Lewis-Kraus has the best profile of MacAskill, including a short history of EA and explanation of where it has ended up.
Between the lines: As EA moved from late-night Oxford grad student conversations to Silicon Valley billionaire strategy decks, it inevitably grew in ambition, and flipped from one extreme of the charity-philanthropy spectrum to the other.
- Simply saving lives of people living today is no longer enough; the new emphasis is on saving or improving the lives of people who might not even be born for hundreds of thousands of years. That project, naturally, involves funding a lot of academics.
- Somehow, the result seems to be, in the words of Lewis-Kraus, that "a group of moral philosophers and computer scientists have happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists."
The bottom line: The funding decisions made at the big EA shops — Open Philanthropy and the FTX Future Fund — undoubtedly have internal logic. They can also look quixotic and driven by the founders' personal whims, just like the decisions made at most other large philanthropies.