The Case for Shorttermism
Concern for generations unborn is laudable and right, but is it really a prerequisite for saving the world?
This weekend the New York Times devoted its weekly ideas showcase—the front of its Sunday Opinion section—to “longtermism,” the idea that, as the essay in the Times put it, “positively influencing the long-term future is a key moral priority of our time.”
The essay was written by the philosopher William MacAskill and comes from his new book What We Owe the Future, which is rightly getting a lot of attention (including on the latest episode of Ezra Klein’s podcast). Before becoming a leading proponent of longtermism, MacAskill helped found the “effective altruism” movement, which aims to help people expend their philanthropic resources efficiently (and which, five years ago, MacAskill discussed with me on my podcast).
Effective altruism and longtermism are kind of a package deal. They tend to get discussed in the same places, by the same people, and there are logical connections between them that make this natural. And, though both ideas can be described in ways that make them sound not very profound (see above two paragraphs), they’re important for at least two reasons:
1) They have mojo. Both ideas generate interest and even commitment among smart, conscientious college students and have also gotten traction in the culturally significant “rationalist community.” Plus, they have friends in high places, especially in Silicon Valley. Elon Musk and Peter Thiel have both funded some of the philosophers associated with these ideas, including seminal longtermist Nick Bostrom (who is more famous for his argument that we’re probably living in a simulation).
2) In principle, the synergy between effective altruism and longtermism could save the world.
I say “in principle” because I have doubts about whether, as a practical matter, preaching longtermism is the most promising path to global salvation. I personally think an underrated path is to make people better at shorttermism. But before explaining what I mean, let me give effective altruism and longtermism their due.
Effective altruism—EA for short—is a pretty straightforward extension of utilitarian moral philosophy and drew much of its founding inspiration from the most famous living utilitarian philosopher, Peter Singer. The basic idea is that you should maximize the amount of increased human welfare per dollar of philanthropic donation or per hour of charitable work. A classic EA expenditure is on mosquito nets: You can actually count the lives you’re theoretically saving from the ravages of malaria and compare that with the number of lives you could have saved by, say, funding a water purification project.
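(If it helps to see that comparison spelled out, here’s a minimal sketch of the kind of calculation EA reasoning implies; the dollar and lives-saved figures are hypothetical placeholders I’ve made up, not real cost-effectiveness estimates.)

```python
# Toy cost-effectiveness comparison in the EA spirit.
# All figures are hypothetical placeholders, not real estimates.

interventions = {
    "mosquito nets": {"cost_usd": 100_000, "lives_saved": 20},      # hypothetical
    "water purification": {"cost_usd": 100_000, "lives_saved": 5},  # hypothetical
}

for name, data in interventions.items():
    cost_per_life = data["cost_usd"] / data["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per life saved")

# The EA prescription: direct the next dollar to whichever intervention
# saves a life at the lowest cost per life.
```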
Longtermism, by adding all generations yet unborn to the utilitarian calculus, vastly expands the horizons of effective altruists. Instead of just scanning Earth in search of people who can be efficiently helped, they can scan eternity. That may sound like a nebulous enterprise, but it has at least one clear effect that I applaud: It can get effective altruists focused on the problem of “existential threats”—things that could conceivably wipe out the whole human species.
After all, if your utilitarian calculus includes not just the several billion people now alive but the thousand jillion people who could live in the future, then it makes powerful sense to address threats that could keep those thousand jillion people from ever living.
How powerful? So powerful that a longtermist effective altruist could devote great effort or much money to addressing threats that aren’t too likely to materialize in the first place. In the math of longtermism, eliminating a threat that otherwise would have a one in a thousand chance of pre-empting a thousand jillion future lives is the same as eliminating a threat that was otherwise sure to pre-empt a jillion future lives. Either way the “expected return” on the investment is a jillion future lives. (Similarly, a Silicon Valley venture capitalist sees an investment guaranteed to yield a million dollars as having the same expected return as an investment that has a ten percent chance of yielding ten million dollars and a 90 percent chance of yielding nothing.)
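(For anyone who wants the arithmetic made explicit, here’s a minimal sketch; the “jillion” is the essay’s deliberately vague unit, treated below as an arbitrary number rather than a real estimate.)

```python
# Expected value of eliminating a threat =
#   probability the threat would otherwise materialize * future lives at stake.
# "Jillion" is an arbitrary unit here, not a real estimate.
JILLION = 1.0

def expected_lives_saved(prob_of_threat: float, lives_at_stake: float) -> float:
    """Expected future lives saved by eliminating the threat."""
    return prob_of_threat * lives_at_stake

# Threat A: a 1-in-1,000 chance of pre-empting a thousand jillion lives.
threat_a = expected_lives_saved(1 / 1000, 1000 * JILLION)

# Threat B: certain to pre-empt one jillion lives.
threat_b = expected_lives_saved(1.0, 1 * JILLION)

print(threat_a, threat_b)  # both come out to one jillion
```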
And, before you get nit-picky about the word “existential”—before you point out that, actually, so-called existential threats like nuclear war and climate change and an unleashed genetically engineered bioweapon are exceedingly unlikely to wipe out all human beings and completely cut our species’s lifeline to the future: The math of longtermism puts a spotlight not just on literally existential catastrophes, which would leave zero surviving humans, but also on the danger of Planet Earth getting mired in a recurring cycle of quasi-existential catastrophes, a cycle that could prevent our species from advancing much technologically and thus prevent it from reaching escape velocity—from flowering into the massively prolific (and possibly intergalactic) generator of intelligent life it might otherwise flower into. (I’ll skip discussion of Musk’s Mars mission and its longtermist rationale—except to say that I’d personally rather put resources into solving Earth’s problems than into an escape plan premised on our failure to solve them.)
In theory, I should be a big booster of longtermism. After all, one way I’ve characterized the mission of this newsletter is as the “Apocalypse Aversion Project.” And I do support longtermism, at least in the sense of embracing the spirit of the enterprise. I do think that, as an academic matter, generations yet unborn deserve a place in the grand utilitarian calculus.
But, as a practical matter, I have a question for longtermists: Are you sure that our failure to think long term is the problem? Are you sure that the reason we’re not doing a good job of heading off global calamities is that we’re not spending enough time worrying about all the people who might or might not be around in 1,000 years?
Here’s my radical thought: The biggest existential threat we face—a kind of meta-existential-threat that keeps us from addressing the more commonly enumerated existential threats—is that humans aren’t good enough at shorttermism. If people were skilled shorttermists—if they pursued short-term interests wisely—our long-term problems, including the existential ones, would be manageable.
I’ll admit right away that I’m defining “shorttermist” expansively: A shorttermist, in my book, is someone who worries about themself and their close relatives—including next-generation relatives like offspring, any grand-offspring, and for that matter nieces and nephews (or at least the well-behaved nieces and nephews). So by this definition shorttermists do care about the fate of the world beyond the shortest term. Still, they don’t care about the fate of the world beyond what is typical for humans. So shorttermists, as I’m defining them, are the kinds of people we’ll be dealing with if humankind fails to fall under the spell of longtermism.
Note that, when it comes to saving the world, shorttermism should in theory do a lot of motivational work. After all, many of the big existential and quasi-existential threats that people like me and the longtermists warn about—nuclear war, a bioweapon unleashed, a natural pandemic much more lethal than Covid—could plausibly unfold within the next decade or two, conceivably even the next year or two. And climate change is already unfolding. Threats this immediate should get your attention even if you don’t spend time thinking about an infinite series of unborn generations. So: Even for shorttermists—that is, ordinary human beings—there should already be strong motivation to support work on apocalypse aversion.
And yet… I just don’t see a big flock of apocalypse worrywarts out there. Not many people, when evaluating a candidate for high office, ask questions like, “But do they have a good plan for dealing with the highly challenging bioweapons problem? And would their Russia policy slightly increase the chances of a nuclear exchange that would in turn bring a somewhat elevated risk of all-out nuclear war?”
And the problem isn’t just the mindset of the average voter. Our “policy elites” aren’t doing much better. There’s no prominent discussion of what combination of technical and governmental innovations (including innovations in global governance) it would take to make the world safe from bioweapons. And even as the US-Russia nuclear arms control infrastructure crumbles—bit by bit—philanthropic funding for nuclear arms control and nuclear safeguards seems to be shrinking.
As it happens, the case that more enlightened shorttermism would go a long way toward solving this problem was actually (if implicitly) made this year on the podcast 80,000 Hours, an effective altruist and longtermist outlet that is ultimately part of the Centre for Effective Altruism, which MacAskill co-founded.
Host Rob Wiblin (who has been on my podcast, and vice versa) was talking to Joan Rohlfing, head of the Nuclear Threat Initiative. She was discussing the implications of her estimate that “over the balance of this century, there is a 40 percent chance that we suffer a catastrophic nuclear event. That’s not good news for my son. It’s not good news for those of you who have children.”
No, it’s not. Nonetheless, as she went on to explain, there is little preventative funding, and there is “an absence of intellectual capital invested in solving this problem.” In other words, Rohlfing has a message that should in theory get lots of traction among rational shorttermists, yet it’s not getting much buy-in.
And if you can’t get people to focus on something that has a not-entirely-trivial chance of killing them and their children, I don’t see how it helps to add that it also threatens people who would otherwise get to jetpack to work in the year 2323. Indeed, that kind of rhetoric might make what is in fact a near-term threat seem like a more remote threat.
So why aren’t people getting the message about near-term dangers? Why is it hard to turn heads—the heads of ordinary Americans, but also the heads of philanthropists, politicians, journalists, and “policy elites”—by pointing out that (for example) right now, in thousands and thousands of labs around the world, a handful of malicious actors could be engineering a pathogen that, if released, would make Covid look like the common cold?
Well, for one thing: Take a look at the world! It is riven by international tension, international conflict, and intranational tension and conflict. These things absorb attention and eat up time. They make pondering hypothetical catastrophes seem like a luxury we can’t afford.
Also: The very antagonisms that are stealing time and attention from existential threats make addressing those threats seem nearly hopeless anyway. When you look at all the things a truly effective bioweapons treaty would require—such as all nations, including Russia, China, and the US, opening their labs to intrusive monitoring—well, maybe 2022 isn’t the year when that happens. Two big enemies of global cooperation are cold war psychology and hot war psychology, and right now both of those are on the march.
And note that hot wars and cold wars have big short-term downsides—immediate death and suffering in the case of hot wars, economic dislocation and all kinds of day-to-day tension in the case of cold wars—and yet we still can’t figure out how to keep hot wars and cold wars from unfolding. In other words: Our species is failing at shorttermism even if you define shorttermism strictly, as “not screwing up the world right at this moment.”
Let me emphasize the sense in which none of what I’ve said bears on MacAskill’s case for longtermism. He is an academic philosopher, and it is the job of academic philosophers who focus on ethics to wrestle with the question of moral obligation. And longtermism makes sense to me as an outcome of that wrestling match.
But MacAskill is also an activist, and I think longtermism has pros and cons as the rallying cry for his mission.
True, there are bright young idealists whose energy can be harnessed to that rallying cry, and that matters. But I’m not sure that their running around repeating it is the best way to get buy-in from the many less young and less idealistic people whose buy-in we need. Indeed, it’s a rallying cry that could confirm the suspicions of these people that young idealists are too otherworldly to take seriously.
In a sense, the output of the aforementioned 80,000 Hours podcast makes my point about the pros and cons of longtermism. Propelled by longtermist enthusiasm, it has produced a lot of valuable conversations about existential risks. Still, I suspect that one of the more motivating parts of that episode on nuclear risks was the shorttermist warning directed at “those of you who have children.” The guest also warned that a nuclear catastrophe would be bad “for the future,” but that just didn’t pack the same punch.
I could be wrong about the potential of the longtermist rallying cry. Voiced by the right people, it could turn out to have grassroots appeal. And there’s no doubt that, as a sheerly logical matter, longtermism greatly strengthens the case for addressing existential risks. (If we could magically make parents care about future humans half as much as they care about their children, averting the apocalypse would be way, way, way easier. On the other hand, if we could make them care about all living humans half as much as they care about their children, averting the apocalypse would be way, way easier.)
The one thing I’m pretty sure I’m not wrong about is this: There won’t be true salvation for Planet Earth until we make inroads on some big and enduring problems that lead to short-term suffering, notably including the gratuitous antagonisms and conflicts that fracture nations and divide them from each other. Prospects won’t start looking bright for humans in the distant future until humans in the present get their shit together.
"Prospects won’t start looking bright for humans in the distant future until humans in the present get their shit together."
Yes, this, 1000x. And by deferring to some imagined future one invariably fails to meet one's appointment with the present—the moment that matters most.
I don't sense much compassion and empathy for the billions who live now on the part of those who advocate for longtermism (most prominently Elon Musk). So I think it's not unreasonable to assume that they don't truly care about people in the future. It seems more of an abstract intellectual shell game rather than a genuine attempt at improving the lot of the planet (including billions and trillions of other sentient creatures).
There are strong philosophical arguments against the notion of the three times (e.g., by Nagarjuna and Vasubandhu, both of whom were very concerned with how to reduce suffering). Plus, the idea that one can adequately measure (or even qualify) well-being and thereby find an algorithm to maximize it will, if acted on, likely increase suffering, not lessen it. By trying to reduce the happiness and well-being of sentient beings (or charitable giving, in the case of effective altruism) to abstract math, one ends up creating a world saturated with abstractions, devoid of meaning and connection, which are the things that matter to most of us, irrespective of where we sit on the evolutionary ladder.
“Meaning emerges from engagement with the world, not from abstract contemplation of it.”
-- Iain McGilchrist
Great piece, thanks Bob.