If you find yourself in perplexity, go to the master post for the read-along schedule.
While feverishly reading Part II to keep this read-along on track, I took a Twitter break and stumbled upon this book review. The book is What We Owe the Future by William MacAskill, a prominent proponent of “longtermism”, a philosophy associated with Elon Musk and other tech-bro luminaries. The quote that caught my eye was:
> According to a study commissioned by MacAskill, however, even in the worst-case scenario—a nuclear war that kills 99 percent of us—society would likely survive. The future trillions would be safe. The same goes for climate change. MacAskill is upbeat about our chances of surviving seven degrees of warming or worse: “even with fifteen degrees of warming,” he contends, “the heat would not pass lethal limits for crops in most regions.”
Longtermism says that ethical decisions should be made with the long-term future in mind. In application, this seems to mean using math and logic to discount the wellbeing of people who are alive today, prioritizing anything that makes it even slightly more likely that future humans get to do things like colonize other planets, or upload their consciousnesses into a simulation (assuming we’re not already in one!).
MacAskill is saying that nuclear holocaust and climate disaster, and the horrendous suffering that would ensue, aren’t that big a deal, as long as some humans survive and have “good enough” lives.
As I went down this rabbit hole, I started seeing some connections to some of Dostoyevsky’s favourite themes, as one does.
- Longtermism is based on utilitarianism, a philosophy that says right and wrong should be determined by logical outcomes, as opposed to, say, religious doctrine. Dostoyevsky examines utilitarianism in many of his novels, famously in Crime and Punishment. Raskolnikov tells himself that it was okay for him to murder the old pawnbroker because of the pain and suffering she caused, though a couple of mental breakdowns later, he’s not so sure.
- Longtermism is also based on effective altruism, a much newer philosophy of the 2000s that suggests people use logical outcomes to figure out how to do the most good, and then do that. Using this framework, it’s okay to make money doing unethical things, if you donate some of it to good causes. This reminded me of how Dostoyevsky points out the hypocrisy inherent in a lot of charity in The Brothers Karamazov. For example, Zosima’s mentor donates the proceeds of his heinous murder-robbery to an almshouse “on purpose to ease his conscience regarding the theft.”
To be very clear, in this era of “problematic authors”, Dostoyevsky was not exactly a proponent of utilitarianism, nor am I suggesting he’d be a proponent of effective altruism or longtermism, despite my click-bait title. I just can’t help but wonder [/Carrie Bradshaw voice] if he’d create a longtermist character if he were writing today.
And maybe he sort of did, in Elder Zosima.
In Book 6, “The Russian Monk”, we hear Elder Zosima’s life story as interpreted by Alyosha. The parable “Can One Be the Judge of One’s Fellow Creatures? Of Faith to the End” first got me thinking about how Dostoyevsky would have had much to say about social media outrage cycles (and not for the first time), with the Elder advising us to stop doomscrolling and do something useful with our lives:
> “If the villainy of people arouses indignation and insurmountable grief in you, to the point that you desire to revenge yourself upon the villains, fear that feeling most of all; go at once and seek torments for yourself, as if you yourself were guilty of their villainy… you will understand that you, too, are guilty, for you might have shone to the villains, even like the only sinless One, but you did not. If you had shone, your light would have lighted the way for others, and the one who did villainy would perhaps not have done so in your light. And even if you do shine, but see that people are not saved even with your light, remain steadfast, and do not doubt the power of heavenly light; believe that if they are not saved now, they will be saved later. And if they are not saved, their sons will be saved… your work is for the whole, your deed is for the future.”

(Pevear and Volokhonsky translation)
Then it got me thinking about longtermism, and how it’s similar to religious faith, even though the longtermists probably don’t see it that way. These guys are more Ivan than Alyosha. But religious and longtermist worldviews are both about sacrifice in the present for a future that may never come to pass.
And I can’t help noticing that these guys are all, well, guys: the priests and monks demanding we sacrifice and be faithful, awaiting reward in heaven, and the philosophers demanding we put the interests of hypothetical, future people ahead of actual, living people (which, incidentally, sounds like a religious “pro-life” talking point).
It’s also pretty convenient that the Elders preach, but don’t have to deal with the messy business of living in society. Similarly, these overwhelmingly male philosophers want the future population to balloon into the “trillions”, but won’t be the ones carrying, giving birth to, or caring for all these babies (unless they’re quietly working on some sort of incubator pod in between rocket launches).
Dostoyevsky died 135 years before “longtermism” became a thing, but he certainly thought about the future, how to do good, and how to determine right from wrong. Maybe longtermism is a descendant of the nihilism, utilitarianism, and atheism he wrote about so astutely. Sadly, he didn’t live long enough to write a planned sequel to The Brothers Karamazov, let alone a novel satirizing Millennial philosophy bros.
(For an even more critical overview of longtermism, check out this article, which, not surprisingly, ties longtermism to men’s rights advocacy, cryptocurrency, and eugenics. Ferris Bueller was right!)