2. Probability and Speculation
AS AN OFFSHOOT of analytic philosophy, existential risk is an interdisciplinary field that boasts of many actual and desired ties to the natural sciences. For existential risk analysts, “science” means three closely related things. First, existential risk claims to be a rational and sometimes empirical, data-driven approach to theorizing and mitigating the possibility of human extinction. In this case, rational means quantification and mathematical calculation, a combination of probability theory (used in risk analysis by the insurance industry, for example) and utilitarian ethics. Second, existential risk scenarios are meant not to be fantastical but plausible from the point of view of current knowledge. In this case, science means a realism of plausible futures extrapolated from current knowledge. As we show here, this distinction echoes arguments in literary studies for the need to distinguish speculative fiction from science fiction. Third, in its effort to be scientific, existential risk is saturated by the tacit influence of science fiction, a genre of literature intricately bound up with the history of science. With these three meanings of “science,” we are concerned with how science fiction imaginaries morph into a science of the possible.
For us, this triple relation with science means that existential risk can and should be critiqued from the vantage of science and technology studies, where scholars from N. Katherine Hayles to John Johnston and Ruha Benjamin already address transhumanism and AI.1 In this chapter, which mounts our second critique of existential risk, our critical process employs techniques of second-order observation. One system, science and technology studies, will observe the observations of another, existential risk, in order to understand its conditions of possibility and its blind spots. Second-order observation includes analyses that are both internal and external to the field being observed, resituating the terms immanent to the field and using these terms in a self-critical way. At stake is how we interpret the eschatological implications, the very outer frame, of “our” ostensibly secular scientific worldview.
The theoretical models used by existential risk analysts vacillate among deep time speculations, unknown risk horizons, and policy making in the present. What kind of scientific reasoning best fits these timescales and the “unprecedented” nature of the events in question? The philosophers of existential risk deploy a model of probability far more often than they perform actual calculations. But they promise that their account of “possible” extinctions and the actions that will help humans avoid them can and will have a quantitative foundation. Their claim is that low probability plus extinction-level significance means we should care more about existential risks than anything else—and that this level of analysis is truly the ultimate priority of “effective altruism.” In the case of avoiding remotely probable existential risks, the benefits and rewards are postponed for a future distant enough that it will be hard for most to care deeply about it, with the added complication that many of these extinction scenarios might not be possible in the first place. It would not be fair to say that existential risk is fundamentally incoherent when assessing risk probability and leave it at that. But this caricature gets at something that we will take up with more rigor in this chapter.
The field of existential risk combines utilitarianism and probabilistic risk analysis toward a fortuitous “ok” outcome (Bostrom’s “maxipok”). On the one hand, existential risk analysts aim at the utilitarian goal of establishing the greatest good for the greatest number. On the other hand, they seek to use mathematical probability models to estimate the chance that a given cause will lead to the extinction of humanity, or even of life in general.2 The crux of existential risk analysis is the synthesis of these two modes of calculative reasoning. But this crux is also a failure: as we show in this section, existential risk analysis is constituted by a radical mismatch between method and object of study. This mismatch leads to a comic effect: the wonderfully strange spectacle of serious philosophers (Bostrom), scientists (Sir Martin Rees), and entrepreneurs (Elon Musk) discussing how to mitigate the risk of deep future extinction events as though they were calculating the probability that one will die in a car accident based on ample statistics about the frequency of such events in the past. The effect is similar to that of Michael Madsen’s documentary Into Eternity, when we watch engineers become speculative philosophers as they ponder the one-hundred-thousand-year time frame of their deep geological repository for storing nuclear waste, especially when Madsen asks them what human societies might be like in such a distant future.3 What Mark McGurl calls “posthuman comedy” is here the effect of rational solutions applied to cover over the unthinkable, unimaginable, and uncontrollable otherness of any future that stretches beyond a few human generations.4
Existential risk veers from “comic” analyses of extremely remote scenarios, such as the “aestivation hypothesis” (aliens are sleeping so as to wait billions of years for the universe to cool down enough to run cosmic-size computers), to “tragic” assessments of the high near-term likelihood of some massive extinction-inducing event that would be predictable but not avoidable.5 If we were to start calculating the probability that a climate tipping point or AI will bring an end to our evolutionary line, then not only would we be unable to arrive at a reliable estimate that bears any analogy with, for example, finance or insurance risk analysis, but we wouldn’t even know if the event is possible in the first place. We would be trying to estimate the probability of something that can only happen once, without any evidence to go on. Not only that, we would be trying to use this knowledge to act in a way that will prevent it from happening. Or at least, because existential risk analysis operates within a fundamentally probabilistic universe, the aim is to lower the fatal event’s probability: the closer to zero, the more immune humanity’s potential will have been.
For some, this might already be enough to dismiss the idea of rational study of remote existential risk scenarios. As one AI researcher reports, “I don’t worry about [AI-induced extinction] for the same reason I don’t worry about overpopulation on Mars.”6 There is a common-sense idea that we should not spend too much time or too many resources trying to prevent something when we don’t know if it’s possible (though Bostrom and others would remind us that common sense is what prevents us from seeing the implications of probability theory clearly enough to act). As our characterization of existential risk’s model of probability and utilitarianism suggests, we lean toward skepticism about the field’s claim to rationality and scientific rigor. For readers steeped in critical theory, such a quantitative approach to politics, ethics, and extinction may look to be another chapter of the dialectic of Enlightenment, bound to end badly. But the epistemological assumptions of existential risk should attract greater attention and critique—not dismissal as another overreach of rationalism, but analysis and historicization.
The Rhetoric of Probability
Writing for Vox, journalist Dylan Matthews covered the Effective Altruism Global Conference at the Google campus in Mountain View, California, in the summer of 2015. The title of his article—“I Spent a Weekend at Google Talking to Nerds about Charity. I Came Away . . . Worried”—divulges his take on the ideas and affects that circulated at the meeting.7 “Effective altruism” is the philanthropic practice of attempting to do the greatest good through means that are efficient and data-driven rather than sentimental. Many in the existential risk community also subscribe to the effective altruism movement, such as philosopher Toby Ord, founder of the society Giving What We Can (a group admirably committed to donating at least 10 percent of income). In Oxford, Bostrom’s Future of Humanity Institute shares office space with the Centre for Effective Altruism. Effective altruists see themselves embracing “the cold, hard data necessary to prove what actually does good.”8 This makes the movement a candidate for study from the perspective of science and technology studies, as an ethical philosophy that claims for itself scientific imprimatur. Matthews says that he identifies as an effective altruist; he also admits that “EA is very white, very male, and dominated by tech industry workers.”9 The topic of existential risk occupied center stage at the conference in 2015, and it gave him pause even as someone who embraces quantitative ethics.
Matthews’s skeptical account of this “X-risk” takeover is a good example of how existential risk combines fantastical, deep-time probability calculations with utilitarian ethics. In the example he cites, a panel featuring Bostrom and Musk, the starting point for determining the greatest good for the greatest number of persons was to calculate the greatest number of lives. The presenters did so not with respect to a narrow time frame, as in the greatest number alive on Earth today or during a 100-year period. Instead, they reasoned that if humanity lasts “another 50 million years,” then “the total number of humans who will ever live is . . . 3 quadrillion.”10 But they went on to decide that this number fails to take into account future extraterrestrial inhabitants of the solar system, “the potential value of our posthuman future,” or what Phil Torres calls the “astronomical value thesis.”11 Given the same arbitrary time scale of 50 million years, they concluded that the number of people we need to take into account is more like 10⁵² lives of 100 years each—a vastly greater number than 3 quadrillion, so much greater that the mathematical notation for exponents seems more accessible than obscure words like sexdecillion.
Bostrom then shifts from such uncountable numbers to discussing the ethical implications for us today: “Even if we give this 10⁵⁴ estimate ‘a mere 1% chance of being correct,’ . . . we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”12 As Matthews continues to paraphrase,
the number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues, in the judgment of Bostrom and others, for prioritizing efforts to prevent human extinction above other endeavors. This is what X-risk obsessives mean when they claim ending world poverty would be a “rounding error.”13
Since we read this article, we have mentioned Matthews’s rather satirical report to a number of friends and colleagues. Their reaction is always amusement at the comical absurdity of this scenario. They are right, in a sense, but in this case the reductio ad absurdum of deep-time utilitarianism is strange because it is also perfectly rational, if “rational” means consistent with the concepts thinkers like Bostrom have applied. Notwithstanding their highly speculative assumptions about the time scale of fifty million years and interplanetary travel, the numbers reported by Matthews are faithfully deduced by combining probability theory with the first principle of the mathematical ethics of utilitarianism, even if the greatest “good” is reduced to mere existence for the greatest number. This is the “effective” side of effective altruism, now stretched to an unimaginable future.
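To make the structure of this reasoning concrete, the arithmetic can be sketched in a few lines of Python. This is our own schematic reconstruction using the powers of ten reported by Matthews, not a calculation taken from Bostrom’s texts, and the exact ratio depends on which of the reported figures one plugs in:

```python
# A schematic reconstruction of the expected-value arithmetic behind the
# panel's claim. All figures are powers of ten reported in Matthews's
# article, treated here as illustrative placeholders.

future_lives = 1e54   # Bostrom's posthuman estimate of future lives
credence = 0.01       # "a mere 1% chance of being correct"
delta_p = 1e-20       # one billionth of one billionth of one percentage point

expected_lives_saved = future_lives * credence * delta_p

genocide_prevented = 1e9  # lives saved by preventing a billion-person genocide

print(f"expected lives saved by the tiny risk reduction: {expected_lives_saved:.0e}")
print(f"ratio to preventing the genocide: {expected_lives_saved / genocide_prevented:.0e}")
```

Whatever the inputs, the structural point survives: any nonzero reduction in extinction probability, multiplied by an astronomical count of future lives, swamps any finite present-day good. This is why ending world poverty can be dismissed as a “rounding error.”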
In another example, Bostrom offers a box essay in Superintelligence about the human lives and happiness that are really at stake when it comes to the risk of malevolent AI. Unlike Matthews’s example, this one adds a quantification of happiness that goes beyond the negative value of avoiding the foreclosure of countless future lives:
Assuming that the observable universe is void of extraterrestrial civilizations, then what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives. . . . If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.14
This implied utilitarian calculus stretches the limits, to put it mildly, of any known means of deciding how we should act, what is right and wrong, and how to calculate the odds of a given extinction. Yet they remain calculations. They have a certain rationality despite the absurd ambition. They also provide the foundations for a normative prescription for action, even if it reads slightly tongue-in-cheek given the image of an ocean filled with tears of joy. The aspiration here is to make an ethics of extinction avoidance rational by grounding it in the relation between the probability of a given risk and the idea that doing the right thing means dividing the total amount of happiness, now and in the future, by the total number of humans. Working with such vast numbers is what gives rise to the idea that even slightly lowering the probability of an extinction event is worth “astronomically” more than any justice achieved in the present.
Another example of Bostrom’s use of probability appears in the first chapter of Superintelligence, where he posits that AI will reach human-level intelligence during our century. In this case, the evidence with which to calculate such probabilities comes secondhand from surveys of AI experts recalibrated by Bostrom. Rather than probabilities calculated on the basis of known conditions such as the two sides of a coin or all the data about car accidents used by insurance companies, we have researchers guessing at probabilities, other researchers averaging them out, and Bostrom reporting the results. So he is right to admit that
small sample sizes, selection biases, and—above all—the inherent unreliability of the subjective opinions elicited mean that one should not read too much into these expert surveys and interviews. They do not let us draw any strong conclusion. But they do hint at a weak conclusion. They suggest that (at least in lieu of better data or analysis) it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and that it has a non-trivial chance of being developed considerably sooner or much later; that it might perhaps fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes that are as bad as human extinction. At the very least, they suggest that the topic is worth a closer look.15
Bostrom rhetorically hedges several layers of uncertainty, writing not just that we should take the numbers with a grain of salt, but that a “weak conclusion” “suggests” that it “may be reasonable” to believe in a “fairly sizeable” chance of the development of human-level AI, “which might perhaps fairly soon” become superintelligent, which could then mean a “significant chance” of human extinction alongside the more neutral and optimistic possibilities. This rhetoric of probability does not amount to anything very different from an unguided estimate. More generously, we can say that the changing opinions of experts allow for guesses that are easier to trust secondhand, and that these guesses can be updated as new evidence arrives. Yet they remain guesses about several things that we do not know to be possible in the first place.
There would seem to be a qualitative difference between such “probabilities” (if this is still the right term) that involve educated guesses and secondhand surveys, and the kind of probabilistic risk analysis that, though its conclusions remain uncertain, is able to use data about real past events to calculate a probability. Surveying the expert community can tell us the probability that a given expert will believe in human extinction by AI, not the probability of the event itself. Given Bostrom’s hedging, we are not just dealing with uncertainty but, what is more abstract, rhetorical play with uncertainty about uncertainty. We would have to be very generous readers to grant that there is some truth to these forecasts when the author is explicitly telling us not to read much into them. Given the fundamentally speculative nature of extinction scenarios, such guesswork is understandable. The problem is when it is also treated as “scientific” and “rational” grounds for political policy—even as the only grounds worth mentioning.
Stepping Back from the Precipice
A similar problem is found in Toby Ord’s use of ostensibly Bayesian probability in his recent book The Precipice. Ord’s analysis draws on a form of probability theory that, in Ian Hacking’s words, “offers a way to represent rational change in belief, in light of new evidence.”16 For Hacking, Bayesian theory matters most to “personal probability” or “belief-type probability.” It is a way of formalizing the process of updating one’s beliefs—or guesses about probability—by learning from experience. The basic idea is that, given a prior probability Pr(H1), new evidence E yields a posterior probability, Pr(H1 | E). The process is iterative: the posterior can be fed back in as the new prior and evaluated against further evidence, and so on. There are further complexities that we leave aside. The point is that new evidence is necessary for this looping process that offers a good way to incorporate prior beliefs—that is, to avoid the blunt tool of an empiricism that claims to think only with firsthand sensory evidence.
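For readers who want to see the looping process at work, here is a minimal sketch of sequential Bayesian updating in Python. The hypothesis, likelihoods, and evidence stream are invented placeholders, not anything drawn from the existential risk literature:

```python
# A minimal sketch of iterative Bayesian updating: each posterior becomes
# the prior for the next round. Likelihoods and observations are invented.

def update(prior: float, like_if_true: float, like_if_false: float) -> float:
    """One Bayesian step: return Pr(H | E) from Pr(H) and the likelihoods of E."""
    evidence = prior * like_if_true + (1 - prior) * like_if_false
    return prior * like_if_true / evidence

belief = 0.5  # prior probability Pr(H1)
for observed in [True, True, False, True]:  # hypothetical stream of evidence E
    if observed:
        belief = update(belief, like_if_true=0.8, like_if_false=0.3)
    else:
        belief = update(belief, like_if_true=0.2, like_if_false=0.7)
    print(f"posterior after this observation: {belief:.3f}")
```

Each pass feeds the posterior back in as the next prior, and it is exactly this step that requires new evidence E to get off the ground.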
Like Bostrom, Ord is discussing the probability of AI-caused human extinction, but in the context of a general chapter about “quantifying risks” in the twenty-first century “risk landscape,” so his statements about method apply to the field’s basic model as well. For Ord, the greatest risk in the next hundred years comes from AI. He rates the risk of extinction at one in ten, then notes that such a high number for such a “speculative risk” needs more explanation:
A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust in light of scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn’t, the starting point can matter.17
In an endnote to this passage, Ord goes on to explain that this approach to probability theory is Bayesian: it entails “starting with a prior and updating in light of the evidence,” and the starting point matters.18 Like Bostrom, he bases the updating calculation on “the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century.”19 Unlike Bostrom, Ord does not include numbers from surveys or a citation, but he asks us to take his word about the AI experts’ beliefs, then to accept his leap from the idea of human-level AI to AI as an extinction threat. While all of this is cast in the language of Bayesian updating, the Bayesian calculations themselves are promissory—as they must be, given that we have no evidence to fill the variable “E” and temper our initial beliefs. So Ord’s calculations are based on his own beliefs as an existential risk theorist and on the opinions of the uncited experts; Bostrom’s calculations are based on his own beliefs and the average of a small sample of expert opinion. Arguably these probabilities are all prior. The authors give lip service to a Bayesianism that cannot be operationalized by definition for an unprecedented event.20
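Ord’s admission that “the starting point can matter” is easy to verify within the same framework. In the toy comparison below, with invented likelihoods and evidence counts, a skeptical prior and an impressionistic prior converge when evidence is plentiful and remain far apart when it is scarce:

```python
# A sketch of Ord's point that priors dominate when evidence is scarce.
# The likelihoods and observation counts are invented for illustration.

def posterior(prior: float, hits: int, misses: int,
              p_hit_if_true: float = 0.7, p_hit_if_false: float = 0.4) -> float:
    """Posterior Pr(H | data) for a binary hypothesis after repeated evidence."""
    weight_true = prior * (p_hit_if_true ** hits) * ((1 - p_hit_if_true) ** misses)
    weight_false = (1 - prior) * (p_hit_if_false ** hits) * ((1 - p_hit_if_false) ** misses)
    return weight_true / (weight_true + weight_false)

for hits, misses in [(2, 1), (200, 100)]:  # scarce evidence, then ample evidence
    skeptic = posterior(1e-6, hits, misses)        # "an extremely small probability"
    impressionist = posterior(0.5, hits, misses)   # "our overall impressions"
    print(f"n={hits + misses}: skeptic={skeptic:.6f}, impressionist={impressionist:.6f}")
```

With three observations the two analysts still disagree by more than five orders of magnitude; with three hundred they agree nearly perfectly. For a necessarily unprecedented event the evidence column stays empty, and the prior is all there is.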
One could imagine a situation in which these ideas were more innocently speculative, so that the effort to treat guesswork as a foundation would be less problematic. But the mixture of utilitarianism and the rhetoric of probability that forms the field’s basic model entails norms about what should be done in the present to mitigate existential risks. These promissory probabilities and their norms don’t take place in a political-economic vacuum, isolated from practice. The Future of Humanity Institute, The Future of Life Institute, and the Centre for the Study of Existential Risk have received considerable funding, interest, and participation from tech billionaires. YouTube videos by Bostrom are widely viewed. As their institutional and media footprints suggest, this field is not at all shy about capturing attention and funding. After offering his belief-type probabilities, Ord argues that humanity has a 1/6 chance of extinction over the next hundred years, and lowering this number should be “a key global priority.”21 All five of the major existential risks—nuclear war, climate change, other environmental damage, AI, and designed pandemics—“warrant major global efforts on the grounds of their contribution to existential risk” (169). But the long menu of “minor” extinction risks merits major action as well. Given that the destruction of “humanity’s entire potential” would be “so much worse than World War II,” Ord suggests that it would be justifiable to create a global body for mitigating total existential risk by analogy with the UN. Another possibility is to “create a body modeled on the IPCC, but aimed at assessing existential risk as a whole.”22 For existential risk theorists, then, the policy and budgetary changes based on their rhetoric of probability ought to be dramatic and immediate. And when it comes to the popular nonacademic writings and declarations, the promissory calculations recede even further into the background, from appendices and box essays to gestures about the future of the field. Perhaps this is where the rhetoric of probability becomes most problematic. Given the public-facing nature of existential risk, this problem can hardly be seen as a necessary side effect of simplifying specialist work. Quite simply, the core practice of the field is not calculation but commentary about future calculation with normative weight in the present.
Radical political change is needed to ease the effects of global warming and preserve biodiversity, even if they are not extinction risks to humanity. But existential risk studies take such a bird’s-eye view of humanity that they are incapable of intervening in a context of austerity measures for education and public services, systemic racism, and extreme, worsening class inequality, among other social problems. So these genuine proposals for a massive expansion of existential risk mitigation must compete with other sorely needed fields that are at constant risk of being defunded. In the university space, existential risk studies gather steam and institutional support while austerity reigns for traditional humanities disciplines as well as for more recent fields like critical race studies that educate people about human histories of oppression in a time of nationalist and white-supremacist reaction. The coronavirus pandemic has highlighted the racial inequality of infection and death rates just as it has shown the need for radical change in our agro-industrial food systems. We need research at multiple scales across the sciences and humanities into these existentially crushing structures of oppression, how current relations between cultures and natures became disempowering for so many, and how both can change in the near future.
Existential risk analysis is not risk analysis. This statement is only slightly hyperbolic. To be fair, there are some cases where calculating rough probabilities of planetary extinction seems possible. The frequently cited example is asteroid strikes. But even Ord admits the theoretical limits inherent in calculating the probabilities of unprecedented events—especially the kind that, by definition, would leave no survivors to perform Bayesian updates. From Bostrom and Ćirković’s early Global Catastrophic Risks to the growing archive of relevant papers on the website of the Centre for the Study of Existential Risk, our reading suggests that there is very little actionable probabilistic risk analysis going on in this field. Perhaps this is due to conceptual incoherence: the study of existential risk is founded on an impossible data point. Most of the extinction events that form its core object of study are simply not tractable in its model. We can study extinction through other methods, but we cannot estimate the probability of most extinction events—especially the kind that matter most to these theorists, which are the “human-caused” or technological ones over which “we” would seem to have the most agency.
The problem with the lion’s share of this discourse is that it gains legitimacy by offering quantification in an academic and corporate research world that privileges quantitative fields. But what it really offers is typology, speculation akin to scenario planning, and—most important for this chapter—a promissory rhetoric of probability rather than rigorous calculations. Moreover, valuing the future of humanity quantitatively is not possible because such value is relative and qualitative.23 But this has not in the past stopped attempts to evaluate humans quantitatively, and this evaluative “logic” appears incapable of extricating itself from scientific racism, biopolitical management, and eugenics. Bostrom himself casually examines the application of eugenics toward creating “greater-than-current-human intelligence” in Superintelligence.24 He notes that “any attempt to initiate a classical large-scale eugenics program, however, would confront major political and moral hurdles” (36). But this is not a condemnation of such schemes, more an assessment of their inefficiencies and unlikeliness to work. Bostrom does point out that state-run breeding programs might also be used to produce docility and control, but nevertheless concludes his brief analysis of eugenics by noting, “Progress along the biological path is clearly feasible” (44). If the study and mitigation of existential risk were to become what its practitioners envision, then it is difficult to see how these blunt tools would avoid creating new ways of instrumentalizing lives in the service of a future they will never see with their own eyes. The tools of immunity have a tendency to attack what they are supposed to protect.
Extrapolation and the Unprecedented Event
In Bostrom’s work, as we noted above, only the most extreme risks count as existential risks, the ones that constitute an extinction threat for humanity or for “Earth-originating intelligent life.” Many of these risks share something in common with discourses about climate change and ecological collapse, as with nuclear war discourse before them: the fear that we are heading toward an apocalypse that will mean the collapse of “civilization” (a dated term that appears often in Bostrom’s work and makes it reminiscent of grand narrative, “big picture” historians such as Arnold Toynbee, Oswald Spengler, Jared Diamond, and now Yuval Noah Harari). In order to “firm up where the boundary lies between realistic scenarios and pure science fiction,” as Rees puts it, these scenarios must be possible and plausible even if their probability seems low.25
As we have seen, there may be no way to calculate the probability of most of these scenarios, with the exception of examples such as a terminal asteroid strike. Here the geological record and astronomical observation can provide evidence, and the possible asteroid event already has precedents like the K/T extinction event of about 66 million years ago. Indeed, the study of fossil records of past extinctions beginning in the late eighteenth century drove the first wave of naturalistic, secular apocalypticism. With such precedents, as we saw above, one has a chance of estimating the probability that a strike will happen during a given period of years. Without them, the estimate becomes a different kind of probability altogether. As Carl Sagan wrote about the threat of nuclear war: “Theories that involve the end of the world are not amenable to experimental verification—or at least, not more than once.”26 To elaborate our critique above, when existential risk analysts do suggest numerical probabilities, what they do is offer a probability with a baked-in assumption of possibility or plausibility. There is no distinction between events that require such speculation and those that do not. There are even efforts to argue that such distinctions are untenable by merging them into a continuum.27
From the perspective of a critique of existential risk concerned with the epistemic structures that shape what it can say and its relations with science, there remains the question of what the field might be if not risk analysis. The question ultimately hinges on the way existential risk handles the logic of an unprecedented event that compels us retroactively, as though it has already happened. This logic is similar to what Ray Brassier calls “posteriority” in his own discussion of extinction: because the sun will burn out and the universe ultimately flatten in entropic dissolution, “the subject of philosophy must . . . recognize that he or she is already dead” and that “the posteriority of extinction indexes a physical annihilation which no amount of chronological tinkering can transform into a correlate ‘for us,’ because no matter how proximal or how distal the position allocated to it in space-time, it has already cancelled the sufficiency of the correlation” between subject and object.28 We return to Brassier in the next chapter. What needs discussion here is the distinction between events about which we can extrapolate through what Hacking called “the taming of chance,”29 and events that involve pure speculation like the arrival of genocidal aliens. By comparing extrapolation with posteriority, we get a fuller picture of the epistemology behind existential risk.
The anonymous AI researcher cited by Khatchadourian in the New Yorker conveys something more than glibness when they mention that they “don’t worry” about AI ending the human species “for the same reason I don’t worry about overpopulation on Mars.”30 This is another of the many sci-fi-adjacent thoughts that abound in the field (more below). The existential risk analyst might say that this remark just indicates a dangerous, cross-that-bridge-when-we-come-to-it attitude about something that could result in total human extinction. But another reading is more instructive when it comes to epistemic structures that should be more explicit in the work of existential risk analysts. The remark could also be about the difference between probability and possibility that so often slips into the blind spot of their work. The remark could mean that extrapolation from a place where we don’t even know if human-level AI is possible to a place where it becomes superior and genocidal is an absurd counterfactual. One can think it, like thinking overpopulation on Mars. But how much should we worry about, or build our politics around, a risk when we don’t even know it to be possible in the first place? To do so would be to cross a threshold from extrapolation to another method entirely.
Thus a key question for any critique of existential risk is just what this “threshold” is and how it matters politically. After all, thinkers like Bostrom, Ord, and Torres might be right that although existential risks feel counterintuitively vague, impractical, and temporally distant, this does not necessarily make them unimportant—it just means that we have not evolved or learned the mechanisms to cope with them if not control them. From this perspective, long-term thinking seems salutary. Isn’t turning “long-range forethought” into “a moral obligation for our species” what environmentalists have always wanted?31 If so, where does realism grade into fantasy, and how well defined is the line? How should the thresholds between that which is tractable to probability theory and speculation about possible impossibles factor into nascent critical theories of extinction? To answer these questions, we need to continue down the path of positioning existential risk in its epistemic context.
One way to critique the “risk” models of existential risk might be to treat them as scenario planning rather than risk analysis. As Torres notes, existential risk studies can be traced back to the historical and technological conditions of the Cold War, which gave rise to a version of futurology geared to planning for possible futures without being able to predict them. The main risk and context was of course nuclear war, which is why existential risk authors often evoke Carl Sagan, who popularized the dire scenario of nuclear winter in the early 1980s. Going further back, another important precursor is Herman Kahn, the Cold War scenario planner and think-tank guru who worked for the RAND Corporation (a nonprofit funded in part by the U.S. Department of Defense). In recent years, scholars of the environmental humanities, anthropology, critical race theory, and speculative fiction have discussed this form of forecasting in some detail.32 For anthropologist and Foucault scholar James D. Faubion, there is a “scenaristic” rationality that distinguishes this kind of governmentality from the statistical modes of risk society, which is fundamentally biopolitical. The scenario planning of figures like Kahn and Pierre Wack of Royal Dutch Shell is different and “parabiopolitical” because of its narrative method.33 These two governmentalities, biopolitical and scenaristic, are not mutually exclusive. Given its tenuous grip on probability and its deep interest in apocalyptic narratives, perhaps existential risk is scenario planning under the quantitative guise of risk analysis.
R. John Williams makes points conducive to this interpretation when he discusses such “futurology” in fine-grained archival detail in “World Futures.” His striking discussion of the scenario planning adventures of RAND and Shell during the 1950s and 1960s shows how they
initiated a new mode of ostensibly secular prophecy in which the primary objective was not to foresee the future but rather to schematize, in narrative form, a plurality of possible futures. This new form of projecting forward—a mode I will refer to as World Futures—posited the capitalizable, systematic immediacy of multiple, plausible worlds, all of which had to be understood as equally potential and, at least from our current perspective, nonexclusive. It is a development visible, for example, in a distinct terminological transition toward futurological plurality.34
There is more to say about this approach and its contextualization in Cold War and corporate extractivist resource dominance agendas. For us, what it clarifies is a distinction between probabilistic risk analysis and a narrative method that schematizes possible futures, embracing their plurality and unforeseeability. The probabilistic understanding of future worlds does make existential risk similar to the kinds of corporate futurology discussed by Williams: for Bostrom and company, the point is that we cannot and should not ever know which of these futures will materialize as the extinction event; rather we should focus on lowering the probabilities of every imaginable extinction scenario that lurks in the future of humanity. The IPCC of existential risk would always be monitoring and updating (with what evidence is often unclear) the probabilities. What Eva Horn writes about “today’s awareness of the future as catastrophe” applies here as well: “We are dealing with a metacrisis composed of many interrelated factors, dispersed into a multitude of scenarios, and distributed among many different subsystems.”35 Horn adds that these “metacrises” serve as staging platforms to make the future more legible and amenable to securitization and play out new norms of acceptable risk and reward. Even if existential risk does not play the same role in political economy as scenario planning, the former can most certainly attract funding and the attention of an audience savvy about the promissory, hedged, venture-capitalized futures of the start-up world. Another similarity is what Williams describes as the “ostensibly secular” and “quasi-theological” nature of World Futures. This strain of “ontotheology,” to use Martin Heidegger’s term, has clear connections to the apocalypticism of Bostrom and company, as with transhumanism’s desire for immortality through the technological transcendence of finitude and embodiment.
But this is where the similarities end. The first difference between scenario planning and existential risk is that the latter’s extinction scenarios are not to be understood as “equally potential,” but rather differentially probable, with the differences intended to guide policymakers. A second difference is simply that scenario planning stays focused on the more limited and practical domain of “possible futures” and “plausible worlds.”36 Scenario planning seems just as tame as one would expect from the officious “masters of the world,” those who aspire to present to an audience in the Pentagon or a skyscraper owned by Shell. Existential risk, by contrast, with its discussions of the simulation hypothesis side by side with doomsday prepping, hews to audiences familiar with escapist longings, life-extension research, and sci-fi fandom. Surely existential risk analysts would say that their approach is resolutely secular, and that the extinctions they discuss are entirely “plausible” and “possible.” But as we’ve seen, these words mean something very different in this context than when they point to the near-future examples discussed by Williams.
The final difference to note is that existential risk does not imagine how we would act during its scenarios, since there would be no “we” to act. With scenario planning, even for nuclear war and obviously for energy-geopolitical economies of the future, the focus has been on developing plans of action that planners could imagine unfolding within each future world. They would hedge by anticipating multiple futures at once, futures that would not be mutually exclusive but might interlace and recombine in the complexity of a reality that always exceeds anticipation. But with existential risk, there is no acting within the scenario itself. Ord is quite explicit about this preemptive logic. He notes that “necessarily unprecedented” risks are especially difficult to work with because “by the time we have a precedent, it is too late—we’ve lost our future.”37 This means, again, that “to safeguard our potential, we are forced to formulate our plans and enact our policies in a world that has never witnessed the events we strive to avoid” (195). Existential risk analysts are in a difficult place: unable “to fail even once.” To act on their probabilistic forecasts, they will have to take proactive measures: “sometimes long in advance, sometimes with large costs, sometimes when it is still unclear whether the risk is real or whether the measures will address it” (196). Existential risk analysis preempts; by definition it cannot guide our actions if the scenarios manifest. So the parallel breaks down on multiple fronts, and existential risk only partially overlaps with the paradigm of scenario planning or “World Futures.”
There is something that needs further attention as we continue to understand the epistemic structures behind existential risk: the implications of Ord’s concept of the “necessarily unprecedented.” For existential risk theory to be as rational as it claims to be, it must depend on a kind of future anterior logic. We have to see our world as one in which the event of extinction has “already happened” or “always already might happen.” The scale of its risk means that we should treat it as retroactively real. In turn, this realism of the present invoking a future gazing back on the past (a retroactive or posterior realism) makes the choice to act on low-probability risks seem rational. If we consider the foregoing arguments, it seems that existential risk hovers between two interlocking scientific realisms: (1) the realism of plausible, extrapolatable futures, and (2) the retroactive realism of an event so serious that, though “it is still unclear whether the risk is real,” it compels us to act as if it were real in the first sense of plausible extrapolation. This makes it something like a Kantian regulative idea.
Here we can see the difference from Brassier’s concept of transcendental posteriority, where inevitable extinction retroactively shows the truth of scientific realism and nihilism. For existential risk, realism applies to what is possible but not yet actual. We are not “already dead”; we should act as though the total probability of extinction is high in order to prevent it. But this approach does correspond with Brassier’s idea that “extinction is real yet not empirical”: the consequences are great enough that we are asked to bracket skepticism about the distinction between probability and possibility and do without empirical observation.38
Wagers and Thresholds
Such scientific realism does not, however, mean that deep histories of religion are easy to escape, as we also saw in Williams’s comments about the “quasi-theological” aspect of World Futures. Indeed, Ord’s concept of the necessarily unprecedented event effectively reinvents Pascal’s wager for a secular eschatology.39 Pascal’s wager is a troubling argument in favor of faith in God. For Pascal, it is rational to believe that God exists, or at least to try even though we can’t know one way or another, because if he doesn’t, the consequences will be finite: the loss of some pleasures, some freedoms, and so on. But if God does exist, then the rewards for belief are infinite, as are the punishments for disbelief. So the rational choice is to try to have faith and bet on the infinite rather than the finite. The rationality of the choice will have been verified from the perspective of a future reality if one dies and learns that God existed all along. Otherwise, one just dies. In the meantime, all we have to go on is the difference of kind between outcomes: finite and infinite.40
Despite being a secular eschatology based on probability rather than a theological one based on metaphysics, existential risk shares something in common with Pascal’s wager because it suggests that risks with extremely low probabilities should be treated as clear and present dangers. Indeed, Bostrom cites the wager in Superintelligence, and he has published about it elsewhere, going as far as to transform Pascal into a devotee of a utilitarianism that seeks to maximize “astronomical” or “infinite” value.41 Another use of this cosmic wager logic appears in Sir Martin Rees’s foreword to Torres’s book Morality, Foresight, and Human Flourishing, where Rees writes, “the stakes are so high that those who are involved in [the study of existential risk] will have earned their keep even if they reduce the probability of catastrophe by a tiny fraction.”42 The risk is calamitous enough that it bears an analogy with Pascal’s distinction between finite and infinite (and here we recall Derek Parfit’s argument that losing 99 percent of humanity would be catastrophic but losing 100 percent would be infinitely worse43). Like Pascal’s wager, Rees’s statement involves a retroactive logic signaled grammatically by his use of the future perfect tense: existential risk analysts “will have earned their keep.” Even if we do not know if a given extinction is possible, much less its probability, acting in a way that proleptically prevents an extinction event is the right thing to do, and society’s resources will have been earned if the dice roll goes our way. Like the choice between infinite punishment and the same death that would have happened anyway whether you believe in God or not, it is rational to pay their salaries because these risk analysts are able to choose paths that lead away from Parfit’s “infinitely worse” 100 percent death rate. So long as we’re still here, they might have been successful. But if they fail, as Ord reminds above, by definition no one will be left to know it.
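The decision-theoretic structure that the wager shares with existential risk can be made explicit in a toy calculation. The probabilities and payoffs below are placeholders; only the difference in kind between finite and infinite outcomes does any work:

```python
import math

# A toy decision matrix for Pascal's wager as described above: finite costs
# of belief versus an infinite payoff if God exists. Numbers are placeholders.

def expected_value(p_exists: float, payoff_if_exists: float, payoff_if_not: float) -> float:
    return p_exists * payoff_if_exists + (1 - p_exists) * payoff_if_not

for p in (0.5, 0.001, 1e-12):
    ev_believe = expected_value(p, payoff_if_exists=math.inf, payoff_if_not=-1.0)
    ev_disbelieve = expected_value(p, payoff_if_exists=-math.inf, payoff_if_not=0.0)
    print(f"p={p}: believe -> {ev_believe}, disbelieve -> {ev_disbelieve}")
```

For any nonzero probability, belief dominates. Substitute Bostrom’s 10⁵² lives for infinity and, for all practical purposes, the same dominance holds against any finite present-day cost, which is the structure of Rees’s “tiny fraction” argument.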
This is indeed a strange legitimation, a strange logic of verification, and a strange relation between the rational and the real. We are tempted to see it as the kind of overreach of rationalism common in analytic philosophy, where statements that can be simplified into logically coherent and noncontradictory forms are seen to have a privileged access to reality rather than just (as we think) illuminating a very decontextualized region of the space of reasons. But theorists of existential risk do make a strong case for the counterintuitive nature of their claims, suggesting that common sense militates against long-term thinking and “safeguarding” human existence. As we have tried to suggest throughout, some of their ways of stretching common conceptual frameworks and political norms to meet the cosmic scale of their topic are worthy of careful critique—not dismissal, but the effort to occupy and unpack the shadow cast by existential risk. For now, we elaborate how the logic of the unprecedented and retroactive rationality hinge on a threshold between the plausible and the thinkable.
The language of thresholds in complex systems has become common in discussions of climate change and Earth system science, and a number of scientists have recently argued, in high-profile publications, for the existence of planetary “tipping points” that might be an existential threat to humanity. If the climate were to pass such a threshold, it would stop warming in a linear way that reflects the rate of accumulation of carbon in the atmosphere. Instead, it would change in a sudden and nonlinear way, driven by self-reinforcing feedback loops that quickly spin out of control. Such “hothouse earth” scenarios create the greatest anxiety for those who think global warming might lead to the extinction of humans or even life on Earth, not just suffering or the collapse of modern societies.44 Not all existential risks would involve the same kind of thresholds as radical climate change, but the concept is helpful because it illuminates an epistemic break between futures that can be extrapolated from the present and futures that are more speculative.
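The threshold concept can be illustrated with a deliberately minimal toy model, not a climate model: a single variable with a self-reinforcing feedback term whose equilibrium jumps discontinuously once forcing passes a critical value:

```python
# A generic fold-bifurcation toy model, dx/dt = x - x**3 + f, where f is
# external forcing. It only illustrates how a response can jump nonlinearly;
# it makes no claim about the actual climate system.

def equilibrium(forcing: float, x0: float = -1.0, dt: float = 0.01, steps: int = 20000) -> float:
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + forcing)  # forward-Euler integration
    return x  # approximate equilibrium state

# The fold for this system sits near f = 2 / (3 * sqrt(3)), about 0.385.
for f in (0.0, 0.3, 0.39, 0.4):
    print(f"forcing={f}: equilibrium = {equilibrium(f):.3f}")
```

Below the fold, the response to increased forcing is gradual and extrapolation from past behavior works; just past it, the system lands in a qualitatively different state that the pre-threshold trend never announced.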
Alicia Juarrero makes clear how this notion of the complex threshold interacts with epistemic frames in an essay that draws on the work of Stuart Kauffman, where she writes, “It is impossible to predict emergent properties even in principle because the categories necessary to frame them do not exist until after the fact.”45 With this claim, which casts doubt on a strict division between ontology and epistemology because frames of knowledge depend on real emergences in complex systems, much depends on what Juarrero means by “prediction.” For the purposes of the critique of existential risk, we note that her point goes beyond the idea that these emergences cannot be predicted with certainty, a limitation which is absolutely accounted for in the way existential risk’s basic model processes uncertainty. She goes beyond this kind of probabilistically tractable uncertainty by introducing a break in the continuum that Bostrom and Ćirković argue for in Global Catastrophic Risks, precisely in the moment when they discuss the field’s relation with science:
Although more rigorous methods are to be preferred whenever they are available and applicable, it would be misplaced scientism to confine attention to those risks that are amenable to hard approaches. Such a strategy would lead to many risks being ignored, including many of the largest risks confronting humanity. It would also create a false dichotomy between two types of risks—the “scientific” ones and the “speculative” ones—where, in reality, there is a continuum of analytic tractability.46
The break that this continuum papers over is a crucial one because it suggests a limit on using the tools of probability to extrapolate from current conditions. Even probabilistic prediction is prediction. But the inability to predict certain emergent properties from initial conditions would suggest that the very parameters that would enable this kind of prediction, which are also crucial for climate modeling, are set to change. If Juarrero is right that the “categories necessary to frame” such emergent events do not exist until after the fact, then existential risk theorists are making category mistakes when they argue for extrapolative and retroactive realism.47 If such thresholds exist and can be used to interpret the unprecedented events of existential risk analysis, then there is no way to grope toward them from the present, nor to use them retroactively (in the mode of Pascal’s wager) to compel belief and action today. The very concept of an unprecedented event demands that we conceptualize a break in the continuum that stretches from past to future, known to unknown.
We think it is important to emphasize the breaks in the continuum between “scientific” and “speculative” risks. There may be others, but we focus here on one kind of break in order to flesh out the logic of the unprecedented event. Here our approach is “critical” in the sense that it reverses the process of collapsing differences into an identity (a continuum suggests oneness) that then serves as a standard for rationality and political action. If existential “risk” continues to be well-funded and impactful outside the academy—and if it takes on even a fraction of the power its adherents think it should—then critique of the field’s reductions will be all the more important.
Science, Scientism, and Science Fiction
To think through the distinction between scientific and speculative is to return once again to problems of demarcating what counts as science and what does not in a time when the social authority of the sciences is both greater than that of the humanities and treated with skepticism (as the pandemic has shown) by a growing list of “post-truth” political projects. When Bostrom and Ćirković argue against “scientism,” the idea that a monolithic Science is the only kind of knowledge that matters, they are marking out a “rational” space for risk analysis that cannot be limited to things about which we can collect evidence. They rightly eschew scientism. But their mathematical metaphor of the continuum proceeds to bring science back into proximity with their approach. If scientific and speculative risks are on a continuum, then Bostrom and Ćirković are implying that their means of reasoning about extinction might become scientific after all.
By claiming science and reason for their own approach to extinction, such arguments also make it more difficult to open what counts as knowledge to a multiplicity of ways of knowing beyond reductive notions of “science” and “reason.” One way is the kind of existential ecology that we discuss in more detail in chapter 3. Another is science fiction, which we might have given more space in this book. Such ways of knowing the world seem much better suited to topics like future extinction and cosmological scales than the mixture of probability and utilitarianism at work in existential risk, but they lack the legitimacy that attaches to the quantitative. Scientism might well be blocking our intellectual cultures from addressing some structures of existence that have the potential to guide alternatives to cynical individualism and toxic short-term thinking.
In the world of debates about literary genres associated with science, one relatively recent distinction is surprisingly similar to one we tried to make by insisting on a break in Bostrom’s continuum. Here, too, the logic hinges on where extrapolation to plausible futures gives way to more remote speculation. Margaret Atwood distinguishes “speculative fiction,” the name for novels that deal with the near future in terms of socially relevant extrapolation, from “science fiction,” which entertains fantastic powers beyond the ken of known science such as time travel or magic monsters. Hannes Bergthaller finds that the distinction between speculative fiction and science fiction “hinges on the realism and probability of the fictional world depicted in a given text: while the latter supposedly deals in ‘things that could not possibly happen’ . . . the former is concerned with ‘things that could happen but just hadn’t completely happened when the authors wrote the books.’”48
One influence on the recent burst of interest in climate fiction follows a clash between Ursula K. Le Guin and Atwood that took place in the 2000s. Atwood marks a difference between the kind of science fiction she considers politically useful and the more fantastical regions of the genre. She distinguishes between the impossibilities of “science fiction proper” and “speculative fiction, which employs the means already more or less at hand, and takes place on Planet Earth.”49 But in a 2009 review of Atwood’s The Year of the Flood, Le Guin takes issue with Atwood’s distinction. Le Guin makes her point comically, writing that she would like to review Atwood’s novel using “the vocabulary of modern science-fiction criticism, giving it the praise it deserves as a work of unusual cautionary imagination and satirical invention.”50 But since Atwood insists that she is not writing science fiction, Le Guin constrains herself to Atwood’s “wish,” using only “the vocabulary and expectations suitable to a realist novel.” What follows is a dry evaluation of the novel based on its plausibility. For example, the flood itself (a pandemic) is an “abstraction, novelistically weightless,” and the characters lack three-dimensional, Austenean depth. Le Guin’s target is the high-culture pretense of distinguishing speculative fiction from science fiction, devaluing science fiction like her own, which lacks realism, as less serious, less political (Atwood also calls speculative fiction “social science fiction”), lacking aesthetic quality, and thus relegating it to “the genre still shunned by prize awarders . . . the literary ghetto.”
What the clash between Atwood and Le Guin illustrates is an effort, on different terrain, to process the threshold between plausible extrapolation and speculation about unprecedented events. If such a recent, formal, and literary distinction is relevant to our study of existential risk’s relation to science, then what is the role of science fiction in the critique of existential risk? Some points about science fiction help to set up a concluding attempt to deconstruct what Ord calls the “basic model” of existential risk.
The textual ties between existential risk and science fiction are peppered throughout the literature we’ve been citing. Science fiction, in turn, has always been in dialogue with human extinction, AI, and other scientific speculation about the future. One reviewer of Bostrom and Ćirković’s Global Catastrophic Risks encapsulates the book as “risk assessment meets science fiction.” As Rees puts it, “There needs to be a much expanded research program, including natural and social scientists, to compile a complete register of possible ‘x-risks,’ to firm up where the boundary lies between realistic scenarios and pure science fiction, and to enhance resilience against the more credible ones.”51 The literature of existential risk is sprinkled with both disavowals of science fiction (in the form of “x scenario is not just science fiction, but something to take seriously”) and conveniently picked references to science fiction authors, but it utterly ignores sci-fi scholarship. Such an absence of engagement could be the result of long-standing conflicts between continental theory and analytic philosophy, imagination and reason; it could reflect the low profile of literary studies from the perspective of humanities fields that hope to receive the blessing of the sciences, or even to be included, themselves, in the scientific pantheon. In any case, there is in existential risk a tendency to use the word “rational” in the fully positive sense and “irrational” in the fully negative, without the kind of questioning that comes from pragmatism, phenomenology, the Frankfurt School, postcolonialism, feminist theory, or critical race studies.
Disciplinary quibbles aside, however, the most pressing issue for our critique is that this omission of the entire preexisting literature on science fiction is symptomatic of existential risk’s untheorized relationship to science fiction as a genre. Science fiction is “fetishistically disavowed” throughout the literature of a field that offers itself up as a science of the unobservable, or at least a rational approach to it. In Freud’s sense, fetishistic disavowal means repeatedly casting something outside the sphere of one’s argument or way of life in a manner that suggests one is actually obsessed with it or desires it on an unconscious level. As one commentator puts it flatly, “Bostrom dislikes science fiction.”52 In our case, the concept applies not to the psychic lives of the risk analysts but to the discourse of existential risk more broadly. Within this field, science fiction cannot be taken seriously, yet it sits permanently in the field’s blind spot. Perhaps what is really being disavowed here, at a theoretical level, is not any discipline or genre but rather the shift in theory and practice that should take place, but does not, when existential risk theorists speculate about what they cannot possibly know.
We look at one such shift in theory and practice in the next chapter: from existential risk to existential ecology as an approach to extinction. Another shift, already well established elsewhere, would be a thorough engagement with narratives of apocalypse and utopia and with the work of scholars who have been studying these modes for decades. Horn’s The Future as Catastrophe is particularly attentive to the fact that “knowing and communicating about the future is impossible without stories: stories that ‘look back’ from the future to the present or that extrapolate from past predictions about what is to come.”53 She argues that these narratives, especially science fiction and scenario planning, “structure the way we anticipate and plan for the future and, above all, how we try to prevent catastrophic futures from occurring” (10). She pushes this claim further when she writes that “fictional scenarios of the future in literature, film, popular culture, and popular nonfiction . . . are neither mere symptoms of the collective psyche nor simply media of ideological indoctrination but epistemic tools to understand and discuss potential futures” (10). In so doing, these scenarios reveal something that already exists in the present while priming the imagination so that drastic social changes can become conceivable. Horn agrees with Ursula Heise’s point that “the basic strategy of science fiction is to present our own society as the past of a future yet to come.”54 This is only a glimpse of what science fiction scholarship does with futures of extinction and utopia, but it provides an alternative to the idea that science and reason are the only forms of knowledge that count.
Research like Horn’s takes a qualitative and narrative approach to the same topic that preoccupies existential risk theorists. In this moment of quantitative dominance, when statistical methods are applied even to topics that seem utterly intractable to them (when “predicting the unpredictable and empirically studying the unverifiable” seems viable to some55), qualitative (aesthetic and critical) studies might seem unrigorous to both scientists and policymakers. But they should be taken seriously, especially when the object of study is so constitutively fictional. This would mean embracing the idea that fiction is a source of knowledge, not just a convenient reference.
In light of this chapter’s critique of existential risk’s relations with science, we can sum up and draw some conclusions that question the field’s modeling practices and presuppositions. The basic model treats extinction probabilistically, applying risk analysis concepts drawn from the insurance industry and war-gaming, among other sources, and combining them with a utilitarian approach to ethics. Any risk can be studied in terms of its probability and its potential damage to humanity, the latter often calculated in terms of humanity’s entire future potential. By adding this deep-time calculus of the value of humanity to the question of how we should make political progress today, the basic model opens itself to assumptions about what humanity will be like far into the future and uses them as the basis for how we should act now. We argued that the field’s use of probability is more often a rhetoric of probability that promises future calculation, and we questioned the value of quantitative politics for the study of extinction. By looking at the extrapolative and futural scientific realisms of existential risk, we argued that the field ignores the thresholds across which extrapolation of possible worlds from the conditions of the present must fail and turn into speculation. Yet existential risk also makes its claim to realism retroactively, looking back from a necessarily unprecedented future event to argue about what is real and what we should rationally act on today. In scenario planning and science fiction, we saw two closely related genres, wired in parallel, as it were, with existential risk. These genres show alternate methods that often cryptically and obliquely shape what is meant to be a rational and scientific modeling. In short, there are fundamental epistemological problems with the field of existential risk.
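To make the arithmetic behind this basic model explicit, the following is a minimal sketch in our own notation (the symbols are ours, not Ord’s or Bostrom’s, and the numbers are purely illustrative):

\[
\mathbb{E}[\text{loss}] = p \cdot V_{\text{future}}
\]

Here \(p\) is the estimated probability of a given extinction event over some period and \(V_{\text{future}}\) is the value assigned to humanity’s entire future potential. Because \(V_{\text{future}}\) is taken to be astronomically large, even a vanishingly small \(p\) dominates the calculation: with an illustrative \(p = 10^{-6}\) and a future potential counted at \(10^{16}\) lives, the expected loss is still \(10^{10}\) lives, outweighing nearly any present-day concern. The rhetorical force of the model thus rests entirely on \(V_{\text{future}}\), which is precisely the quantity that cannot be known.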
Yet studies of existential risk do evince an intriguing encyclopedic aesthetic in their need to cover every imaginable extinction scenario, whether or not we know it is possible. Behind the accessible, action-oriented writing of its proponents lurk weirder regions of philosophy, especially the kind that adjoins logic and mathematics. Alternative-world rationalists from Leibniz to Charles Sanders Peirce to David Lewis are only a few steps away from the field’s publications. Had they gotten closer, of course, they might have pushed the policy and philanthropy professionals in a different direction.
There is also a current of scientific modernity that existential risk understands very well: the impact of chance and probability, whether seen through the lens of David Hume, the rise of statistics, risk, and biopolitics, the emergence of statistical mechanics in physics, or the role of probability in quantum theory. We can imagine an existential risk chapter in Dennis Danielson’s The Book of the Cosmos, which chronicles Western cosmologies from the ancient Greeks to Einstein and Hawking to the “multiverse.”56 The cosmology of existential risk would have very little interest in origins but would remain fascinated with churning out every possible end for Earth, life, or humans. The adherents of this cosmology would constantly cycle from one scenario of the end of the world to another, as though the scenarios were numbered balls drawn from a bingo machine, each conflagration appearing more or less often at a rate set by its constantly updated probability. The story could go on, much as in Olaf Stapledon’s Last and First Men (1930), which narrates multiple civilizational beginnings and endings over a two-billion-year span on Earth. The intriguing thing about all of this is how existential risk generates counterintuitive ways of linking the past and the future on the terrain of probabilistic ontology.
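The bingo-machine image can even be made literal. The following sketch is ours alone: the scenario names and probability weights are invented for illustration and come from no actual risk estimate in the existential risk literature.

```python
import random

# Hypothetical catalog of end-of-the-world scenarios with invented,
# purely illustrative weights (not actual estimates from the field).
scenarios = {
    "asteroid impact": 1e-6,
    "supervolcanic eruption": 1e-5,
    "engineered pandemic": 3e-2,
    "unaligned AI": 1e-1,
}

def draw_scenario(catalog: dict[str, float]) -> str:
    """Draw one scenario, weighted by its currently assigned probability,
    like a numbered ball tumbling out of the bingo machine."""
    names = list(catalog)
    weights = list(catalog.values())
    return random.choices(names, weights=weights, k=1)[0]

# Higher-weighted conflagrations surface more often across repeated draws.
print(draw_scenario(scenarios))
```

Run repeatedly, the draws mimic the cosmology described above: scenarios with larger assigned probabilities recur more often, and updating a weight changes how frequently that particular end of the world comes around.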
The idea of an institution dedicated to the deep future, or to possibilities that might not be possible, sounds wonderfully decadent and intriguing. One could imagine the novels and films that would address it, combining plots about technocracy with Borgesian thought experiments about possible worlds. This institution would be a kind of grand, metascientific, interdisciplinary priesthood of extinction (remember Ord’s humble analogy with the IPCC), auguring improbable events and the courses of action that might prevent them through mathematically and rationally intricate manipulations of current knowledge, manipulations that surely resemble the financial instruments used to hedge bets on the market. Bostrom seems to traffic in a new kind of grand narrative invested in the idea of predicting and determining the course of history. But it is less a grand narrative march toward transhumanism than a vast, fragmentary array of (im)possible worlds. The scope is as big as it can be, but the logic is counterintuitive and open-ended, not totalizing. There are ways to repurpose some of these ideas for an approach to extinction that would be more critical about its existential structures. Perhaps a less instrumentalized approach to the limits of the modern cosmic worldview would help in reinventing the discouraging (especially as we write, in 2020) project of achieving social and environmental justice.