AMONG ITS OTHER MERITS, Vilém Flusser’s strange treatise on vampyroteuthis infernalis is a fable about information at the literary limit. Comparing “the vampire squid from hell” and Homo sapiens sapiens, he proposes a fantastic convergence that links the odd existence of a tentacled life-form, complexly equipped for probing the deep ocean, to the inhuman consequences of our emerging system of new media. Humans increasingly approximate the strategies of invertebrate life, Flusser writes:
As our interest in objects began to wane, we created media that have enabled us to rape human brains, forcing them to store immaterial information. We have built chromatophores of our own—televisions, videos, and computer monitors that display synthetic images—with whose help broadcasters of information can mendaciously seduce their audiences.1
Is this assessment hyperbolic? Probably not. Recumbent with chromatophoric gadgets, humans become more and more cephalopodan, probing, probed by, and propelled through an endless ooze of immaterial information. Increasingly, our environment is, in so many words, the seemingly unfathomable abyss of Big Data plumbed fitfully by inhuman algorithms.
Like the vampyroteuthic dataverse, Big Data presents a literary problem that defies sensible human scales.2 Big Data signals an end game for a literary humanism that depends on the kind of specially selected, reasonably portioned, yet easy-to-lose information anyone might come to value in the reference frame of a single lifetime. Formerly, humans worked hard—and mostly failed—to give their information durable form in terms of finite objects—grasping problems, transforming “intractable things into manageable ones,” coming up against the “last things” “that could not be transformed or overcome,” archiving what they found out as best they could for later.3 Now, things have changed; the sheer pile-up of inexpensive, new-style stuff overwhelms us from all sides with unmanageable nonthings “negligible from the existential point of view.”4 “Humans,” returning to the ever-prescient Flusser, “no longer realize their creative potential by struggling against the resistance of stubborn objects. . . . From now on, humans can realize their creative potential by processing new and immaterial information” and then by selling it back at “inflated prices” to a “dominated” humanity.5 The change from handling in-formed objects to processing an abyss of potential information is decisive. In effect, Flusser observes a state-shift from an age when the inevitability of further processing was not a given—when raw data were not an oxymoron, to invoke Lisa Gitelman—to a time when it is inevitably so.6 The idea of unprocessed information—and things that are intractable to processing—increasingly seems like a naive archaism to those saturated with so much potential information.7 Now, every nonthing comes preformed with massive quantities of metadata—date and time stamps, above all else—a vapor trail of data-as-waste, which our insidious gadgets leave us willy-nilly, whether it matters or not. Mostly not—we hope.
Hovering in the background here is the “friendly reminder” from new media studies that data are always already “cooked” and never entirely “raw,” an allusion to Claude Lévi-Strauss’s structuralist classic The Raw and the Cooked, invoked to emphasize that data are cultural, not natural, phenomena.8 One way or another, data always come prepared, it’s held. Yet, following Lévi-Strauss, raw and cooked are linked empirical categories, and cooking is the middle phase—“a form of mediation,” he repeatedly intones.9 Perhaps it’s worth pressing a poststructuralist point: raw is a discourse effect of cooked; the unmarked/marked pair is mutually constitutive. Raw is more than relational, in other words; it’s also mediational and semiconductive. In later work, Lévi-Strauss nuances his culinary–cultural matrix, adding gradients extending from almost raw to more than cooked and including such additional states as burnt and rotten as well as bland and spicy.10 The inevitability of such category shifting—changes in state from given to taken, for example—indicates that, despite appearances, categorical terms like data have little self-sufficiency. If data implies something like available for further processing, it manifests the same problems as guest and host, eating off one another. In the beginning, as Michel Serres reminds us, comes the parasite: “Real production is unexpected and improbable, it overflows with information and is always immediately parasited.”11 The break in the flow—the interruption—is formative static.
In other words, data don’t need to be raw to be considered oxymoronic. Contradiction is already on board, yoking the sharp and the dull, as the etymology of oxymoron suggests. Raw data are a kind of pleonastic oxymoron, presupposing multiplicity as accumulation—processing plural to singular into a set, a matrix, a database, and so forth. The ambition is to fit all data—to process everything; they becomes it. Switching to singular, then, data come always already sharpened, the argument goes; data are less raw than constitutively messy. Raw and/or cooked, the massive amounts of static implicated in inhuman scales make archival issues conspicuous in several ways. For better or worse, hangovers of obsolete form—skeuomorphs—hover about our metaphors about managing data, information, and knowledge. Google isn’t an archive per se. It’s an inhuman finding aid for making sense of its monstrously expanding archives. For all their ubiquity, analogies drawn between in-formed objects with finite capacities and the nebulous communication dataverse designated by the cloud are simultaneously monstrous and clumsy.12 Humans once fashioned in-formed objects with archival powers, Flusser points out in his fable, for orientation and memory—the bent twig used to indicate a direction on a path, for instance. Once upon a time, we gathered info to disseminate it and fabricated artificial memories. Now that ever larger quantities overwhelm our objectworld, we require special, systematic finding aids—search engines, above all. Input comes prior to information. Consider the rise of the algorithm along these lines: algorithms require archives to function, and we trade on a host of archival metaphors to talk about them, yet they are not archives. Instead, they are methods for producing systematic outputs—for cooking data, so to speak—recipes that get themselves archived and that need various archives to work.
It may be that “new-style information,” as Flusser puts it, is “negligible from the existential point of view,” yet does this existential melancholy even matter if every ephemeral state is now collectible and available for potential use?13
Tracking a genealogy for making literary sense of spoilers and scales of input and output takes us back to the future, back to modernist concerns with deep time, impossible author functions, and maximal and minimal audience structures. The spoiler archive is a futurist ersatz-archive, its search engines engineered for grasping what’s “impossible to get a hold of,” processed at inhuman scales and expanding reference beyond the Anthropocene. The murky abyss, as it were, includes speculation about deep futurity, time after human extinction, for instance, that draws on the scientific extrapolations of cosmology. It also confronts the technically assisted observation of a “reality anterior to the emergence of the human species—or even anterior to every recognized form of life on earth,” as Quentin Meillassoux characterizes it. The arche-fossil that Meillassoux describes—clearly rooted in modern knowledge of inhuman timescales advanced by geology and plate tectonics—is also a fossil of the future, “designat[ing] the material support on the basis of which the experiments that yield estimates of ancestral phenomena proceed—for example, an isotope whose rate of radioactive decay we know, or the luminous emission of a star that informs us as to the date of its formation.”14 Such ersatz-archives are not merely eccentric; they exit the orbit of human-scale archiving practices altogether.
Seeing and hearing by technical means—devising techniques of observation, experiment, and reflection—is the crux of what Siegfried Zielinski sees as Flusser’s media philosophy and also the lodestone of his own project of media variantology: “technical media had been a pile, a treasure of possibilities (or perhaps better: potentialities), which permanently had to be explored, every day and everyday new.”15 This chapter takes such variantology as occasion for literary speculation about the scalar implications of a long inhuman turn in modernity, administering the complexity of temporality as an inevitable encounter with entropy and expiration—navigating the abyss of expired databases, dated theories, and dead links.
“Take your books of mere poetry and prose; let me read a timetable, with tears of pride.”16
So says the hero of G. K. Chesterton’s The Man Who Was Thursday, observing not only that time is information but also that it has affective texture. Let me name one of the commonplaces of the literary modern: searching for aesthetic possibilities in a newly regimented world. As modern time becomes increasingly administered on and by the clock, human timescales—seconds, minutes, years—become increasingly remote, displaced from fundamental science first by deep geological time and then by the discoveries of quantum mechanics and scientific cosmology. The new intervals of physics are so short that they are impossible to relate to, whereas the tempos of the cosmos are so long that they prove similarly inhuman.
The literary modern affords special epistemological status to unforeseen circumstances beyond human control. This chapter concerns the collection of literary knowledge in inhuman quantities and the literary implications of scalar shifts in information—including temporal information—in modernity. In particular, it explores modernist interest in “collecting” absurdly long time frames, scales of the deep future that dwarf another standard developed by remapping the psychologized, subjectivized mind with the topographies of the classical epic (i.e., on Bloom’s day, June 16, 1904, or Quentin’s day, June 2, 1910, or Mrs. Dalloway’s day). In this regard, H. G. Wells’s 802,701 A.D. is a critical search result for an alternative inhuman (in-Bloomian, sub-Quentinian, ex-Dallowegian) modernism that has ongoing implications. In The Time Machine, for instance, his thought experiments in extreme futurism reveal a decidedly literary pedigree for the engineered machines currently plying the World Wide Web.17 Considering the Wellsian search engines alongside other cognate versions proposed by Isaac Asimov, J. G. Ballard, and others, I argue that a conceivable end of human knowledge frameworks—the “death of the sun,” using the critical shorthand proposed by Jean-François Lyotard—provides something like a new modern sublime: the cold return of the inert and the quiet, the background temperature of outer space, the unlit, unvoiced stone, the exhaustion of exhaustion. Here we might find the inhuman happiness that follows postmodern nihilism and the recent consolations of the archive: never give up on a better past.
H. G. Wells is the ideal literary instrument for thinking about human and inhuman time for two reasons: first, his literary name is prepossessed by the idea of generation. You see this in his own writings when Wells discusses his relation to other writers, in particular, in his Experiment in Autobiography composing the Wellsian brain.18 What Wells does again and again is to place his powers of attention out of the frame of the contemporary. Despite the popular reputation of Wells as a visionary, the purported originator of modern science fiction appears to suffer, however ironically, from the maxim that nothing ages faster than the future. That so many of his visions of the future “came true”—as is often remarked—seems of little moment in this regard. To the midcentury professoriate, Wells feels older than Henry James, though James is the elder author by a full generation; Wellsian style seems more dated. More of his times; less our contemporary. What can datedness mean, as Justin Clemens and Dominic Pettman have well observed, when we’re caught in “the great aesthetic whirlpool [that] neither validates nor rejects any particular epoch”?19 This sense of Wellsian datedness may follow from his status as a name on the edge of this whirlpool. His search engines are calibrated to overleap generational scales; his alternative histories report to impossible future readerships. One might say that Wells was interested in the formula generation after generation not so much as a commonplace of sequence but as an inhuman problem: what happens to the idea of generation after generation is a zombie concept?
I’m thinking especially of The Sleeper Awakes, in which the protagonist goes into a trance in 1897, sleeps through the Martian wars, and wakes up centuries later to find that through compounded interest, he literally owns the future.20 I’m thinking of the sleeper, the time traveler, the subject of The World Set Free, the Wellsian premise of “humanity surviving extinction, of overvaulting the end of time and historical epochs, not toward the future or the past, but toward the heart itself of time and history,” to borrow some Agambenian language.21 Before spooling to the end, some preliminary remarks about modernism and time—historicity, contemporaneity, futurity, and the concept of generation itself—may be in order. In Graphs, Maps, Trees, Franco Moretti notices three intervals of literary time, which he imports from the Annales school.22 At one end of the spectrum is the very short term—the bread and butter of modernist studies: the event, the break, or the rupture, the instantaneous experience of novelty, the aesthetic effect just there that changes everything. At the other end is the very long term, the epoch, the era, or the longue durée: the period-spanning historical thesis that in its own way is a second mainstay of modernist scholarship.
Or else, maybe more accurately, one could say that the very short is constitutive of modernist studies, whereas the very long is regulative of it. With all the emphasis on temporal particularities and generalities, we modernist scholars have ill served the middle term, the temporal span that Moretti calls the cycle. Cycles, as he has it, “constitute temporary structures within the historical flow.” These represent an “unstable” “border country” between the incremental and structureless shock of the new designated by the event and the static and overly structured critical forensics designated by the epoch.23 It’s no secret that what Moretti means by the cycle, his so-called temporary structure, is more commonly understood as genre. In so many words, genres are “morphological arrangements that last in time,” durable but never permanent literary dispositions for, say, imperial gothic or teenage vampire abstinence novels.24 In his examples derived from nineteenth-century cases, they last between twenty-five and thirty years. Whether or not the life cycle of genres accelerates with the technomedial evolutions of the twentieth century is a key question for Moretti’s methodology. Isn’t the Morettian cycle in fact the Clemens–Pettman whirlpool? I want to put pressure on Moretti’s insight by making an observation: thinking about genre as the life cycle of forms—beginnings and endings, birth and death—inevitably calls forth something like generation. Genres, generations. The beginnings and endings of genres resemble the birth and death of generations by more than mere homology.
A generation signals a cohort born at roughly the same time, shaped by the same generic conditions, events, shared forms of technomediality: “each generation tallies its new talent and catalogues its new forms and epochal tendencies in art and thought.”25 It is no coincidence, I submit, that the conventional measure afforded to a generation—twenty-five to thirty years—is basically the span of time that Moretti apportions to the life of a genre.26
And yet the ultimate distant reader, the computational knowledge engine Multivac, at the end of Isaac Asimov’s “The Last Question,” can at last report its hard-won findings to . . . no generation, the universe being finally over.27 Jean-François Lyotard writes that, with the inevitable exhaustion of the sun—4.57 billion years and counting—comes the death of death.28 And is that not a good thing? A happy end, in a manner of thinking, to thinking? Fiat lux and nowhere to report the search results. Here we find the nonhuman “happiness” that follows postmodern nihilism, a happy refusal of the consolations of the archive. Don’t go back to the airport flyover, it tells us, like the prisoner in Chris Marker’s La Jetée (1962), blocked from the future because he can’t give up on his desire for a better past.
That said, our blueprints for the administration of inhuman time include J. G. Ballard and Isaac Asimov: Ballard to understand the consequences of an inevitably minimal unhuman end to human knowledge frameworks, Asimov to formulate a critical concept of the literary as a thought experiment about maximal knowledge. I file this report, however truncated and schematic, to pursue a contrarian path for criticism through a vast science fiction/speculative fiction corpus, a path avoiding the familiar Heinlein–Dick “postmodernist” axis. Instead, Asimov and Ballard get pride of place in a paleofuturist genealogy of speculative, intellectual modernist outliers writing impossible scenarios about unhuman literary machinery, a genealogy that stretches from Ursula K. Le Guin to Olaf Stapledon to H. G. Wells, among whom we could certainly count Jorge Luis Borges and Franz Kafka. The prolific Asimov has his pulp, genre-fiction bona fides and hard SF credentials, but his emigrant background (a child of Yiddish speakers, a man without papers from an impossible cultural-linguistic zone) and his standing as a committed scientific popularizer and as an actual professor of the natural sciences push his profile beyond the frothy space operas, interplanetary romances, and rocket and ray-gun escapades of many of his cold war contemporaries.
But, before we crank up the dials of Asimovian maximalism, first Ballardian minimalism. Ballard’s career stretches from the mid-1950s to the day before the present. And he may be even more difficult to classify. Less interested in the gear of hard science fiction and its attendant idiom of techno-Benthamism, Ballard liked to claim kinship with the surrealists: “My science fiction was not about outer space,” he wrote, “but about psychological change, psychological space.”29 Sometimes a bright line is drawn between Ballard’s early, more “conventional” SF stuff and his later, more experimental, and thus more “literary,” work. Nonetheless, Ballard manages across the range of his oeuvre to remain obsessively interested in the near-term catastrophic end: the collapsing inner space of human knowledge systems. His first four novels all describe the psychological consequences of various transformations of the Earth into a hostile, alien environment; his later fiction seems to come to the conclusion that this conceit was a superfluous fantasy. The Earth was already transformed: “Earth is the only alien planet,” he claimed, famously, “and the future is five minutes away.” “Our concepts of past, present and future are being forced to revise themselves,” he writes. “The past, in social and psychological terms, became a casualty of Hiroshima and the nuclear age. . . . The future is ceasing to exist, devoured by the voracious present.”30
He’s the past master of what we could call stalking inner space, a term he’s credited with inventing in 1962: the mind scanning the tuner for weaker and weaker information signals.31 Despite the inherent minimalism of the activity, it yields surprisingly warped and uncanny encounters, as in Tarkovsky’s Stalker (1979). “We have annexed the future into the present,” Ballard writes: “Options multiply around us, and we live in an almost infantile world where any demand, any possibility, whether for life-styles, travel, sexual roles and identities, can be satisfied instantly.”32 Ballard’s fiction is full of search engines stalking five minutes into the future. The instrument panel choked with dust and filth, social and psychological channels clogged with chatter, Ballard’s fiction still probes for messages. One such story set “five minutes in the future” is “The Message from Mars” (1992). The tale has an interplanetary space crew of celebrity astronauts returning from Mars in a hermetically sealed, self-sustaining ship.33 Parked on the landing strip after the seemingly triumphant voyage, the passengers refuse to disembark, protected from the entreaties from without by an impenetrable space-age ceramic shell, sustained by a reactor and a well-stocked larder. “Rejecting the [outer] world with a brief wave,” they choose instead to live out their lives in “a sealed [inner] world, immune to any pressures from within and without,” for reasons impossible to clarify by observers outside.34 A capsule full of Bartlebys or Garbos. An allegory of the current media ecology rests on the passengers’ petulant refusal to play along with a massive propaganda-and-PR apparatus set up to record and hype their mission for a global audience, a noisy Space Family Robinson reality show trading on all-too-facile plotlines and debased audience expectations.
Inner space is quieter than outer space. The allegorical capacity becomes unglued with the passage of time. Elsewhere, Ballard notes the futility of escape from the autonomous life pod:
In the past we have always assumed that the external world around us represented reality, however confusing or uncertain, and that the inner world of our minds, its dreams, hopes, ambitions, represented the realm of fantasy and the imagination. These roles, it seems to me, have been reversed. The most prudent and effective method of dealing with the world around us is to assume that it is a complete fiction—conversely, the one small node of reality left to us is inside our own heads.35
In the end, after NASA itself passes into history, the ship—decommissioned, forgotten, its life support still operational—ends up an inscrutable piece of hulking junk deposited on a parking lot. Another mobile home in a trailer park. A graduate student eventually rediscovers the ship there and hooks up various instruments and magnetic imaging equipment—search engines of a certain kind:
An aged couple, Commander John Merritt and Dr Valentina Tsarev, now in their late eighties, sat in their small cabins, hands folded on their laps. There were no books or ornaments beside their simple beds. Despite their extreme age they were clearly alert, tidy and reasonably well nourished. Most mysteriously, across their eyes moved the continuous play of a keen and amused intelligence.36
Despite the sensitivity of the instruments, able to pick up the smallest details of their eye movements, I find the meaning of this scene inscrutable. It has something of the quality of Ti and Bo and the Heaven’s Gate cult. Or Pong. I don’t question the veracity of the data about their “keen and amused intelligence”; I note only that the meaning of the observation remains a cipher. It could be fruitfully matched with another mysterious comment Ballard made about the task of the writer:
What is the main task facing the writer? . . . The writer knows nothing any longer. He has no moral stance. He offers the reader the contents of his own head, a set of options and imaginative alternatives. His role is that of the scientist, whether on safari or in his laboratory, faced with an unknown terrain or subject.37
Here we can turn from the minimalist response conveyed by Ballard’s minimalist time machine to Asimovian search engines working in maximal overdrive. First, though, a short detour back to Wells, who was, not surprisingly, a professed influence on both writers.
In The Time Machine, the Time Traveler jumps ahead to 802,701 A.D. The number is big, but in a specific, almost too ordinary and uneventful way—a bigger version of the famous 42 in Hitchhiker’s Guide to the Galaxy, to which I’ll come back shortly. It is as if evolutionary, epochal time, scaled for charting the origin of species, has been tacked on to the historical measure of a human lifetime, as in the joke about a museum guard at the natural history museum who tells visitors that the brontosaurus bones are 150 million and five years old . . . because he’s been working there for five years. The number signals, as it were, enough lapsed time for class difference, that is, the historical driver, to take on irreparably evolutionary form. Nearly a million years hence, with no other legacy of humanity to speak of, the over- and underclass of economic modernity have branched into separate species, the Morlocks and the Eloi. The ironic twist is, of course, that the descendants of the upper classes have become the food source for the proles.
While the main interest of the novel may rest in this inversion, I’m more interested in an odd scene at the end of the novel. With allegory seemingly laid to one side, the search engine pushes forward to the end of the spool, 30 million years ahead. Here, on a desolate beach, the Time Traveler observes the final sunset:
Suddenly [he reports] I noticed that the circular westward outline of the sun had changed. . . . The darkness grew apace; a cold wind began to blow in freshening gusts from the east. . . . From the edge of the sea came a ripple and whisper. Beyond these lifeless sounds the world was silent. . . . All the sounds of man, the bleating of sheep, the cries of birds, the hum of insects, the stir that makes the background of our lives—all that was over. . . . I saw the black central shadow of the eclipse sweeping towards me. In another moment the pale stars alone were visible. All else was rayless obscurity. The sky was absolutely black.38
This could be described as an early scene of secular, planetary snuff. Particular details to notice are the dynamics of information and noise. The time machine is, above all, an observational machine, a search engine, tasked with uncovering improbable results. The final fade-out is interesting only for its minimalism: it reports all that isn’t heard, the lack of murmurs and mumbles, bleating, bird sounds, and buzzing. Curiously, even muted nonnoise counts as noise in this context. The only information is visual, minimal light inscribing the landscape as if on a photographic plate, the human observer machine, a camera basically, placed before a decidedly unhuman end, affording the time of one last, noneschatological long exposure. In any case, save for a presence that stands in for the author function, there is nothing to be known and no audience structure to know it. This unvoiced stone under a burned-out ember says even less than Ozymandias. Without belaboring the paradoxes of observation, Wells’s point is that the meaningful human condition ended uneventfully some time before, during the span of eight hundred thousand-odd years between the present and the Morlock–Eloi scene. This final sunset is a quarrel with the humanness of endings as such.
Asimov takes an even longer view. Most famously, his Foundation series—in fact, a cluster of interlinked stories and novels—follows through on a logic of temporal maximalism, stretching over thousands of years into the future. Fans have pegged 25,621 A.D. as the latest date referenced in the series, though the reckoning is a bit confusing, because new calendars are introduced at several junctures in the series.39 Furthermore, the entire sequence is menaced by the threat of information death, the edifice of human knowledge slouching into ruinous quagmires of ignorance, coming epochs when even the measurement of history becomes impossible. Asimov uses robots, institutions, corporations, and other durable nonhuman dispositions to overcome the narrative limitations of the human life-span. I’ll spare you the intricacies of a work conceived as an intergalactic retelling of Edward Gibbon’s Decline and Fall of the Roman Empire. I only want to touch on two elements of Foundation. Each in its own way takes the form of mystified literary scholarship: the theoretical concept of psychohistory and the project of a reference work called the Encyclopedia Galactica. In The Time Machine, the traveler goes forward to discover increasing discontinuity between what he observes and human beings and their knowledge frameworks. The scene in the disused museum is paradigmatic in both Wells and the dystopian tradition. In Asimov, who is more sanguine about humanity’s prospects than either Wells or Ballard, psychohistory and the Encyclopedia Galactica are conceived of as two durable forms extending the continuity of human knowledge frames beyond the natural life-span. Psychohistory is essentially an extrapolative form of social science—the wisdom of crowds writ large. A computational concept engine gathers massive amounts of data about human behavior; it uses its databases to calculate future human history. The more massive the data set, the more predictive it gets.
Eventually, psychohistorians determine that humanity is doomed to enter a catastrophic phase, a ten-thousand-year span of stupidity, and conceive of an enterprise to preserve knowledge and mitigate the hazards of this epoch. They set out to compile a massively comprehensive Encyclopedia Galactica to preserve all knowledge. This task, it is said, has the power to shorten the projected dark age by a factor of 10. The efforts of the galactic encyclopedists consume the resources of an entire planet, to which they are eventually exiled. In effect, the planet itself is co-opted into a knowledge engine of planetary size.
Let’s explore the literary dimensions of these two fictional superliterary projects and some “real-world” cognates of them. Since Asimov, the Encyclopedia Galactica fantasy—humanity preserved through a redemptive ark of comprehensive and curated knowledge—has been a pervasive meme. It plays no small part in inspiring the Wikipedia phenomenon, for instance. Douglas Adams’s Hitchhiker’s Guide to the Galaxy was a fairly self-conscious parody of it, too, a kind of Encyclopedia Galactica for Dummies. Allegedly filched from galactic encyclopedists, the Guide, Adams writes, “has many omissions and contains much that is apocryphal, or at least wildly inaccurate.” Of course, one doesn’t need a science fiction frame to recognize that the encyclopedia from Diderot forward is in itself motivated by a principle of epistemological maximalism as an almost manic drive. As for Wikipedia, I think it is safe to say that Asimov would be aghast at its open-source editing imperatives and the knee-jerk distrust of primary expert knowledge enshrined in its practices. Asimov was open source when it came to mass data collection but left the analysis, the computational psychohistory, to the positronic brain.
Asimov’s related Multivac stories, set in a different fictional world from the Foundation series, show the fault lines of the project. In effect, Multivac draws together the two strands of inhuman humanism and puts them in a black box: first, massive information collection of the past, and second, oracle-like analysis and computational prediction of the future. As a fictional computer, the physiognomy of Multivac differs from both familiar archetypes: the humanoid robot, an automaton-doppelgänger with a body, and the glorified space nanny, identified with the spaceship itself, charged with holding steady on the steering wheel, managing human life support for the interstellar express. As a knowledge engine, Multivac is a cipher for the author function itself, I submit, a disembodied search engine for information. Like Wells’s Time Machine, Asimov’s very long timescales and the machines to probe them can be read as thought experiments about extreme configurations of author functions and impossible audience structures.
Ending 10 million times farther in the future than The Time Machine, the Multivac story called “The Last Question” (1956) surely holds the record for one of the longest time frames ever conceived in fiction (and as such the least Aristotelian ever written).40 It concerns a succession of Multivac responses to the question about the ultimate fate of the universe. Asimov gives the question both a cybernetic spin (What happens at the end of information?) and a thermodynamic one (Can entropy be reversed?). In seven vignettes spaced over vast segments of cosmological time from May 21, 2061, to the event of would-be entropy death, 10 trillion years later, seven varieties of this question are put to the machine and its successors. Six times, the only answer returned is “THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.” The final time the question is asked, human beings and their evolutionary progeny are long gone, no data are left to be collected, and all is “completely correlated and put together in all possible relationships.” The seventh computational engine—existing only in hyperspace, an information-processing entity akin to “a computer . . . far less than was a man to Man”—achieves total knowledge against the backdrop of entropy death. Asimov writes, “There was now no man to whom AC might give the answer of the last question. No matter. The answer—by demonstration—would take care of that, too.” Given the ingredients of (1) omniscience and (2) the universe’s absolute nullity, that answer is—you probably guessed it—let there be light. A clever trick. The clue in the setup is the idea that the answer would come by demonstration: with no information left, and no matter, this is the only possible thing to say and to do. There is only one question to ask at the end of the universe: what’s next?
If Hitchhiker’s Guide to the Galaxy anticipates Wikipedia, consciously or unconsciously, Asimov’s futurist engineered artifacts and computational knowledge machines gesture toward the internet. They point to one of the constitutive problems concerning the kind of epistemological maximalism that the internet signifies, namely, interface, instrumentation, and computation—in a word, intelligence. How do you put the question and calibrate it to get back a meaningful answer from the accumulation of vast amounts of mostly undigested chatter, that is, the stuff people have seen fit to upload? You’ve heard the one about a million monkeys with a million typewriters eventually writing the complete works of Shakespeare. The internet, so goes a well-known joke, proves this to be false. One can think of the internet as a vast assemblage of information debris: like the dumpster behind your apartment clotted with garbage, phonebooks, office materials, and the occasional appointment calendar and obsolescent Filofax. It started as a flotilla of documents lashed together by hyperlinks—various digital flotsam, jetsam, ligan, deposited, jettisoned, discarded, claimed, derelicted, who can say. In the early going—its incunabular era, perhaps—this assemblage of documents was countable and thus indexable. In 1993, the number of websites was in the hundreds; one year later, they numbered in the tens of thousands; today, Google estimates that the web contains 1 trillion links.41
The first search engines were simply efforts to measure the size of the information content of the web by tracing out all the links. For a while, the size was manageable, monitored by individuals who could be likened to Alexander and Bertram, the faithful attendants of the earliest Multivac. Their nonliterary cognates authored their own indexes, portals, and directories—“Jerry and David’s Guide to the World Wide Web,” the ancestor of Yahoo!, is one such example.42 This phase ended quickly. After the internet became too big for anyone to oversee, the phase of automated web bots began: bots traced out the links, and search engines allowed human users to search for keywords in this mess, the same way a search feature works in a word processing program. Even here, the results were quickly too numerous. At last came Google’s algorithm. Like its predecessors, it trawled for links—continuously scanning the cached snapshot of the internet stored on its servers. Then, and this is the key part, it ranked the results, using algorithms designed to assign each page a rank value based on the “quality” of its links. The most linked links ruled. In high school, the popular kids aren’t the ones with the most friends but the ones with the most popular friends. Google is the same. In other words, it may not be the Multivac, precisely, but Google applies its own psychohistorical principle of massification, namely, popularity. The most likely answer is the one the most people like. Google is a hide-and-seek machine: very good at finding what’s well found already and very bad at finding what’s very well hidden. All this tremendous expansion signals surface area and no depth. The deep web is a myth.
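The “popular friends” principle can be sketched in a few lines of code. What follows is a schematic, toy illustration of link-popularity ranking by power iteration—a sketch of the general idea, not Google’s actual implementation; the `pagerank` function and `toy_web` graph are hypothetical constructions for demonstration:

```python
# A toy sketch of the link-popularity principle: pages linked to by
# highly ranked pages themselves rank highly. This is simplified
# power-iteration ranking, not Google's actual algorithm or code.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal ranks
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # each page passes its rank, damped, to the pages it links to
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # a dangling page distributes its rank evenly to everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# The high-school effect: C, linked by both A and B, ranks highest;
# D, linked only by the highly ranked C, still outranks A and B,
# which are each linked only by D.
toy_web = {
    "A": ["C"],
    "B": ["C"],
    "C": ["D"],
    "D": ["A", "B"],
}
ranks = pagerank(toy_web)
```

Running this on the toy graph, C comes out on top and D second, even though D has exactly as many incoming links as A or B: what matters is not how many pages link to you but how highly ranked those pages are.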
Now, in the contest for the title of real-life Multivac, there’s a new arrival, something called WolframAlpha. It has already been described as an un-Google. What is it? If you type in its text box, What are you? WolframAlpha replies, I am a computational knowledge engine.43 If you follow up with, What is computational knowledge? it answers, not altogether helpfully, That which I endeavor to compute. This circularity is instructive in its way, for computable knowledge is, in a sense, as computable knowledge does. WolframAlpha aims to treat information as information; as such, it’s only interested in material that can be subjected to processing, calculation, or analysis rather than simply Google’s hide-and-seek game of searching the static of the internet universe for signals. Thus it is necessary for WolframAlpha to maintain an internal storehouse of refereed expert-level knowledge ready-made for computational machinery to process it. In other words, like Asimov’s Encyclopedists, it pursues a curatorial agenda led by experts on its own planet. Here’s how the WolframAlpha site explains the mission:
We aim to collect and curate all objective data; implement every known model, method, and algorithm; and make it possible to compute whatever can be computed about anything. Our goal is to build on the achievements of science and other systematizations of knowledge to provide a single source that can be relied on by everyone for definitive answers to factual queries.44
One interesting aspect of the maximalist project is the decision to make its interface communicate through “free form natural language input.” The rationale seems to have as much to do with the interface between experts and nonexperts as with the sense that the metalanguage between the various disciplines of knowledge can be none other than natural language. “For academic purposes, WolframAlpha is a primary source”; consequently, unlike Google, but like the Asimovian computers of pop culture, it is personified.45
WolframAlpha is a “citable author,” so notes the FAQ, as a quasi-legal tidbit for educators and researchers.46 If you ask this literary construct the same question asked of the Multivac, How can the net amount of entropy of the universe be massively decreased? it answers the same way: THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.47 If you ask it, What is the Answer to the Ultimate Question of Life, the Universe, and Everything? it answers, 42.48 I take these answers to mean not that WolframAlpha thinks like Multivac—or Deep Thought—but that its programmers are aware of their literary or pop cultural pedigree. The WolframAlpha FAQ comments that “when computers were first imagined, it was almost taken for granted that they would eventually have the kinds of question-answering capabilities that we now begin to see in WolframAlpha.”49 “Is WolframAlpha an artificial intelligence?” then—a term I don’t find helpful in this discussion and have so far avoided, but one WolframAlpha’s website summons:
It’s much more an engineered artifact than a humanlike artificial intelligence. Some of what it does—especially in language understanding—may be similar to what humans do. But its primary objective is to do directed computations, not to act as a general intelligence.50
It makes sense to me to think of WolframAlpha as the condition in which the author function finds itself when facing trillions of elements of data: the black-boxed front end is less artificial intelligence than it is engineered artifact for detecting semiotic red shift.
Bruno Latour’s conception of “black boxing” usefully names the discursive imperative operating here, through which, in Latour’s words, “scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus paradoxically, the more science and technology succeed, the more opaque and obscure they become.”51 In a sense, “black boxing” explains the end user’s phenomenological situation in which technologies are routinized and made normative “no matter how controversial their history, how complex their inner-workings, how large the commercial or academic networks that hold them in place.”52 Open this black box and peek under the hood and, as I have been suggesting, you’ll find the component dreamwork organized around a mass-mediated postliterary fantasy. Furthermore, the engineered artifact massively mystifies the audience structures comprising it. Audience structures are not—or they are not simply—recording machines, all storage capacity and no processing power. Rather, they might be conceived of as explicit and implicit representations of mass-mediated reception, intake rendered by means of various spatial and temporal metaphorics. The internet itself is, along these lines, a specialized audience structure. On the internet, everyone is an author, and therefore no one is an Author: the scriveners rule with squatters’ rights and Bartleby-like truculence. Down with the jargon of first modernist technoauthenticity in time and space; think only of the psychohistorical predicament of readers and authors facing textual overload.
The thread that connects Asimov and Ballard here is their fascination with audience structures, maximal and minimal. Asimov’s “The Last Question” identifies a point of singularity between maximum authorial capacity—the supraliterary computational Author-God who knows and has read everything—and the minimal audience structure in which no readers are left. Once the Encyclopedia Galactica is completed, the world necessarily ends . . . or is ended. When everything is known, we’re at the end of the reel. Ballard often takes the opposite approach, dwelling on the likely possibility that we’ve already arrived at the end, overloaded with too much (non-)information and no obvious front end on the Multivac to illuminate things. On one hand, information is at its maximum overdrive—aggregational extremity; on the other, the only consolation is found in the minimalist path of computational quietism. Unfriending Multivac. In light of WolframAlpha’s trillions of elements of computable knowledge, one is tempted to say, We spooled to the end of the Encyclopedia Galactica pretty fast, right? And, if “overhead, without any fuss, the stars were going out,” as in Clarke’s famous story “The Nine Billion Names of God,” then don’t panic.53