What If?: Afterword

Afterword

Kenneth Goldsmith

Recently, I was invited to meet a couple of programmers at Google who were writing an AI engine that could produce literary works. They were eager to show me the fruits of their research, which at first glance looked an awful lot like Tennyson’s poetry. I had to admit, I was a bit disappointed that the world’s richest, most cutting-edge tech company could only produce literature that was au courant a century and a half ago. Their poetry was certainly proficient and it made perfect sense—it even rhymed—which was their goal. Yet I thought that if one of my undergraduate students had unironically produced the identical work, they would’ve received a failing grade. Nevertheless, I congratulated them on having made a robot parrot a dead poet, but then delicately began asking them exactly why they did this. They answered that they sought to replicate in artificial intelligence what they felt to be the apex of literary accomplishment, one rife with precise metaphor, dynamic rhythm, and uplifting lyricism. In other words, they were trying to train the AI bot to be a “good” poet.

But the problem is that around the same time that Tennyson was writing, the pursuit of “good” art had paradoxically been rendered obsolete by technology. After the invention of the camera, painting ceased to act as the primary conveyor of representation; in order for it to survive, it had to find another way to be in the world, hence its turn toward abstraction, resulting in the extra-representational concerns of, say, the impressionists or cubists. Similarly, literature had been forced to change its mission by then-emergent technologies such as the telegraph and the tabloid newspaper; think of Hemingway adopting newspeak as literature, writing terse books whose sentences resembled headlines more than nineteenth-century triple-decker novels. And in music, everyone from the futurists to the musique concrète composers incorporated the noises of industry into their compositions, resulting for the first time in unnotated composition. You could say that certain strains of modernism adopted certain strains of technology as their operating systems. Throughout modernism, it was the successive waves of technologies that kept nudging art forms—from surrealism to abstract expressionism to pop art—into new directions.

So I found it odd that, in spite of that history, a tech company would entirely skirt what was essentially a technologically based modernist project. I suggested to the Google engineers (in all fairness, they referred to themselves as “engineers,” not “poets”) that perhaps they might consider supplementing their source texts with disjunctive modernist works such as James Joyce’s Finnegans Wake, Ezra Pound’s The Cantos, or Gertrude Stein’s The Making of Americans. Each one of those massive books (The Making of Americans alone clocks in at half a million words) would certainly enrich and diversify the AI’s output; perhaps such a fractured idiolect might produce equally fractured language, resulting in a more contemporary literature. It wouldn’t be the first time that modernist literature has inspired the digital world: Finnegans Wake, with its lexical knots and neologistic wordplay, was a canonical reference text for early computer programmers and was subsequently incorporated into early computational lexicons (the word “quark,” for example, first found in the Wake, was later adopted as the name of an early popular page layout program, not to mention its more common usage denoting the elementary particles that make up protons and neutrons). The Google engineers looked at me quizzically; they had never heard of these books.

But then again, there have always been pockets that have ignored or even outright dismissed modernism. Once, after I gave a long lecture in China on avant-garde writing and computational poetics, an older woman raised her hand and said, “But Professor Goldsmith, you didn’t discuss Longfellow.” I thought for some time afterward about what she might have meant, and it occurred to me that over the course of her lifetime, modernism in China had been snuffed out by the Maoist regime. I wondered if her sense of a poetic trajectory proceeded from the New England Fireside Poets to the digital age, a florid type of pre-modernism seguing directly into bits and bytes. It reminded me of walking in my Manhattan neighborhood with my neighbor, a world-famous graphic designer, when we passed by a newly opened store. She stopped and scornfully commented on how atrocious the store’s logo was—a digital mashup of serif fonts with a naturalistic bent—for the sole reason that she couldn’t find any trace of the Bauhaus’s geometry in it.

I have previously written about how modernism is deeply imprinted into the DNA of the digital world:

There are bits and pieces salvageable from the smoldering wreckage of modernism from which we might extract clues on how to proceed in the digital age. In retrospect, the modernist experiment was akin to a number of planes barreling down runways—cubist planes, surrealist planes, abstract expressionist planes, and so forth—each taking off, and then crashing immediately, only to be followed by another aborted takeoff, one after another. What if, instead, we imagine that these planes didn’t crash at all, but sailed into the twenty-first century, and found full flight in the digital age? What if the cubist airplane gave us the tools to theorize the shattered surfaces of our interfaces or the surrealist airplane gave us the framework through which to theorize our distraction and waking dream states or the abstract expressionist airplane provided us with a metaphor for our all-over, skein-like networks? Our twenty-first-century aesthetics are fueled by the blazing speed of the networks, just as futurist poems a century ago were founded on the pounding of industry and the sirens of war.1

From computer glitches to spam to replication, the linguistic fragmentation of modernism often expresses itself in the digital world. On social media, because of its asynchronous and replicative nature, shards of logical discourse are often fractured and decontextualized, landing in the midst of a feed without the rhetorical framework necessary for them to make sense. These little disruptive outliers, identified as “noise” (not “signal”), are ignored and quickly scrolled past (ironically, headlines à la Hemingway, when employed on social media, always win the day). Or consider spam: often filled with AI-generated nonsense, it is automatically deleted, dismissed as more “noise.” Even when absurdity and disjunction are programmed into, say, a Twitter bot like the now-defunct Horse_ebooks feed, it’s fondled like a cute pet for a few rounds before being swapped out for something emitting more “signal.” Similarly, on occasion, when Trump tweeted an absurdity (“covfefe”), it ran a few meme laps before “signal” replaced it. Whereas logical discourse (“signal”) is valued, disruption (“noise”) is ignored.

The digital generates vast amounts of information, which in itself becomes a sort of abstraction. While the bulk of discourse proceeds upon logical lines, abundance can symbolize disjunction. Again, as I’ve previously written:

Today we’re confronted with the abstraction of big data—large data sets, expressed in equally large and equally abstract numbers—and it’s assumed somehow that we can comprehend these. For instance, the WikiLeaks site contained 1.2 million documents a year after it was launched; and in 2010, it released almost 400,000 documents related to the Iraq War alone. The United States diplomatic cable leaks totaled 251,287 documents consisting of 261,276,536 words. A common complaint was that WikiLeaks released too damn much, prompting the journal Foreign Policy to call the release of such a vast amount of data “information vandalism”:

There’s a principle that says it’s OK to publish one-off scoops, but not 250,000—or for that matter 2.7 million—of them all at once? The former feels like journalism; the latter seems grotesque and irresponsible, more like “information vandalism” . . . And even if responsible papers like the New York Times have a chance to review and contextualize them, there’s no way they can dot every i and cross every t in the time allotted. There’s just too much. And with every new leak comes a new metric of immensity: it is said that Edward Snowden initially leaked between 1.5 and 1.7 million documents.2

Enter AI, which thrives on this sort of linguistic feast, ravenously consuming and parsing it for “signal” while omitting “noise.” There is in fact a lot of sense in these documents (a massively high signal-to-noise ratio), upon which AI thrives because the bot reifies that which it already knows, thereby making it more “intelligent.” AI is trained to render sense out of bulk language—which from my perspective might be part of the problem; as a mimetic technology, AI apes what it’s fed, spewing out more of the same.

A case in point: The Guardian recently published an essay written entirely by an AI bot. The first paragraph ends with, “I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!”3 The prose is as clichéd and as bland as the Google poetry was, feeling very much like its sources: blogs, newsfeeds, and social media outlets. Similar to the Google guys trying to get their AI to write “real” poetry, the bot was trained to write “real” science fiction: “For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way.” As a piece of prose, it’s thoroughly amateur; is it any surprise that the AI prompts were written by a computer science undergrad at UC Berkeley? To make matters worse, the piece was cobbled together from several essays—the AI was assigned to write five essays—after which the human editors “cut lines and paragraphs, and rearranged the order of them in some places” so as to come up with a really “good” version.


So what might a “bad” AI look like? For one, it could, taking its cues from modernism, use its intelligence to pivot away from sense into something more delicate, playful, provocative, and poetic. A bot that writes gibberish is too easy; training a machine to write absurd, slightly surrealistic sentences is an exercise straight out of Programming 101, but there’s a part of me that wants to see artificial intelligence bent and twisted in ways that show us truly new forms of language. Think of the Oulipo—a group of French writers and mathematicians who, beginning in the 1960s, proposed mathematical and scientific formulations as the basis for programmatic poetry—as a potential precursor to AI lit. Most famously, the Oulipo produced Georges Perec’s highly readable La Disparition, a three hundred–page novel written without using the letter “e.” While it took Perec a tremendous amount of work to write the book, I’m certain that an AI bot could accomplish it fairly easily. Questions remain, of course, regarding taste, narrative, and content (Perec’s mind was famously complex and unique), but one might even train the bot on the corpus of Perec’s work alone to extend—and perhaps surpass—his oeuvre. One imagines voluminous and exhaustive Oulipian-inspired works in this vein, each more astonishing than the last. In a sense, AI could write hyperstructuralist works, ones in which the skeleton and bones of grammar and thought were made apparent on a microscopic level—call it a semantic-based genome project for the corpus of human language.
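Part of what makes the lipogram such a plausible precursor to machine writing is that, unlike taste or narrative, the constraint itself is trivially checkable by a program. A minimal sketch in Python (the function and variable names are mine, purely illustrative; this is only the easy, mechanical half of Perec’s achievement):

```python
def obeys_lipogram(text: str, banned: str = "e") -> bool:
    """True if the text avoids every banned letter (Perec's constraint bans 'e')."""
    return not any(ch in banned.lower() for ch in text.lower())

def usable_words(words: list[str], banned: str = "e") -> list[str]:
    """Filter a vocabulary down to the words permitted under the constraint."""
    return [w for w in words if obeys_lipogram(w, banned)]

# A generator working under the constraint would draw only from the filtered list:
vocabulary = ["night", "evening", "dark", "sleep", "void"]
print(usable_words(vocabulary))  # → ['night', 'dark', 'void']
```

Filtering a vocabulary is the part a machine does effortlessly; as noted above, taste, narrative, and content are precisely what no such constraint checker supplies.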

Can AI be “queered”? Could AI be trained to be intentionally perverse, something notoriously difficult to define, let alone program? The perverse is a nuanced, subjective sensibility; how can a sensibility be programmed? This illogical entity would have to be broken down logically into its constituent parts in order to be reconstructed as itself, an exceedingly difficult task. Similarly, can one program intentional contradiction, something that even in human-based discourse is rarely intentionally deployed as a discursive strategy? Thrust into a world of logic-based computational binaries, intentional contradiction might actually crash a machine. Other “queered” sensibilities might be equally difficult to program; the literary theorist Sianne Ngai has explored liminal aesthetic categories such as the zany, the cute, the interesting, and the gimmick, mostly heretofore absent from AI.

Once again, art history might provide clues on how to proceed. Back in the late 70s, following the demise of conceptual art, a new painting movement arose known as “bad painting.” After a decade of being prohibited from actually painting, painters were itching to get back behind the easel. But, having been weaned on conceptual art, they knew they had to employ a perverse strategy in order to begin painting again. So they started making “bad” paintings, purposely deskilled so as to convince the viewer that they weren’t really invested in painting; that their paintings were instead, as was the fashion in postmodern times, wry comments upon the death of painting. They did things like paint with their left hand if they were right-handed or use degraded sources unworthy of fine art. It was a complex and convoluted move, visible only to art world insiders who followed such things. But it turns out that they were so talented that their paintings were soon recognized not only for the brilliance of the conceptual move but ultimately as great “bad” paintings in and of themselves, opening the floodgates for the revival of oil on canvas in the 1980s.

Could AI be trained to intentionally get it exactly wrong? Andy Warhol said, “I wanted to do a ‘bad book,’ just the way I’d done ‘bad movies’ and ‘bad art,’ because when you do something exactly wrong, you always turn up something.” What you turn up is anybody’s guess; call it the beauty of error. Warhol always made sure to keep the errors in his work—the misprinting of his silkscreens, the overexposure of his films, the typos in his books. To him, trained as a commercial artist, error was a luxury, one that only art could acknowledge as having value. He was right: where else are error and wrongness embraced as potential except in art? From the fractured dream spaces of André Breton to the seemingly uncontrolled but highly controlled drips of Jackson Pollock, it was error that drove contemporary art.

Back in the 90s, when “net art” first appeared, the artist/programmer’s first task was to take functional technologies and break them. So you had artists doing things like making interfaces shake and melt. Sometimes things got extreme, as in the case of the art collective JODI, whose works feigned computers under attack by viruses. Error in music—from incorporating vinyl scratches into MP3s to the sound of CD glitches—correlated with the “new aesthetic” of fragmented, pixelated patterns that appeared on everything from clothing to architecture.

But error is the enemy of the programmer, whose work is, by its nature, riddled with errors. One stray character in miles of code can cause a program not to function at all; and the last thing programmers want to do is to program in errors—imagine the process of re-bugging instead of debugging. In its necessary functionalism, code resembles traditional craft-based practices, whereby an artifact’s function trumps its form (of course, there are vast swaths of fine art practices that have grown out of craft, including nonfunctional glassware, pottery, or deconstructed fashion). And so craft too might give us a glimpse into the future of AI: like the dance of painting and photography, there comes a moment when, after functional issues have been resolved, a medium finds itself in search of alternative pursuits. At present, AI appears to still be stuck exclusively in search of “good” and will be as long as those training the AIs remain philistines, both aesthetically and conceptually. If the AI is fed pap, it will reproduce pap. If the minds editing the pap try to rearrange it into better pap, it will still be pap. The problem isn’t the AI, it’s the people training the bots; at the end of the day, we’ll just end up with more of what we already have—and we already have too much of it.


Vilém Flusser’s What If? is a bad book. From a literary standpoint, it’s a disaster. Clumsy chunks of grammar are set in disjointed and meandering sentences, culminating in what feels like a needlessly fractured narrative. Far from the richness of prose, these scenarios are often set in the stiff, impersonal form of press releases, but to whom they are addressed is never clearly stated, leaving the reader with a feeling of abandonment. And beyond that, there is no consistent narrative voice to guide us; instead, Flusser uses the device of the harsh jump cut, both within each scenario and across the book, making it a terribly jarring read (but not jarring enough to be in conversation with avant-garde literature). The scenarios themselves, too, are graceless, starting and stopping at random points, lacking both strong opening arguments and forceful conclusions.

If it fails as literature, it also flops as science fiction, with each scenario presenting absurdly weird, improbable, and outrageous images, such as a self-copulating organism with 1,500 phalluses (to which Flusser adds, “male readers will justifiably turn pale with jealousy”). The book reads closer to self-published fan fiction; it’s no wonder television producers passed on the script. Beyond that, outside of the first scenario/introduction, the book has little of the rigorous scholarship and theory that we’ve come to expect from his writing.

So if it’s not literature, poetry, theory, or science fiction, what is this book? And beyond that, why?

Read through a Warholian lens, this “bad” book is a self-reflexive exercise in the necessity of failure, particularly when played out upon the shifting sands of futurology, a field that is notoriously plagued by wrong predictions. Warhol was interested in getting it “exactly wrong” because when that happens, “you always turn up something.” By simply getting it wrong, you turn up nothing; but by intentionally getting it wrong, you open up a panoply of possibilities, a hallmark of Flusser’s complex and nonbinary thinking.

In fact, from the start, in his First Scenario, Flusser recommends that we refrain from precise calculation and instead embrace improbabilities. He appears to intentionally want to destabilize aspects of coherency, categorization, and predictability—the exact qualities that other authors strive for—when he states, “Probability is a chimera, its head is true, its tail a suggestion. Futurologists attempt to compel the head to eat the tail (ouroboros). Here, though, we will try to wag the tail.”

Undermining our expectations for the text, Flusser admits failure up front, claiming that this book will be an “impossible journey.” What people misunderstood about Warhol was that his “mechanical art” was a rebuttal to the coldness and stability of machinic production. His off-register silkscreen prints injected unpredictability, irregularity, and error into the commercial mechanical process, thereby destabilizing standardization, a hallmark of industrial capitalist production. By setting up similarly off-register qualities in this book, Flusser’s “badness” is an essential formal antidote to any expectations of success in a field in which success is notoriously hard to come by.

Flusser’s admission of failure is converted to a type of strength when deployed to critique power structures. The art historian Boris Groys speaks about the power of what he calls the “weak image” in modernist painting, so ambiguous and abstract that it could never project a “strong” singular image, one liable to be usurped by fascist entities; think of the “weakness” of a Malevich white-on-white square versus the “strength” of the swastika. Oftentimes in the twentieth century, abstraction and ambiguity were survival strategies (yet just as often, they were death sentences).

Outside of art and science, it’s hard to think of too many places in Western culture where failure is viewed as an asset. Had the Google guys created a literary program based on failure, they would’ve lost their jobs. Google’s funding of an AI program must pay off somewhere down the line, which is why the Google guys had no choice but to write “good” poetry. But what they didn’t understand was that poetry always fails. As if rebutting the Google poets three-quarters of a century earlier, W. H. Auden wrote that “poetry makes nothing happen.” Its strength is precisely this inability, this failure, and this weakness, securing an increasingly rare space of completely weak yet strong political resistance.

So suddenly, Flusser’s book takes on a different dimension, one in which a series of shattered vignettes add up to an equally fragmented and uncertain future, resulting in a book that is skeptical of overconfidence, self-assuredness, and success. And while this book might not be a towering work of science fiction or even of media theory, it is undoubtedly an essential work of art.

The University of Minnesota Press gratefully acknowledges the financial assistance provided for the publication of this book by Greenhouse Studios at the University of Connecticut, through a grant from The Andrew W. Mellon Foundation.

Copyright 2022 by the Regents of the University of Minnesota

Translation and Introduction copyright 2022 by Anke Finger

Afterword copyright 2022 by Kenneth Goldsmith