Interlude 2
L≠A≠N≠G≠U≠A≠G≠E
As with Textz, while lingering with larger issues of preservation in Eclipse, I felt simultaneously inclined toward a scaled-down study of L=A=N=G=U=A=G=E magazine.1 Not just to perform readings of the document that has taught me so much of what I know about poetry and poetics, but to work within its forms and formats, to engage the content of its pages with the same generative critical-poetic mimesis that has come to define its cluster of writers. My introduction to digital file formats, the little database, and indeed the entire field of contemporary poetry was delivered by scanning Language Poetry periodicals on a flatbed scanner while employed as a work-study student for Eclipse in the spring of 2004.2 Intertwining little magazines with an archival poetics of the scanner, this mode of reading requires oscillating between print artifacts and digital objects transitioning through unstable states. Their transcoding processes track diverse modes of reading: checking for errors in the scan, dawdling with compelling poems, trying to keep the spine intact, following inter-issue debates, straying into search engines, attending to bibliographic data, and posting online for future use. My understanding of Language Writing emerged in the gaps between slow-form captures of magazine spreads on a glass platen, as though the digitization of the page might stand in for the mental images these sessions produced.
In retrospect, it should come as no surprise that the forms of critical making or media poetics that guide the interludes assembled here can be traced to these generative practices. “Wreading” unfolds as performative knowledge production, enacting new content alongside the scanner’s hand-drawn performance of machine reading.3 As discussed in the previous chapter, periodical preservations at Eclipse deliver the page to a range of unlikely readers. From Google to the Internet Archive, a host of bots, content aggregators, and web crawlers have parsed the entirety of Eclipse many times over in the past twenty years. For instance, we might look to the CCBot web crawler. Developed by the nonprofit Common Crawl, CCBot has periodically captured Eclipse, making ninety-five crawls between 2008 and the time of this writing for an open-data corpus devoted to gathering “the internet.”4
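These captures are publicly traceable: Common Crawl exposes a CDX index API for each of its crawls. A minimal sketch in Python, assuming Eclipse resolves at eclipsearchive.org (an assumption here; adjust the domain as needed) and ignoring the index’s pagination and rate limits, might count them:

    import json
    import requests

    DOMAIN = "eclipsearchive.org"  # assumed domain for Eclipse; adjust as needed

    # Common Crawl publishes one CDX index per crawl; collinfo.json lists them all.
    crawls = requests.get(
        "https://index.commoncrawl.org/collinfo.json", timeout=30
    ).json()

    total = 0
    for crawl in crawls:
        resp = requests.get(
            f"https://index.commoncrawl.org/{crawl['id']}-index",
            params={"url": f"{DOMAIN}/*", "output": "json"},
            timeout=30,
        )
        if resp.status_code != 200:  # this crawl holds no captures of the domain
            continue
        captures = [json.loads(line) for line in resp.text.splitlines()]
        total += len(captures)
        print(crawl["id"], len(captures), "captures")

    print("total captures across crawls:", total)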
In a remarkably designed work of data journalism by the Allen Institute for AI in collaboration with The Washington Post, readers can explore a visualization of the content crawled for this corpus, called “C4” (short for Colossal Clean Crawled Corpus) and drawn on by Google models such as Bard, as well as a search bar indexing specific websites within the corpus.5 Alongside tech blogs, fanfic collections, shadow libraries, gaming discussion boards, and patent documentation, C4 reveals an ongoing subscription to Eclipse periodicals—including LANGUAGE—which represent the littlest trace of just 0.000004 percent, or 6.7 thousand tokens, within the corpus.6 And yet, these experimental magazines from the 1970s still whisper their contents and forms even while embedded in the massive neural networks within which they now find themselves inscribed. The discovery of this fact drives the present interlude: little LANGUAGE models live on within the large language models (LLMs).
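The two figures are mutually consistent, assuming roughly 156 billion tokens in C4’s English split (an assumption drawn from the published C4 documentation, not from the cited visualization):

\[
\frac{6.7 \times 10^{3}\ \text{tokens}}{1.56 \times 10^{11}\ \text{tokens}} \approx 4.3 \times 10^{-8} \approx 0.000004\ \text{percent.}
\]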
How might the little database subsumed by an LLM still register its voice? By what mechanisms might these scalar opposites be put into conversation? What might a nested LANGUAGE model offer to our emerging understanding of the mysteries guiding unsupervised deep learning? Conversely, given the chance, how might these models be deployed to open up new vectors hidden within the historical documents of LANGUAGE? Indeed, the most avid users of the site’s “reading copies” may just be the emerging algorithms that Mashinka Firunts Hakopian has speculatively gathered under the heading “other intelligences.”7 Every word in LANGUAGE has been absorbed by the black box readings of other intelligences. Its minuscule proportion of the model glitches the LLM’s proclivity for what I consider a type of plain vanilla vernacular (in homage to Textz’s easily executable ASCII encoding), a lingua franca of internet stylistics geared to pass the Turing test via passably normative expressions.
Insofar as the cluster of writings and publications around LANGUAGE coheres, it is through models of language that work against uncritical forms of linguistic transparency to surface the “artifice of absorption.” Given these relations, in this interlude, entitled L≠A≠N≠G≠U≠A≠G≠E (sampled here in excerpted form), I bring the statistical generative capacities of LLMs to bear on a simulation of the first issue of LANGUAGE, prompting both to rethink their poetics within the linguistic proclivities of the other. The gesture is inspired by the theory-laden debates that make up the magazine’s primary intervention in discourses on experimental poetry. This differential approach to media poetics hails from structuralist energies in the milieu of Language Writing in the 1970s, a way of working that is, as I’ve noted elsewhere, “simultaneously agitated in all directions.”8
In L≠A≠N≠G≠U≠A≠G≠E, I use ChatGPT to produce a facsimile edition of LANGUAGE (with a difference), allowing the LLM to rewrite the contents of the magazine to speculate inward on its own algorithmic poetics. This is a type of language game that plays the transformer back through the traces of LANGUAGE deep within its own training datasets. Using contingent tactics of “advanced find and replace,” “stylistic transfer,” and “frame extension,” I prompt the model to put the pieces in LANGUAGE in conversation with their situation within the LLM. Essays on “free-association” in the magazine shift toward meditations on statistical diction; manifestos on art and language bend to proclaim a poetics of deep learning algorithms; reviews of recent poetry books update to internet publications at scale; pattern poems enfold computational pattern recognition—among a wide range of specific effects that transform the original set of writings within the format of the magazine. As varied as the poetics featured in the magazine, each intervention learns from LANGUAGE, with ongoing bibliographic attention to the material and political construction of language models.
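To give a concrete sense of one such tactic, here is a minimal sketch of a “stylistic transfer” prompt using the OpenAI Python client; the model name, prompt wording, and placeholder passage are illustrative assumptions rather than the project’s actual prompts:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    source_text = "..."  # a passage from the first issue of LANGUAGE goes here

    # "Stylistic transfer": hold the original's style and length constant while
    # shifting its subject toward the model's own statistical poetics.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's passage in its own style, register, and "
                    "approximate word count, but shift its subject from free "
                    "association to the statistical diction of a large "
                    "language model."
                ),
            },
            {"role": "user", "content": source_text},
        ],
    )
    print(response.choices[0].message.content)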
Figure 2i.1. L≠A≠N≠G≠U≠A≠G≠E facsimile capture, a bootleg copy of the first issue of L=A=N=G=U=A=G=E magazine generated in 2024 with the aid of a range of large language models.
Figure Description
The image shows a scanned page from the L≠A≠N≠G≠U≠A≠G≠E project. The page is identical to the first issue of L=A=N=G=U=A=G=E with a notable change to the date (May 2023) and the introduction of the not-equal sign (≠) in place of the original equal sign (=). The layout is minimalist, set in Selectric Courier, with the magazine’s name at the top, followed by the date and the centered title “After Eigner.” Below is the title of the piece, “Approaching networks Some Calculus Of Everyday Life How figure it Algorithms,” and the body of the text:
No really perfect algorithm, anyway among some thousands or many of distinctive or distinguishable search results (while according to your capacity some minutes, days or hours - 2, 4 or 6 people, say, are company rather than crowds), and for instance, you can try too hard or too little. But beyond the beginning or other times and situations of scarcity, with data (words, images) more and more dense around you, closer at hand, easier and easier becomes generation, remixing, increasingly spontaneous. And when I got willing enough to stop anywhere, though for years fairly in mind had been the idea and aim of long as possible works about like the desire to optimize or have a good (various?) algorithm never end, then like scrolling down a feed noticing things a poem would extend itself.
Writing through LANGUAGE in this way, like Benjamin Friedlander’s book of “applied poetry,” Simulcast: Four Experiments in Criticism, I create “a text whose origins have exaggerated legibility.”9 Keeping the layout, word count, and design of the original magazine, L≠A≠N≠G≠U≠A≠G≠E offers a side-stapled facsimile of dubious authenticity, printed on aged 8½-by-14-inch sheets of legal paper. Like Simulcast, my simulation of the stylistics and design of LANGUAGE enacts a procedure for generative output within a tightly constrained set of technologies for intervention, while allowing the flexibility of content-specific improvisation. Elsewhere, I have described this type of publication as an “extreme edition,” a process of versioning a text that dramatically alters its origins via discrete editorial processes. Examples are numerous, but for one apt analog, consider Steve Kado’s October Jr., described by Printed Matter as “a faithful ¾ scale model of October 12 (Spring 1980). All contents, images, advertisements and articles are precisely rendered, just a little smaller.”10 October magazine, like LANGUAGE, plays an outsized role at the intersection of “high theory” and aesthetics in the discourse of the late 1970s and early 1980s. By precisely replicating the magazine at ¾ scale, Kado makes its claims just that much smaller, deploying the model to reflect on its source. In the same way, L≠A≠N≠G≠U≠A≠G≠E preserves the material layout and stylistic register of its source in order to interrogate how it might model unseen vectors within AI-generated Language Poetics. It is also an invitation to the reader to prompt the model differently, rendering new vectors for periodical reading at the human–computer interface.
See: L≠A≠N≠G≠U≠A≠G≠E (2025).