
Chapter 1

Textwarez

The Executable Files of Textz.com

we are not the dot in dot-com, neither are we the minus in e-book. the future of online publishing sits right next to your computer: it’s a $50 scanner and a $50 printer, both connected to the internet. we are the & in copy & paste, and plain ascii is still the format of our choice.

—A. S. Ambulanzen (Sebastian Lütgert et al.), “napster was only the beginning,” 2001

Textz.com hosted 831 plain text files at the time the site was abandoned in early 2004.1 With releases announced periodically on the influential <nettime> listserv, Textz grew at a rate of roughly 250 files per year. Ranging from experimental poetry and cyberpunk fiction to media theory and political tracts, the Textz database presents a diverse, yet highly curated, selection of text files gathered by a collective and edited by German artist-activist-programmer Sebastian Lütgert from 2000 to 2004. Its fiercely copyleft position garnered the site international attention following a lawsuit and arrest warrant filed by the copyright holder of a substantial portion of Theodor Adorno’s works. Freely distributing these works, Textz combated the economic structures of the about-to-burst dot-com bubble while railing against the era’s tightening intellectual property regimes.

Ambulanzen, the anonymous collective speaking for Textz in the epigraph above, offers a précis of the logic governing the site’s operating procedures. They declare that digital texts are not electronic books, but rather executable binary files dispersed as free software or cracked programs. Not only are executable files written in ASCII characters, Textz argues, but text files can be considered executable programs in their own right. In this way, the pirated “textwarez” of Textz presents a radical reevaluation of the ubiquitous ASCII text file. Reconceiving the .txt format as executable code running in the brain of its reader, Textz calls into question the limits of a file type while presenting an unlikely metaphor for computational processes.

From a critical digital humanities perspective, the site offers a compelling portal into online collections and the forms of poetic computation enabled by texts on the internet. The fact that Textz kept obsessive and inventive user logs is fortuitous. This chapter offers the example of Textz as an outlier counterpoint to a range of debates concerning the computational study of digital objects and techniques of literary interpretation formulated within the field.2 Marked by a playful skepticism toward statistical analysis and the neoliberal politics of copyright, the site challenges the field’s defining formulations in league with a range of global debates toward a critical digital humanities on the horizon of the present.3 Without lingering on the terms of these debates, which remain beyond the scope of my study, this chapter instead presents a little database demonstrably resistant to programmatic systems for computational interpretation. As Nan Z. Da reminds us, it is often the reality that the literary corpora subject to computation are, in fact, “usually not so large.”4 Indeed, the little database is eminently computable, even from a smartphone, which alone merits some exploration of the modes of reading that this capacity might enable or limit. Putting scale to the side for a moment, I contend that Textz and the contingent demands of the collections I study throughout necessitate an approach that departs from the prevailing methodologies of computational literary studies, and in particular the statistical analysis of texts.

I make this argument by playing a reading of what Ian Bogost has termed the “system operation” of Textz against a range of “unit operations” functioning in the site itself.5 Unit operations privilege discrete, contingent, interpretive components over the static structures of systematic operations—systems that we might properly align with the processes of standardized encoding protocols writ large. In so doing, unit operations tactically deemphasize totalizing, analytical structures and interpretive closures. Put into practice, my approach follows the feminist media practices initiated by Laine Nooney, a kind of media-archival “speleology” more common in studies of games and interactive fiction, following the twisty little passages of the collection as one might traverse a colossal cave system.6 Aubrey Anable further develops this method of spelunking, “to explore a potentially vast space that can be apprehended only a small section at a time.”7 Traversing the networked components of the Textz collection and related provocations, I conclude the chapter by exploring a series of “textwarez” code-works by Lütgert. Such code-works reimagine the Textz collection through a range of playful conceptual games with file formats, oriented toward a poetics and politics of distribution. These explorations lead toward an expanded conception of media format poetics, presenting an opportunity to consider how a little database long-since erased from the internet might yet yield new potentials for scholarship in today’s networked milieu.

Collections and Contents

To begin, a site like Textz is remarkably difficult to situate in relation to analog forms. Is it a collection, a library, an archive, an anthology? In colloquial terms, this kind of site is often referred to as an “online archive” or “digital library.”8 Neither classification quite works: “online archive” would be technically inaccurate given the absence of prepublication materials; and “digital library” doesn’t quite map onto the distribution mechanics or location characteristics of the database. Publisher, with its root in the act of making public, nearly fits, despite the absence of the imprimatur of a press. What’s clear, however, is that each of these tags imports a specific set of historical, contextual, and operational frames into our understanding of the site.

In 2009, when the question of naming was still somewhat fresh, Kenneth M. Price outlined the stakes of nomenclature in an article on The Walt Whitman Archive entitled, “Edition, Project, Database, Archive, Thematic Research Collection: What’s in a name?” For Price, each of these classifications, as well as Peter Shillingsburg’s proposed “knowledge site,” inevitably fails to account for the specificity and variety of scholarly projects online.9 Instead, Price offers the term arsenal for its “emphasis on [the] workshop since these projects are so often simultaneously products and in process.”10 Moreover, for Price, arsenal holds appeal for its etymological connection to the magazine. However, arsenal, too, is jettisoned in a final footnote to the article. Price writes, “I am less concerned that arsenal catches on than I am that we recognize the fresh features of new work underway and that we are self-conscious about what we want any new term to convey.”11 Indeed, Price’s dilemma usefully demonstrates the pitfalls of developing a classificatory matrix for works trafficking online. Nevertheless, a tentative typology of such projects is necessary to reflect on the way we understand diverse outputs of online collections, from renegade art repositories like Textz to scholarly websites like The Walt Whitman Archive.

Jeremy Braddock treads similar terrain in his introduction to Collecting as Modernist Practice. Braddock emphasizes a public model of the collection that mediates relationships between audience and artwork under the rubric of a “provisional institution.”12 These provisional institutions are privately assembled but publicly exhibited. Retaining the terminological specificity of museums and anthologies, Braddock foregrounds a “collecting aesthetic” tied to the assertion that “a material collection is itself an aesthetic object, even, more pointedly, an authored work.”13 What Braddock insists upon here is that a kind of authorial sensibility guides the assembling of a given collection. Or, as he puts it, “the anthology and the art collection exist not simply for the sake of their individual works; they are also systems with meaning in themselves.”14 Braddock’s conceptualization of the collection might helpfully be ported into a unit for understanding Textz as a provisional institution operating under Lütgert’s stewardship. Following Braddock, this chapter could sketch the collecting aesthetic of Textz while remaining attentive to the particularities of naming a digital collection of texts offered on the internet.15

Or, we might look at the more recent work of Abigail De Kosnik, who approaches these same questions from a decidedly vernacular vantage. Primarily tracking the growth of the fic hub Archive of Our Own (AO3), De Kosnik proposes the concept of a “rogue archive” maintained by the “archival repertoires” that modify the traditional categories of bibliographic terms.16 Just as the roguish repertoires of amateur archivists seek to preserve content the academe might never begin to register, these same “archontic productions” counter staid conceptions of otherwise dynamic processes of digital creation, collection, and preservation.17 In this regard, we might consider the little database to be a rogue archive: built and maintained by archival repertoires as a labor of love and subject to the vicissitudes of both academic and popular sea changes in the archival arts. Writing on the long “archival turn,” Kate Eichhorn notes how these practices are bound to ordinary digital use, speculating that “the timing of the archival turn is primarily related to the digital turn (a technological and epistemological shift that brought the concept and experience of archives into our everyday lives).”18

In anticipation of the repertoires of rogue archiving, Craig Saper outlines the “sociopoetics” that structured the mail art and magazine assemblages of the last century. Saper describes sociopoetics as the “inherently social process of constructing texts . . . expanded to the point that individual pages or poems mean less than the distribution and compilation machinery or social apparatuses.”19 Like the poetics of repertoire, sociopoetics designates a field wherein the networked circulation of aesthetic products assumes a privileged status that exceeds the works themselves. In dialogue with the “net.art” practices of the late 1990s, Textz explores new modes of publication and dispersion as a sociopoetic practice. Not unlike the “libraries” that populate user hard drives, the Textz collection also presents a case study for reading heterogeneous sets of user-curated digital files, though published for wider audiences online.

Between archival repertoires, sociopoetics, and collecting aesthetics, we might begin to chart the instrumental agency of Textz as a collection, a rogue archive, and a provisional institution. Or, as Braddock puts it, paraphrasing Walter Benjamin, “as a mode of practice as well as an aesthetic (or historical) form.”20 Importantly, Textz represents a provisional institution developed in lockstep with the emergence of an internet activist community. It indexes a fascination with situationist politics alongside speculative narratives, media theory, and a wide range of titles in vogue around the turn of the millennium. Gathering these constituent parts together under the term “little database” is an attempt to retain a sociopoetic emphasis on network distribution and periodicity in the lineage of the little magazine while refreshing the technical apparatus to match the provisional repertoires of internet collections.

Updating the preceding media historical analogies, we might just as easily classify the site as a “pirate network” or “shadow library.” Indeed, as a profusion of recent studies of internet piracy has shown, “napster was only the beginning.”21 Though intellectual property and online piracy feature prominently in the site’s legacy, an extended discussion of them lies beyond the scope of this chapter. My intent is not to negate the importance of this line of inquiry. The literature on copyright is extensive and reveals many of the most pressing issues related to cultural objects trafficking online, including their very right to exist.22 Readers interested in this subject might most usefully turn to the excellent collection assembled by Joe Karaganis, Shadow Libraries: Access to Knowledge in Global Higher Education, which again and again highlights the asymmetrical and deleterious effects of IP policing on global knowledge communities.23

Rather than pursue well-trafficked avenues on intellectual property and fair use, I have elected to attend to qualities of the site that remain underrepresented in studies of little databases, whatever one might call them: the formats that undergird this communications circuit; the transformations that the online collection introduces into the works it hosts; and the processes of meaning-making that emerge among these manifold relations. While issues of copyright necessarily arise in any discussion of a site like Textz, the specific properties of the database’s objects and the interpretive possibilities of the literary works it hosts are rarely explored. From the outset, it should be clearly stated that Textz is nothing if not wanton in its disregard of copyright law. Distilled to its core conceptual premise, the site is predicated on the illegal transmission of intellectual property. The mere fact of illegality, however, often obscures the opportunity to investigate the significations produced by these influential, if shadowy, endeavors. Given that “napster was only the beginning,” Textz presciently anticipates the growth of cultural piracy on the internet, seen from the present as a missed opportunity in light of streaming ubiquity and platform hegemony. Nevertheless, it marks an anticipatory aesthetic of distribution that endures through the file lockers, peer-to-peer networks, bit-torrent platforms, and emerging “Web 3.0” decentralized internet currents that define file-sharing today.

Textz is not Project Gutenberg

Textz formed the theoretical core of Lütgert’s expansive web-ring of digital works known as Project GNUtenberg. Of course, the GNU in the name plays on the free software GNU Project, itself a recursive acronym for “GNU’s Not Unix.” The “GNU” in the title ciphers an approach to historical texts that is at once computational, critical, and communitarian, set forth at a moment of transformative technological change on par with the introduction of the printing press. At the same time, the GNUtenberg tag codes Textz as the shadowy double to Michael Hart’s public-domain collection, Project Gutenberg, the first text collection of its kind.24 By contrast, the site defines itself through a rejection of these digital publishing norms in its founding manifesto, contending that “this is not project gutenberg. it is neither about constituting a canonical body of historical texts, . . . nor is it about htmlifying freely available books into unreadable sub-chapterized hyper-chunks. texts relate to texts by other means than a href. just go to your local bookstore and find out yourself. the net is not a rhizome, and a digital library should not be an interactive nirvana.”25 With some historical distance, however, the projects might be seen to share more in common with each other than the paywalled digital libraries and “interactive nirvanas” of today’s internet.

Like Hart’s enterprise, Textz offers “plain vanilla texts,” ready for computational processing and easily reformatted for any variety of reading systems. Unicode text remains the functional backbone of digital text and of code itself, registered beneath the page of searchable PDFs, books, subtitles, search algorithms, large language models, and other text-based components of the media landscape. Beyond a shared format, Textz’s copyleft politics intensify Hart’s ideology of accessibility. In its similarity to Gutenberg, GNUtenberg is best able to articulate its politics as a differential gap. Where Hart’s historic first e-Text encoded “The Declaration of Independence” in 1971, the first file Textz distributed was Gilles Deleuze’s short article “Postscript on the Societies of Control.” The political valences of the two foundational releases could not be more disparate, situating these projects at radically opposed ends of the spectrum of online distribution practices.

Pitting Textz against Project Gutenberg, Lütgert against Hart, and Deleuze against the “Declaration of Independence” is precisely in line with the kind of playful statistics that Textz gathered in a series of “statz” pages. Keeping detailed logs of users and patterns of use was once a core component of internet publishing practices. The excitement of immediate publication was paired with the power to document a global network of IP addresses and access points. In Figure 1.1, “text patterns, trends, and surprises according to textz.com” are mapped as a series of ongoing serial contests between writers, countries, operating systems, and internet browsers. Similarly, Figure 1.2 presents an example of monthly updates published on the site that track these same categories as though they were stocks, rising and falling through gains and declines in public use.

A comparison of usage trends, displaying data for multiple pairings on the site rendered in blue and teal ASCII characters.

Figure 1.1. ASCII data visualization tracking “text patterns, trends, and surprises according to textz.com” between November 2001 and October 2002. Captured via Internet Archive.

Figure Description

This graphical representation features multiple comparisons between usage statistics on the site, pitting competitors against each other. The data spans from November 2001 to October 2002. The comparisons are shown in a grid format with the following pairs:

  1. Adorno vs. Deleuze
  2. Debord vs. Baudrillard
  3. Negri vs. Chomsky
  4. Artaud vs. Bataille
  5. Godard vs. Truffaut
  6. Theweleit vs. Kittler
  7. France vs. Italy
  8. Mac OS vs. Windows XP
  9. Mozilla vs. Netscape 6/7

Each pair has a timeline on the left side, listing months from November 2001 to October 2002. The visual data consists of columns of vertical bars, with blue and teal colors indicating the frequency or pattern of mentions for each name in the pair over the specified time period.

A series of top ten listings from January 2002, displaying top gaining and declining texts, authors, countries, systems, and clients.

Figure 1.2. A data visualization tracking top gaining authors and texts; declining authors and texts; and top countries, systems, and clients for textz.com in January 2002. Captured via Internet Archive.

Figure Description

The visualization presents an analysis of textz.com usage in January 2002, encompassing several “top ten” categories of data including top gaining texts, top declining texts, top gaining authors, top declining authors, top texts, top authors, top countries, top systems, and top clients. The data is displayed in a tabular format with rankings and percentage changes as follows:

Top Gaining Texts
  1. a.s.ambulanzen - deleuze.net not found (+2.05%)
  2. David Lynch - Mulholland Drive (+1.37%)
  3. Nanni Balestrini - Gli invisibili (+0.95%)
  4. Umberto Eco - La Bustina di Minerva (+0.86%)
  5. Umberto Eco - A Rose by Any Other Name (+0.72%)
  6. Stephen King - The Shining (+0.53%)
  7. Emile Zola - Nana (+0.53%)
  8. William S. Burroughs - Junky (+0.51%)
  9. Kathy Acker - The Language of The Body (+0.47%)
  10. Gilles Deleuze - Postskriptum über die Kontrollgesellschaften (+0.40%)
Top Declining Texts
  1. Ray Bradbury - Unterderseaboat Doktor (-0.22%)
  2. William S. Burroughs - The Electronic Revolution (-0.20%)
  3. Free Software Foundation - GNU General Public License (-0.15%)
  4. Diedrich Diederichsen - Die Lizenz zur Nullposition (-0.15%)
  5. Mike Davis - Magischer Urbanismus (-0.15%)
  6. Leo Trotzki - Über den Terror (-0.14%)
  7. Vilém Flusser - The Bag (-0.14%)
  8. Franco Berardi Bifo - Rhizomatisches Denken gegen die kalifornische Ideologie (-0.14%)
  9. Joost Smiers - Geistiges Eigentum ist Diebstahl (-0.13%)
  10. Marcel Proust - Le temps retrouvé (-0.13%)
Top Gaining Authors
  1. a.s.ambulanzen (+2.33%)
  2. Nanni Balestrini (+2.03%)
  3. Umberto Eco (+1.04%)
  4. David Lynch (+0.72%)
  5. Jean Baudrillard (+0.63%)
  6. Michael Hardt (+0.53%)
  7. Emile Zola (+0.51%)
  8. Charles Baudelaire (+0.51%)
  9. William S. Burroughs (+0.50%)
  10. Stephen King (+0.52%)
Top Declining Authors
  1. Franz Kafka (-1.29%)
  2. Matthew Fuller (-0.57%)
  3. Edgar Allan Poe (-0.56%)
  4. Diedrich Diederichsen (-0.15%)
  5. Geert Lovink (-0.15%)
  6. Hans-Christian Dany (-0.45%)
  7. Raoul Vaneigem (-0.14%)
  8. Guy Debord (-0.13%)
  9. Karl Marx (-0.13%)
  10. William Gibson (-0.36%)
Top Texts
  1. a.s.ambulanzen - deleuze.net not found (+2.05%)
  2. Kathy Acker - The Language of The Body (+1.52%)
  3. Douglas Adams - The Hitch Hiker’s Guide to the Galaxy Trilogy (+1.48%)
  4. David Lynch - Mulholland Drive (+1.37%)
  5. Nanni Balestrini - Gli invisibili (+1.25%)
  6. a.s.ambulanzen - Warum wir es nicht auf der Straße tun (+1.16%)
  7. Michael Hardt / Antonio Negri - Empire (+1.12%)
  8. Umberto Eco - La Bustina di Minerva (+0.98%)
  9. a.s.ambulanzen - Inder statt Kinder (+0.90%)
  10. a.s.ambulanzen - Feminists Like Us (+0.90%)
Top Authors
  1. a.s.ambulanzen (+6.29%)
  2. Guy Debord (+3.29%)
  3. Douglas Adams (+3.14%)
  4. Nanni Balestrini (+3.10%)
  5. Adilkno / Agentur Bilwet (+2.65%)
  6. Jean Baudrillard (+2.25%)
  7. Theodor W. Adorno (+2.18%)
  8. Antonio Negri (+2.15%)
  9. Edgar Allan Poe (+2.04%)
  10. Michael Hardt (+1.93%)
Top Countries
  1. Germany (44.22%)
  2. Italy (16.09%)
  3. Austria (9.38%)
  4. France (4.92%)
  5. South Africa (4.66%)
  6. Portugal (4.16%)
  7. United States (4.08%)
  8. Switzerland (3.80%)
  9. Netherlands (2.94%)
  10. Belgium (0.74%)
Top Systems
  1. Windows 98 (41.06%)
  2. Windows 2000 (24.04%)
  3. Windows ME (8.20%)
  4. Mac OS (7.54%)
  5. Windows NT (6.42%)
  6. Windows XP (5.91%)
  7. Windows 95 (4.12%)
  8. Linux (1.14%)
  9. Sun OS (0.05%)
  10. UNIX (0.05%)
Top Clients
  1. Internet Explorer 5 (52.53%)
  2. Internet Explorer 6 (15.38%)
  3. Netscape 4 (8.34%)
  4. Internet Explorer 4 (6.16%)
  5. Netscape 6 (4.00%)
  6. GoliZilla (2.85%)
  7. AOL (1.97%)
  8. Opera (1.62%)
  9. Mozilla (0.10%)
  10. iCab (0.10%)

As though anticipating contemporary trends in statistical analysis within the digital humanities, these “statz” seem to offer much of the content that a digital humanist might hope to see. And, indeed, these charts do offer insight beyond their respective numerical data sets. The selections made in Figure 1.1 map the attention of the site’s editors and users. Of course, among a net art milieu in 2001, Negri is besting Chomsky, Godard overwhelms Truffaut, and Mozilla is winning out over Netscape. More surprising on an intellectual history note, perhaps, is that Adorno continues to gain more readers than Deleuze over this period. In retrospect, we might speculate that Textz always had a particular fascination with and draw toward Adorno on intellectual-property grounds. Ironically, media attention surrounding this copyright battle may have worked to drive up user downloads of the contested Adornian files. In this light, the “gaining” use of Adorno can be seen as a harbinger of the site’s demise in a few years’ time. Despite the conjecture we might make here, it remains difficult to imagine these charts answering questions that studying the site wouldn’t already reveal.

In Figure 1.2, the reader might track the range of texts that appealed to a specific audience at a particular historical moment (many long-since out of fashion). While it’s notable that Franz Kafka was down and David Lynch was up in January of 2002, there is little to offer by way of comment on these changes beyond the raw data of these surface-level statistics. For example, Douglas Adams seems a strange mainstay among Kathy Acker, Michael Hardt, and Guy Debord. One might also note that the editorial collective running the site—writing as A. S. Ambulanzen—tops the charts in both author and text categories. More interesting insights can be drawn from the demographic and interface statistics in Figure 1.2. German users commanded 44.22 percent of the site’s usage, while the United States trailed at seventh place with a mere 4.08 percent. Older versions of Windows and Internet Explorer topped their newer counterparts. Meanwhile, Mac OS and Mozilla captured small percentages of the user base in 2002. Comparing these numbers to stats collected by similar sites over time might lead to a productive historical conclusion concerning international use patterns and interface preferences. Nevertheless, from this vantage, the facts remain merely interesting: nearly trivia, given verifiable historical outcomes. Each figure and every comparative data set is incidental to the collection itself, and to the internet at the time.

Eschewing the efficacy of data analysis, we might instead consider the sociopoetics of these statistical displays. Just as “gains” in the digital humanities have delivered a newly invigorated critical perspective on quantification and the aesthetics of data-mapping, the early internet’s affordances for user tracking and graphical display created a vernacular fascination with visualizing the statistics of use. It was common for early sites to proudly display visitor counts, guest books, and other “widgets” for the quantification of usage, a once-pervasive (and still-available) aesthetic feature of the early internet that has long since fallen out of fashion, excepting nostalgic trends in postinternet art circles. On the one hand, Textz critiques this gesture by offering unlikely comparisons and an excess of information, suggesting a sly mirroring of stock exchange trackers and updated RSS (really simple syndication) news feeds. On the other, it presents these statistics with an ASCII art aesthetic more common to Usenet and BBS (bulletin board system) forums of the late 1970s and early 1980s, which predated the graphical display that has defined the internet since 1993.26 Instead of shaping recognizable figures with ASCII art’s stylistic flair for monospace glyphs, Textz uses a plain text approach to the representation of data. This historical gesture interfaces the Textz collection with an aesthetics of piracy featured in the warez scene (or “The Scene”) of the BBSes, which often featured ASCII art as a tag for the hacker or group that offered cracked software.27 Much more than summarizing the user logs of the site, these visualizations perform a pointed poetic intervention that connects the millennial internet with the secret histories of network aesthetics best left unmentioned and only discovered by interested parties.

With this in mind, we can turn to the definitive visualization of the site. Continuing the approach seen in statz, it is presented as an elaborate work of ASCII art. This multifaceted graph (Figure 1.3) was produced by Lütgert and released on the site in the spring of 2004. In one extraordinary data visualization, simply titled “textz.com/logs,” the user can explore the extensive tracking logs that Textz collected over four years. Condensing the massive volumes of data presented in Figures 1.1 and 1.2 into individual color-coded glyphs, this single page offers an incredible quantity of information about the global use of Textz from January 2000 until December 2003. Every character, color, and position in this grid transmits significant data. Additional data points are encoded within each letterform using embedded links and tooltips, which are revealed by hovering over any character with a cursor.

Color-coded log data from textz.com charting nation-based usage statistics from January 2000 to December 2003.

Figure 1.3. A multicolor data visualization representing four years of logs for visitors to textz.com, from January 2000 until December 2003, organized by month. For print readers, see the e-book or Manifold edition for full-color images throughout. Captured via Internet Archive.

Figure Description

The image is a complex and colorful text log from textz.com, displaying data spanning from January 2000 to December 2003. The log includes various parameters for months, years, and nations rendered in color-coded HTML to display site-wide usage statistics by nation. Each row represents a month and year, with corresponding data represented by letters and color codes.

The log starts with a header that reads: “FOUR YEARS OF FREE INDUSTRIAL STRENGTH MASSIVE PARALLEL PEER TO PEER PERMANENT SCALABLE SYNCHRONOUS UNLIMITED UNRESTRICTED WIRELESS DOWNLOADS OF PIRATED ASCII EBOOKS FROM TEXTZ.COM 1292907 CLIENTS SERVED.”

Below the header, each line starts with a month and year (e.g., Jan2000, Feb2000), followed by a string of letters and symbols color-coded in shades of red, green, blue, yellow, and other colors each representing a national usage statistic.

The overall presentation is dense and visually striking, with the use of color playing a key role in differentiating the information.

The following two figures continue this chart to its conclusion. To further explore the visualization, see captures of the page by the Wayback Machine.

A brief excursus to parse this graph yields many fruitful insights. Rendering white text in all caps against a black background, the graph opens with the following text on a single line (with spaces added for ease of reading):

FOUR YEARS OF FREE INDUSTRIAL STRENGTH MASSIVE PARALLEL PEER TO PEER PERMANENT SCALABLE SYNCHRONOUS UNLIMITED UNRESTRICTED WIRELESS DOWNLOADS OF PIRATED ASCII EBOOKS FROM TEXTZ DOT COM 1292907 CLIENTS SERVED

The rest of the graph outlines the temporal and geographic dimensions of the 1.29 million “clients served.” In the first section, each line prints a new month of logs from 2000 to 2004. These lines can be parsed as follows: first, the month and year of the line; second, single characters standing in for each of the top twenty countries in assorted colors ordered by ranking; third, the rank that month had in users over the four-year period displayed in white; fourth, the number of users that month represented on a grayscale spectrum, with lighter values for greater numbers. Finally, the remaining 144 characters display proportional values of the month’s use log, distributed by color-coded characters representing the top twenty countries for each month. Additionally, a single white character is plotted once per line in the final 144 characters to chart overall volume across the four-year span. The overwhelming density of the graph continues in the following section, which indexes proportional use volumes for each of the 199 countries that accessed the site (Figure 1.4). Even after repeated viewing, it is difficult to comprehend the rationale guiding this chromatic excess: its maximalist approach to data aesthetics yields informatic wonder.
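To make the line format just described more concrete, the following Python sketch renders a single month of hypothetical log data in roughly that manner: a month label, one colored glyph per ranked country, and a run of characters distributed in proportion to each country’s share of traffic. The palette, glyph choices, and sample counts are assumptions for illustration only; the actual page was rendered as color-coded HTML with embedded tooltips.

```python
# A minimal sketch of the line format described above. Colors, glyphs, and the
# sample counts below are hypothetical; this is not a reconstruction of the
# actual textz.com/logs page.

ANSI_COLORS = [31, 32, 33, 34, 35, 36, 91, 92, 93, 94]  # simple terminal palette

def render_month_line(label, country_counts, width=144):
    """Return one ANSI-colored line for a month of country -> hits data."""
    ranked = sorted(country_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(country_counts.values()) or 1
    parts = [label.ljust(8)]
    # Header glyphs: one colored initial per country, ordered by rank.
    for i, (country, _) in enumerate(ranked):
        color = ANSI_COLORS[i % len(ANSI_COLORS)]
        parts.append(f"\033[{color}m{country[0]}\033[0m")
    parts.append(" ")
    # Body: `width` characters split proportionally among the ranked countries.
    for i, (country, hits) in enumerate(ranked):
        color = ANSI_COLORS[i % len(ANSI_COLORS)]
        span = round(width * hits / total)
        parts.append(f"\033[{color}m{country[0].lower() * span}\033[0m")
    return "".join(parts)

if __name__ == "__main__":
    sample = {"Germany": 5200, "Italy": 1900, "Austria": 1100, "France": 600}
    print(render_month_line("Jan2002", sample))
```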

Color-coded log data from textz.com charting nation-based usage statistics from January 2000 to December 2003.

Figure 1.4. A multicolor data visualization representing four years of logs for visitors to textz.com from January 2000 until December 2003, organized by country. Captured via Internet Archive.

Figure Description

The image is a similarly complex and colorful text log from textz.com, displaying data spanning from January 2000 to December 2003. The log includes various parameters for months, years, and nations rendered in color-coded HTML to display site-wide usage statistics by nation. Each row represents a nation’s usage statistics, with corresponding data represented by color-coded numbers and letters.

Each line starts with ranking (from 001 to 273, indicating all nations potentially tracked by the site), followed by continent, region, and nation information. This information is followed by overall usage statistics and a multicolored set of symbols for usage values by month over time. Hovering over the HTML of the grid presents precise usage statistics for each letterform using tooltips.

The following figure continues this chart to its conclusion, featuring registered nations with zero usage of the site demarcated with grey “X”s, which form the iconic shape of an upturned shopping cart rendered in grayscale ASCII art.

Scrolling through this dizzying array of statistics, the user finds the grayscale conclusion to the graph before a final line that repeats the phrase “NO COPYRIGHT 2004 TEXTZ DOT COM NO RIGHTS RESERVED.” In this concluding set, fifty-three countries appear with zero use, including, for example, both Antarctica and Afghanistan. Each of these countries’ lines is accompanied by grayscale X’s that together form the emblematic Textz icon: an upturned shopping cart (Figure 1.5). No rights are reserved and none of these countries are “served.” Dissolved nations (Czechoslovakia), military territories (Indian Ocean Territory), and in the final position, ambiguous nonplaces (“Neutral Zone”) highlight the absurdity and excess of the entire metric exercise. The data aesthetics of tracking and logging are themselves overturned in the gesture. The form of the chart is retained in the service of an extended display of nonuse, which is in turn aestheticized as an elaborate work of conceptual ASCII art. The display of these statistics not only indexes the site’s visitor data itself, but also points back to a moment when such data was habitually rendered visible online. By amplifying these tracking systems, the site speaks back to these surveillance systems in a kind of “dark sousveillance.”28 That is to say, there are manifold political stakes inherent to Textz’s playful statistics, which bring to the surface the conditions of an internet not yet subsumed by the forms of infrastructural opacity, proprietary algorithmic logic, and systemic obfuscation common online today. Just as Textz’s charts offer up numerical data for statistical analysis, they also foreground the limitations and absurdities of such an analytical exercise. If this is the definitive statement that Textz makes on the statistical uses of its own collection, how might we compute the little database otherwise?

Color-coded log data from textz.com charting nation-based usage statistics from January 2000 to December 2003.

Figure 1.5. A multicolor data visualization representing four years of logs for visitors to textz.com from January 2000 until December 2003, organized by country, ending in an upturned shopping cart icon. Captured via Internet Archive.

Figure Description

The image is a similarly complex and colorful text log from textz.com, displaying data spanning from January 2000 to December 2003. The log includes various parameters for months, years, and nations rendered in color-coded HTML to display site-wide usage statistics by nation. Each row represents a nation’s usage statistics, with corresponding data represented by color-coded numbers and letters.

Each line starts with ranking (from 001 to 273, indicating all nations potentially tracked by the site), followed by continent, region, and nation information. This information is followed by overall usage statistics and a multicolored set of symbols for usage values by month over time. Hovering over the HTML of the grid presents precise usage statistics for each letterform using tooltips.

The figure continues the chart from the previous figure to its conclusion, featuring registered nations with zero usage of the site demarcated with grey “X”s, which form the iconic shape of an upturned shopping cart rendered in grayscale ASCII art.

One approach is to turn from use to content: where Textz quantified the way its collection was used, a scholarly approach might attempt to chart the contents of the collection itself. This, at least, was a driving question when I began my research on Textz. A plain text archive with a full collection ready to process seemed a generative entry point into computational modes of literary scholarship. However, a number of problems with this approach immediately suggested themselves. A quick scroll through the names of authors reveals a critical failure so obvious it doesn’t require statistical confirmation: the collection is overwhelmingly white and male.29 Aside from the obviousness of this core fact, the collection is remarkably heterogeneous in terms of genre and language. As a whole, it remains strange. The contents are too generically diverse to provide insight through topic modeling tools. Network analysis and other “cultural analytics” techniques similarly fail to furnish a new lens through which to view the collection. The greatest challenge to computation is the multilingual nature of the texts. Just over half of the works are presented in English, nearly a third are in German, and the remaining portions are split among Italian, French, and Spanish texts. Even if, for the sake of argument, the texts were translated (introducing a range of new problems), the most comprehensive analytic tools would prove incapable of producing meaningful patterns in the strange assortment that comprises the Textz corpus. Frequency charts, topic models, and network diagrams of collections like Textz manage to be both obvious and obfuscatory. An attempt to import the collection into machine learning NLP (natural language processing) models would effectively erase and flatten the most distinctive features of the corpus, regardless of the context window.
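The kind of first pass alluded to above is easy enough to run, which is precisely the point: it produces tallies rather than readings. The sketch below assumes a hypothetical local mirror of the collection in a directory named textz_mirror/ and substitutes a crude stopword heuristic for real language identification; it simply counts files per guessed language.

```python
# A minimal sketch of a surface-level census of a plain text corpus. The
# directory path is hypothetical, and the stopword heuristic is a deliberately
# crude stand-in for proper language identification.

from collections import Counter
from pathlib import Path

STOPWORDS = {
    "en": {"the", "and", "of", "to", "in"},
    "de": {"der", "die", "und", "das", "nicht"},
    "fr": {"le", "la", "et", "les", "des"},
    "it": {"di", "che", "il", "per", "non"},
    "es": {"que", "el", "los", "las", "una"},
}

def guess_language(text):
    """Score each language by stopword frequency and return the best guess."""
    words = Counter(text.lower().split())
    scores = {lang: sum(words[w] for w in sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

def survey(corpus_dir):
    """Count the files in corpus_dir by guessed language."""
    counts = Counter()
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="latin-1", errors="replace")
        counts[guess_language(text)] += 1
    return counts

if __name__ == "__main__":
    print(survey("textz_mirror/"))  # hypothetical local mirror of the collection
```

A census of this kind can confirm the rough split between English, German, and the Romance languages noted above, but it says nothing about how the collection means.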

These challenges are not unique to Textz. Scholars working with diverse datasets collected from multilingual or transdisciplinary contexts are routinely advised to focus their research on more readily computable collections, whether in size or in self-similarity. Preparing multilingual parallel corpora for Textz would be as futile as modeling the topics or affects of literary documents ranging from poems to speculative fiction to political tracts. Attempting to decipher semantic patterns through Text Encoding Initiative (TEI) markup for this type of collection approaches a ’pataphysical level of absurdity. Beyond certain statements about the collector’s preferences or the circulating availability of specific genres of texts, there is little to be done with the contents of files hosted by the site. The little database, like my own collection of haphazard files housed within the device receiving these letter-strokes, remains resolutely idiosyncratic.

Graph showing the gradual growth of the Textz collection from May 9, 2000, to February 20, 2004.

Figure 1.6. A chart tracking the growth of the Textz collection by number of files between the dates of May 9, 2000, and February 20, 2004.

Figure Description

The image is a line graph depicting the growth of the Textz collection over time, from May 9, 2000, to February 20, 2004. The X-axis represents the timeline, while the Y-axis represents the number of text files.

Key Elements
  • Title: “Textz Collection Growth (5.9.2000–2.20.2004)”
  • X-Axis: Dates spanning from May 9, 2000, to February 20, 2004, with specific markers at the start of each half year (1/1/2001, 7/1/2001, 1/1/2002, 7/1/2002, 1/1/2003, 7/1/2003, 1/1/2004).
  • Y-Axis: Number of text files, with markers at 250, 500, and 750.
  • Graph Line: A black line showing the cumulative growth of the number of text files in the collection, filled in with a gray shading underneath. The line shows a general upward trend with some plateaus and more rapid increases at various points in time.

The graph illustrates a steady increase in the number of text files in the collection, highlighting significant periods of growth and moments where the increase rate slows down, providing a visual representation of the expansion of the Textz archive over nearly four years.

Scatter plot showing the publication distribution of Textz from July 1, 2000, to January 1, 2004.

Figure 1.7. A plot tracking the publication distribution of files released by Textz between the dates of May 9, 2000, and February 20, 2004.

Figure Description

The image is a scatter plot titled “Publication Distribution,” depicting the distribution of text publications over time from July 1, 2000, to January 1, 2004.

Key Elements
  • Title: “Publication Distribution”
  • X-Axis: Dates ranging from 7/1/2000 to 1/1/2004, with specific markers at half-year intervals (7/1/2000, 1/1/2001, 7/1/2001, 1/1/2002, 7/1/2002, 1/1/2003, 7/1/2003, 1/1/2004).
  • Y-Axis: A random distribution of points to demonstrate clustered uploads.
  • Data Points: Black dots scattered across the plot, representing individual texts uploaded to the site.

The scatter plot is divided into vertical sections by the date markers, with each section displaying a cluster of data points. The density and distribution of these points vary over time, showing periods of higher and lower publication activity. Some sections have more concentrated clusters, indicating bursts of publication, while others have sparser distributions, suggesting quieter periods. The overall visualization provides an overview of how the publication activity fluctuated over the given timeframe.

Given these challenges, one potential solution involved tracking the release patterns of all files featured on the site. Figures 1.6 and 1.7 respectively chart the growth and periodicity of Textz releases. In Figure 1.7, each scattered dot represents the digital publication of a text file. As this figure demonstrates, the files are released in bursts around certain dates, with scattered releases occurring in between. Activity skews toward the founding of the site, with the highest concentration of releases in the winter of 2000–2001. Before the site goes on hiatus, there is a large release of files in early 2003. Finally, the remaining texts are released in May of 2004. This narrative is enriched by the parallel chart of the site’s growth in Figure 1.6, which renders these bursts as steep cliffs of productivity surrounded by plateaus of inactivity. Anyone who has worked on a blog, captured citations to Zotero, or attempted to clear an email inbox might quickly recognize these bursts in productivity as the familiar temporality of working amidst endless data streams.
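Charts of this kind require nothing more than a list of release dates. The sketch below uses placeholder dates rather than the actual Textz release log; it plots a cumulative growth curve in the manner of Figure 1.6 and a jittered scatter of individual releases in the manner of Figure 1.7.

```python
# A minimal sketch of how charts like Figures 1.6 and 1.7 can be drawn from a
# bare list of release dates. The dates generated below are placeholders, not
# the actual Textz release log.

import random
from datetime import date, timedelta

import matplotlib.pyplot as plt

random.seed(0)
# Placeholder release dates: bursts of uploads separated by quieter stretches.
releases = sorted(
    date(2000, 5, 9) + timedelta(days=90 * burst + random.choice([0, 1, 2, 30, 60]))
    for burst in range(15)
    for _ in range(random.randint(5, 60))
)

fig, (growth, scatter) = plt.subplots(2, 1, figsize=(8, 6), sharex=True)

# Cumulative growth: each release increments the running file count.
growth.plot(releases, range(1, len(releases) + 1), color="black")
growth.fill_between(releases, range(1, len(releases) + 1), color="0.85")
growth.set_ylabel("text files")

# Publication distribution: one vertically jittered dot per released file.
scatter.scatter(releases, [random.random() for _ in releases], s=5, color="black")
scatter.set_yticks([])
scatter.set_ylabel("individual releases")

plt.tight_layout()
plt.show()
```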

But these aggregate numbers fail to account for the most important dates in the Textz timeline. Pairing these graphs with textz.com/logs, one wonders why June and October of 2002 boast such exceptional user numbers. Reviewing the contents of the site, it’s clear that the release of Max Horkheimer and Theodor Adorno’s Dialektik der Aufklärung (published in English as Dialectic of Enlightenment) on April 30, 2002, resulted in a critical mass of users following widespread media coverage—leading to a lawsuit in June and an official response by Textz in October. In a similar vein, there are plausibly hundreds of reasons for each spike in use. A broad spectrum of speculative interpretations might address how the periodicity of the collection interfaces with social contexts, external publications, intellectual trends, and so forth. However, these speculative metrics would surely fail Textz as squarely as the user logs. Following Lütgert’s suggestion, I propose that the statistical cart ought, in this instance, to be overturned. Central to the site’s aesthetics, a rejection of computational analysis serves as both an invitation to examine its contents more closely and a method to evaluate the upturned cart in its own right. In what follows, we’ll move from the computation of content to the technical effects of the container. In this way, I consider how a contingent media poetics might articulate the Textz collection otherwise.

Content to Container

From the webmaster motto “content is king” (as relevant now as it was in 2001), Textz presents the dis-contents of the dethroned, cracked, and stolen text file.30 They write: “They say there was a time when content was king, but we have seen his head rolling. our week beats their year. ever since we have been moving from content to discontent, collecting scripts and viruses, writing programs and bots, dealing with textz as warez, as executables—something that is able to change your life.”31 As an exaggerated plural derivative of software, “warez” emerged on BBSes as slang for a pirated or “cracked” commercial program distributed across illicit file-sharing channels. The act of distribution itself seems to “beat” the content as such, assuming a privileged status that eclipses the digital artifact being distributed. While this analogy certainly plays out in the guerilla scanning and dispersion activity of Textz, the re-rendering of its contents as executable software programs is a still more radical gesture. Beyond the politics of copyright, Textz presents a novel interface between digital formats and the written word. Alongside viruses, we find “scripts.” Programs and bots are, of course, “written.” And the “contents” of each text are transformed into pirated software: “textwarez.” To understand the implications of these claims, we might start at the feudal roots of content: “From medieval Latin contentum (plural contenta ‘things contained’).”32 As Jonathan Sterne might put it, following Lewis Mumford, Textz transforms through the introduction of a significantly different “container technology.”33 What Textz introduces to these historical texts is not only an illicit new venue and distribution system, but also a markedly new format. Not just an ASCII file, but a text-based software program that operates on and executes within a human operating system.

This bold claim set forth in the Textz manifesto raises a series of questions. Is it possible to take its provocation seriously? What does it mean to present works by Kathy Acker, Guy Debord, William Gibson, or Theodor Adorno as “executable” code? What changes to interpretation does this framing mechanism introduce? More importantly, what modifications might we chart in the transcoding from print codex to plain text file? To offer a response to these queries, we’ll begin by examining the protocological structure of ASCII itself. From there, we can consider possible computational actions enabled by a notion of textwarez. These actions will be discussed in relation to both the Textz collection and a series of works that Lütgert performs as textwarez. Finally, these inquiries will lead us to reconsider the relation of container technologies and operational software to the work of literary criticism after the digital turn. Beyond the computational metrics of textual analysis, Textz suggests a transformation that is more far-reaching in its recoding of each digital text file. Put differently, the question of how we might use computational tools to understand literature is less urgent here than understanding how textwarez might transform literary sources in the computation of each user.

“Changing meaning with each new medium, text is a truly chameleonic word,” writes Adriaan van der Weel in Changing Our Textual Minds.34 As medial formations continue to shift, our concept of the text must be resituated within the operations of each new media system. Extending from the practical and theoretical ambiguity that Stanley Fish once questioned, digital media have injected a complex of new queries into the task of defining text as a conceptual category.35 Dennis Tenen signals these new challenges to the task of definition: “Digital texts form a live lattice, a multi-dimensional grid, that connects a letter’s tactile response at one’s fingertips to its optic and electromagnetic traces. . . . It is impossible to give the entire structure over.”36 More directly: “Ideas about form, content, style, letter, and word—change profoundly as texts shift their confines from paper to pixel.”37 Attending to the poetics of such changes remains an open query throughout this chapter: tracking how the ideas change alongside revised material situations.

Working from the other end of the spectrum, it may be useful to begin instead by sketching the definitional boundaries of media. Van der Weel’s model provides a succinct entry point, summarizing “medium” as “a structure consisting of a technological tool with its (explicit) technical protocols and any implicit social protocols with the function to communicate information.”38 Lisa Gitelman provides a slightly more expansive definition—and, as Craig Dworkin notes, an important pluralized formulation—in Always Already New: Media, History, and the Data of Culture. Gitelman writes: “Media are socially realized structures of communication, where structures include both technological forms and their associated protocols, and where communication is a cultural practice, a ritualized collocation of different people on the same mental map, sharing or engaged with popular ontologies of representation.”39 Both of these formulations are inflected by the constant flux of ever-shifting medial forms.

Reinvented as technological and social protocols within ritualized cultural practices, media are reshaped along the contours of a constantly transforming terrain. Naturally, this problem is exacerbated by the acceleration of digital media forms: as new computational platforms and distinct internet cultures emerge along the same exponential rates that continue to amass dead media and defunct sites, any study of digital media’s role in communication technologies is marked by an incredibly short life expectancy. Wendy Hui Kyong Chun theorizes “the material transience of discrete information and the internet” in a foundational essay that surfaces the complications of “digital media’s archival promise.”40 As Chun writes, degeneration “belies the promise of digital computers as permanent memory machines” at the same time that digital media “depends on a degeneration actively denied and repressed.”41 Along those same lines, Textz and the little database broadly construed are subject to the fluctuations and inconstancies of digital objects’ enduring ephemerality.

While technical and social protocols continue to coevolve with new media releases, the infrastructural layer of standards defining the file format remains relatively stable. The infrastructural history of binary text formats, for example, reaches into the development of telegraphy and the five-bit system devised by engineer Émile Baudot in 1874.42 This prominent standard (International Telegraph Alphabet No. 1), along with a few variants like the six-bit IBM BCD punched-card code, determined the protocol for teletype and related technologies for nearly a hundred years. Close to a century later, an updated version of this same protocol, known as the American Standard Code for Information Interchange (ASCII) format, was proposed in 1963 and approved as a standard in 1968. This same ASCII format has remained the relatively stable core of textual transmission in media systems, from the military origins of ARPANET to the ubiquitous presence of digital text today. ASCII character-encoding protocols still undergird the text file, despite the incessant waves of new media technologies that deploy the format and the various expansions that have been introduced by the larger character sets of Unicode. As the Textz manifesto has it: “electronic gadgets [are] dead media on their very release day. forget about your new kafka dvd. i already got it via sms.”43 Short Message Service (SMS), of course, is also supported by the protocol built upon backward compatibility with ASCII. To this day, the textual practices of SMS, or in the gerund, texting, still return us to the 7-bit ASCII character set standardized in 1968.

In the purest form of ASCII, the character set is anything but neutral. The text is rigorously constrained by its letterforms: 95 printable characters in strict 7-bit ASCII; 191 in Extended ASCII, or the various ISO-8859 standards. In terms of literary stylistics, the problems brought about by the limitations of this set are legion, as even a quick perusal of Project Gutenberg will indicate. All passages previously formatted for italics, underline, or boldface are presented in ALL CAPS, a feature quickly disappearing amidst formats more adaptable to nuanced stylistic markup. Footnotes, pages, and other basic properties of textual formatting are either rendered impossible or creatively sidestepped. The attendant difficulties of negotiating these formatting dilemmas are demonstrated in the Textz database. For example, a multilingual database like Textz falls prey to what Daniel Pargman and Jacob Palme have termed “ASCII Imperialism.”44 This includes everything from the dollar sign encoded in the original 7-bit ASCII, whose use is obviously restricted to the United States despite its pervasive presence in code, to the absence of innumerable multilingual glyphs from a wide range of language systems beyond the United States. Because Textz predates the widespread adoption of Unicode on the web (UTF-8 became the dominant encoding only around 2007), myriad glitches occur as my U.S. browser or text editor tries to parse ISO 8859-1 files as ASCII or as UTF-8, whose base range is structured around ASCII. Vast swaths of French and German texts are thus rendered all but unreadable without the aid of systematic re-encoding.
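A minimal sketch can make the glitch concrete: a German phrase written out as ISO-8859-1 bytes, as a pre-Unicode site like Textz would have served it, reads cleanly only when decoded with the same standard; decoded as UTF-8 or as strict ASCII, the umlaut degrades into a replacement glyph or drops out altogether. The phrase is an illustration, not a file from the collection.

```python
# A minimal sketch of the re-encoding glitches described above. The phrase is
# an illustration, not a file from the Textz collection.

phrase = "Dialektik der Aufklärung"

latin1_bytes = phrase.encode("latin-1")  # 'ä' becomes the single byte 0xE4
print(latin1_bytes.decode("latin-1"))                  # round-trips cleanly
print(latin1_bytes.decode("utf-8", errors="replace"))  # umlaut becomes a replacement glyph
print(latin1_bytes.decode("ascii", errors="replace"))  # 7-bit ASCII cannot represent it at all
```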

Tenen amplifies the politics of encoding in his definitive study of the format, Plain Text: A Poetics of Computation, by “exposing the technological bias” of text within the poetics of “plain” text’s technical and literary formulations.45 Most mistakes we find in these works are not rooted in material limitations, but are rather the products of standards shaping our use, beyond the geopolitical and linguistic limitations of encoding in their moment of distribution. In this way, Tenen argues, plain text is both “a file format and a frame of mind” that requires a “computational poetics [that] breaks textuality down into its minute constituent components.”46 In league with breaking, Sterne writes, “infrastructures tend to disappear for observers, except when they break down.”47 Working with the Textz collection today, the infrastructural politics of encoding systems from the early internet most clearly reveal themselves not through analytics, but rather through their most minute components: the poetics of every glitch and error.48

Text to Textwarez

In the strictest sense, the neologism “textwarez” is purely a metaphorical invention. The .txt file format is explicitly used to demarcate unparsed textual data from the executable file, defined generally as an operable program that causes a computer to perform tasks according to encoded instructions. Put differently, the standards and protocols built into an executable software program are needed to parse the data encoded in the text file. Despite the fact that, from an operational standpoint, the same glyphs make up both .exe and .txt files, the difference could not be more important with respect to use. Incorporating human readers into the technical schematic begins to blur these sharp delineations. Jerome McGann offers a series of provocative arguments for the executable nature of text in Radiant Textuality: Literature After the World Wide Web. He contends that both “grapheme and phoneme are forms of thought and not facts—not character data but parsed character data, or ‘data’ that already function within an instructional field.”49 This line of thinking is intended to counter a blind spot that the digital humanities often have regarding the “algorithmic character of traditional text.”50 Of course, as Gitelman reminds us, “raw data” is an oxymoron.51 Data is always shaped and constructed by the systems of classification and interpretation that determine what constitutes “data” in the first place. All parsed data becomes information.

In McGann’s view, all texts contain both protocols of figuration (a graphic representation) and instructional options (for readers navigating the text). If we are to draw computational metaphors between readers and texts, we must admit that the instructional field is parsing its reader, not the other way around. In line with Textz’s emphasis on the transformative potential of textwarez, McGann argues that “readers do this as a matter of course as they move through a text and make themselves the measure of a process of transformation.”52 Poet and scholar Tan Lin presents a more radical variant of this position across a series of works that deploy his theorizations of “Disco as Operating System.” Lin relates disco to a wide range of cultural forms, including the text, writing that “such a programming language was once called literature.”53 It is this operating system that writes its reader in a play of affect programmatically exercised by the text file itself. “Disco is not, as is mistakenly thought, an explosion of sound onto the dance floor but an implosion of pre-programmed dance moves into a head.”54 Lin’s alignment of disco with an operating system finds its corollary in Textz’s approach to text files as pirated software, as textwarez. In both, the metaphor of executability is built into the user, rather than the text itself.

This, it seems, is the core of the Textz project. To quote again from the manifesto: “we have been moving from content to discontent, collecting scripts and viruses, writing programs and bots, dealing with textz as warez, as executables.” Lütgert proposes a more technically precise sense of the executable text file in a series of works that radiate out from the Textz archive. The first of these was generated in response to a cease-and-desist letter received by Textz from a lawyer representing the press Suhrkamp in the summer of 2002. At the time, a scandal was brewing in Germany over conservative writer Martin Walser’s book Tod Eines Kritikers (Death of a critic), a work of “deep incompetence” that was widely critiqued for its anti-Semitic caricatures.55 Anticipating further rebuke for publishing the book, Suhrkamp made the mistake of sending out review copies as a PDF titled “walser.pdf” to various news organizations that it hoped would issue favorable appraisals of the project before the book went to press. In what was a novel development at the time, these files leaked online, prompting Suhrkamp to scramble to remove them from distribution. In an activist gesture of resistance to the press’s attempts to sanitize the book’s anti-Semitic content and to manage the crisis, Textz put up a file entitled “walser.pdf” on its site. In actuality, the file in question contained a PDF of Bruce Sterling’s The Hacker Crackdown, a nonfiction work on the history of phreaking (that is to say, illegally accessing the telephone system) and cracking: a tongue-in-cheek intertextual riposte to Suhrkamp’s ham-fisted attempts at crackdown. Suhrkamp’s legal threats were issued all the same, suggesting that the claimants had not bothered to open the file. In response to this series of events, Lütgert crafted a file entitled “walser.php.txt.” It contained a simple script in PHP (a general-purpose scripting language geared toward web development) whose sole purpose was to reconstruct Walser’s Tod Eines Kritikers in full plain text (and “plaintext”) ASCII format under the title “walser.txt.”56

Here, finally, the textwarez metaphor meets the realization of the executable text file. Lütgert’s cryptographic project would be the start of a series of Textz initiatives exploring encoding formats under the guise of copyleft politics. This ambitious program—operating somewhere between text and warez, from “walser.pdf” to “walser.php.txt” to “walser.php” to “walser.txt”—remains far more compelling than the now-familiar genre of dispute over copyright of an errant file online. The PHP script might be considered a legally protected piece of software released under a General Public License. Its string of glyphs in no way resembles Walser’s text and presents, in a sense, an entirely original piece of writing:

$z.="64000a20212728292c2d2e2f303132333435363738393a3b3f4142434445464748494a4b4c4d4e4f";
$z.="505152535455565758595a6162636465666768696a6b6c6d6e6f707172737475767778797a849293";
$z.="9697a9b4c4d6dcdfe0e1e4e7e8e9edf1f3f4f6fbfc0c022635474a40460a373c48504351103b5147";

[. . .]

Textz dubiously maintains in the script’s release notes that hosting and distributing the PHP file would be well within the confines of the law. Only the execution of the script, producing Tod Eines Kritikers as walser.txt, would result in a breach of intellectual property. By enciphering the text within an executable code, the work calls both TXT and PHP formats into question. At the hexadecimal level, both files would be unreadable to an unaided human agent. Accessing the content thus requires that these formats be displayed, transmitted, and processed by a range of platforms and operating systems. By introducing an auxiliary step for rendering the text legible to human readers, “walser.php” concretizes the process of encoding and decoding that enables textual transmission on information networks while eluding easy classifications of text and executable. Beyond this, the decision to encode the text as a .php file of illegible glyphs can be understood as a political intervention insisting that this particular book’s content should not be made readable, bringing us back to the human reader at the Textz interface, whom we might now add back into this circuit of format poetics.
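
Whatever the specifics of Lütgert’s script, the gesture it performs can be sketched in a few lines of PHP. In this hypothetical miniature (the string, filename, and decoding routine are my own; the actual walser.php builds a far longer payload with its own character table), the text exists only as hexadecimal pairs until the script is executed:

<?php
// Hypothetical miniature of the walser.php gesture: a string of hex pairs,
// meaningless as prose, becomes plain text only when the script is run.
$z  = "746578747a20617320776172657a2c";
$z .= "2061732065786563757461626c6573";
file_put_contents("walser.txt", hex2bin($z));   // decodes to a phrase from the Textz manifesto
?>

Hosting the hex string circulates only ciphertext; executing the script writes the plaintext to disk, precisely the step that Textz claims would cross the legal line.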

Information Does Not Want to Be Free

Working against nascent formulations of “The Californian Ideology,” Textz refused the heroics of digital piracy from its inception, contending that “information does not want to be free. in fact it is absolutely free of will, a constant flow of signs of lives which are permanently being turned into commodities and transformed into commercial content. http://textz.com is not part of the information business.”57 By negation, Textz grounds Stewart Brand’s famous slogan “information wants to be free” in the particularities of information theory and digital communication. The “walser.php” script provides a clear demonstration of information as functional parsed data—text put to computational use. Every text can also traffic as machine-readable code, and vice versa. If the “information business” is built on the demand for knowledge in a neoliberal economy, Textz is an artwork that explicitly counters the “free trade” of transparent information. Instead, the collection is mobilized as warez to perform the interplay of compression, data formats, and the poetics of digital communication at large. In both form and content, the collection refuses neoliberal currents that direct the constant flow of data toward profit. Instead, Textz actively devalues the corpus with difficult texts that demand reflexive reading.

Following on the inaugural publication of “walser.php,” Textz would release a series of more complex conceptual games that drew together formats of encryption and decryption under the sign of textwarez. To begin with, in the footer to “walser.php.txt,” the project also includes a short script called “makewalser.php.” The script’s function was to generate a similarly executable PHP script for any text file. That is to say, any writing stored in a plaintext file format could be re-encoded as an executable file. This addition, in just eighty-one lines of code, reconfigures the entire Textz collection as potentially executable PHP scripts. The plaintext file contains the same input alphabet used by HTTP protocols and software programs. This infrathin play (to use Marcel Duchamp’s term for the most minute shade of difference) between text file and executable script highlights the interoperability of glyphs in fluid text formats. The segment of “makewalser.php” that generates a new header for any file makes the interchangeability of the Textz collection clear:

header("content-type: text/plain");

[. . .]

echo " $self v1.01 (includes make$self)\n";
echo " this script generates the plain ascii version\n";
echo " of \"$title\" by $author.\n";
echo " it can be redistributed and/or modified under\n";
echo " the terms of the gnu general public license\n";
echo " as published by the free software foundation,\n";
echo " but may not be run without written permission\n";
echo " by $owner.\n";

The “echo” language construct outputs each of these lines into the header of each newly minted “.php.txt” file. Using the double extension “.php.txt” is another winking nod to the executable character of textwarez. The file is both script and text. If you remove the “.txt” extension, the PHP script can execute. By constellating variables for the PHP script ($self) alongside the author, title, and owner, the program accommodates bibliographic data within its translation scheme. Any text in the collection, from Gibson’s “Agrippa” to Deleuze’s “Postscript,” can be thrown into this scheme, with radically altered results for literary interpretation. This is the fundamental challenge that software poses to literary study: the potential instances of transformation always exceed the actualized use of the script on any given text. Reading is contingent on how the text materializes. In one of the few published articles on Textz, Inke Arns writes that “the poetry of codeworks lies not only in their textual form, but rather in the knowledge that they have the potential to be executed.”58 Returning us to Saper’s claims for sociopoetics, these codeworks prioritize networked methods of database transmission beyond “their textual form.” Following the development of Lütgert’s Textz codeworks offers possible points of departure for addressing a poetics of media formats. Olga Goriunova notes:

The decision of considering walser.php, walser.pl or any other text generated with makewalser.php something that can be run through a Perl or PHP interpreter is entirely up to the reader’s viewpoint and imagination. The text may just as well be considered literary works of their own, resembling concrete poetry and conceptual art. As a matter of fact, nobody can rule out the possibility that a text file of, say, the fairytale “Cinderella” executes as algorithmic sourcecode on some programming language interpreter or compiler and generates de Sade’s “120 Days of Sodom” as its output.59
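
The principle Goriunova describes can be sketched concretely. The following is not Lütgert’s eighty-one-line script but a minimal generator in its spirit, with hypothetical filenames and metadata: given any plaintext file, it emits a new “.php.txt” file that, once renamed to strip the “.txt” extension and run through a PHP interpreter, regenerates the source text.

<?php
// A minimal sketch of the makewalser principle (not Lütgert's script):
// re-encode any plaintext file as an executable PHP script that prints it.
// Filenames, title, and author are hypothetical placeholders.
$source = "agrippa.txt";
$title  = "Agrippa";
$author = "William Gibson";

$hex = bin2hex(file_get_contents($source));          // the entire text as one hex string

$script  = "<?php\n";
$script .= "// this script generates the plain ascii version of \"$title\" by $author.\n";
$script .= "header('content-type: text/plain');\n";
$script .= "echo hex2bin('$hex');\n";

// Write e.g. "agrippa.php.txt": a text file that is also a latent program.
file_put_contents(basename($source, ".txt") . ".php.txt", $script);
?>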

Later in 2002, Lütgert released a work entitled “pngreader,” a project that extends beyond the “walser.php.txt” release to apply the same operational concept to a variety of potential formats. Navigating an intuitive interface, the user of “pngreader” may encrypt any standard file format into a multicolored PNG image (see Figure 1.8). Or rather, as the program’s readme.txt explains, the script includes a “pngwriter()” function that may “restore” a lost PNG image from any given file format:

pngreader is a free, open source php script that reads png images, parses the input according to a defined set of rules, and displays the results. . . . the output format will normally be plain text, even though the program can return a variety of content types, including archives, images, music, video, and more. png images can be created with most graphic editors (a sample gallery can be found at http://pngreader.gnutenberg.net/gallery). pngreader also includes a currently unused function named pngwriter() which is able to recover lost images. given the output of pngreader(), it will restore the original png.60

These “lost images” reverse the direction by which information is usually encoded into an encrypted image: the image is a visual cipher for its hidden text rather than a conduit for text to hide within. In this way, “pngreader” brings to fruition Jorge Luis Borges’s imagining of the infinite-monkey theorem, an ape given sufficient time at a typewriter, within the universal archives of the library of Babel.61 Its readme.txt file asks: “to paraphrase a famous question: how long will an ape have to play around with photoshop until he draws a png that returns borges’ library of babel? the answer is: it has already been done (http://pngreader.gnutenberg.net/gallery/babel.png).” Encoded texts and artworks include Borges’s “The Library of Babel,” Adorno’s Minima Moralia, Public Enemy’s “Burn Hollywood Burn,” and the PDF of Michael Hardt and Antonio Negri’s Empire, among other “recovered” PNG images.

A seemingly random array of colorful pixels form an encoded square with a white, abstract Textz.com symbol in the bottom right corner.

Figure 1.8. An encrypted image of Jorge Luis Borges’s short story “The Library of Babel,” produced using Sebastian Lütgert’s “pngreader,” which can encrypt any standard file format into a multi-colored PNG image.
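
Lütgert’s parsing rules are his own, but the basic operation the readme names, packing the bytes of any file into the pixels of a PNG and reading them back out, can be sketched with PHP’s GD extension. The filenames and pixel scheme below are hypothetical:

<?php
// A minimal sketch of a pngwriter()/pngreader() pair (not Lütgert's script):
// three bytes of the input file become the red, green, and blue values of one
// pixel, producing a seemingly random array of colorful pixels like Figure 1.8.
// Requires the GD extension; filenames are hypothetical.
function pngwrite($infile, $outfile) {
    $data = file_get_contents($infile);
    $len  = strlen($data);
    $side = (int) ceil(sqrt(ceil($len / 3)));               // square canvas, 3 bytes per pixel
    $img  = imagecreatetruecolor($side, $side);
    for ($i = 0; $i < $len; $i += 3) {
        $r = ord($data[$i]);
        $g = $i + 1 < $len ? ord($data[$i + 1]) : 0;
        $b = $i + 2 < $len ? ord($data[$i + 2]) : 0;
        $p = (int) ($i / 3);
        imagesetpixel($img, $p % $side, intdiv($p, $side),
                      imagecolorallocate($img, $r, $g, $b));
    }
    imagepng($img, $outfile);                                // PNG is lossless, so the bytes survive
    return $len;                                             // length needed to trim padding on read
}

function pngread($infile, $len) {
    $img    = imagecreatefrompng($infile);
    $side   = imagesx($img);
    $out    = '';
    $pixels = (int) ceil($len / 3);
    for ($p = 0; $p < $pixels; $p++) {
        $rgb  = imagecolorat($img, $p % $side, intdiv($p, $side));
        $out .= chr(($rgb >> 16) & 0xFF) . chr(($rgb >> 8) & 0xFF) . chr($rgb & 0xFF);
    }
    return substr($out, 0, $len);
}

$len = pngwrite('babel.txt', 'babel.png');                   // "restore" the lost image
echo pngread('babel.png', $len);                             // reading the image returns the text
?>

Whatever rules the real pngreader applies, the reversibility on display is the point: image and text are the same bytes, differently parsed.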

These carefully plotted media-format poetics, playfully operating on the interface between file types and the transcoding of culture on the internet, question our relationship to data displayed in the browser. As Florian Cramer has written, “pngreader thus allows artists to create images which, accidentally of course, might also be read as certain pieces of literature.”62 Conceiving images as works of literature, or executable PHP scripts as concrete poetry or conceptual art, brings us to the crux of the enigma that Textz presents. Where might we draw the line between text and textwarez? Or, following recent studies of generative image platforms like Stable Diffusion and DALL-E, we might glimpse within the pngreader project a way in which these lines could be erased altogether, anticipating what Hannes Bajohr has termed “operative ekphrasis”: the collapse of inherited models of text–image distinction in the multidimensional space of neural network architectures.63 In this way, the lost images of pngreader gesture toward weird futures of media transmutability that are only now coming to light.

The Conceptual Crisis of Private Property as a Crisis in Practice

Completing this cycle of cryptographic projects, consider a final work: “The Conceptual Crisis of Private Property as a Crisis in Practice” by Robert Luxemburg (a pseudonym for Lütgert).64 Its lengthy title is drawn from Hardt and Negri’s Empire, in an allusion to the site’s success at cracking the encrypted file before Harvard University Press was able to publish the first printing of the book. Like the response Textz produced for Death of a Critic, this work comes in three parts: “crisis.php” (a decoding script), “crisis.txt” (an explanatory readme file), and “crisis.png” (a desktop screenshot that contains encrypted data). Plugging “crisis.png” into “crisis.php” according to the instructions given in “crisis.txt,” the user is able to reconstruct the entirety of cyberpunk author Neal Stephenson’s novel Cryptonomicon. A clue to guide the user is concealed within the screen capture, which spells out the novel’s title in icons along the bottom of the image (Figure 1.9). The choice of Stephenson’s novel represents a deliberate act of editorial selection. Its narrative fictionalizes an alternate history of the very cryptography at play in “The Conceptual Crisis,” and the book itself was subjected to U.S. export restrictions owing to the cryptographic algorithm printed within the text. “The Conceptual Crisis” merges these layers into a conceptual game that incorporates speculative fiction, real-world circulation politics, encoding formats, and an elaborate ruse of steganographic play.

Whereas previous works like “walser.php” and “pngreader” visibly displayed their cryptographic function, this work presents a new steganographic impulse: the text is hidden in plain sight. “The Conceptual Crisis” calls up these contexts in its play between quotidian image, illicit data transfer, and secretive encoding that hides plaintext. To accompany the artwork, a grandiose “law” is proposed that undergirds each of the previously described projects: “Any digital piece of intellectual property can be transformed into any other digital piece of intellectual property with a relatively short and simple shell script.”65 Issued alongside this law, a very low-resolution image of Walter Benjamin (begging to be unpacked) is shown to harbor the data for a cracked version of Final Cut Pro 4, a highly sought-after video-editing suite at the time, which is itself presented as a .mov file. The layers of masking constellate software with text, and formats with poetics: an image of a famous writer is simultaneously a cracked software program masquerading as a movie. Steganography can be thought of as another form of compression or encoding: a container technology that transmits a given input into a desired output for use. Decompressing these files calls the entire collection into view as interchangeable, mutable code, a collection parsing its reader. Or, following Benjamin’s most famous notes on decompression: for the collector of digital files, it is “not that they come alive in him; it is he who lives in them.”66

Computer desktop with various open windows, including text files, images, a video, and icons at the bottom spelling out “Cryptonomicon.”

Figure 1.9. Robert Luxemburg’s “The Conceptual Crisis of Private Property as a Crisis in Practice” (2003). This screen capture enables the user to reconstruct the entirety of Neal Stephenson’s novel Cryptonomicon.

Figure Description

The image depicts a MacOS computer desktop from 2001 with multiple open windows and applications. The open windows include text files, images, a video player, and several desktop icons.

Detailed Elements
  • Top Window (crisis.php): Displays code for a PHP file titled “crisis.php” with comments explaining its license under the GNU General Public License. The code contains various PHP functions and operations.
  • Middle Window (crisis.txt): Contains a lengthy text discussing “The Conceptual Crisis of Private Property as a Crisis in Practice.” It begins: “The program above will turn this screenshot into the full text of the novel Cryptonomicon by Neal Stephenson: 1. Copy the text in the upper window and save it as crisis.php. 2. Put crisis.php and crisis.png in the root directory of your local web server. 3. Launch your web browser and enter the URL http://127.0.0.1/crisis.php.”
  • Other Open Windows Include:
    • Empire.pdf: A partially visible document.
    • Cafeteria.jpg: An image file serving as the desktop background.
    • Burn.mov: A video player window showing a scene from “The Matrix” with a police officer citing a parked car.
    • The Conceptual Crisis of Private Property as a Crisis in Practice: A recursive window revealing this image in a file folder.
  • Icons at the Bottom: A series of application icons at the bottom of the screen spell out “Cryptonomicon,” referencing Neal Stephenson’s novel.
  • Time and Date: The top right corner of the screen shows the date and time: “Tue Apr 20 11:57:20 AM.”
  • Menu Bar: The menu bar at the top includes options for BBEdit, File, Edit, Text, Font, Search, Markup, Window, and Help.

Split-screen image with a black-and-white photo of Walter Benjamin on the left and a colorful, pixelated screen with a video icon on the right.

Figure 1.10. A link to a free download of Final Cut Pro 4 embedded in a representation of “Luxemburg’s Law,” which claims that “any digital piece of intellectual property can be transformed into any other digital piece of intellectual property with a relatively short and simple shell script.” After downloading both files, the Walter Benjamin image decrypts pirated software. Captured via Internet Archive.

Figure Description

The image is a website capture that features two distinct sections side by side, accompanied by explanatory text centered above and a text reading “PLAY” centered below.

Top Text
  • Content: “ANY DIGITAL PIECE OF ‘INTELLECTUAL PROPERTY’ IS A SIMPLE MATHEMATICAL DIFFERENCE OF TWO OTHER DIGITAL PIECES OF ‘INTELLECTUAL PROPERTY’ AND CAN BE GENERATED WITH A RELATIVELY SHORT AND FULLY LEGAL SHELL SCRIPT. (LUXEMBURG’S LAW).”
Left Section
  • Image: A black-and-white photograph of a man with glasses, resting his head on his hand, appearing thoughtful or contemplative.
  • Caption: Below the image, there is a text link labeled “DOWNLOAD WB.PNG.”
Right Section
  • Image: A colorful, pixelated screen featuring the Final Cut Pro 4 clapperboard icon in the center.
  • Caption: Below the image, there is a text link labeled “DOWNLOAD FCP4.MOV.”
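
One plausible reading of the law’s “simple mathematical difference,” though the actual mechanics of Lütgert’s demonstration are not documented here, is a byte-wise XOR: the difference of any two files is itself unreadable noise, yet either file can be regenerated from the other plus that difference. A minimal sketch in PHP rather than the shell script the law imagines, with filenames echoing the downloads in Figure 1.10 but a relation between them that is entirely my own illustration:

<?php
// A hypothetical illustration of "Luxemburg's Law" as a byte-wise XOR.
// The "difference" of two files is noise; combined with either file, it
// regenerates the other. A real script would also record the original length.
function xor_bytes($a, $b) {
    $len = max(strlen($a), strlen($b));
    $a   = str_pad($a, $len, "\0");
    $b   = str_pad($b, $len, "\0");
    return $a ^ $b;                          // PHP's string XOR operates byte by byte
}

$image = file_get_contents('wb.png');        // an innocuous portrait
$warez = file_get_contents('fcp4.mov');      // the coveted "intellectual property"

$diff  = xor_bytes($image, $warez);          // the difference: pure noise
file_put_contents('difference.bin', $diff);

// Anyone holding the portrait and the difference can regenerate the warez.
file_put_contents('fcp4_restored.mov', xor_bytes($image, $diff));
?>

On this reading, the “fully legal shell script” distributes only noise; the infringing object exists nowhere until the difference is recombined.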

The conclusion to this steganographic work, as it interfaces with the Textz collection, remains unpublished. Following extended litigation with Suhrkamp, which included a warrant for Lütgert’s arrest, the site went static in early 2004. In 2005, an ASCII rendering of the date “5/23/06” appeared on the Textz front page, promising updates that never arrived. More recently, Lütgert has written about future plans for the site: “For amateur cryptographers or Internet art historians, the most interesting find may be DePNG, short for ‘DePNG Probably Nothing Generator,’ developed between 2005 and 2008. The corresponding re-implementation of textz.com would have no longer hosted books, but only a gallery of cover images. A highly obfuscated script to transform the covers into full books would have been offered on the DePNG companion site.”67 Lütgert illustrates how this process would have operated. Each plaintext file in the Textz collection was to be embedded in a PNG “cover” for the text, which could be activated (and decrypted) by a code stored in the “spine” through the use of DePNG. Simulating a library or the remediated virtual bookshelf featured in a range of commercial e-book platforms, this unrealized Textz project points its user back to the codex even while it reveals pervasive mechanisms of digital encoding. Through a complex set of procedures, the final transformation of Textz would render every plaintext file as an executable steganographic PNG image. To read any of the works in the collection, the user would first be called on to recognize the historical, legal, and technical protocols that structure textual transmission on the internet. They would then have to participate in a playful enactment of format poetics to access the text.
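
How a cover and spine might divide the labor of concealment can be sketched, with the caveat that DePNG itself remains unpublished and the keying scheme below is entirely my own assumption:

<?php
// A hypothetical sketch of the DePNG cover-and-spine idea (DePNG itself is
// unpublished; this scheme is an assumption). The "cover" carries the book's
// bytes in scrambled form; a short code held in the "spine" is the key that
// restores the plaintext. Filenames are hypothetical.
function spine_decode($cover, $spine) {
    $out = '';
    for ($i = 0, $n = strlen($cover); $i < $n; $i++) {
        $out .= $cover[$i] ^ $spine[$i % strlen($spine)];   // repeating-key XOR
    }
    return $out;
}

$cover = file_get_contents('empire_cover.png');             // hypothetical cover image
$spine = 'a short activation code held in the spine';       // hypothetical key
file_put_contents('empire.txt', spine_decode($cover, $spine));
?>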

Lütgert’s approach to textwarez thus sets forth a conceptual poetics for digital objects as a mise en abyme of encoding protocols and file formats. Sterne reminds us: “All formats presuppose particular formations of infrastructure with their own codes, protocols, limits, and affordances.”68 By revealing, distorting, and reconfiguring the limits of these protocols, Textz furnishes a compelling direction for the poetics of digital scholarship. These poetics emerge from interpenetrating actions of framing, constellating, contextualizing, and transmitting. There is a famous, if reductive, diagram from the annals of 1990s net.art history:

Diagram titled “Simple Net Art Diagram” showing two computers connected by a line with a lightning bolt in the middle labeled “The art happens here.”

Figure 1.11. MTAA (M. River & T. Whid Art Associates), Simple Net Art Diagram (1997). Keeping lines of transmission open, in 2011 the collective announced: “Using a Creative Commons Attribution 2.5 Generic (CC BY 2.5) license and general lazy disregard, MTAA give our 1997 artwork The Simple Net Art Diagram (SNAD) over to academics, artists and the general public for re-purposing. We hand it over carte blanche. Have at it.”

Transmission is everything. It should be noted that this diagram condenses a fantastically intricate communications circuit into two nodes and a single line of connection. Derived from the rigorous network conceptualism of the 1990s, this aesthetic of relationality reconfigures Saper’s sociopoetics toward digital access and sharing—or, for that matter, piracy and open-source movements—periodizing the work both technically and aesthetically. Indeed, Lütgert writes, “the nineties of the net are over.”

Given contemporary trends toward a more decentralized web in the 2020s, the possibility remains that we might yet reimagine digital practices through dreams and failures nested within the 1990s of the net. Simultaneously deploying a radical politics of distribution, new software for old media genres, and a visualization of “invisible” compression and transcoding processes, Textz directs us to a new horizon for media poetics. In so doing, it necessitates a corresponding model for scholarship on the internet: one that attends to the limits of collection and distribution, questioning the file itself alongside its use. Presenting a little database within elaborate models to highlight the demographics, politics, and technics of its dispersion, Textz is the lost digital humanities project that continues to anticipate endeavors to follow and new futures for poetic computation. In an interlude following this chapter, I propose a scholarly textual mechanism that follows Textz in ways that the essay cannot attempt to pursue. In a generous reading, the chapter may serve as a prelude to this gesture.

This book is freely available in an open access edition thanks to the generous support of Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin, and the UCLA Library.

Excerpts from “The Defective Record” by William Carlos Williams, from The Collected Poems: Volume I, 1909–1939, copyright 1938 by New Directions, reprinted by permission of New Directions and Carcanet Press. Excerpts from Stan VanDerBeek, Poemfield #2, copyright 1971 Estate of Stan VanDerBeek.

A portion of chapter 1 was previously published in a different form in “EXE TXT: Textwarez & Deformance,” in Code und Konzept: Literatur und das Digitale, ed. Hannes Bajohr (Berlin: Frohmann), copyright 2016 by Daniel Scott Snelson. A portion of chapter 3 was previously published in a different form in “Live Vinyl MP3: Mutant Sounds, PennSound, UbuWeb, SpokenWeb,” Amodern 4: The Poetry Series, Creative Commons Attribution 3.0 Unported License, copyright 2015 by Daniel Scott Snelson. A portion of chapter 4 was previously published in “Incredible Machines: Following People Like Us into the Database,” Avant, June 4, 2014.

Copyright 2025 Daniel Scott Snelson. The Little Database: A Poetics of Media Formats is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/