1
The Devil We Know
Maxwell’s Demon, Cyborg Sciences, and Flow Control
“MAXWELL’S DEMON,” the celebrated thought experiment by physicist James Clerk Maxwell, occupies a strange space, at once both abstract and real. Perhaps the demon is just abstract enough to bring the realities of control and communication into view. What started out as a pure thought experiment inspired decades of speculation about its possible existence and then attempts to build an artificial demon. Maxwell’s demon inspired the crucial breakdowns that Donna Haraway identifies as part of postwar computer research, or what has been called the “cyborg sciences.” Two breakdowns in particular relate to my interest in networks, infrastructures, and control: between humans and machines and between the physical and nonphysical.[1] This chapter picks up from these breakdowns to track the development of digital communication and control that led to internet daemons.
In contemplating an imaginary demon, the originator of cybernetics, Norbert Wiener, imagined a thinking machine. As Katherine Hayles notes in her influential work on posthumanism and cybernetics, “to pose the question of ‘what can think’ inevitably also changes, in a reverse feedback loop, the terms of ‘who can think.’”[2] If computers thought, were they like humans? How could humans and computers better interact given their similarities? The reconsideration of how humans interact with computers led to digital communication, or what I call “networks.” J. C. R. Licklider, a defining figure in postwar U.S. computer research,[3] framed the issue as a problem of “man–computer symbiosis” and suggested that the way to achieve harmony between humans and computers was through communication. As Licklider imagined a better world with computers, actual users began to communicate with computers. From chats in time-sharing systems to more elaborate real-time interactions in computer defense systems, these early computer infrastructures foreshadowed the many networks operating online today and continue to inform the contemporary internet.
In addition to provoking reflections on the nature of communication, computers were also used to automate the work of running a communication system. Researchers delegated core communication functions to them, which led to computers becoming communication infrastructure. Maxwell’s demon foreshadowed the kind of control found in these infrastructures. Maxwell employed his demon to describe an imaginary mechanism able to bring order to the gas molecules’ random distribution. The demon’s ability to independently keep a system in order resembles contemporary engineering concepts such as feedback, governors, and self-regulating machines. While these principles helped devise more efficient steam engines,[4] Maxwell’s demon came to symbolize a force of control in machines and technical infrastructure that inspired subsequent research into a general information processor capable of creating order in the world. This capacity for control, the dream of making order out of chaos, inspired Wiener as he developed the concept of cybernetics. Could real demons be found in nature? Or could humans program demons to combat entropy and create order? Computer infrastructure would become possessed not by Maxwell’s demon exactly, but by daemons, a term that computer scientists at Massachusetts Institute of Technology (MIT) used to designate the programs running in the background of their computers and keeping a system in working order.
This chapter explores these two turns during the development of digital computing that led to internet daemons. It begins by elaborating on Maxwell’s demon’s contribution to information theory and computing. From there, the chapter traces the problem of human–computer interaction that materialized in early–Cold War computer infrastructures. At MIT, two early computer systems developed real-time and time-shared approaches to digital communication. Maxwell’s demon also made a leap into the real world at these institutions. The latter part of this chapter addresses how Maxwell’s demon both came to represent the theory of digital control and inspired programmers to write their own daemons to manage their new computer infrastructures.
“If We Conceive a Being”: Materializing Maxwell’s Demon
In the nineteenth century, Maxwell, a seminal figure in physics, engineering, and control theory, conjured a demon into the sciences. In his book on thermodynamics, Theory of Heat, published in 1871, he paused to consider a potential refutation of its second law, which states that, generally speaking, entropy increases over time.[5] Maybe the law could be broken, Maxwell speculated, “if we conceive a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are still as essentially finite as our own, would be able to do what is at present impossible to us.”[6] In Maxwell’s thought experiment, this being acted as a gatekeeper between two chambers containing molecules of gas, opening and closing a door to selectively control the transmission of molecules between chambers. By doing so, the demon isolated hot molecules in one chamber and cold molecules in the other,[7] raising the temperature in the first chamber and lowering it in the second. This redistribution of energy toward an extreme ordered state violated the second law of thermodynamics, which predicted that the two chambers would revert to a random distribution of molecules (or what was later called “heat death”).
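To make the demon’s gatekeeping concrete, the following is a minimal, purely illustrative sketch in Python (my own construction, not drawn from Maxwell or any of the sources discussed here). Starting from a single well-mixed population of molecules, a sorting rule that admits only fast molecules into one chamber and slow molecules into the other raises the average speed, and so the temperature, on one side while lowering it on the other:

```python
import random
import statistics

# One well-mixed chamber of molecules, each assigned a random speed.
random.seed(0)
molecules = [random.gauss(mu=500.0, sigma=120.0) for _ in range(1000)]  # speeds in m/s
threshold = statistics.mean(molecules)

hot_chamber, cold_chamber = [], []
for speed in molecules:
    # The demon opens the door only for fast molecules headed one way
    # and slow molecules headed the other, sorting without adding energy.
    if speed > threshold:
        hot_chamber.append(speed)
    else:
        cold_chamber.append(speed)

print(f"mixed average speed: {statistics.mean(molecules):.0f} m/s")
print(f"hot chamber average: {statistics.mean(hot_chamber):.0f} m/s")
print(f"cold chamber average: {statistics.mean(cold_chamber):.0f} m/s")
```

What the sketch leaves out is precisely what later exorcisms of the demon seized upon: the cost of measuring each molecule’s speed before deciding whether to open the door.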
The creature in Maxwell’s thought experiment became known as a demon, and its being deeply influenced the natural sciences. As Hayles writes:
Charting the responses to Maxwell’s Demon is like mapping the progress of Christopher Columbus across the ocean. From the compass readings we can infer what the prevailing winds and currents were even though we cannot measure them directly. It is like Columbus’s route in another respect as well; only in retrospect does the journey appear as progress toward a certain end.[8]
For her part, Hayles traces the influence of entropy and thermodynamics on information and the posthuman. Key figures in the information age such as Wiener and Claude Shannon relied on the debates inspired by Maxwell’s demon to formulate their definitions of information. Historian of economics Philip Mirowski, a leading chronicler of the cyborg sciences, traces the long history of refutations of Maxwell’s demon that led to Wiener’s cybernetics, John von Neumann’s computer, and information theory, all of which informed modern economics. For those interested in the longer history of Maxwell’s demon, one could find no better guides.[9]
For this book, Maxwell’s demon makes two important contributions to conjuring internet daemons. First, it inspired the design of computers and a reconsideration of the nature of communication. Second, it inspired interest in new kinds of digital control. These developments evolved out of the interpretation of Maxwell’s demon in information theory.
The theory of information, by most accounts, depended on Maxwell’s demon. His thought experiment eased the transference of thermodynamic concepts, specifically probability and entropy, into a mathematical approach to information. The study of thermodynamics treats its laws as “statistical generalizations” with only a probabilistic ability to predict—say, the location of molecules.[10] Entropy refers to the probable distribution of molecules in a space. Maxwell’s demon played with both: its gatekeeping moved molecules, altering their probable distribution and overturning the entropic assumption that energy stabilizes at an equilibrium. Efforts to exorcise the demon and confirm the second law turned to its work. How could the demon function? Were there hidden costs that made its work improbable? In answering these questions, theorists began to conceive of the demon’s work as information processing. In addition to remembering the location of the molecules, the demon also had to track their movement.
Information became a theoretical concept out of the refutation of the demon. As Wiener explained, for Maxwell’s demon “to act, it must receive information from approaching particles concerning their velocity and point of impact on the wall” (italics added).[11] Information about the molecules allowed the demon to control their transmission in a closed system, creating a self-regulating system. In Maxwell’s thought experiment, the demon appears to be able to acquire information about the molecules’ movement without any cost. How could a demon gain this information? Wiener argued that “information must be carried by some physical process, say some form of radiation.”[12] The demon could not operate because “there is an inevitable hidden entropy cost in the acquisition of information needed to run the device.”[13] The energy required to transfer information between molecule and demon would eventually, according to Wiener, cause the demon to malfunction.
Maxwell’s demon encapsulates information theory research accelerated by World War II. This research redefined probability and entropy around the emerging concept of information developed by the likes of von Neumann, Shannon, Wiener, and Alan Turing.[14] The approach fostered a cultural perception, described by Hayles, that “material objects are interpenetrated by information patterns.”[15] Wiener and Shannon both had key roles in developing information theory. Indeed, talks with Wiener and his classified reports on the subject inspired Shannon to publish his theory of information.[16] Shannon, according to Hayles, “defined information as a function of the probability distribution of the message elements.”[17] He introduced this definition in his article “A Mathematical Theory of Communication,” published in the Bell System Technical Journal,[18] and it became a foundation for the modern concept of information and digital transmission theory. This decontextualization of information facilitated digital communication. Computer scientists and electrical engineers could focus on transmitting discrete units of information and ignore the complexities of human context and even the physical medium of communication.
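As a point of reference (a standard restatement rather than a quotation from Shannon’s paper), the measure Hayles describes depends only on the probability distribution of the message elements:

\[
H = -\sum_{i=1}^{n} p_i \log_2 p_i
\]

where \(p_i\) is the probability of the \(i\)-th message element and \(H\) is expressed in bits. The formula makes no reference to meaning, context, or medium, only to probabilities.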
Information theory did not distinguish between human and machine, and this is part of the breakdown observed by Haraway. This breakdown prompted a reconsideration of the subjects of communication. Where some scholars had turned to the heavens in the search for intelligent life,[19] early computer scientists found a kindred spirit in Maxwell’s demon. Before long, scientists imagined playing games with their newfound companions.[20] Turing, who established the principles of a digital computer, speculated that he could play noughts and crosses (or tic-tac-toe) with a machine.[21] Shannon, a mathematician and electrical engineer whose work was integral to the development of the concept of digital information, thought a computer could play chess.[22] Wiener also saw computing as a way to predict the moves of an unknowable enemy fighter pilot.[23] Turing, in his foundational paper “Computing Machinery and Intelligence,” published in Mind: A Quarterly Review of Psychology and Philosophy in 1950, popularized the idea of talking to a computer. His “imitation game” (a play on the gendered party game of the same name) asked a human to converse with another unidentified subject.[24] After some time, the human had to decide if they were talking to another human or a machine. The test requires, as a precondition, the breakdown of boundaries between humans and machines. As Friedrich Kittler notes, the Turing test is possible only when the defining traits of humanity can be expressed on a computer printout.[25]
Maxwell introduced the demon as a purely theoretical possibility, but it became a reality when the early computing researchers discussed above began to consider the ways that demons could function in the real world. Wiener wrote, “there is no reason to suppose that Maxwell demons do not in fact exist.”[26] If demons might be found naturally, could they also be built artificially? In other words, being open to the existence of Maxwell’s demon allowed for the possibility of building a real machine designed for generalized control and information processing. Shannon, while he imagined computers playing chess, also suggested that a thinking machine could “handle routing of telephone calls based on the individual circumstances rather than by fixed patterns.”[27] Thus, Maxwell’s demon made the transition from inspiring the idea of information to providing conceptual fuel for imagining the infrastructures of early computing.
Interpretations of Maxwell’s demon finally led to a consideration of computer systems as a force of control in a disordered world. Much of this thinking has roots in lofty interpretations of entropy. Entropy and heat death, in nineteenth-century interpretations, closely tracked the fortunes of the British Empire.[28] Using heat death to ponder the human condition continued in information theory. In brief, Wiener had a negative view of entropy, whereas Shannon had a more optimistic interpretation. Shannon (and his interpreter Warren Weaver) thought entropy could add information. Chaos, noise, and disorder could be seen as “the source of all that is new in the world” (similar to the writings of second-generation cyberneticist Gregory Bateson).[29] Wiener, by contrast, treated entropy as a negative. In his seminal book on cybernetics, he asserted that “the amount of information is the negative of the quantity usually defined as entropy in similar situations.”[30] Order and meaning were the opposite of entropy. Wiener’s negative view of entropy led him to interpret Maxwell’s demon as a noble creature. The demon’s “sharpened faculties” allowed it to control the movement of molecules in ways impossible for humans. Wiener found inspiration in this vision of demonic control. Control, according to his own definition, creates order out of chaos; it keeps a signal amid the noise. His science of cybernetics sought to design information systems with feedback mechanisms that would create homeostasis, resulting in a self-regulating system that avoided social entropy.[31] Like a feedback mechanism, Maxwell’s demon acts as an agent of homeostasis, exerting control within a system to maintain order and reduce entropy.[32]
Wiener hoped the demon could replace entropy with what he called “metastability.” Maxwell’s demon also maintained regularity. Wiener argued that “we may speak of the active phase of the demon as metastable” (italics added).[33] The “active phase” refers to the time when the demon was working before it lost control. Wiener borrowed the term “metastable” from the sciences to describe the overall effect of Maxwell’s demon. It was originally coined by chemists to describe a volatile liquid, and then physicists adopted it to describe the state of the atom.[34] Metastability denotes becoming ordered; it is the phase before a stable state. As such, metastability represents a moment of potential and interaction much like Maxwell’s demon trying to organize active molecules. Indeed, this world of molecules might look very disorganized to the demon, but outside observers can perceive its metastability. Later, Wiener speculated that certain biological processes might be similar to Maxwell’s demon in their metastability. His reflection on enzymes helps further clarify metastability (in addition to leading to a generalization of the demon, discussed in the next section). He wrote: “The enzyme and the living organism are alike metastable; the stable state of an enzyme is to be deconditioned, and the stable state of a living organism is to be dead.”[35] By extension, the stable state of a communication system is to be silent, whereas a metastable communication system is one that is in use.
Internet daemons spawned from these legacies of Maxwell’s demon, but not directly. Digital computers came first. As these computers developed, they prompted a reconsideration of communication in addition to actualizing Wiener’s dreams of control. In the next section, I elaborate on how computers led to new kinds of digital communication. At issue was how to synchronize humans and machines. Answers to the synchronization question, or what was called man–machine “symbiosis,” led to the first computer infrastructures and networks. Later in this chapter, I discuss the second contribution of Maxwell’s demon: its tireless work moving molecules playfully inspired programmers as they built control mechanisms for their new digital operating systems.
“Whose Faculties Are so Sharpened”: Networks of Humans and Daemons
If people could play games with computers, could they also talk to them? Networks sprang from this communicative impulse, as I elaborate in this next section. The problem became known as man–computer symbiosis, a name that also serves as a reminder of the marginalization of women in the history of computing.[36] The term comes from Licklider, who was an emerging leader in computer science and a figure central to the development of modern computing.[37] After earning a PhD in psychology and psychoacoustics, he took a position as assistant professor of electrical engineering at MIT. There he consulted at one of the early centers of digital computing, the Lincoln Lab, before he left to work as vice president for the high-technology firm Bolt, Beranek, and Newman Inc. (BBN), where he worked on early time-sharing computer services.[38] Both the Lincoln Lab and BBN were important centers of early computing, and both were among the first to grapple with finding better applications for computers.
Licklider’s influential 1960 paper “Man-Computer Symbiosis” summarized almost a decade of computer research when published in the Institute of Radio Engineers’ journal, IRE Transactions on Human Factors in Electronics.[39] The novelty, and perhaps the success, of man–computer symbiosis was its ability to define a field of research dedicated to human and computer communication. Licklider drew on his work in early computing, when he had developed some experimental ways to allow for communication between humans and machines. In his paper, he distinguished between “mechanically-extended man” and “man-computer symbiosis.” He criticized early computing for being too one-sided. In these systems, “there was only one kind of organism—man—and the rest was there only to help him.”[40] The demon got the snub. True symbiosis meant establishing a “partnership” between the two organisms based on “conversation” and “cooperation.” Humans would be able to learn from machines, and machines would be able to understand human speech (although simple language recognition remained at least five years off in his estimation). Licklider called for computer science to improve the communication between man and machines, or what today would be called “human–computer interaction” (HCI).
Licklider’s vision for man–computer symbiosis implicitly criticized the way communication worked in batch computing, the most common form of computing at the time. Analog computers, and even their earliest digital successors, functioned as batch processors, a paradigm of computing in which programmers input commands in batches and then waited for a response. Beginning with the first tabulating machines developed by Herman Hollerith for the U.S. Census, batch computers automated the work of industrial computation that had previously filled rooms with human “computers,” who were usually women.[41] While effective, batch computers provided little chance for interaction. These conditions led to what Herbert Simon later called a man–computer “imbalance,” since batch processing meant “the programmer had to wait for the response of the computer.”[42]
Licklider recognized that solving the problem of communication between humans and computers was chiefly a matter of time. Even though humans could relate to their computers, they differed in their tempos. Licklider’s observation resonated with an earlier assertion by Wiener that the philosophical concept of duration could apply to machines. Henri Bergson used the term “duration” to describe each individual’s unique experience of time. Wiener suggested that “there is no reason in Bergson’s considerations why the essential mode of functioning of the living organism should not be the same as that of the automaton.”[43] Wiener recognized that humans and automatons each had their own durations, and Licklider discovered these durations had what he called a speed “mismatch.” He explained that human thinking “move[s] too fast to permit using computers in conventional ways”:
Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously the battle would be over before the second step in its planning began.[44]
In addition to reflecting the influence of the Cold War on computing, Licklider’s example of a speed mismatch parodied batch computing. It was too one-sided, as “the human operator supplied the initiative, the direction, the integration and the criterion.”[45] Genuine symbiosis required the computer and the human to be in a shared time (or what Licklider called “real-time”). Symbiosis would not necessarily mean equality (Licklider expected computers to surpass humans in intelligence), but the two organisms would be together in partnership.
For Licklider, symbiosis provided better access to the past, improved “real-time interactions,” and fostered the ability to project into the future. In an ideal state of man–computer symbiosis, human and computer memory would be united to allow for better real-time information retrieval. Humans would “set the goals and supply the motivations,” while computers would “convert hypotheses into testable models” and “simulate the mechanisms and models, carry out the procedures, and display the results to the operator.”[46] For Licklider, it seemed “reasonable to envision, for a time ten or fifteen years hence, a ‘thinking center’ that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval and the symbiotic functions.”[47] Tremendous optimism, significant for a researcher reflecting on his experiences working in nuclear air defense, abounds in Licklider’s writings on man–computer symbiosis. The coming era of man–computer symbiosis “should be intellectually the most exciting and creative in the history of mankind” (at least until “electronic or chemical ‘machines’ outdo the human brain,” bringing an end to the era and to Licklider’s optimism about the future).[48]
Licklider’s commentary obliquely referenced the state of American computer research. From the time the Soviet Union began testing nuclear weapons in 1949, the U.S. military, especially the Air Force, had invested heavily in computing.[49] Much of the money went to fund research at MIT, continuing its legacy of military science into the postwar era. MIT hosted two major computing projects: the Semi-Automatic Ground Environment (SAGE) and the Compatible Time-Sharing System (CTSS). These two early computer infrastructures directly inspired both a research agenda into digital communication and subsequent infrastructures like the Advanced Research Projects Agency’s (ARPA) packet-switching digital communication system, ARPANET. They became prototypes of the early computer networking (real-time and time-sharing) that synchronized humans and machines in distinct ways. Understanding these synchronizations helps explain the networks online today.
Real-Time Computing
Real-time infrastructures delivered, or at least promised, instant or on-line responses to user inputs. Early real-time systems required massive budgets. SAGE looms over the early history of digital computing as one of the first attempts to create a real-time computer infrastructure. SAGE developed out of Project Lincoln (later the Lincoln Laboratory)—a collaboration among MIT, the RAND Corporation, and IBM begun in 1951—and prior work on the Whirlwind computer at MIT.[50] To mitigate the threat of a Soviet attack, SAGE attempted to create a real-time network using then-experimental digital computers like the Whirlwind.[51] The discourse of real-time control, or what Paul Edwards calls a “closed world discourse,” helped justify SAGE’s tremendous cost. “A SAGE center,” Edwards argues, “was an archetypal closed-world space: enclosed and insulated, containing a world represented abstractly on screen, rendered manageable, coherent and rational through digital calculation and control.”[52]
The work done on SAGE advanced computer hardware and communication simultaneously. Over the course of the project’s life span, from 1954 to 1984, the U.S. Air Force built an international communication infrastructure. The bulk of the SAGE system consisted of twenty-three bunkers scattered across North America. Known as “direction centers,” these gray concrete structures created a real-time simulation of American air space. The centers communicated through AT&T telephone lines to receive radar reports and dispatch orders to pilots in the air. Each center included two IBM AN/FSQ-7 computer systems that cost $30 million each.[53] As SAGE used only one computer at a time, the other was a backup, evidence of the extent to which the needs of real-time, always-on communication overrode cost concerns. These computers calculated the movement of projectiles and rendered their projections on a display scope, a predecessor of today’s computer monitors. Importantly, the display scope augmented human vision by displaying the full trajectory of a projectile. The computer remembered and displayed both the past known locations of a target and its projected future.
Real-time networking soon found application outside of the military. American Airlines launched one of the first massive initiatives to integrate computer networks into a business model. IBM worked with the airline to translate insights gained from the SAGE project into a distributed airline reservation system, just as telegraphy once coordinated railroads. The project’s name, Semi-Automated Business Research Environment (SABRE), directly referenced SAGE. The SABRE system was a massive project: it took five years to build, employed two hundred technical professionals, and cost $300 million. SABRE, which went live in 1965, provided real-time data to airline employees to help them book seats and minimize overbooking. The results revolutionized air travel by synchronizing American Airlines’ seat stock with its reservation process. The real-time SABRE system proved immensely valuable, with a return-on-investment rate of 25 percent.[54]
Time-Sharing Computer Systems
Time-sharing developed as a more cost-effective way to achieve the online interaction of real-time computing. Rather than dedicating a machine to a single user, time-sharing systems shared one big and expensive computer among multiple users, maximizing the use of mainframes by allowing many programmers to work on the same machine at once. The expense of leaving computers idle led to systems in which a queue of programmers shared a high-speed computer that simulated an online or real-time system while continuing to operate under the batch-computing paradigm. Users still submitted batches and waited for a response, but the computers became powerful enough to multitask these requests to simulate a real-time environment.[55] Sharing computer resources allowed universities to justify buying expensive machines.[56]
Although debate surrounds the origins of the phrase “time-sharing,”[57] MIT has a strong claim to being the nucleus of early time-sharing experiments. By 1959, MIT had purchased an IBM 7090 computer[58] to support experiments in time-sharing in its computation center and later at Project MAC, a deliberately vague acronym variously said to stand for “Multiple Access Computer,” “Machine-Aided Cognition,” or “Man And Computer.”[59] John McCarthy, one of the developers and a founder of Project MAC, remembered that part of the motivation to write their own time-sharing system came from the high cost of IBM’s promised “real-time package” for the computer.[60] Instead, programmers at the center developed the CTSS operating system on their own.
CTSS worked to create a communication network out of this shared infrastructure. The technical work of CTSS attempted to overcome the communication bottleneck imposed by the system’s central processor. The CTSS Supervisor software program managed this bottleneck[61] by allocating processor time and priority among users and by managing the reading and writing of information to and from the system’s drum memory.[62] The program’s algorithms made decisions based on the status and access level of each user.
The collective activity of CTSS created a common time among the users, the consoles, and the IBM hardware. The storage program allowed users to have a past by saving programs to the system’s memory that endured into the future. The scheduling program sorted simultaneous user commands and synchronized users in a queue for the limited common processor. Users never shared the same moment on the processor, but their commands existed in a temporal relation of processed, processing, and to process. This trick, so to speak, meant that the overall CTSS system created an experience with a strong resemblance to concurrent use. It allowed for an early version of computer chat, as terminals could send messages to one another.
Like the rest of the cyborg sciences, time-sharing proliferated both inside and outside the lab.[63] A subsequent time-sharing computer system came from BBN, which was located near MIT. Many computer pioneers from MIT worked at BBN, including Marvin Minsky and Ed Fredkin, time-sharing experts like McCarthy, as well as Licklider. By 1960, Licklider was prototyping time-sharing on a Digital Equipment Corporation PDP-1 at BBN and looking for contracts to sell time-shared access to it. BBN sold a time-sharing system to the Massachusetts General Hospital in 1963 and started a subsidiary, TELCOMP, that offered users in Boston and New York City remote access to a digital computer.[64] The idea proved popular, and by 1967, twenty other companies had begun offering commercial clients time-sharing services.[65]
Today, CTSS and SAGE might be seen as a particular kind of infrastructure, namely media infrastructure. Lisa Parks and Nicole Starosielski, in their edited volume on the subject, define media infrastructure as “situated sociotechnical systems that are designed and configured to support the distribution of audiovisual signal traffic” (italics added).[66] Their emphasis on distribution resonates with the insights of computer networking historian Paul Edwards, who emphasizes flow as a key function of infrastructure.[67] Manuel Castells, whom Edwards cites, defines “flows” in his study of the network society as “the purposeful, repetitive, programmable sequences of exchange and interaction between physically disjointed positions held by social actors in the economic, political and symbolic structures of society.”[68] Flows can be seen as the underlying material conditions that enable communication networks to develop from media infrastructure. SAGE and CTSS exemplify this relationship between infrastructure and network. The budgets of SAGE and SABRE allowed for a much more ambitious but less experimental deployment of real-time computing, whereas CTSS developed a more complicated system to mimic online computing. All three created hardware or infrastructure that could provision a certain kind of flow for networks.
Computers, Networks, and Synchronization
With this history in mind, I would like to elaborate on my definition of a network, which is important to understanding the specific influence of a daemon’s flow control. Where studies of networks often focus on their topologies or spatial properties,[69] I emphasize their chronologies or properties related to time. Whether running on telegraph wires or the internet, communication networks are unique complexes of time or temporalities.[70] The word synchronization combines the Greek syn, meaning “united or connected together,” with khronos, meaning “time.” Crucially, synchronizations are complexes of time (a term borrowed from Gilles Deleuze’s philosophy of time) that include pasts, presents, and futures.[71] The internet, as will be discussed, includes many networks that bring humans and machines together in shared pasts, presents, and futures. These networks range from Peer-to-Peer (P2P) networking to live streaming to the on-demand archives of the World Wide Web.
Networks are productive synchronizations that afford different forms of communication and collaboration (as Licklider suggested). Studies of communication have long considered its influence on time and behavior. Monasteries ringing bells in medieval Europe, according to Lewis Mumford, “helped to give human enterprise the collective beat and rhythm of the machine; for the clock is not merely a means of keeping track of the hours, but of synchronizing the actions of men.”[72] The transmission of a tone by a ringing bell imparted a collective rhythm that coordinated and controlled those within hearing range. Without the sound of a bell, serfs and nobles would fall out of synchronization.
James Carey’s analysis of the telegraph helps elaborate the idea of synchronization. National telegraph networks facilitated the establishment of a communication system effectively distinct from a transportation system.[73] Telegraphy decreased the time delay in sending messages at a distance and facilitated greater regional coordination, cooperation, and control. In effect, economies could be in contact over larger regions. Commodity traders felt the impact of news transmitted by wire when they began receiving the prices of goods in any city before the goods themselves shipped. As a result, price disparities between cities lessened. The telegraph, in other words, synchronized disjointed local markets into a coherent national one. Synchronization in the case of the telegraph entailed a united past, future, and present. As Carey argues, “it was not, then, mere historic accident that the Chicago Commodity Exchange, to this day the principal American futures market, opened in 1848, the same year the telegraph reached that city.”[74] Commodity traders went from profiting by knowing where to buy and sell to profiting by knowing when.
Networks have uneven temporal relations in which some users might exist in the past relative to other users. Certain users in CTSS could have prioritized access to processing time. The SAGE system, tightly controlled, oriented its time around the decision making of the North American Aerospace Defense Command. In the contemporary internet, social media users have very different levels of access. Social media firms sell access to the real-time system (usually called the firehose) and limit users’ ability to participate in the present (through rate limiting or spam filtering). This business model is quite old. The New York Stock Exchange initially delayed telegraph messages by thirty seconds to give trading in the city a competitive advantage; it paid to operate in New York City to receive information live on the floor rather than delayed via the telegraph. The contemporary regime of high-frequency algorithmic trading, as Michael Lewis eloquently describes in his book Flash Boys, involves the production of networks to ensure that certain traders operate in what is essentially the future of other traders.[75]
Sarah Sharma, in a larger review of theoretical approaches to time, argues that temporalities are uneven systems of temporal relations. She explains that “temporal” denotes “lived time” and continues:
The temporal is not a general sense of time in particular to an epoch but a specific experience of time that is structured in particular political and economic contexts. Focusing on the issue of fast or slow pace without a nuanced and complex conception of the temporal does an injustice to the multitude of time-based experiences specific to different populations that live, labor, and sleep under the auspices of global capital.[76]
Certainly, the networks encountered in this book invite discussion about their relation to global capital, but also at work in Sharma’s thinking is a sense of temporality as a system of relations common to humans and, I suggest, machines. Networks require technical labor to function, as will be discussed in the next sections, as well as shared meanings about time. Without getting too far ahead of my argument, temporality on the internet works in a more complex fashion than the temporality of just one network: the internet supports many temporalities at once. As the next section explains, daemonic control allows these infrastructures to share resources among many networks.
“To Do What Is at Present Impossible to Us”: From Demons to Daemons
How did CTSS manage the demands of its multiple users? (A similar problem vexed the early designers of packet switching.) As mentioned above, CTSS relied on the Supervisor program, which managed the overall data flows in the operating system. It remained active at all times, though users rarely interacted with it directly. Instead, the Supervisor managed users’ input and output and scheduled the execution of their jobs. Every job submitted by the user had to go through the Supervisor. Its scheduling algorithm ranked jobs based on their size and time to completion, in effect deciding which jobs finished first and which jobs had to wait. The Supervisor, in short, played a vital role in the time-sharing system: it shared the time. Without its efforts managing the flows of information, the system could crash and the hardware lock up.[77]
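To illustrate the kind of gatekeeping the Supervisor performed, here is a minimal, hypothetical sketch in Python. It is not CTSS code, and the class and method names (Supervisor, submit, run) are my own; the actual Supervisor used a more elaborate multilevel scheme. The sketch shows the basic move: rank waiting jobs by a rough estimate of size and remaining time, grant the single processor to one job at a time, and push long-running jobs back into the queue so that small, interactive requests feel immediate.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                        # lower value = served sooner
    name: str = field(compare=False)
    remaining: int = field(compare=False)  # time units left to run

class Supervisor:
    """Toy scheduler: grants the single processor to one job at a time."""

    def __init__(self, quantum: int = 2):
        self.quantum = quantum
        self.queue: list[Job] = []

    def submit(self, name: str, size: int, length: int) -> None:
        # Larger, longer jobs get a worse priority, echoing the idea of
        # ranking jobs by their size and time to completion.
        heapq.heappush(self.queue, Job(priority=size + length, name=name, remaining=length))

    def run(self) -> None:
        while self.queue:
            job = heapq.heappop(self.queue)
            slice_ = min(self.quantum, job.remaining)
            job.remaining -= slice_
            print(f"{job.name} ran for {slice_} unit(s)")
            if job.remaining > 0:
                # Unfinished jobs rejoin the queue with a worse priority.
                job.priority += 1
                heapq.heappush(self.queue, job)
            else:
                print(f"{job.name} finished")

if __name__ == "__main__":
    supervisor = Supervisor()
    supervisor.submit("small-edit", size=1, length=2)
    supervisor.submit("big-compile", size=5, length=6)
    supervisor.run()
```

Run on these two toy jobs, the small editing task finishes before the long compilation even though both arrived together, which is the trick by which a batch machine could feel concurrent.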
The Supervisor greatly resembles Maxwell’s demon, and it exemplifies the kind of program through which the metaphor is actualized in computing. Where one manages the flows of molecules, the other handles jobs. One works in a closed system, the other in an operating system. Moreover, these similarities are not accidental. Researchers on the project began to refer to programs as demons or daemons in a direct allusion to Maxwell. As Fernando J. Corbató, a prominent computer scientist at Project MAC, explained later on:
Our use of the word daemon was inspired by the Maxwell’s demon of physics and thermodynamics. (My background is Physics.) Maxwell’s demon was an imaginary agent which helped sort molecules of different speeds and worked tirelessly in the background. We fancifully began to use the word daemon to describe background processes which worked tirelessly to perform system chores.[78]
The change in spelling from “demon” to “daemon” was intended to avoid some of its older, religious connotations. No matter the spelling, Maxwell’s demon provided an evocative imaginary of control and order to explain the computationally routine.
Through Project MAC and CTSS, Maxwell’s demon materialized as digital daemons running in computer hardware. The joke became real when the first daemon entered the infrastructure to control tape backup, and the process was known as the Disk And Execution MONitor, or DAEMON. DAEMON shared the infrastructure with other daemons responsible for scheduling time and cleaning up messes made by users. Later on, MIT would invest in the biggest time-sharing project ever, MULTICS, the Multiplexed Information and Computing Service. Although MULTICS struggled to stabilize, it inspired the influential operating system UNIX, which retained the term daemon to refer to the programs running in its background.
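For readers unfamiliar with the term, a present-day daemon still amounts to little more than a loop that wakes up, performs a chore, and goes back to sleep, out of sight of the user. The following sketch is my own illustration (not MIT’s DAEMON or any UNIX source): a background thread periodically “backs up” files while the foreground user carries on.

```python
import threading
import time

def backup_chore(interval_seconds: float) -> None:
    """Background chore: periodically pretend to copy new files to tape."""
    while True:
        print("daemon: checking for files to back up...")
        # (a real daemon would copy files, rotate logs, or schedule jobs here)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    # A daemon thread runs chores in the background without blocking the
    # foreground work; it is abandoned when the main program exits.
    worker = threading.Thread(target=backup_chore, args=(1.0,), daemon=True)
    worker.start()

    # The foreground "user" carries on, largely unaware of the daemon.
    for _ in range(3):
        print("user: doing interactive work")
        time.sleep(2.0)
```

Even the vocabulary persists: Python’s threading module marks such background threads with a daemon flag.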
From running a printer to managing a tape backup, digital systems require daemons to maintain control. Of all these daemons, I am interested in those that manage flows in digital systems (building on my earlier discussion of infrastructure and flow). In doing so, I follow Wiener (as well as many others) by investigating the link between communication and control. His cybernetics (of communication and control) is just one of many definitions of control at work in this book. In this last section, I elaborate on my specific usage of control and its relation to communication. This overview breaks with the historical discussion so far to capture the broad features of digital control in order to contextualize internet daemons.
Control does not so much create constraints as it does influence the conditions of possibility for communication, movement, or action. In infrastructure studies, control works against “the variability inherent in the natural environment.”[79] Before the term’s use in infrastructure studies, control had been defined by Michel Foucault as a productive power: “It produces pleasure, forms knowledge, produces discourse; it is a productive network which runs through the whole social body, and is far more than a negative instance whose function is to punish.”[80] To put it another way, control conceptualizes how constituent power (puissance), the chaos, noise, and potential, becomes constituted power (pouvoir), a discernible infrastructure or assemblage.[81] Control can be considered an immanent power because it is part of the very system that it holds together, necessary for both its constitution and its continued operation.
Communication is a key mechanism of control, so much so that it is difficult to distinguish the two at times. To borrow a term from Raymond Williams, communication “imparts” a shared or common existence for those in contact with each other.[82] Imparting can be seen as a form of control, putting people and machines in contact with one another in a common system of communication. Systems of communication encode means of exchange and coordination and cultivate shared meanings and values. Media studies scholars have highlighted this link between the constitution of a system through communication and the control exerted therein. Indeed, theories of the “hypodermic needle,” “propaganda,” and the “culture industries” (terms drawn from early work in communication studies) were developed in response to anxieties about the power of communication media to control society.[83] Yet, these “direct effects” models overlook the complex operation of communication as a form of control in multiagent systems.
Theories of control in communication, however, can be more specific about mechanisms than the theory of hypodermic needles. Organizational studies,[84] infrastructure studies,[85] and media studies[86] complement one another in their attempts to link specific mechanisms of communication within media, organizations, and social forms. Scholars of organizational communication have detailed many of the mechanisms of communication at work in late-nineteenth-century corporate America. For example, James Beniger details a wide array of mechanisms for advanced information processing (such as card-sorting computers developed by IBM) and reciprocal communication (such as the monthly reports train conductors gave to chief engineers). These mechanisms of control allowed corporations to better monitor and regulate their activity: to know the activity of their agents, the deviations from their purpose, and how these deviations might be corrected. However, JoAnne Yates argues that Beniger’s definition of control is too broad. In her study of the same period, she focuses on the rise of managerial control, an influence “over employees (both workers and other managers), processes and flows of materials” due to mechanisms “through which the operations of an organization are coordinated to achieve desired results.”[87] Yates refines both the agents (employees of a company) and the mechanism of influence (memos, reports, and newsletters).[88] Taken together, Beniger’s and Yates’s accounts reveal the mechanisms of control that allowed disorganized family companies to evolve into modern corporations, another trajectory of control in communication studies.
Mechanisms of control are more suggestive than deterministic and “range from absolute control to the weakest and most probabilistic form, that is, any purposeful influence on behavior, however slight.”[89] They operate through influencing probabilities and likely outcomes, and Maxwell’s demon remains helpful, along with the real history of control, in elaborating this probabilistic influence. Importantly, the demon does not directly move the molecules, but rather increases the probability of their ending up in one chamber or another. Probability constitutes open, creative systems that allow their parts to operate with a certain degree of freedom. Deleuze compared control to a highway system: “People can drive indefinitely and ‘freely’ without being at all confined yet while still being perfectly controlled.”[90] According to Raiford Guins, freedom is like the open road, “a practice produced by control.”[91] In other words, control creates conditions of possibility, rather than constrictions. Modern personal computers, for example, are designed with no single purpose in mind, so users can repurpose them by installing software such as deep web tools like Tor or P2P applications like BitTorrent. Jonathan Zittrain described this openness as generative, since it allows for innovations to be created by users and other sources.[92] Control often succeeds precisely by capturing these innovations—the unpredictable creativity of users—as feedback to help regulate the overall system.
Maxwell’s demon helps pinpoint the particular kind of control to be discussed throughout the rest of this book. The demon works constantly and responds dynamically to molecules’ movements. This kind of control resonates with what Deleuze theorizes as “societies of control, which are in the process of replacing the disciplinary societies,” and are characterized by “modulation, like a self-deforming cast that will continuously change from one moment to the other, or like a sieve whose mesh will transmute from point to point.”[93] Whereas discipline sought to compel subjects to internalize mechanisms of control and become docile bodies, modulation is continually adaptive, embracing difference and change while nonetheless maintaining regularity. Yet, where Maxwell imagined only one demon, I see countless daemons operating in the internet. Flow control, then, may be restated as distributive agency manifested by daemons to modulate the conditions of transmission online.
Control, however, has its limits. Mechanisms of control inherently have many limitations precisely because they operate with great degrees of freedom. William S. Burroughs, who inspired Deleuze to write about control, stresses that “control also needs opposition or acquiescence; otherwise, it ceases to be control.”[94] Constituent power exceeds its constituted form. In fact, this excess can drive the development of control technologies by exposing limitations and revealing new possibilities to be harnessed and regularized. Crises in control necessitate new mechanisms. For example, the rapid expansion of railroads and the ensuing rise in the number of train accidents required new mechanisms, such as high-speed telegraphy, standardized time, scheduling, and routine inspections, to bring the system back in order.[95] The limits of control can derive from constituted power as much as from an excess of constituent power. Constituted power has gaps in the system, or what Alexander Galloway and Eugene Thacker call “exploits,” that may allow momentary, unintended uses.[96] Hackers, for example, look for exploits in computer systems. Insecurities in the Microsoft Windows operating system have facilitated the spread of worms, viruses, and botnets. More recently, Snapchat, a social media photo-sharing application for mobile devices whose original appeal centered on user privacy, grew from three thousand users to 3.4 million in one year only for a security flaw to leak most of its users’ phone numbers.[97]
The outcome of control, I argue, is an optimal state or optimality. In this book, the optimal is not a fixed state, but a metastability (as Wiener suggested). But Wiener was not the only theorist of control to use metastability; Deleuze did too. For the latter, control societies are metastable: “In the societies of control one is never finished with anything—the corporation, the educational system, the armed services being metastable states coexisting in one and the same modulation, like a universal system of deformation” (italics added).[98] “Metastability in a control society” refers to the active organization of change established through the relationships between institutions. The emphasis here is on control as dynamic or modulating, and its overall effect is metastable, maintaining a living system. Elsewhere, Deleuze defines metastability as “a fundamental difference, like a state of dis-symmetry,” but he goes on to say that “it is nonetheless a system insofar as the difference therein is like potential energy, like a difference of potential distributed within certain limits.”[99] Deleuze’s definition of metastability also captures the work of Maxwell’s demon. Gas molecules of different temperatures and speeds surround the demon. The two chambers (the overall system) are in a state of dis-symmetry. The demon controls this dis-symmetry, modulating the system by channeling the molecules’ own energy to create a certain order, a “potential distributed within certain limits.”
Internet daemons include all these aspects of control. Their distributive agency has the difficult task of managing an infrastructure filled with many networks, ranging from real-time to more asynchronous communication, like email. Daemons modulate the conditions of transmission to support these different networks. Their flow control refers to a control over the flow or conditions of transmission in an infrastructure. This productive power enables networking. In doing so, daemons change networks’ temporalities and the relations between networks to create an overall metastability (following Sharma). This metastability is what I call an optimality, a working program of how the network of networks should interact. An optimality influences both the conditions for networks and the overall conditions of communication. Daemons’ constant and tireless work of flow control actualizes these optimalities amid and among the networks of the internet.
Conclusion
Maxwell’s demon has had an unquestionable influence on the history of digital computing, but it has also had a major influence on my own project of daemonic media studies. Maxwell’s demon inspired two important developments that enabled the internet daemon. First, the theory of information prompted a reconsideration of communication that materialized in actual computer infrastructures. Researchers began to frame questions of man–machine symbiosis as communication questions. Digital communication crystallized through this research as new computing infrastructures brought together humans and machines in unique synchronizations, particularly around time-sharing and real-time computing. These synchronizations became a sort of prototype for future networks running on the internet. Second, in tandem with the development of digital communication, demons, or rather daemons, came to actualize Norbert Wiener’s idea of cybernetic control. Where Wiener hoped to find Maxwell’s demon in nature, programmers coded their own versions to manage the routine tasks of their new operating systems. Internet daemons followed from these two developments and became a means to control the complex work of running a digital communication infrastructure.
The problem of digital communication exceeded the research agenda of Project MAC and SAGE. Indeed, the growth and popularity of computers altered the problem of man–computer symbiosis. Rather than simply looking for ways to connect humans and machines, the question turned to interconnecting different computer systems with each other. The local computer systems at MIT became models for research into national, international, and intergalactic computer communication systems. ARPANET, the internet’s progenitor, was an early attempt to interconnect computer infrastructures. The success of computer networking required ARPANET to abstract one step further and create an infrastructure that could accommodate multiple networks. Just as Shannon proposed that computers could control telephone systems, the researchers behind ARPANET proposed computers that could manage digital communications. These computers would host a new generation of internet daemons that would be able to control the flow of information in this new infrastructure, enabling what we now call a network of networks.[100]