4
Pandaemonium
The Internet as Daemons
THE CISCO SERVICE CONTROL ENGINE (SCE) 8000 debuted in 2008. Designed as carrier-grade equipment, the forty-two-kilogram metal box requires only five rack units within the tall metal cages of an internet service provider’s (ISP) infrastructure. When running, it can manage a million concurrent internet users. The processing power of the SCE 8000 enables “stateful awareness” of packet flows, which means it is able to look into the contents of a sequence of packets and contextualize this data within the overall state of the infrastructure. The SCE’s built-in packet inspection detects six hundred known protocols and has adaptive recognition for new Peer-to-Peer (P2P) networks. With stateful awareness, the SCE 8000 is better able to manage bandwidth allocation and assign more or less bandwidth to different networks. But by the time you read this, the SCE 8000 will have reached the end of its life,[1] replaced by an even more powerful piece of equipment.
Internet daemons have come a long way. Loosed from the military-industrial complex, daemons are now the products of hackers, free software developers, telecommunication companies, and the $41 billion networking infrastructure industry. This chapter focuses on the daemons produced by this industry. Two of the biggest players in the market are Cisco Systems, with 56 percent market share, and its nearest competitor, Juniper Networks, with 6 to 8 percent market share.[2] Many daemons discussed below come from the subsidiary Deep Packet Inspection (DPI) industry, which has an estimated value of $741.7 million. Two of that industry’s biggest vendors appear in this chapter: Procera Networks and Sandvine.[3] Other daemons come from an emerging change in infrastructure design known as software-defined networking, which is estimated to be a $326.5 million market that will grow to $4.9 billion by 2020.[4]
Where this book began with a discussion of Oliver Selfridge’s Pandemonium, this chapter describes the internet as its own kind of Pandaemonium. In doing so, I build on the prior analysis of the internet’s diagram to focus on the daemons that have occupied its infrastructure. Pandaemonium encapsulates how daemons enact flow control, working in collaboration across the infrastructure to create, ideally, smooth conditions for networking. To understand this work of daemons, I have divided the chapter into two parts. The chapter begins with a daemonology of the internet, moving from packet inspection to routing, to queuing, to policy management. I begin with a discussion of the daemons on the Interface Message Processor (IMP) as a way to introduce these different functions. The second part examines the internet’s architecture to show the collaboration and conflict between daemons.
This second half of the chapter proceeds by way of a discussion of some present internet technology that practices the second of two competing kinds of optimizations. The first type is “nonsynchronous,” a term I borrow from Donald Davies. A nonsynchronous optimization leaves networks unorganized; it draws on the well-known End-to-End principle (E2E), which stipulates that daemons at the edges of the infrastructure be responsible for the key decisions during transmission. In a bit of a slight, the principle holds that the core daemons should be dumb. The diagram for a nonsynchronous optimization ignores the center, emphasizing the edge daemons who best know the conditions of networking.
It is hard to blame core daemons for conspiring against this principle, but the consequences led to the “net neutrality” debate. As mentioned in the introduction, for instance, unruly P2P daemons such as those in the eMule program prompted ISPs such as Comcast to install new networking computers known as “middleboxes” into their infrastructure. In doing so, Comcast exemplifies a new trend in networking away from nonsynchronous optimization and toward the second kind of optimization, what I call a “polychronous” internet. This optimization stratifies networks into classes and tiers, allocating bandwidth accordingly. In this new regime, internet daemons in the middle of the infrastructure grow more powerful and influential. Through technical filings submitted by Comcast to the Federal Communications Commission (FCC), I analyze the operations of flow control during the ISP’s eMule throttling discussed in the introduction.
Through these two tours of Pandaemonium—the catalogue of daemonic functions and the history of conflicts between users in favor of nonsynchronous optimization and ISPs in favor of polychronous—the chapter analyzes the distributive agency of daemons.
A Daemonology of the Internet
IMPs are a good place to begin the study of internet daemons because the IMPs’ core program might be seen as the first of their kind. After Bolt, Beranek and Newman Inc. (BBN) submitted the first IMP, their research team published a paper in 1970 in the Proceedings of the American Federation of Information Processing Societies describing its design and operation.[5] They explained:
The principal function of the operational program is the processing of packets. This processing includes segmentation of Host messages into packets for routing and transmission, building of headers, receiving, routing and transmitting of store and forward packets, retransmitting of unacknowledged packets, reassembling received packets into messages for transmission to the Host, and generating of [Request for Next Message] and acknowledgements. The program also monitors network status, gathers statistics, and performs on-line testing.[6]
From this rather technical description, an IMP:
- 1. inspected and interpreted packets;
- 2. stored packets in buffers and managed queues;
- 3. learned and selected routes; and
- 4. collected statistics and coordinated with each other to keep the system running.
New daemons handled tasks similar to those of the IMP. Unto the IMP, other internet daemons were born. Their packet inspection begat firewalls and DPI daemons. Their routing algorithms begat internal and external routing daemons. Their buffers begat queuing daemons. And statistics routines begat policy management daemons. Packet inspection, queuing, routing, and policy daemons all modulate flow control. Each one influences the overall conditions of transmission, and thus flow control. Daemons distinguish networks through packet inspection. Conditions of transmission vary depending on routing and queuing. Meanwhile, the interactions between all of these daemons increasingly depend on policy daemons. The internet functions through the delicate orchestration of these daemons.
Packet Inspection
Packet inspection is a daemon’s gaze. Daemons read packets to make decisions about transmission. A packet is constructed according to the layering diagram described in chapter 3. A packet is a bit stream that begins with the lower link layer. After the link layer, the packet contains internet layer metadata like source and destination address. Next, the packet encodes the transport layer that includes port numbers and sometimes actual messages. Finally, deep in the packet stream is the application layer that contains both the message and some metadata to help the local application interpret it.
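To make this layering concrete, consider a minimal sketch in C of how a daemon might locate each layer within a packet’s bit stream. The sketch assumes IPv4 over Ethernet carrying a TCP segment, with no VLAN tags or optional headers; the struct and function names are mine, not those of any particular daemon.

#include <stdint.h>

/* Where each layer begins in a raw frame. */
struct packet_view {
    const uint8_t *link;        /* Ethernet header                   */
    const uint8_t *internet;    /* IPv4: source/destination address  */
    const uint8_t *transport;   /* TCP: port numbers                 */
    const uint8_t *application; /* payload: HTTP, BitTorrent, ...    */
};

static void dissect(const uint8_t *frame, struct packet_view *v)
{
    v->link = frame;
    v->internet = frame + 14;              /* Ethernet header is 14 bytes */
    uint8_t ihl = v->internet[0] & 0x0f;   /* IP header length, in words  */
    v->transport = v->internet + ihl * 4;
    uint8_t off = v->transport[12] >> 4;   /* TCP data offset, in words   */
    v->application = v->transport + off * 4;
}

A shallow gaze stops at v->internet or v->transport; the deep gaze discussed later in this chapter reads into v->application.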
Daemons look at the part of the packet corresponding to their function from the transport layer to the link layer. Consider the daemonic gazes at work when browsing the web. Clicking a link starts a daemonic frenzy. Upper-layer daemons who are part of web browsers send requests in the HyperText Transfer Protocol (HTTP) using the application layer. The server’s daemons interpret these requests and send back HTTP responses. Simultaneously, lower-layer daemons on the home computer and the web server encapsulate HTTP data using the transport and internet layers. Finally, daemons at the link layer handle sending the packets, depending on whether the home computer connects to the internet through an Ethernet cable or wirelessly.
Protocols help specify the format of data at each layer of the packet. To be exact, protocols determine the meaning of each bit. A packet is just a binary stream: ones and zeros. These bits do not have any meaning in and of themselves. Rather, daemons are programmed to implement protocols so that they know the meaning of each bit. Protocols, to recall Thomas Marill’s and Lawrence Roberts’s discussion from chapter 2, have to be agreed upon by all parties. Achieving consensus has meant that protocols, especially critical ones like the internet protocol, are slow to change. All daemons must be reprogrammed to interpret the new protocol. Application-layer protocols notably change more quickly, as daemons at the ends are able to understand new data formats.
Protocols, then, have an important influence on the conduct of daemons, and their distributed nature means that they have widespread implications. Protocols are “political,” as noted by internet governance expert Laura DeNardis:
They control the global flow of information and make decisions that influence access to knowledge, civil liberties online, innovation policy, national economic competitiveness, national security and which technology companies will succeed.[7]
These social topics do not appear much in the early ARPANET technical manuals (though subsequent Requests for Comments [RFCs] actively discussed them),[8] but early design decisions had long-standing consequences. For example, even though BBN developed encryption for the IMP as part of its work with the U.S. intelligence community, the ARPANET protocols did not include it. DeNardis argues that exposure is now a key characteristic of internet protocols. The Edward Snowden leaks revealed the ramifications of unencrypted packets, which eased the global intelligence community’s surveillance of the internet. Another unintended consequence of the early design is the exhaustion of internet addresses (preventing new devices from joining the infrastructure) that resulted from version 4 of the Internet Protocol Suite (TCP/IP) assigning only thirty-two bits to signal a location, thereby creating a theoretical maximum of 4,294,967,296 locations.[9] The internet is currently in transition to a new version of the protocol (version 6) that will simplify the header content and provision longer, 128-bit addresses. These protocol debates cannot be completely summarized here (and, indeed, they offer a different pathway into internet studies than this book), but they have important ramifications for a daemon’s gaze.
Returning to packet inspection, the general trend is that intermediary daemons inspect more of the packet. New forms of inspection allow these daemons to make more insightful decisions. The development of these new daemonic gazes has been driven by demand for better network security, as well as bandwidth management, government surveillance, content regulation, and copyright enforcement.[10] These new gazes include:
- 1. inspecting packet headers for security threats;
- 2. tracking the overall state of networks to remember past activity and anticipate routine behaviors;
- 3. inspecting deep into the packet to read the application layer; and
- 4. situating the packet in a flow of network activity.
Modern packet inspection uses all these gazes at once, but it is useful to address them in order.
Firewalls directly contributed to the development of the first two gazes. These middleboxes required daemons capable of assessing a packet’s probable threat. The first firewall daemons were called “stateful” because they interpreted packets depending on the state of their infrastructure. As one of the first papers about these dynamic packet filters explained, a firewall could inspect “all outgoing TCP [Transmission Control Protocol] and UDP [User Datagram Protocol] packets and only all[ow] incoming packets to the same port.”[11] A firewall’s daemon remembers whether an internal computer sent a message (an outgoing TCP or UDP packet) and permits only incoming traffic that has been requested locally. Conversely, a daemon could detect the arrival of an unsolicited packet, since it would know that no local host had initiated contact.
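A toy version of this logic, written in C with my own illustrative names, shows how little state is needed for the gaze to become stateful:

#include <stdint.h>

#define MAX_FLOWS 1024

/* Remember outbound requests so that only solicited replies pass. */
struct flow { uint32_t remote_addr; uint16_t remote_port, local_port; };
static struct flow table[MAX_FLOWS];
static int nflows;

/* Called when an internal computer sends a TCP or UDP packet. */
void note_outgoing(uint32_t raddr, uint16_t rport, uint16_t lport)
{
    if (nflows < MAX_FLOWS)
        table[nflows++] = (struct flow){ raddr, rport, lport };
}

/* Called for each arriving packet: admit it only if a local host
   initiated the exchange; otherwise treat it as unsolicited. */
int admit_incoming(uint32_t raddr, uint16_t rport, uint16_t lport)
{
    for (int i = 0; i < nflows; i++)
        if (table[i].remote_addr == raddr &&
            table[i].remote_port == rport &&
            table[i].local_port == lport)
            return 1;
    return 0;
}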
A whole industry now tracks global threats against the internet. Arbor Networks, a network security and monitoring firm, runs the Active Threat Level Analysis System (ATLAS) initiative. ATLAS provides real-time threat monitoring by aggregating data generated from more than 330 installations of its equipment by ISPs. The ATLAS website, when accessible, included a small map of the world. Across the globe, dots flickered to indicate detected threats. Below the map, the site listed top attacks, their targets, and sources. ATLAS still functions as a tool to profile risky networks, ports, and domains, though it has become a subscriber service.
ATLAS and programs like it represent a broader imbrication of technology and security oriented around preemption.[12] ATLAS’s insights help daemons decide how to treat packets. Daemons download new profiles that update their gazes. At the same time, ATLAS deterritorializes local daemons’ gazes. Strange activity on one infrastructure becomes part of a global data set. This cloud of known risk reterritorializes in a loop as the daemons constantly update their profiles to nullify threats before their wider actualization. ATLAS also exemplifies a push to extend the daemon’s gaze into the past, in this case with a global log of internet activity.
Most daemons keep detailed activity logs that are used to diagnose threats and to conduct security audits and forensics after attacks. Demand for better data about the past has led to the development of even more sophisticated memory systems. Before going bankrupt in 2015, ISC8 Inc. sold the Cyber NetFalcon appliance, which recorded all the packets that passed through it.[13] The Cyber NetFalcon not only recorded all communications, but its daemons interpreted the entire packet across all layers. Security analysts used the NetFalcon to go back in time by reading these records. Daemons helped too. The appliance’s daemons interpreted packets to correlate activity and store records in structured data for easier analysis in the future.
ISC8 was part of an industry developing technologies for DPI,[14] which refers to when daemons, particularly those on middleboxes, read and analyze all layers of the packet, including the application layer.[15] Some of its biggest vendors include Allot Communications, Blue Coat Systems, and Sandvine.[16] In effect, DPI daemons turn packets into a source of big data, or what Jose van Dijck calls “datafication.”[17] Using all the data from the packet, DPI daemons make probabilistic guesses about the nature of the packet and look for patterns to detect P2P applications or web traffic. The gaze is probabilistic, since it usually includes some margin of error, according to a survey of the industry, and “both false positives and false negatives are unavoidable.”[18] Some appliances inspect the commands embedded in application data to classify the packet’s intent or threat level. For example, the Cisco 4700 Series Application Control Engine (ACE) reads HTTP packets to detect key words in web pages and File Transfer Protocol (FTP) packets to identify commands. The ACE could block, for example, packets requesting *.MP3 files to discourage piracy.[19]
DPI vendors describe their products as a solution to the shifting landscape of internet security, specifically the declining value of port numbers as an accurate way to classify traffic. As Sandvine, a leading manufacturer of DPI equipment, wrote:
DPI is necessary for the identification of traffic today because the historically-used “honour-based” port system of application classification no longer works. Essentially, some application developers have either intentionally or unintentionally designed their applications to obfuscate the identity of the application. Today, DPI technology represents the only effective way to accurately identify different types of applications.[20]
DPI responds to intentional obfuscation or port-spoofing, in which a network self-identifies on unconventional or incorrect ports. Some P2P file-sharing networks, in an effort to avoid detection, send packets on HTTP ports rather than their standard ports (or through virtual private networks [VPNs], as will be discussed in chapter 6). Even when a network mislabels its port, DPI allows a daemon to evaluate the contents of the packet and match it to the correct profile.
Daemons have unintentionally obfuscated networks by using HTTP as a kind of universal transport port.[21] Netflix, Google, and Facebook build their applications to use HTTP ports. For example, Netflix, along with Apple and Microsoft, participates in the Moving Picture Experts Group Committee for Dynamic Adaptive Streaming over HTTP (DASH).[22] DASH delivers video streams over HTTP, which simplifies over-the-top services but confuses older daemons looking to identify networks by port number. Since DPI daemons can read into the application layer, they can distinguish streams in HTTP traffic. Procera Networks, now merged with Sandvine, attracted Netflix’s ire when it inspected data from its ISP clients to detect whether Netflix subscribers had started to watch the new season of the show House of Cards. Using DPI, Procera Networks created a list of the most popular episodes on the streaming service.[23] In reaction, Netflix changed its packets to make it harder for DPI to detect viewing habits.[24]
How DPI works is a dark art usually enshrouded in proprietary code. However, one DPI firm, iPoque, shed some light on the practice by releasing an open-source version of its packet inspection code.[25] OpenDPI version 1.3 classifies 118 different applications, relying on many functions in the source code specific to each application. The code contains 100 separate files dedicated to different applications, including bittorrent.c to classify BitTorrent networks and eDonkey.c to classify eDonkey or eMule networks. The bittorrent.c file includes numerous functions that search for particular patterns in the packet indicating that it is part of a BitTorrent network. A simple function (copied below) compares data in the application layer (the packet->payload variable in the code) to the string “BitTorrent protocol.”
if (packet->payload_packet_len > 20) {
    /* test for match 0x13+"BitTorrent protocol" */
    if (packet->payload[0] == 0x13) {
        if (memcmp(&packet->payload[1], "BitTorrent protocol", 19) == 0) {
            IPQ_LOG_BITTORRENT(IPOQUE_PROTOCOL_BITTORRENT,
                               ipoque_struct, IPQ_LOG_TRACE,
                               "BT: plain BitTorrent protocol detected");
            ipoque_add_connection_as_bittorrent(ipoque_struct,
                                                IPOQUE_PROTOCOL_SAFE_DETECTION,
                                                IPOQUE_PROTOCOL_PLAIN_DETECTION,
                                                IPOQUE_REAL_PROTOCOL);
            return 1;
        }
    }
}
If OpenDPI matches a packet, it triggers an event that logs “BT: plain BitTorrent protocol detected.” Another function detects packets uploading data to a BitTorrent network (also called “seeding”) by matching the packet payload to a known identifier, in this case, “GET /webseed?info_hash=.” OpenDPI also detects specific BitTorrent clients like Azureus and BitComet. The source code includes a simple demonstration program that accepts a series of packets (technically a packet dump) as input and outputs a table listing detected networks.
The use of encrypted services like the VPNs discussed in chapter 6 prompted daemons to find other ways to profile packets. Daemons cannot read the contents of a packet when it is encrypted, so daemons have learned to inspect the sequences of packets, called “flows,” instead. These techniques, also called “deep flow inspection,” entail tracking the tempo of packets, looking for bursts and other signatures of communication that might indicate a probable network.[26] For example, a Skype conversation sends packets at a rate different from that at which a web browser does, and thus can be easily detected.[27]
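A crude flow inspector can be sketched in C. It watches only the arrival times of packets, keeping a running mean and variance of the gaps between them; the thresholds below are illustrative, not drawn from any product.

struct flow_stats { double last_arrival, mean_gap, m2; long gaps; };

/* Record one packet arrival (time in seconds) for a flow. */
void observe(struct flow_stats *f, double now)
{
    if (f->last_arrival > 0) {             /* skip the very first packet  */
        double gap = now - f->last_arrival;
        f->gaps++;
        double delta = gap - f->mean_gap;  /* Welford's running statistics */
        f->mean_gap += delta / f->gaps;
        f->m2 += delta * (gap - f->mean_gap);
    }
    f->last_arrival = now;
}

/* Guess the network from tempo alone: steady, frequent packets
   resemble a call; irregular gaps resemble bulk transfer. */
const char *guess(const struct flow_stats *f)
{
    double variance = f->gaps > 1 ? f->m2 / (f->gaps - 1) : 0;
    if (f->mean_gap < 0.03 && variance < 1e-4)
        return "voice-like flow";
    return "bursty, bulk-transfer-like flow";
}

Encryption hides the payload from memcmp-style matching, but nothing in this sketch reads the payload at all; the tempo of arrival is signature enough.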
Deep flow inspection, however, remains a rule-based system of classification that requires humans to analyze and develop profiles. In coming years, profiling will be automated through machine learning and deep learning. Cybersecurity vendors have already begun to deploy machine learning and artificial intelligence in threat detection. One study found that, through machine learning, daemons could detect BitTorrent networks with 95.3 percent accuracy after observing traffic for about one minute (or two hundred packets).[28]
Companies such as Vectra Networks advertise that they detect threats “using a patent-pending combination of data science, machine learning and behavioral analysis” in real time.[29] Behavioral analysis synthesizes the different forms of packet inspection used by contemporary daemons. The sum of our online communications, encoded as packets, becomes training data for the classifiers with black-boxed algorithms. Daemons once used rules to classify networks by application; now machine learning enables daemons to detect kinds of behaviors that present threats to cybersecurity and to adapt to changing code deployed by new applications.
The daemonic gaze will only widen as the computational capacity of the infrastructure increases. Where IMPs tracked fifty-kilobit telephone lines,[30] Saisei Networks today advertises its FlowCommand technology as capable of “monitoring [five] million concurrent flows on a [ten gigabit] link [twenty] times per second, and evaluating and/or taking action on those flows across policies based on more than [thirty] metrics.”[31] Future daemons will likely use multiple classifiers at once, being able to detect not just the network type but also its behavior and vector.
What can a daemon do with its improved gaze? It can modulate the conditions of transmission. These modulations happen in what Florian Sprenger calls the “micro-decisions” of the internet. “Micro-decisions” refers to the microseconds of computational cycles allocated for a daemon to modulate transmission conditions.[32] Packet inspection allows flow control to be more targeted, a process Christian Sandvig calls “redlining” in his groundbreaking discussion of the link between packet inspection and net neutrality.[33] Daemons influence the conditions of transmission by modulating “jitter” (variation in packet arrival times), reliability (the level of error in transmission), delay or latency (the time it takes to receive a response to a request), and bandwidth (the rate at which an application’s ones and zeros, or bits, pass over a network, usually measured per second, as in ten megabits per second).[34] Daemons intentionally and unintentionally influence these conditions through queuing and routing.[35]
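Some of these conditions are themselves computed on the fly. The Real-time Transport Protocol specification (RFC 3550), for instance, defines a running jitter estimate that a daemon can update with one line of arithmetic per packet; the C sketch below follows that formula, with my own function name.

#include <math.h>

/* RFC 3550-style interarrival jitter: d is the difference between
   the spacing of two packets at the sender and at the receiver.
   The estimate moves one-sixteenth of the way toward each sample. */
double update_jitter(double jitter, double d)
{
    return jitter + (fabs(d) - jitter) / 16.0;
}

A queuing daemon keeping such an estimate per flow can tell a jitter-sensitive voice network from a jitter-tolerant file transfer.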
Before moving to a discussion of routing and then queuing, I must note the privacy implications of packet inspection. The ability of ISPs to learn about their subscribers from packet inspection has prompted investigations by regulators into actual and potential privacy harms.[36] These investigations came about in response to companies and ISPs that used DPI to link packets with advertising profiles, as well as to inject data into packets to ease tracking and corporate communications. Phorm, a defunct advertising company, sought to develop DPI equipment designed to connect internet activity to profiles, prompting privacy investigations in the United Kingdom and the European Union.[37] Packet inspection, more problematically, can lead to packet injection, where a third party modifies packets on the fly. The Canadian ISP Rogers Internet relied on technology from PerfTech to inject notices to its users. Before Rogers Internet discontinued the program, packet injection would modify web pages to warn users they were reaching their bandwidth cap.[38] Verizon Internet injected an additional identifier, often called a “super-cookie,” into HTTP packet headers transmitted on its mobile infrastructure. Web advertisers could pay Verizon for access to demographic and geographic information using this identifier to better target advertisements.[39] These issues remain an area of ongoing concern as DPI and other improvements in packet inspection potentially violate long-standing norms of common carriage.
Routing
“Routing” refers to how daemons decide where to send packets. To make such decisions, daemons require a map of available routes and algorithms to select the best route. Where IMPs simply had to know the statuses of their peers before forwarding a packet, now daemons have to understand their location in the larger internet infrastructure and then decide the best route for their packets. Daemons, however, do not have to map the internet; they just have to learn their domain. “A routing domain,” according to an RFC on gateways, “is a collection of routers which coordinate their routing knowledge.”[40] Domains gather daemons, in other words, to coordinate how and where to send packets, as well as to share information. Most often, a domain is a single infrastructure or part of an infrastructure, and it has a few daemons that act as routers for it. These daemons constantly collaborate to map possible routes both within their own domain and across domains. Since daemons often know multiple routes, algorithms help them pick one for each packet, though how they make that decision varies by algorithm.
How daemons coordinate routing depends largely on different protocols. These protocols have important implications for how daemons transmit packets. The first IMPs used what became known as a “distance-vector” routing protocol. An IMP chose where to send a packet depending on a routing table kept in its memory. The table included an estimate of the minimum delay to every destination on the ARPANET. The daemons factored in “queue lengths and the recent performance of the connecting communication circuit” when calculating the delay.[41] When a packet or message arrived, the daemon consulted its routing table to find a route to the packet’s destination. It used the Bellman–Ford or Ford–Fulkerson algorithms (developed between 1957 and 1962) to calculate the shortest path before the IMP-to-modem routine sent the packet down the best line.[42] Every half second, IMPs sent updated delay estimates to adjacent IMPs. “Each IMP then construct[ed] its own routing table by combining its neighbors’ estimates with its own estimates of the delay to that neighbor.”[43] Every calculation had a recursiveness—interpreting adjacent routing tables then informing adjacent calculations a half second later. The whole system was distributed, because every routing table was the product of interrelated calculations made on IMPs across the ARPANET.
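The heart of this calculation fits in a few lines of C. The sketch below, with my own variable names, merges a neighbor’s advertised delay estimates into the local table in the Bellman–Ford manner; the real IMP program of course differed in its details.

#define NODES 64

/* delay_to[] starts at a very large value for every destination. */
double delay_to[NODES];  /* best known delay to each destination */
int next_hop[NODES];     /* neighbor through which to forward    */

/* Merge a routing update received from one adjacent node. */
void merge_update(int neighbor, double delay_to_neighbor,
                  const double estimate[NODES])
{
    for (int dest = 0; dest < NODES; dest++) {
        double via = delay_to_neighbor + estimate[dest];
        if (via < delay_to[dest]) {      /* Bellman-Ford relaxation */
            delay_to[dest] = via;
            next_hop[dest] = neighbor;
        }
    }
}

Note that a neighbor advertising an all-zero table wins every comparison, a failure mode the ARPANET met in practice, as recounted below.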
Even functional routing protocols introduce delay and congestion, depending on how they coordinate daemons. Distance-vector routing, as a distributed system, cascaded any failure. As John McQuillan, an engineer at BBN responsible for the development of routing, recounted:
In 1971, the IMP at Harvard had a memory error that caused its routing updates to be all zeros. That led all other nodes in the net to conclude that the Harvard IMP was the best route to everywhere, and to send their traffic there. Naturally, the network collapsed.[44]
This malfunction demonstrates one reason that ARPANET sought to replace distance-vector routing: it “reacted quickly to good news but slowly to bad news.”[45] The Harvard IMP’s good news happened to be wrong, causing faulty distributed calculations. Conversely, distance-vector routing could cause network delay and malfunctions because, “if delay increased, the nodes did not use the new information while they still [had] adjacent nodes with old, lower values.”[46] In other words, IMPs were programmed to be optimistic, to hold on to good news even after the arrival of news of delay and trouble. As a result, IMPs introduced congestion by sending packets to the wrong nodes.
Distance-vector routing did not scale well either when routing across complex, heterogeneous infrastructures. The protocol’s replacement, “link-state” routing, arrived in 1978. Link-state routing exemplified a shift from using a universal, homogeneous algorithm (like distance-vector) to more hierarchical, localized algorithms as ARPANET and its successors began to interconnect separate infrastructures. Link-state routing involved minor though significant changes to measurements of delay, signaling, and route calculation, as well as a broader paradigm shift in how daemons conceptualized their network map. In link-state routing, IMPs and other parts of the communication subnet estimated delay over ten-second periods, rather than instantaneously. The longer time period allowed estimates to smooth out noise and increase stability. A node sent updates to every line—a process now called “flooding”—only when it processed an update. As a result, nodes shared this information less frequently. McQuillan reflected: “We had better, more data for the routing algorithm, delivered less often, only when a meaningful change occurred.”[47] Finally, link-state routing changed the route calculation algorithm to a shortest-path-first algorithm designed by Edsger Dijkstra and first published in 1959. The algorithm composed routes in hierarchical trees. Updates changed only the affected branches, not the whole tree. These computational efficiencies made possible a bigger change in routing: “Every node in the network had the entire network ‘map,’ instead of routing tables from adjacent nodes.”[48] Routing calculations became localized instead of distributed. Building on changes to estimates, signaling, and calculation, local nodes calculated their own local map of the infrastructure and network possibilities. Link-state routing went live on ARPANET in late 1978 and early 1979.[49]
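Dijkstra’s shortest-path-first algorithm, the route calculation at the center of link-state routing, can be sketched in C as follows. Each node runs it locally over its own copy of the map; the array representation is mine.

#define NODES 64
#define INF 1e9

/* Compute shortest paths from src over a full link-state map.
   cost[u][v] is the delay of the line from u to v (INF if none). */
void shortest_paths(double cost[NODES][NODES], int src,
                    double dist[NODES], int prev[NODES])
{
    int done[NODES] = {0};
    for (int i = 0; i < NODES; i++) { dist[i] = INF; prev[i] = -1; }
    dist[src] = 0;
    for (int round = 0; round < NODES; round++) {
        int u = -1;                        /* nearest unfinished node */
        for (int i = 0; i < NODES; i++)
            if (!done[i] && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (u < 0 || dist[u] >= INF)
            break;
        done[u] = 1;
        for (int v = 0; v < NODES; v++)    /* relax the lines out of u */
            if (dist[u] + cost[u][v] < dist[v]) {
                dist[v] = dist[u] + cost[u][v];
                prev[v] = u;               /* a branch of the route tree */
            }
    }
}

The prev[] array encodes the hierarchical tree of routes; a changed link cost alters only the affected branches.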
Distance-vector and link-state routing inform today’s routing protocols, specifically the Border Gateway Protocol (BGP) and the Open Shortest Path First protocol (OSPF). BGP, a descendant of Vinton Cerf’s and Robert Kahn’s early gateways, is responsible for communication in the core of the internet between autonomous systems, domains under common administration. BGP daemons use a derivative of the distance-vector algorithm to map and decide routes. BGP includes a few protocols, such as the External Border Gateway Protocol (eBGP), which guides how gateways advertise their routes and coordinate with each other. Cooperation varies, and sometimes an autonomous system will configure a daemon to avoid acting as an intermediary between two other systems.[50] BGP also coordinates within domains through the Internal Border Gateway Protocol (iBGP), blurring the boundaries of domains somewhat. OSPF, a descendant of link-state routing, is now the recommended protocol for internal routing. Daemons share link-status updates with their adjacent nodes. The protocol also allows for domains to designate a central router that coordinates its routing.[51]
Since multiple routing protocols coexist, routing daemons have to decide which protocol to use and when. For example, a gateway might be interconnected through eBGP for networking external to its domain and the OSPF protocol for internal networking. Most daemons then include logics to select the best route and protocol. For example, Cisco ranks protocols through a variable it calls “administrative distance,” and so a Cisco daemon will factor in this value when faced with multiple routes using different protocols. Cisco’s daemons prefer lower values. By default, Cisco gives an internal BGP route a value of 200 and an OSPF route a value of 110. As a result, a daemon will select the OSPF route, with its lower value, instead of the iBGP one.[52]
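The logic reduces to a comparison of two integers. A minimal sketch in C, using Cisco’s default values only for illustration:

struct route { const char *protocol; int admin_distance; };

/* Prefer the route learned through the more trusted protocol,
   that is, the one with the lower administrative distance. */
struct route *prefer(struct route *a, struct route *b)
{
    return (a->admin_distance <= b->admin_distance) ? a : b;
}

/* With Cisco's defaults, prefer() picks OSPF (110) over iBGP (200). */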
The BIRD internet-routing daemon also implements both internal and external routing protocols.[53] At any one time, BIRD might be running a few routines implementing BGP and OSPF, as well as maintaining routing tables. BIRD includes a simple scripting language to help configure how it selects routes. An OSPF configuration, for example, can rank a wired connection higher than a wireless connection so that, when the daemon selects a route from the table, it always selects the wired connection. The same applies for BGP. A BGP configuration might also specify how it will advertise routes and how to pass information to its OSPF routine.
The diversity of routes and routing decisions is an important reminder of routing’s relation to flow control. Effective routing avoids delays by routing across the shortest, most reliable lines or, conversely, introduces delay by selecting a slower route. Routing also alters the composition of a network, changing the links between nodes. As Andrew Tanenbaum explains in one of the leading textbooks on computer networks, “typical policies involve political, security, or economic considerations.” He suggests, for example, that one should “not use the United States to get from British Columbia to Ontario” and that “traffic starting or ending at IBM should not transit Microsoft.”[54] These routing decisions, along with queuing, are the two most important ways daemons modulate transmission conditions.
Queuing
Daemons use queues to decide transmission priority and allocate bandwidth. In computing, “queue” refers to a list stored in a prioritized sequence. Items appearing earlier in a queue are processed sooner than later items. Operations researchers and early computer scientists debated the best algorithms or disciplines to manage queue priorities, as discussed in chapter 2.[55] Davies discussed the round-robin technique of managing queues in time-sharing systems. This technique assigned computing time in equal units, cycling between different processes for the same amount of time so that every process received the same priority.[56] These abstract debates about queue discipline directly influenced the design of the IMP.
Every action on an IMP had a priority. IMPs prioritized routines, interrupts, messages, and packets. Hosts could prioritize messages with a flag in the header, sending these messages to the top of the queue. IMPs also ranked sent packets. Each modem had its own queue that first sent acknowledgments (ACKs), then prioritized messages, and then Requests for Next Messages (RFNMs) and regular packets.[57] These priorities likely did not cause much delay, but they show the roots of queuing in packet transmission.
While the IMP had a lot of moving parts, it did have a few overall priorities, as seen in ACKs and RFNMs being at the front of the transmission queue. IMPs prioritized ACKs and other control messages because early ARPANET researchers preferred an active communication subnet that guaranteed delivery. As ARPANET converted into the internet, this approach gave way to a less active communication subnet developed at the packet-switching infrastructure, CYCLADES. Its design reduced involvement of the communication subnet level and increased responsibility for the host subnet. CYCLADES, as a result, did not ensure the delivery of packets. The approach taken, known as “best efforts,” amounted to a daemon doing “its best to deliver [packets]” but “not provid[ing] any guarantees regarding delays, bandwidth or losses.”[58] Since networks can be overwhelmed, this approach stipulated that packets should be dropped, forcing a node to resend the packets at a more opportune time.[59] “Best efforts,” over time, became a key part of the TCP/IP.
Queue disciplines proliferated even though “best efforts” recommended less involvement of the communication subnet. These disciplines solved queuing problems (which had colorful names like the “diaper-transport problem”[60]) with algorithms that decided how best to send packets down a shared line. Two key queuing algorithms used the metaphor of buckets to describe their logics. The “leaky bucket” algorithm imagines a packet flow as water filling a bucket and leaking out of it through a hole. The bucket acts as a metaphor for a finite packet queue, while the hole represents the average bandwidth. Leaky buckets regulate the intermittent flow of packets by varying queue size (how big a bucket) and average bandwidth (the size of the hole). A queue fills with packets arriving irregularly and holds them until they might be sent at a regular rate. When a bucket overfills, water spills out. When the queue fills, daemons drop packets, signaling congestion.
“Leaky bucket” inspired the “token bucket.” Where the leaky bucket kept packet flow constant (the leak has a fixed size), the token bucket accommodates bursts of packets. A token bucket is filled with tokens at a regular rate until it is full. A packet needs a token to be transmitted. Every packet sent removes a token. The maximum transmission rate, then, corresponds to the size of the bucket. Thus, the algorithms differ in that “the Token Bucket algorithm throws away tokens (i.e., transmission capacity) when the bucket fills up but never discards packets. In contrast, the Leaky Bucket algorithm discards packets when the bucket fills up.”[61] A large burst of packets might be easily accommodated if the token bucket is full, but a leaky bucket would simply start discarding packets.
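A token bucket is only a few lines of arithmetic. In the C sketch below (names mine), tokens accumulate at a steady rate up to the bucket’s capacity, and a packet is sent only by spending tokens; the final branch marks the point where a leaky-bucket variant would instead drop the packet.

struct bucket {
    double tokens;       /* tokens currently available             */
    double capacity;     /* bucket size: the largest allowed burst */
    double rate;         /* token refresh rate: average bandwidth  */
    double last_update;  /* time of the previous packet, seconds   */
};

/* Return 1 if the packet may be transmitted now, 0 otherwise. */
int try_send(struct bucket *b, double now, double packet_bytes)
{
    b->tokens += (now - b->last_update) * b->rate;
    if (b->tokens > b->capacity)
        b->tokens = b->capacity;  /* full bucket: spare capacity is
                                     thrown away, never packets     */
    b->last_update = now;
    if (b->tokens >= packet_bytes) {
        b->tokens -= packet_bytes;
        return 1;                 /* a full bucket admits a burst   */
    }
    return 0;  /* hold the packet in the queue; a leaky bucket with
                  a full queue would discard it instead             */
}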
Daemons often use these buckets to manage traffic through what is called traffic “shaping” or traffic “policing.” Traffic shaping usually works like a leaky bucket, keeping the rate of packet transmission constant. Traffic policing resembles a token bucket, since it attempts to keep an average rate (corresponding to the rate of token refreshing in the bucket) but can accommodate bursts. Both major network equipment manufacturers, Cisco and Juniper, have built-in commands to shape and police traffic as part of their respective operating systems, Cisco’s IOS and Juniper’s JUNOS. These operating systems run on their routers, switches, and other equipment.[62] The shape command in Cisco’s IOS, for example, uses the token-bucket algorithm to limit outbound packets, managing traffic by setting the peak and average bandwidth. By entering “shape average 384000” into the Cisco IOS’s command line, a human administrator programs the internal token bucket in a daemon to have a capacity and refresh rate that averages 384,000 bits per second.
The shape command also integrates with packet inspection to treat networks differently. Class-based traffic shaping in Cisco IOS assigns a greater or lesser number of tokens to different networks. Network engineers manually code networks into classes. Classes might be defined by port numbers or other identifiers from DPI and are grouped together through policy maps. Cisco gives the example of a policy map that aggregates classes into gold, silver, and bronze tiers. These tiers receive more or less bandwidth according to a discipline known as “class-based weighted fair” queuing. Different classes receive a set percentage of the token bucket, or bandwidth. The gold tier receives 50 percent, with 20 percent for silver and 15 percent for bronze. (The example does not explain the allocation of the remaining 15 percent.) This is just one example, and Cisco’s configuration guide includes numerous queue disciplines and traffic-shaping configurations.[63] These queuing configurations demonstrate how flow control easily stratifies networks, ensuring that some receive more bandwidth than others.
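An abbreviated configuration in the style of the Cisco IOS command line suggests how few lines separate classification from allocation. The class definitions here are illustrative, not Cisco’s; its configuration guide documents the exact syntax.

class-map match-any gold
 match protocol rtp
class-map match-any silver
 match protocol http
class-map match-any bronze
 match protocol bittorrent
policy-map tiers
 class gold
  bandwidth percent 50
 class silver
  bandwidth percent 20
 class bronze
  bandwidth percent 15
interface Serial0/0
 service-policy output tiers

Once the policy is applied to an interface, every packet matched to the bronze class competes for a sliver of bandwidth regardless of how congested the gold class is.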
Investment in DPI and other forms of advanced traffic management has also led to the development of techniques to accelerate specific networks. Acceleration programs create transmission conditions known to improve certain networks’ performances. Another Cisco product, its Wide Area Application Services (WAAS), includes code to accelerate seven networks related to specific applications like HTTP, as well as Windows Media video packets. Acceleration varies, but tweaks such as caching some HTTP traffic lead to better web performance. In addition to these specific accelerations, WAAS includes “200 predefined optimization policy rules” to “classify and optimize some of the most common traffic,” according to Cisco Systems.[64] These rules use a combination of different transmission variables like buffer size, “removing redundant information,” and compressing data streams to reduce the length of the message.[65] WAAS applies all three techniques to accelerate the Real Time Streaming Protocol used by Skype and Spotify, whereas, by default, it passes on BitTorrent packets without any acceleration.
Other equipment vendors have also begun selling acceleration equipment aimed at improving the performance of other applications and their networks. Allot Communications sells the VideoClass product to improve online video streaming.[66] OpenWave Mobility partnered with a major European mobile network operator to accelerate video game live-streaming sites like Twitch and YouTube.[67] Where shaping and policing deliberately degrade traffic, acceleration technologies lead to uneven communication by improving transmission conditions for select networks.
The ability of a lone daemon to assign queue priority or accelerate packets means little if other daemons cannot identify and indicate high-priority traffic. Numerous protocols have been developed to communicate priority between daemons. Version 4 of TCP/IP includes eight bits in the packet header to signal priority and other transmission conditions to daemons (known as “type of service”). Of the eight bits, the third bit signals if a packet requires low delay, the fourth bit signals a need for high throughput or bandwidth, and the fifth bit signals whether a packet needs high reliability. Each bit signals the daemon to modulate its flow control. Most routers could read the type-of-service bits, but few daemons enforced these instructions.[68]
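A daemon reads these flags with a few bit operations. In the C sketch below (with my own names), the second byte of the IPv4 header is the type-of-service octet; per RFC 791, three precedence bits come first, followed by the delay, throughput, and reliability flags.

#include <stdint.h>

struct type_of_service {
    int precedence;       /* bits 0-2: priority */
    int low_delay;        /* bit 3              */
    int high_throughput;  /* bit 4              */
    int high_reliability; /* bit 5              */
};

struct type_of_service read_tos(const uint8_t *ipv4_header)
{
    uint8_t tos = ipv4_header[1];  /* second byte of the IPv4 header */
    return (struct type_of_service){
        .precedence = tos >> 5,
        .low_delay = (tos >> 4) & 1,
        .high_throughput = (tos >> 3) & 1,
        .high_reliability = (tos >> 2) & 1,
    };
}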
With the convergence of the internet, great efforts were made to better signal priority for multimedia and other delay-sensitive packets. The Internet Engineering Task Force (IETF), one of the key standards organizations for the internet, invested heavily in providing multimedia services. The research produced a number of RFCs (the means to publicize and to implement new features on the internet). RFC 2205, released in 1997, outlined the Resource reSerVation Protocol (RSVP) as a means for a host to communicate with networks to reserve a path and resources among them. RSVP provided the foundation for the next protocol, Differentiated Services (DiffServ), outlined in RFCs 2474 and 2475.[69] DiffServ “represented an important modification of the traditional internet paradigm” because “the responsibility to maintain flow information is distributed to all nodes along the network.”[70] Using DiffServ, daemons assigned packets to classes according to the type of service specified in their header. Unlike in the Cisco example above, packets included their own priority value.[71] DiffServ classes became a way for network daemons to widen their queue priorities.
Cisco and Juniper developed their own protocol for signaling priority known as Multi-Protocol Label Switching (MPLS).[72] RFC 3031, released in 2001, specified MPLS as a way to label packets entering a part of the internet. The label appears before the IP and TCP data in the bitstream of a packet and includes the class of service among other data for daemons. The label travels with the packet through the infrastructure so that subsequent daemons need only read the MPLS label to decide how to allocate bandwidth.[73]
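The companion RFC 3032 specifies the label’s encoding: a thirty-two-bit word holding a twenty-bit label, three bits of traffic class, a bottom-of-stack flag, and a time-to-live. A core daemon might read it as in this C sketch (struct names mine):

#include <stdint.h>

/* One MPLS label stack entry, per RFC 3032. */
struct mpls_label {
    uint32_t label;         /* 20 bits: which labeled path to follow */
    uint8_t traffic_class;  /* 3 bits: class of service for queuing  */
    uint8_t bottom;         /* 1 bit: last label before the IP data  */
    uint8_t ttl;            /* 8 bits: hop limit, as in IP           */
};

struct mpls_label read_label(uint32_t word)
{
    return (struct mpls_label){
        .label = word >> 12,
        .traffic_class = (word >> 9) & 0x7,
        .bottom = (word >> 8) & 0x1,
        .ttl = word & 0xff,
    };
}

Because the label sits ahead of the IP and TCP data, a core daemon can allocate bandwidth without parsing anything deeper in the packet.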
MPLS works only insofar as daemons agree to abide by its rules, and these pacts work only for a set domain. Where daemons modulate transmission conditions through routing and queuing, they also coordinate themselves through a fourth kind of daemon tasked with policy management.
Policy Management
Policy daemons configure packet inspection, queuing, and routing rules between daemons.[74] They do not necessarily influence transmission conditions directly, but rather coordinate other daemons to set policies that decide how a daemon responds after it inspects a packet. Cisco IOS, for example, includes the POLICY-MAP command to share policies between daemons. Cisco also sells specific products to coordinate policies across domains, usually for enterprises or smaller businesses. Cisco’s WAAS, mentioned earlier in relation to acceleration, includes the AppNav feature to coordinate traffic management across multiple pieces of equipment. Typically, the AppNav Controller (ANC) policy daemon monitors incoming traffic and routes it to subservient nodes. An ANC, in other words, administers a cluster of servers. Depending on its configuration, an ANC tries to balance packet flow to avoid overloading the nodes.
The ANC relies on packet inspection to make decisions, matching packets to certain set classes using the class maps discussed above. Cisco’s policy-based routing, for example, assigns certain routes to certain classes. This might lead to networks receiving more or less bandwidth if a daemon down the line has been assigned to shape or throttle the packet. A policy daemon might simply pass traffic off to a node or try to balance the load on each node. Finally, policies might accelerate traffic by sending it to a specialized daemon. These are just a few examples of policy management meant to demonstrate how some daemons influence their peers.
A new trend known as Software-Defined Networking (SDN) attempts to further consolidate policy management and control. As noted above, SDN is estimated to be a $326.5 million market that will grow to $4.9 billion by 2020.[75] A few ISPs and key players like Google have begun to implement this major new paradigm in network design and management. SDN builds on years of research into internet infrastructure design with the aim of increasing programmability, improving system-wide state awareness, and consolidating control. SDN improves programmability by decoupling daemons from their hardware. Many daemons are hard-coded into purpose-built appliances. Instead, as the name suggests, SDN prefers to use generic hardware and reprogrammable software to carry out functions, exemplifying a trend called “network functions virtualization.” This abstraction runs through the whole infrastructure design. Decisions are consolidated in one overall program referenced by the rest of the software infrastructure. SDN advocates sometimes call its implementation a network operating system because it turns all the pieces of an infrastructure into one centrally administrated machine. OpenFlow, for example, is the best-known open-source implementation of SDN.[76] Juniper advertises its NorthStar Controller as a way to simplify “operations by enabling SDN programmability control points across disparate network elements.”[77] Through its web interface, the controller offers a window into infrastructural activity and ways to modify flow control across multiple devices.
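The division of labor can be caricatured in a few lines of C. In the sketch below (entirely illustrative, and far simpler than OpenFlow’s actual tables), a controller installs match-action rules and the switches only look them up:

#include <stdint.h>

enum action { FORWARD, DROP, ASK_CONTROLLER };

struct rule {
    uint16_t match_port;  /* destination port to match; 0 matches all */
    enum action act;
    int out_line;         /* line to forward on, if act == FORWARD    */
};

static struct rule flow_table[256];
static int nrules;

/* The controller, the consolidated point of decision, programs
   the same table into every switch it manages. */
void install_rule(struct rule r)
{
    if (nrules < 256)
        flow_table[nrules++] = r;
}

/* The switch's fast path only matches and obeys; an unmatched
   packet is referred back to the controller. */
enum action lookup(uint16_t dst_port, int *out_line)
{
    for (int i = 0; i < nrules; i++)
        if (flow_table[i].match_port == 0 ||
            flow_table[i].match_port == dst_port) {
            *out_line = flow_table[i].out_line;
            return flow_table[i].act;
        }
    return ASK_CONTROLLER;
}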
The consolidation of policy management through SDN and other distributed management techniques creates an attractive place for daemons. SDN currently requires a human to configure the infrastructure, but in the future, an autonomous daemon empowered with sophisticated artificial intelligence might constantly monitor and manage networks, their different rhythms and tempos orchestrated by an omnipotent descendant of Selfridge’s “decision demon.” Indeed, a key promise of SDN is to be able to automate traffic management. The NorthStar Controller includes a path optimization feature that inspects the performance of all its nodes. Administrators can click the “Optimize Now” button to automatically reconfigure its daemons to run better.[78]
The Internet as Pandaemonium: Daemons and Optimization
Together, these autonomous daemons orchestrate flow control. Packet inspection daemons profile and contextualize a packet, drawing on their stateful awareness, the programmed characteristics of protocols, and perhaps behavioral analysis. In concert, policy daemons set the goals for daemons to work toward, such as a system free of P2P traffic or congestion. Information from packet-inspection and policy-management daemons informs queuing and routing daemons. With a sense of a packet’s past and its future, daemons modulate bandwidth and priority at the moment of transmission.
This distributive agency seeks to actualize a programmed optimal state. Internet daemons create a metastability in the network of networks—at least until a network administrator or an autonomous policy daemon changes the rules. Every network differs in what it considers minimum transmission conditions, and it is up to daemons to judge this heterarchy. Netflix requires five-megabits-per-second bandwidth for high-definition video, while Xbox gaming requires the same bandwidth and less than 150 milliseconds of ping time. An optimization decides how a part of the internet accommodates these minimums. A daemon might recognize a few bits in a packet as Voice over Internet Protocol (VoIP) and realize that it needs to be prioritized with low delay to avoid degrading an online conversation. Another daemon might ignore the transmission conditions required by a network, as is often the case with P2P networking. This metastability influences both the life and tempo of networks. How does the internet, as discussed by net neutrality expert Barbara van Schewick, accommodate innovation?[79] Should a new network be treated as a new use or an unknown threat? Should an infrastructure accommodate networks, prioritize some networks over others, or block some networks from operating? It is a matter of the very conditions of communication, the imparting of a shared temporality.
There is no one optimization for the internet. There are multiple definitions of the optimal, brought about by the heterarchy of networks and the different versions of packet switching. Two kinds of optimality stand out online: nonsynchronous optimization and polychronous optimization. As mentioned in the introduction of this chapter, these optimizations differ in the daemons they include in the infrastructure, the ways they arrange these daemons, and their shared definition of the optimal. The former largely keeps the networks simple, pushes control to the edges, and prefers to leave the optimal unsettled, while the latter brings in more daemons and draws their flow control toward the center in order to better manage these many networks.
Nonsynchronous Optimization
Nonsynchronous optimization resonates with the ideas that first led to the ARPANET. Donald Davies proposed nonsynchronous communication as a way for a common carrier to accommodate diverse networks, but he left the role of the infrastructure somewhat ambiguous. Should it make decisions about accommodating different networks? The next iteration of nonsynchronous optimization made it much clearer that important decisions about the infrastructure should be left to the ends.
This optimization does not call for internet daemons much beyond the original IMP. The optimization expects daemons simply to make their best efforts to control flows.[80] This sense of best efforts can be found again in the E2E principle. Jerome Saltzer, David Reed, and David Clark formalized the principle in a 1984 article entitled “End-to-End Arguments in System Design.” Core daemons did little more than route packets, certainly nothing as advanced as discussed above. The principle invites comparison to common carriage, since the core infrastructure has limited control (which, in telecommunications, usually grants the carrier limited liability).
Nonsynchronous optimization privileges the ends over the core, just as the E2E principle prioritized the sender and the receiver. The principle holds that correct message delivery “can completely and correctly be implemented only with the knowledge and the help of the application standing at the end points of the communication system.”[81] Only the sender and the receiver can guarantee the accuracy of a message because they alone know its contents. Therefore, control should reside in the endpoints. In this way, the diagram resembled the proposal by Cerf and Kahn mentioned in the previous chapter of this book.
The E2E principle also did not expect an optimal network to be entirely free of error. Consider its approach to voice calls. In the original article, Saltzer, Reed, and Clark thought E2E could accommodate even delay-sensitive communication such as voice with “an unusually strong version of the end-to-end argument.” They reasoned, “if low levels of the communication system try to accomplish bit-perfect communication, they will probably introduce uncontrolled delays in packet delivery.” In short, internet daemons should do less to ensure the proper delivery of packets and let the ends of networks (or users) sort out lapses in communication. Etiquette, not optimization, would solve disruptions. They suggested that, if transmission conditions degraded an online conversation, “the high-level error correction procedure” was for the receiver to ask the sender to repeat themselves.[82] Their advice may seem out of touch to anyone who has suffered through an unreliable VoIP conversation, but it demonstrates the sacrifices expected for this optimization.
Nonsynchronous optimization leaves the metastability of the internet unorganized by any central point. Instead, the internet’s metastability hinges on the interactions between ends, with each deciding how best to manage its own participation in multiple networks. This arrangement leaves a great deal of uncertainty about how networks share infrastructures. The tempo of the network of networks, in other words, is unknown. E2E requires intermediaries that do little more than carry bits between the ends.[83] Authoritative daemons at the ends command internet daemons to ferry the packet mindlessly to its destination.
Much has been written that defends nonsynchronous optimization. Most significantly, internet legal scholars have argued that E2E fosters innovation and user-led development.[84] Since the ends command the bulk of the authority over transmitting messages, they can easily choose to communicate over new kinds of networks. Jonathan Zittrain calls this the “generative web.” He explains: “The end-to-end argument stands for modularity in network design: it allows the network nerds, both protocol designers and ISP implementers, to do their work without giving a thought to network hardware or PC software.” He asserts that aspects of E2E invite “others to overcome the network’s shortcomings, and to continue adding to its uses.”[85] His optimism exemplifies the guiding principle of nonsynchronous optimization: that the infrastructure should accommodate all kinds of networks without being obligated to handle them well.
Polychronous Optimization
In 2002, critical media scholar Dwayne Winseck warned of the “netscapes of power” drawing “intelligence, resources and capabilities back in the network and under the control of those who own them.”[86] A multitude of daemons have made good on Winseck’s warning. They enact polychronous optimizations. I use “poly” in contrast to “non” to denote this optimization’s temporality. The prefix “poly” indicates many. Polychronous optimization works from a belief that a solution exists to manage the knowable, multitudinous networks of the internet. Networks exist within the tempos set by the optimization. The unpredictable “best efforts” approach is replaced by reasonable management. The unsettled metastability of the internet is replaced by a regulated system of service guarantees and data limits. The diagram shifts from the edges to the core, with infrastructures progressively taking on greater management capacities. To handle this greater responsibility, the polychronous optimization installs the new daemons discussed above.
Polychronous optimization is less a matter of network discrimination than it is one of a broader economics of bandwidth, a push to an optimal metastability of the internet. Perhaps the value of polychronous optimization, more than anything else, is that it captures the productive aspect of traffic management. It rejects the unsettled relations of nonsynchronous optimization and its optimism that networks can coexist without oversight. Instead, this metastability is premised on a knowable diversity of networks whose relations can be ordered into an optimal distribution of infrastructural resources. Bandwidth-hungry applications must be managed to preserve the functionality of “well-behaved” applications. Assigning the labels “bandwidth-hungry” and “well-behaved” requires a network capable of making decisions about the value of a packet. A polychronous optimization does not remove or block problematic networks. This optimization does not stop the innovation of new networks, at least not deterministically, but it incorporates them into an economy of bandwidth in which they have lower value and priority. Discrimination might not even be intentional, but rather an externality of accelerating and prioritizing other users and applications.
The Comcast affair discussed next provides a case study of polychronous optimization. As mentioned in the introduction, Comcast’s management of eMule in 2007 led to a detailed disclosure of its infrastructural architecture.[87] The FCC compelled Comcast to disclose its practices of traffic management of P2P networks.[88] These filings offer a compelling guide to understanding a polychronous optimization (its daemons, its diagram, and its definition of the optimal). The filings demonstrate a conflict between two optimizations of the internet. Home users at the ends proliferated P2P networks, while the descendants of IMPs in Comcast’s infrastructure worked to manage and suppress these networks. The introduction of a DPI and traffic management middlebox into Comcast’s infrastructure set off the “net neutrality” debate in the United States and provided a glimpse into a future internet under polychronous optimization.
A Journey through Pandaemonium
This journey begins with activity on home computers. In 2006, the home computer was in the midst of major changes. After Napster, computer piracy grew from an underground phenomenon into a popular activity, inaugurating an era of mass piracy.[89] Piracy (as well as many legitimate uses) relied on a new kind of networking known as “peer-to-peer,” or P2P. “Peers,” in this case, refers to home users. File transmission before P2P relied on a server–client model in which home users connected to a central file-sharing server. P2P connected home users to each other so they could share files, as well as chat or talk. The popular VoIP service Skype, for example, relied on P2P. As the music industry learned after the launch of Napster, P2P networks often lacked traditional gatekeepers, so users could move clandestine activities like home taping and underground file-sharing onto the internet. (In many ways, P2P remediated these networks, as discussed in the previous chapter.)
P2P developed as advocates of free speech on the internet broadly expanded the implications of the E2E principle.[90] Where TCP/IP regarded both clients and servers as ends, since they function as the sender and receiver in any session, P2P tried to cut out the server and focus directly on the client, prioritizing home computers above all. As John Perry Barlow, cofounder of the digital rights group the Electronic Frontier Foundation (EFF), once quipped, “the Internet treats censorship as a malfunction and routes around it,” a comment that Tarleton Gillespie argues shows that “there is a neat discursive fit between the populist political arrangements [Barlow] seeks and the technical design of the network that he believes hands users power.”[91] The ends of the network, proponents like Barlow argued, must be free from the impositions of centralized control. P2P seemed to actualize these desires for an uncensored network of peers, and enthusiasts not only evangelized the concept but also coded P2P networks.[92] For true believers, the closure of Napster and its successors signified the need for a technical project to build a more resilient form of P2P.[93]
P2P users connected to the wider internet through Comcast’s infrastructure. At the time, that infrastructure, seen in Figure 5, included seven major points: the end user, the cable modem, the optical node, the Cable Modem Termination System (CMTS), a local market router, a regional network router, and, finally, the internet network backbone.
These points connected through a mixture of repurposed coaxial cables (once used to deliver television) and fiber optic lines. Home users connected to Comcast’s infrastructure through a shared network of coaxial cable. These local loops connected to an optical node that transferred signals to fiber optical cables connected to the CMTS. The CMTS aggregated multiple optical nodes, sending traffic to higher-level regional routers and eventually to the core internet. At the time, Comcast averaged 275 cable modems per CMTS downstream port and 100 cable modems per upstream port. In total, 14.4 million customers connected to the internet using approximately 3,300 shared CMTS points across Comcast’s entire network.
Comcast had also begun to monitor P2P networks in its infrastructure. For several years before 2007, the company had investigated changes in internet use that might be causing congestion on its lines, another sign that keeping pace with innovation on the internet challenged the infrastructure itself. The company’s research found five problematic P2P file-sharing networks: Ares Galaxy, BitTorrent, eDonkey/eMule, FastTrack, and Gnutella. Each embodied a different version of P2P with its own challenges for Comcast. Gnutella, one of the first P2P networks developed after Napster, attempted to further decentralize the network by treating every node as both a client and a server. All peers were equal, and no central index existed. A search, for example, passed between peers rather than being queried from a central database as in Napster. Ares Galaxy, a fork of Gnutella, brought back some centralization to provide greater reliability. BitTorrent treated all peers as equal and even encouraged users to upload as much as they downloaded, but it also initially relied on some central servers to coordinate peers and locate files.
Figure 5. Network diagram submitted by Comcast to the Federal Communications Commission to explain its P2P traffic management (reproduction).
P2P networks, according to Comcast, caused congestion in its infrastructure. Much of this had to do with the design of internet service over coaxial cable. Along with the rest of the cable industry, Comcast had repurposed its cable network to provide internet service. The Cable Television Laboratories consortium invested heavily in research to develop the Data over Cable Service Interface Specification (DOCSIS).[94] The first version of DOCSIS was certified in 1999, and it guided cable companies as they upgraded their infrastructure to deliver broadband internet. DOCSIS specifies the entire provision of cable broadband internet, including the arrangement of cable modems and the CMTS. The specification requires that all data passed between cable modems and the CMTS be encrypted (using the Baseline Privacy Plus protocol).[95]
Daemons on a home user’s cable modem put DOCSIS into practice, dealing with the messy realities of cables and wires. Daemons encoded digital packets as analog signals and sent them up the coaxial cable. A big part of their job was coordinating how to share the common coaxial cable. The shared wire of cable television, while fine for broadcasting fifty-seven channels, needed greater daemonic supervision to be used for multiple, bidirectional communications. Daemons communicated with the CMTS every two milliseconds to reserve time (or mini slots), upload data, and interpret signals sent downstream.[96] Thus, the cable network is like Selfridge’s Pandemonium: full of daemons screaming at each other to coordinate resources.
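The request-and-grant cycle that DOCSIS prescribes can be sketched in a few lines of code. The simulation below is mine and deliberately simplified, not DOCSIS-conformant; the class names, the slot count, and the round-robin grant policy are illustrative assumptions. It shows only the basic choreography: every two milliseconds, modem daemons request minislots and the CMTS divides the shared upstream among them.

```python
import random

SLOTS_PER_MAP = 8  # upstream minislots per 2 ms cycle (illustrative figure)

class Modem:
    """Toy stand-in for a cable modem's upstream daemon."""
    def __init__(self, name):
        self.name = name
        self.backlog = 0  # packets waiting to go upstream

    def request(self):
        # Each 2 ms cycle, ask the CMTS for enough minislots to drain
        # the backlog (capped at the size of one upstream map).
        return min(self.backlog, SLOTS_PER_MAP)

class CMTS:
    """Toy stand-in for the CMTS granting access to the shared cable."""
    def grant(self, requests):
        # Round-robin grants so no modem monopolizes the shared coax.
        grants = {modem: 0 for modem in requests}
        remaining = SLOTS_PER_MAP
        while remaining and any(requests[m] > grants[m] for m in requests):
            for modem in requests:
                if remaining and grants[modem] < requests[modem]:
                    grants[modem] += 1
                    remaining -= 1
        return grants

modems = [Modem(f"modem-{i}") for i in range(3)]
cmts = CMTS()
for tick in range(3):  # three 2 ms cycles
    for m in modems:
        m.backlog += random.randint(0, 5)  # new packets from the household
    requests = {m: m.request() for m in modems}
    for m, granted in cmts.grant(requests).items():
        m.backlog -= granted
        print(f"t={tick * 2}ms {m.name}: requested {requests[m]}, granted {granted}")
```

Even this toy version conveys why the upstream is a contested, daemon-managed resource: no modem transmits without first being granted a slice of the shared map.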
Cable modems also managed transmission conditions for customers. Comcast (like many ISPs) had already begun to use its cable modems to tier its service. In June 2006, the company sold internet service in tiers that ranged from 4 Mbps download and 384 Kbps upload to 8 Mbps download and 768 Kbps upload (well below the DOCSIS 2.0 maximums of roughly 43 Mbps download and 31 Mbps upload).[97] Comcast does not mention these tiers in its disclosure, but cable modems typically enforced service tiers. When a cable modem boots up and connects to the CMTS, it downloads a boot file that includes instructions for its Simple Network Management Protocol (SNMP) daemon. These instructions match the modem to a customer and configure the SNMP daemon to operate at the bandwidth limits set by the customer’s service tier.[98] These daemons obey a vision of a network, ensuring that their transmission does not exceed the maximum download and upload bandwidth set by the CMTS. Given the responsibility delegated to the cable modem, it should be no surprise there was a healthy interest in cable modem hacking to bypass its security features and reconfigure the SNMP daemon to ignore upload and download limits set by the ISP.[99]
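The tier enforcement that the boot file delegates to the modem can also be sketched. Real DOCSIS boot files are binary TLV records, so the parameter names below are invented for illustration; the tiers, however, are the June 2006 figures cited above.

```python
# Hypothetical boot-file contents for the 8 Mbps / 768 Kbps tier; real
# DOCSIS boot files are binary TLV records, and these names are invented.
BOOT_FILE = {
    "customer_id": "subscriber-001",
    "max_downstream_bps": 8_000_000,
    "max_upstream_bps": 768_000,
}

class SNMPDaemon:
    """Toy daemon that holds the modem to the purchased tier."""
    def __init__(self, boot_file):
        self.down_cap = boot_file["max_downstream_bps"]
        self.up_cap = boot_file["max_upstream_bps"]

    def shape(self, requested_bps, direction):
        # Never transmit faster than the limits set in the boot file.
        cap = self.up_cap if direction == "up" else self.down_cap
        return min(requested_bps, cap)

daemon = SNMPDaemon(BOOT_FILE)
print(daemon.shape(2_000_000, "up"))    # 768000: upload clipped to the tier
print(daemon.shape(2_000_000, "down"))  # 2000000: within the download tier
```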
These idealistic P2P networks, however, put the cable modem in an awkward spot. From its inception, P2P posed a problem for cable internet because it placed greater demand on the scarcest resource, upload capacity. DOCSIS provisioned more bandwidth for download throughput than upload throughput. Comcast had likely upgraded to DOCSIS 2.0 by 2006.[100] DOCSIS 2.0 allowed for a theoretical maximum of 42.88 megabits per second for download, versus 30.72 megabits per second for upload. Cable modems simply could not generate the complex high-frequency modulations needed to use more of the cable’s capacity. The lower frequencies assigned to upstream traffic also meant that it had to contend with greater interference from mobile phone traffic.
P2P developers knew their networks could be a nuisance and had taken measures to protect those networks in hostile environments. Developers had begun to design their networks to avoid detection. By 2006, eMule, the P2P network that provoked the Comcast investigation, had implemented what it called “protocol obfuscation.” As the eMule project explained:
[Protocol Obfuscation is] a feature which causes eMule to obfuscate or “hide” its protocol when communicating with other clients or servers. Without obfuscation, each eMule communication has a given structure which can be easily recognized and identified as an eMule packet by any observer. If this feature is turned on, the whole eMule communication appears like random data on the first look and an automatic identification is no longer easily possible. This helps against situations where the eMule Protocol is unjustly discriminated or even completely blocked from a network by identifying its packets.[101]
Under protocol obfuscation, the packets’ metadata did not inform daemons of the type of traffic, which effectively “hid” the network from the daemons’ gaze and presumably from their traffic shaping. Protocol obfuscation broke with the internet protocols by deliberately evading packet classification. TCP/IP assumed the packet header was accurate, even as the E2E principle discouraged daemons from looking at it. By refusing to be accurate, eMule undermined trust in the suite.
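The trick behind protocol obfuscation is easy to demonstrate. The sketch below uses an RC4-style stream cipher, on which eMule’s obfuscation was based, though its actual handshake and key exchange are more involved and the key derivation here is simplified. The point is visible in the output: the fixed protocol byte that a signature matcher keys on disappears into pseudorandom noise.

```python
import hashlib

def rc4(key: bytes, data: bytes) -> bytes:
    # Textbook RC4 keystream. eMule's obfuscation used RC4 with an
    # MD5-derived key, though its real handshake is more involved.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# An unobfuscated eMule packet opens with a fixed protocol byte (0xE3),
# exactly the kind of constant a DPI signature keys on.
plain = bytes([0xE3]) + b"\x10\x00\x00\x00" + b"OP_HELLO..."
key = hashlib.md5(b"shared-secret").digest()  # key derivation simplified
obfuscated = rc4(key, plain)

print(plain[:1].hex())       # prints 'e3': a recognizable signature
print(obfuscated[:1].hex())  # pseudorandom: nothing left to match
```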
The Sandvine Policy Traffic Switch 8210
Protocol obfuscation was one reason Comcast installed new DPI and traffic management middleboxes in its network beginning in May 2005. Comcast also hoped to better manage P2P networks. In its submission to the FCC, the company made the rare gesture of disclosing the manufacturer and model of its network equipment: a Sandvine Policy Traffic Switch (PTS) 8210. Sandvine, the manufacturer of the device, was a leader in the DPI industry. Its brochure for the PTS 8210 describes an apparent answer to Comcast’s congestion problems:
Subscriber behavior has always been difficult to characterize. What applications are popular? How does usage vary by service? Are third-party services increasing? Sandvine Network Demographics reporting, without impinging on subscriber privacy, provides valuable insights into application behavior and trends.[102]
Sandvine could deliver on these promises because it had programmed (like its competitors) a new set of daemons to inspect and manage networks. Each PTS had a Policy Traffic Switch daemon (PTSd). As Sandvine explained: “PTSd is the daemon that holds or enforces the rules specified for processing the incoming traffic.”[103] The introduction of the Sandvine PTS 8210 marked an important change in the network, a change that Comcast initially did not announce, leaving it to the public to discover. In fact, Comcast did not update its terms of service to disclose its traffic management until January 25, 2008.[104]
The PTSd wielded a much more powerful gaze into the network and thwarted P2P protocol obfuscation. The PTS 8210 “inspected and stored” all packets exchanged between two peers (technically a flow) for the duration of a session. Mislabeling packets no longer worked because the PTS daemon did not rely on the header to identify the packet. Sandvine elaborated: “There is no limit on how deep in the packet or flow the PTS can go,” and its gaze “spans across multiple packets ensuring that TCP-based protocols are identified 100% of the time.”[105] The PTS’s daemons could then identify packets based on both patterns embedded within individual packets (like the OpenDPI code) and patterns in the flow itself, even when it was obfuscated.
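The difference between per-packet and per-flow inspection can be made concrete. The following toy classifier is not Sandvine’s proprietary logic; the signatures and flow keying are illustrative. What it shares with the PTSd is statefulness: payload accumulates per flow, so a signature split across packet boundaries still matches.

```python
from collections import defaultdict

# Toy signatures; real DPI engines carry large proprietary pattern sets.
SIGNATURES = {b"BitTorrent protocol": "BitTorrent", b"\xe3": "eDonkey/eMule"}

class FlowInspector:
    """Buffers payloads per flow so matches can span packet boundaries."""
    def __init__(self):
        self.flows = defaultdict(bytearray)  # 5-tuple -> accumulated bytes

    def inspect(self, five_tuple, payload: bytes):
        buffered = self.flows[five_tuple]
        buffered.extend(payload)  # no limit on how deep in the flow to look
        for pattern, protocol in SIGNATURES.items():
            if pattern in buffered:
                return protocol
        return None

inspector = FlowInspector()
flow = ("10.0.0.5", 51413, "93.184.216.34", 6881, "TCP")
# The BitTorrent handshake string arrives split across two packets;
# a per-packet matcher misses it, a stateful one does not.
print(inspector.inspect(flow, b"\x13BitTor"))        # None
print(inspector.inspect(flow, b"rent protocol..."))  # 'BitTorrent'
```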
Not only could the PTS 8210 observe more of the network; it also included a built-in system of analytics and demographic reporting. (Well aware of privacy concerns, Comcast frequently highlighted in its disclosure that it did not read any content, even though the Sandvine PTSd could likely assemble parts of a message using its flow analysis.) These reports must have appealed to Comcast as it sought to make sense of the network’s performance. The PTS’s brochure promised “over 150 fully-customizable reports” useful for “marketing, operations, security, and support.” The brochure included a few examples of reports demonstrating that the device could track user behaviors such as protocols, bandwidth usage, or total minutes of VoIP conversation. The PTS could also track usage by class of activity, so an ISP could determine the popularity of streaming versus online gaming or the types of attacks taking place on its network.
The PTS 8210 offered numerous responses to a congested state. The device could manipulate packets themselves, altering the type-of-service bits in the header to enable DiffServ during routing. The PTS could also use Sandvine’s FairShare policy management to “allocate equitable network resources during periods of congestion.” What these technological solutions imply is that the network itself should fairly allocate bandwidth. However, in the same brochure, Sandvine also noted that the device could create new service tiers; for example, gamers could buy a package that guaranteed a better in-game experience.[106] The influence of the PTS 8210, then, could be said to modulate between guaranteeing a fair network and further stratifying the internet beyond speed and into different profiles of users.
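Rewriting the type-of-service bits is itself a mundane, standardized operation. As a point of reference, the sketch below marks outgoing packets with the DiffServ “Expedited Forwarding” code point using Python’s socket interface (the address and port are placeholders, and the IP_TOS option is exposed on Linux and similar platforms); a middlebox like the PTS performs the equivalent rewrite on packets in transit.

```python
import socket

# DiffServ "Expedited Forwarding" (DSCP 46) sits in the upper six bits
# of the IP type-of-service byte that DiffServ-aware routers consult.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)  # mark packets
sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5060))  # placeholder address
```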
FairShare was only one of many solutions to P2P congestion. Well before the Comcast affair, Sandvine, in a 2004 report, evaluated seven different solutions to manage P2P traffic and optimize the changing behavior of networks on their infrastructure. These different options describe the various ways daemons could manage P2P and help situate the unique features of the PTS 8210. First, an ISP could just buy more bandwidth, but Sandvine argued (as have ISPs subsequently in regulatory hearings) that “the increased amount of bandwidth actually encourages bandwidth abuses, as the offending subscribers have increased resources to consume.”[107] Instead of adding bandwidth, an ISP could simply subtract or block P2P traffic from their network. The problem was, as Sandvine admitted:
Blocking all P2P traffic is certain to lead to customer dissatisfaction and aggravate customer churn. In fact, some service providers are beginning to tout their high-speed services as “P2P Friendly,” leveraging their P2P position into a powerful marketing tool, capturing the interest—and wallets—of frustrated subscribers.[108]
Blocking was too risky a strategy because it was too overt and could potentially lead to customers’ developing conscious animosity toward their ISP. Too much frustration (an issue discussed in the next chapter) could work too well, leading to a total drop in network traffic as customers quit Comcast’s service. Interestingly, Sandvine also suggested the controversial approach of network caching (also known as “content distribution”), where ISPs store the contents of P2P networks closer to the customer. Caching appeared to be “a workable solution,” but its legal ambiguity exposed the ISP to “a range of serious risks,” and Sandvine warned that caching could result in “a host of legal issues and a mass of bad PR” due to the ambiguous, gray, or illegal contents of many P2P networks[109] (a calculation of risk that reiterates the commonalities between transmission and security). Bandwidth caps, Sandvine suggested, could also limit P2P traffic by introducing economic penalties for users who consume too much.[110] Today caps are almost universal, but at the time, Sandvine warned that caps were a “heavy-handed and imprecise approach to the P2P problem.”[111]
Sandvine preferred to recommend more dynamic, or modulating, solutions to P2P. ISPs could throttle traffic to prevent P2P networks from using too much of the available bandwidth (as was done in Canada) or, even better, manage traffic depending on the current state of the network. Sandvine claimed, as its report turned into an advertisement, that its products were “essentially ‘listening in’ on the P2P conversations” so that they could “step in and facilitate a transfer among local subscribers, rather than allowing the P2P protocol to connect randomly to a client on an external network.”[112] Controversially, Sandvine proposed that the ISP should interfere in the interactions between end daemons to push them to connect based on proximity. This solution, one that proposes a different type of collaboration, rather than antagonism, between P2P daemons and networks, might have solved bandwidth issues had it been adopted by Comcast. Instead, Comcast configured its daemons to interfere in P2P networks by reducing all upstream traffic.
To manage the congestion caused by P2P applications, Comcast installed Sandvine’s PTS 8210 equipment next to every CMTS (though sometimes two CMTSs shared one Sandvine switch). The PTS monitored a copy of the upstream traffic that passed through the CMTS to the active upstream router that led to the general internet.[113] Comcast had installed the PTS 8210 on a duplicate network, or “out of line,” to reduce points of failure. Traffic passed through a splitter—labeled as a “Mirror” in Figure 5—that passed a copy of traffic to the PTS 8210. (In contrast, an “in-line” application installed the PTS in between the CMTS and the active upstream router. By being out of line, the PTS could fail without disrupting the operations of the CMTS.)
Sandvine daemons looked for a few troublesome networks. Comcast’s prior testing had revealed that Ares, BitTorrent, eDonkey, FastTrack, and Gnutella “were generating disproportionate amounts of traffic.”[114] Comcast configured the PTS 8210 to track and count packets related to those applications (likely using proprietary DPI by Sandvine, though it is not mentioned in the report). Sandvine daemons kept track of the overall number of upload sessions generated by each P2P network per CMTS (rather than per user). A “session” referred to a connection established between two peers in a P2P network. BitTorrent, for example, creates swarms where users download and upload parts of a file. Comcast focused on instances of “unidirectional” sessions, when a subscriber only sends information to a peer without receiving data in return, as opposed to what it called “bidirectional” sessions, when two peers exchange data.[115] In BitTorrent, this unidirectional flow was called “seeding.” A user seeded a BitTorrent network when, having completed downloading a file, they left the BitTorrent client running to keep sharing the file with other users. Comcast explained, “the number of simultaneous unidirectional upload sessions of any particular P2P protocol at any given time serves as a useful proxy for determining the level of overall network congestion” (italics added).[116] It is reasonable to assume that Comcast had many ways to detect congestion in its infrastructure, so it is important to note its decision to pick simultaneous unidirectional upload sessions as its proxy.
Each PTS 8210 had a stateful awareness of its part of the Comcast network. Comcast configured the Sandvine PTS 8210 to observe the levels of unidirectional upstream traffic per application. When sessions exceeded a threshold, the device’s daemons intervened. Thresholds differed by application. Through testing, Comcast decided that a CMTS tolerated up to one hundred fifty simultaneous Ares sessions but only eight BitTorrent sessions before intervening. These thresholds derived from estimates of how much bandwidth a session consumed. Comcast set a lower threshold for BitTorrent sessions than for Ares because the former consumed more bandwidth per session than the latter. Thresholds also included a calculation of how well unidirectional sessions functioned as a proxy for overall activity. As Comcast explained, “the BitTorrent protocol more heavily promotes bidirectional uploads as compared to eDonkey, so, while they both may have the same total number of sessions, BitTorrent would have a much higher percentage of bidirectional sessions than eDonkey.”[117] Comcast calculated a ratio of three bidirectional sessions for every one unidirectional session observed for eDonkey. BitTorrent had a ratio of twenty bidirectional sessions for every one unidirectional, so BitTorrent had a lower threshold because each unidirectional session implied a much larger amount of overall activity.
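Rendered as code, the monitoring logic amounts to a per-protocol counter check. The sketch below reconstructs it from the filing’s figures; the function names are mine, while the thresholds (150 Ares, eight BitTorrent) and the three-to-one and twenty-to-one ratios come from the text above.

```python
# Per-protocol thresholds on simultaneous unidirectional upload sessions
# per CMTS, and the bidirectional-to-unidirectional ratios, as reported
# in Comcast's filing.
THRESHOLDS = {"Ares": 150, "BitTorrent": 8}
BIDIRECTIONAL_RATIO = {"eDonkey": 3, "BitTorrent": 20}

def implied_total_sessions(protocol, unidirectional):
    # Unidirectional sessions stand in as a proxy for overall activity.
    return unidirectional * (1 + BIDIRECTIONAL_RATIO.get(protocol, 0))

def should_intervene(protocol, unidirectional_sessions):
    threshold = THRESHOLDS.get(protocol)
    return threshold is not None and unidirectional_sessions > threshold

print(implied_total_sessions("BitTorrent", 8))  # 168 sessions implied
print(should_intervene("BitTorrent", 9))        # True: the daemons act
print(should_intervene("Ares", 9))              # False: well under 150
```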
Comcast did not elect to use the PTS 8210 to create new service tiers, nor did it use FairShare to manage bandwidth. Exceeding the threshold caused the PTSd to try to diminish upstream traffic on its domain. As the PTS 8210 was “out of line,” its daemons could not interact directly with the packets passing through the Comcast network. This fact limited the daemons’ grasp, since they could not reduce bandwidth or drop packets from the network. Instead, the daemons injected reset packets into downstream traffic. Reset packets are conventionally sent from receiver to sender to inform the sender that some error requires communication to be restarted. By injecting reset packets, the PTS 8210 caused daemons on the home computer to think the session had ended in error and thus to close the connection. Comcast used the technique to “delay unidirectional uploads for that particular P2P protocol in the geographic area.”[118] The PTS 8210 continued to inject reset packets until unidirectional sessions fell back below the threshold, that is, until its proxy for congestion returned to an acceptable level. Importantly, the technique broke TCP conventions by having a middlebox intervene in the control messages sent between two peers on the network, a violation of the E2E principle.[119]
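The mechanics of the intervention can be illustrated with the Scapy packet library, though what follows is a schematic of the general technique of reset injection, not Sandvine’s implementation; the addresses, ports, and sequence number are placeholders, and sending raw packets requires administrative privileges.

```python
from scapy.all import IP, TCP, send

# Forge a reset as though it came from the remote peer. The addresses,
# ports, and sequence number are placeholders; the spoofed source is the
# whole trick, since the home computer's TCP daemon cannot tell this
# reset from a genuine one and dutifully closes the P2P session.
rst = IP(src="198.51.100.7", dst="10.1.2.3") / TCP(
    sport=6881,   # the remote peer's P2P port
    dport=51413,  # the subscriber's port
    flags="R",    # the reset flag
    seq=1000,     # must fall within the receiver's window to be honored
)
send(rst)  # requires raw-socket (administrator) privileges
```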
Gauging the effects of the throttling on users is difficult. The court cases mostly focused on Comcast’s false advertising. According to a class action suit eventually filed against the company, Comcast had “(1) slowed, delayed or otherwise impeded peer-to-peer (P2P) transmission sent using its high-speed Internet service (HSIS) (even though it advertised ‘unfettered’ access) and (2) failed to disclose this practice to its subscribers.”[120] Possible effects varied depending on the network. The EFF, in its response to the case, suggested that packet injection adversely affected BitTorrent’s and Gnutella’s networks. Reset packets “impair[ed] the node’s ability to discover and establish proper communications with other parts of the Gnutella network.”[121] The EFF also suggested that traffic management delayed Lotus Notes and Windows Remote Desktop networks. Comcast’s filings do not mention these targets, so it is possible that the reports were inaccurate or that the PTSd accidentally targeted these networks. If the latter explanation is true, it is an important reminder that a daemon’s gaze is probabilistic: daemons sometimes misclassify packets.
Comcast’s case is a specific example of the use of flow control against certain networks. Tim Wu calls this “broadband discrimination” in his original article on net neutrality.[122] The concept of flow control helps clarify this discrimination. Daemons discriminate by degrading transmission conditions for certain networks, in this case P2P, intentionally providing different conditions of transmission than what the network considers best. Faced with a fiber-coaxial infrastructure struggling to provide sufficient upload capacity, Comcast decided to selectively desynchronize the coordination of P2P networks, frustrating its users, to ensure the success of other networks. Amid growing adoption of P2P (not an unforeseeable change, given how the E2E principle championed end users), the infrastructure changed how it transmitted networks, forcing P2P to suffer so that others could succeed.
2008: User-Centric Traffic Management
The Comcast case is not the last example of polychronous optimization. New polychronous optimizations have arisen even after net neutrality legislation; some claim to support the principle. Attention to daemons helps track these polychronous optimizations. Increasingly, daemons have turned their gaze to problem users. To be clear, this approach still manages networks, but only select parts: the nodes. Comcast modified its strategy in reaction to the public, legal, and regulatory response to its network management. The company made a number of changes in its infrastructure with the goal of targeting certain users. In other words, in response to concerns that its traffic management techniques discriminated against P2P, Comcast shifted focus to home users who use more than their “fair share” of bandwidth.
Comcast’s new traffic management policy, diagrammed in Figure 6, reconfigured its cable modem and its infrastructure. Comcast installed three servers further upstream than the CMTS, near its regional network routers, although “the exact locations of various servers ha[d] not been finalized.”[123] Proximity to the regional network routers meant that these servers managed more than one CMTS at a time, serving an even wider geographic area. Comcast planned to install three kinds of servers to manage its users:
1. Sandvine Congestion Management FairShare servers, designed to detect when a CMTS port was congested, similar to the way the PTSd had monitored for congestion;
2. PacketCable Multimedia servers, manufactured by Camiant Technologies and configured to manage the cable modems of Comcast customers;
3. Internet Detailed Record Collector servers, focused on monitoring the data sent and received by Comcast’s customers (Comcast had not selected a vendor for these when it submitted its explanation to the FCC).
These servers enforced a two-step threshold for traffic management. Daemons on the Sandvine server monitored each CMTS port for congestion over fifteen-minute intervals. Based on lab tests, technical trials, and other simulations, Comcast set a first threshold at the CMTS level. Daemons classified a CMTS line as being in a near-congestion state “if an average of more than 70 percent of a port’s upstream bandwidth capacity and more than 80 percent of a port’s downstream bandwidth capacity is utilized” over the fifteen-minute period, and daemons responded if a line in the CMTS passed this threshold.[124] Sandvine daemons then queried the Internet Detailed Record Collector servers for cable modems using more than 70 percent of their provisioned upstream or downstream bandwidth in that fifteen-minute period. If the search returned no results, the daemons did nothing. If it returned a list of customers using a large share of their provisioned bandwidth, the daemons’ traffic management targeted those customers. In other words, if a customer bought an 8-Mbps-down / 1-Mbps-up internet service package, they would be flagged if they used on average more than 5.6 Mbps down or 0.7 Mbps up in a fifteen-minute window. These two thresholds triggered Comcast’s new congestion management techniques.
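Rendered as code, the two-step check reads roughly as follows. This is a reconstruction from the filing; the function and field names are mine, and the worked example reproduces the 8-Mbps-down / 1-Mbps-up case from above.

```python
def port_near_congestion(avg_up_util, avg_down_util):
    # Step one: is the CMTS port near congestion over the fifteen-minute
    # window? (Utilization figures are fractions of port capacity.)
    return avg_up_util > 0.70 and avg_down_util > 0.80

def heavy_users(usage_bps, provisioned_bps):
    # Step two: which modems used more than 70 percent of their own
    # provisioned upstream or downstream bandwidth in the same window?
    flagged = []
    for customer, (up, down) in usage_bps.items():
        up_cap, down_cap = provisioned_bps[customer]
        if up > 0.70 * up_cap or down > 0.70 * down_cap:
            flagged.append(customer)
    return flagged

# The 8-Mbps-down / 1-Mbps-up tier is flagged above 5.6 Mbps down or
# 0.7 Mbps up on average; this subscriber averaged 0.75 Mbps up.
provisioned = {"subscriber-001": (1_000_000, 8_000_000)}  # (up, down)
usage = {"subscriber-001": (750_000, 2_000_000)}
if port_near_congestion(0.72, 0.83):
    print(heavy_users(usage, provisioned))  # ['subscriber-001']
```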
Figure 6. Network diagram submitted by Comcast to Federal Communications Commission to explain its user-centric traffic management (reproduction).
Comcast daemons managed perceived congestion by introducing a new labeling scheme that prioritized all packets sent and received by cable modems. Comcast updated all the boot files of cable modems to flag packets as either Priority Best Efforts (PBE) or Best Efforts (BE). By default, a cable modem sent and received all packets as PBE. All packets had, in other words, the same status unless a CMTS entered a near-congestion state. Any cable modem identified in an extended high-consumption state had its packets set to BE rather than PBE. Daemons at the CMTS prioritized PBE over BE when they sent bursts of packets up or down the shared lines. Comcast explained:
A rough analogy would be to buses that empty and fill up at incredibly fast speeds. As empty buses arrive at the figurative “bus stop”—every two milliseconds in this case—they fill up with as many packets as are waiting for “seats” on the bus, to the limits of the bus’ capacity. During non-congested periods, the bus will usually have several empty seats, but, during congested periods, the bus will fill up and packets will have to wait for the next bus. It is in the congested periods that BE packets will be affected. If there is no congestion, packets from a user in a BE state should have little trouble getting on the bus when they arrive at the bus stop. If, on the other hand, there is congestion in a particular instance, the bus may become filled by packets in a PBE state before any BE packets can get on. In that situation, the BE packets would have to wait for the next bus that is not filled by PBE packets. In reality, this all takes place in two-millisecond increments, so even if the packets miss 50 “buses,” the delay only will be about one-tenth of a second.[125]
A missed bus might not be a big inconvenience, but a delay of one-tenth of a second (one hundred milliseconds) was enough to exceed the minimum requirements of networks like Xbox gaming. (This bus analogy takes on new meaning in the next chapter, as Comcast also uses a bus analogy in an ad campaign.) Comcast, at the end of its filings, promised to implement this new system by December 31, 2008. It is still running today, as far as I can tell.
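Stripped of the analogy, the bus is a strict-priority scheduler on a two-millisecond clock. A minimal sketch, with an illustrative capacity of ten packet “seats” per bus:

```python
from collections import deque

BUS_CAPACITY = 10  # packet "seats" per 2 ms opportunity (illustrative)

def fill_bus(pbe_queue, be_queue):
    # Strict priority: PBE packets board first; BE packets take only the
    # leftover seats. Under congestion, BE waits for the next bus.
    bus = []
    while len(bus) < BUS_CAPACITY and pbe_queue:
        bus.append(pbe_queue.popleft())
    while len(bus) < BUS_CAPACITY and be_queue:
        bus.append(be_queue.popleft())
    return bus

pbe = deque(f"PBE-{i}" for i in range(12))  # congested: more PBE than seats
be = deque(f"BE-{i}" for i in range(3))
print(fill_bus(pbe, be))  # ten PBE packets; BE misses this bus entirely
print(fill_bus(pbe, be))  # the last two PBE packets, then the BE packets
```

Under load, BE packets never preempt PBE packets; they simply wait for a bus with empty seats, which is exactly the deferral Comcast described.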
Today, user-centric management has been positioned as a polychronous optimization that respects net neutrality regulations. Saisei, another player in the traffic-management industry, advertises its own user-centric traffic management product called “FlowCommand” as “the world’s first ‘Net Neutrality’ enforcer.” FlowCommand can “monitor and control every flow on an Internet Service Provider’s broadband links—millions of concurrent data, voice and video sessions—in real time without impacting the performance of the network.”[126] Much like Comcast, Saisei sidesteps accusations of meddling with networks by focusing on “rogue users.” The problem, as the technical support joke goes, is between the chair and the keyboard. An administrator can tame rogue users:
The “Host Equalization” tick box on the FlowCommand User Interface immediately implements this “policy,” giving every host—user—on a link exactly the same percentage of the available bandwidth that every other user has, regardless of what application(s) they may be running. So, aggressive applications, including P2P apps like BitTorrent or high volumes of YouTube traffic, that used to grab huge amounts of link bandwidth, will get the same percentage of a link’s bandwidth as every other user on the network if that link approaches congestion.[127]
The daemon, in effect, becomes responsible for preserving net neutrality by consolidating the networks of every user into distinct, equitably provisioned flows. FlowCommand’s superior management also allows infrastructures to eliminate the need for spare, emergency capacity. Comcast set the threshold for near-congestion at 70 to 80 percent, but FlowCommand’s host equalization allows links to run “close to [100 percent] utilization without ever stalling a session,” so that “there is far more bandwidth available for all.”[128] Far from being just an instrument of optimization, the daemons are the optimal way of managing the internet: they are better than the humans at Comcast at making decisions about what constitutes congestion.
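Computationally, “host equalization” is a per-host, rather than per-flow or per-application, division of the link. The toy function below captures the tick box’s stated policy; Saisei’s actual implementation is proprietary, the numbers are illustrative, and a real scheduler would also redistribute shares that light users leave unused.

```python
def equalize(link_capacity_bps, demand_by_host):
    # Every host is capped at the same share of the link, no matter how
    # many flows or how aggressive an application it runs; hosts wanting
    # less than their share are unaffected.
    share = link_capacity_bps / len(demand_by_host)
    return {host: min(demand, share) for host, demand in demand_by_host.items()}

# A BitTorrent box demanding 80 Mbps is held to the same third of a
# 100 Mbps link as everyone else; lighter users get all they asked for.
print(equalize(100e6, {"torrent-box": 80e6, "streamer": 25e6, "voip": 1e6}))
# {'torrent-box': 33333333.33..., 'streamer': 25000000.0, 'voip': 1000000.0}
```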
Saisei’s net neutrality enforcer raises important questions about the limits of the idea. If regulators were to make a pact with the devil, so to speak, they could ensure complete equity among users through pervasive optimization. Doing so goes well beyond the ideals of nonsynchronous optimization that seem to have informed net neutrality. Network equality could indeed be a more radical optimality than neutrality, one that sets the creation of common, equal conditions of transmission for all as its ideal.[129] Given the netscapes of power described by Winseck above, such a future is unlikely. Yet the promise of daemonic optimization looms large on the internet, as it does in other parts of society.
2018: Future Optimizations
Future optimizations might not require any human oversight. New daemons promise to manage the network themselves. Aria Networks describes itself as “a provider of Artificial Intelligence (AI) driven planning and optimization software for networks”[130] and promises to create a self-optimizing network in which “the ultimate vision is a network that can respond to fluctuating demand in real time and deliver service levels immediately.”[131] The prospect of a next generation of daemons automatically optimizing the internet raises questions akin to legal theorist Frank Pasquale’s concerns over a black box society. In his investigations of financial credit systems and online data brokers, Pasquale questions the forms of social control manifest through opaque technical systems.[132] Like proponents of net neutrality, Pasquale worries that “the values and prerogatives that the encoded rules enact are hidden within black boxes.”[133] In the case of the internet, daemons are part of the black box, now operating in proprietary equipment within the depths of the infrastructure. Black boxes might operate with novel computational logics, but it is just as likely that optimization will reassert the logics of capital. Both Winseck and Pasquale draw a close parallel between optimization and capital. Pasquale writes: “Power over money and new media rapidly concentrates in a handful of private companies.”[134] Further research must trace this link to explore the ways the political economy of the internet drives daemonic development and programs the optimal.
Where future research should question who programs the next optimization, I wish to reflect on the optimism of autonomous daemonic optimization. Critiques of big data and algorithms have clearly demonstrated the capacity of automated computational systems to discriminate,[135] but software and algorithms endure as institutional solutions to human bias.[136] Why was the disclosure that Facebook used humans to manage its “news feed” a scandal?[137] Should not the clear biases of its algorithms be subject to the same scrutiny as human bias? (But it should be a scandal, since the leak demonstrated the glaring lack of public oversight over these new media empires.) These same debates over automation may well come to internet management, if they are not already here. Canadian and American net neutrality regulations allow for reasonable network management while preventing discrimination. What values and prerogatives will be drawn into the network through this exception? Will this loophole be tolerated because daemons will be able to better “solve” the internet than humans?
Daemons, or at least their autonomous successors, might manage the internet better, but there are risks in that optimism. Louise Amoore, in her book on the politics of algorithmic regulation, warns about the loss of enchantment. Drawing on the work of Jane Bennett, Amoore writes, “for Bennett, enchantment ‘can propel ethics,’ at least in the sense that the magic of future potential, the promise of a life not yet lived, remains open.”[138] The same might be said of the internet’s metastability. Perhaps an enchanting internet is worth the risk of suboptimality. Amoore warns that systems like self-optimizing daemons might “actively annul the potential for the unanticipated,” and instead she ponders what it means “to live with the unknowability of the future, even where it may contain dangers or risks.”[139] The stakes of internet optimization, to be fair, are different from Amoore’s interest in the security state, but they are not marginal. The internet is quickly becoming the de facto global communication system, if it has not already. Polychronous optimization promises a metastability for all these networks, as if pure immanence could be solved by code.
A future where autonomous policy daemons automatically optimize the internet risks depoliticizing their influence. Amoore warns of the political consequences of this automation, writing, “if the decision were to be rendered easy, automated or preprogrammed, then not only would the decision be effaced, but politics itself is circumscribed.”[140] Her words echo in the promises of Saisei Networks, whose FlowCommand makes optimal network management easy. The easy solution effaces its hidden values and politics. Amoore herself calls for a politics of possibility against these technical solutions. She writes that “the question for critique becomes how to sustain potentiality, how to keep open the indeterminate, the unexpected place and the unknowable subject.”[141]
Perhaps what needs to be politicized is the optimism of the technological fix. Peter Galison, writing on cybernetics, comments that “perhaps disorganization, noise, and uncontrollability are not the greatest disasters to befall us. Perhaps our calamities are built largely from our efforts at superorganization, silence, and control.”[142] Nonsynchronous optimization accepts (at the expense of performance) that a network of networks can never be adequately provisioned and that its control will always be partial. Perhaps it is a more apt foundation for the network of networks in that it begins with an admission of limits. Nonsynchronous optimization has a sense of a diversity that cannot be fully known or solved; it embraces, to recall the words of Bennett, the internet as a “volatile mixture.”[143]
Conclusion
The internet as Pandaemonium stretches from the microscale of daemons to the macroscales of internet governance and its political economy. In Pandaemonium, daemons enact flow control, working together across the infrastructure. Daemons collaborate to create flows for networks, but their collaborations differ. Nonsynchronous optimizations require daemons at the edges of the infrastructure to be responsible for key decisions during transmission. The center of the infrastructure is left unsettled without an attempt to create some internal metastability. Unruly P2P daemons have prompted internet service providers like Comcast to install new networking computers in their infrastructure. Comcast’s decision exemplifies a new trend in networking, away from nonsynchronous communication and toward a polychronous internet.
With only so much space in the pipe, ISPs have invested in more sophisticated daemons able to prioritize larger volumes of traffic and “to ensure that P2P file sharing applications on the Internet do not impair the quality and value of [their] services.”[144] More and more, ISPs leverage their flow control as a technological fix to attain a network optimality of managed temporalities. This polychronous optimization produces and assigns various temporalities that have comparative values. Like prime-time television, certain time slots have more value than others. However, the times of a tiered internet have less to do with the hour of the day than with the relations between times. File sharing is assigned less priority, and its forces of coordination and exchange cease to operate optimally. Polychronicity is driven by a profound new ability of the infrastructure to remake itself, always in service of the optimal.
These changing daemons illuminate the difficult relationship between net neutrality and the internet infrastructure industry. Regulation generally focuses on the ISPs without paying much attention to the industry developing the equipment that violates net neutrality. While these daemons have many legitimate uses in enterprise and private infrastructures, they become the virtualities of ISP infrastructures, actual features not yet implemented. Regulation stops an ISP from enabling these features, but it does not stop the industry from developing them in the first place. Instead, these daemons become a source of what ISPs have described as service delivery innovation.[145] Daemons wait to be the next innovation.
To be effective, net neutrality regulation has to track this industry and encourage the development of daemons that abide by its rules. Since these daemons have both many legitimate applications and configurations that violate neutrality, their movements have to be tracked. The Citizen Lab, by comparison, has demonstrated how network censorship equipment often travels into authoritarian and repressive regimes.[146] Where the Citizen Lab has called for greater export controls on these technologies, domestic regulatory agencies should also have a greater understanding of the equipment installed in public internet infrastructures.
Polychronous optimizations will continue to be a policy issue for years to come as they guide the design of new infrastructures. The transition to mobile, for example, has given the telecommunications industry an opportunity to rebuild infrastructure to better enact polychronous optimization. The Third Generation Partnership Project (3GPP) is an industry-initiated standards organization creating protocols and guidelines for mobile infrastructure. The group comprises regional telecommunications standards organizations from China, Europe, India, Japan, Korea, and the United States and is responsible for standardizing mobile wireless protocols such as Long-Term Evolution (LTE) and High-Speed Packet Access (HSPA).[147] These standards deal with the physical or, more accurately, spectrum issues necessary for packet-switched communications. The organization also provides guidelines for mobile internet infrastructures, including the Policy and Charging Control (PCC) architecture. Started in 2005, the PCC standardized quality of service and the ways in which 3GPP members levy usage-based charges. As members implement PCC, they install new daemons to apply its charging mechanisms and maintain its quality standards. DPI manufacturers like Procera Networks sell products designed to implement these features.[148] These standards allow daemons to attach new metadata to messages, such as subscriber categories and service guarantees, to striate transmission conditions. The global scale of the 3GPP means that its complex logic of optimization extends far beyond any one infrastructure. Instead, it aims to establish these logics across infrastructures. TCP/IP, by contrast, had difficulty enforcing use of its type-of-service flag in the header.
If humans seem too absent from the discussion above, the next chapter moves from the daemons themselves to feelings about them. What is the experience of having your communications delayed? How do we suffer from buffering? The next chapter describes these feelings through an analysis of five commercials by network providers that depict the various feelings imparted by flow control: frustration, isolation, exclusion, envy, and boredom. Comcast, for example, advertises to people riding the bus that it “owns faster,” suggesting that people can pay to enjoy faster internet just as they might pay for a car to avoid public transit. Moving beyond technical solutions, these marketing pleas demonstrate how ISPs describe an optimal internet and attempt to valorize the experience of priority and of avoiding frustrating delays.