
Appendix

Internet Measurement and Mediators

The field of internet measurement offers rich resources for finding new mediators for publics and regulators. Internet measurement is a research agenda in computer science: the development and use of software for analyzing the operation of computer networks, including the internet. As much as the internet can be taken for granted, these tools often reveal that it performs in unexpected ways. And not just for researchers: most internet measurement tools are publicly available, meaning that anyone can run them and use their results. In this appendix, I describe the field in more detail for readers unfamiliar with these approaches from computer science.

Internet measurement is as old as the Advanced Research Projects Agency’s ARPANET. Interface Message Processors (IMPs) could set a “trace” bit in packets, an early form of what today might be called a “traceroute.” Trace bits helped ARPANET researchers map how IMPs routed packets. IMPs were required to handle packets carrying trace bits distinctly, logging a report that detailed how each packet was handled. Collecting these reports allowed ARPANET researchers to understand how packets traveled across the experimental system. Data was aggregated at the Network Measurement Center (NMC) run by Leonard Kleinrock at the University of California, Los Angeles. The center was the first node in ARPANET; it collected IMP data and used the Network Measurement Program to “format and print out the measurement statistics generated by the IMPs.”[1]

These early internet measurements illustrate an important lesson: studying the operation of ARPANET meant analyzing both running code and written code.[2] There is an allure in thinking of written code as the constitution of “cyberspace,” a metaphor encouraged by the work of Lawrence Lessig,[3] but ARPANET had to be understood through observation. Even in the early ARPANET proposal from 1968, researchers cited a need to observe the network’s operation, although they had access to all its design documents and ran simulations of its performance. As a simulation made real, ARPANET had to run before its workings could be understood. Through the NMC, ARPANET researchers discovered lockups, points of congestion, and other errors in IMP programs that the source code did not predict.

The NMC was one way ARPANET designers understood their daemons, and it gave rise to a number of different initiatives to study internet performance. Many of the first measurement tools studied the internet through the protocological responses of daemons. Mike Muuss developed the ping tool to measure the time taken to communicate between two nodes of the network (or “round-trip” time). Muuss’s tool repurposed the echo request feature of the Internet Control Message Protocol (ICMP). Daemons had to reply to these requests, so ping worked by sending a request and then measuring the time taken to receive a response.[4] Ping in turn inspired the modern successor to the trace bit. Developed in 1987, traceroute repurposed the same ICMP machinery, sending probes with steadily increasing time-to-live values so that each router along a packet’s path reveals itself in an error message, hop by hop.
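To make ping’s mechanism concrete, the sketch below reimplements its basic logic in Python: build an ICMP echo request, send it, and time the daemon’s obligatory reply. This is a minimal illustration, not Muuss’s original C implementation, and sending raw ICMP packets typically requires administrator privileges.

```python
# A minimal ping: one ICMP echo request, timed until the echo reply.
# Assumes a Unix-like host and root privileges (raw sockets need them).
import os
import socket
import struct
import time

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum over the ICMP header and payload."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def ping(host: str, timeout: float = 2.0) -> float:
    """Send one ICMP echo request and return the round-trip time in seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.getprotobyname("icmp"))
    sock.settimeout(timeout)
    ident = os.getpid() & 0xFFFF
    payload = b"internet-daemons"
    # ICMP echo request: type 8, code 0, checksum (zero at first), id, sequence.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, 1)
    checksum = icmp_checksum(header + payload)
    packet = struct.pack("!BBHHH", 8, 0, checksum, ident, 1) + payload

    start = time.monotonic()
    sock.sendto(packet, (host, 0))
    sock.recvfrom(1024)  # wait for a reply (a full ping would check id/sequence)
    return time.monotonic() - start

if __name__ == "__main__":
    print(f"round-trip time: {ping('example.com') * 1000:.1f} ms")
```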

While pings and traceroutes were simple, freely available tools, the growing size of the early internet required more sophisticated methods. Vinton Cerf provided a major catalyst for research in internet measurement. His RFC (Request for Comments) 1262 from October 1991 encouraged the development of tools for measuring the internet and provided guidelines for doing so. The bulk of the short RFC stressed the need to ensure measurement did not interfere with network performance or violate privacy, but underlying these concerns was a belief that “data is vital to research and engineering planning activities, as well as to ensure the continued development of the operational infrastructure.”[5] Cerf acknowledged that measuring the internet was now a vital task, but no longer a small one.

Internet measurement only gradually emerged as a field of research. As Robert E. Molyneux and Robert V. Williams wrote eight years after Cerf’s RFC, internet measurement was “dispersed, fragmentary, fugitive, and rarely scholarly.”[6] The Cooperative Association for Internet Data Analysis (CAIDA) was a key center of early research and remains one of the biggest initiatives dedicated to internet measurement. CAIDA began in 1997 as a project of the University of California, San Diego, and the San Diego Supercomputer Center, where it still runs today. This research in computer science made few inroads into media studies, with the exception of Martin Dodge and Rob Kitchin, whose geographies of cyberspace included a discussion of the National Science Foundation Network (NSFNET) mapping efforts.[7]

Internet measurements developed into two varieties. Passive measurements “capture data in the normal working of the Internet,” whereas active measurements “introduce traffic in order to test network behavior.”[8] Tools measure passively when they adopt the standpoint of an observer of the network, monitoring and recording behavior. A test becomes active when it generates its own traffic to measure network activity. Both methods have their supporters and critics. Passive monitoring observes actual network performance, but it raises privacy concerns, since it monitors real usage, and the monitoring itself may interfere with the performance of the machine being observed, thereby skewing results. Many middlebox manufacturers, like Sandvine, have looked to sell their measurements as a new source of audience insights. Active measurement, by contrast, does not require direct access to logs: a third party can generate its own data. This independence benefits testing initiatives that seek to provide an outside perspective on performance. Many of the major internet measurement initiatives are run by independent third parties and, as a result, use active measurement.
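The distinction can be sketched in a few lines of Python. The passive half below only reads a byte counter the kernel already keeps, injecting nothing into the network; the active half generates its own probe traffic by timing a web fetch. (The Linux /proc/net/dev counter file and the interface name eth0 are assumptions made for the sake of illustration.)

```python
# Passive versus active measurement, in miniature. Assumes a Linux host.
import time
import urllib.request

def passive_byte_count(interface: str = "eth0") -> int:
    """Passive: read the kernel's received-byte counter for an interface.
    We only observe traffic that is already flowing."""
    with open("/proc/net/dev") as counters:
        for line in counters:
            if line.strip().startswith(interface + ":"):
                return int(line.split()[1])  # second column = bytes received
    raise ValueError(f"no such interface: {interface}")

def active_fetch_time(url: str = "https://example.com") -> float:
    """Active: generate our own test traffic and time the transfer."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.monotonic() - start

if __name__ == "__main__":
    before = passive_byte_count()
    elapsed = active_fetch_time()
    after = passive_byte_count()
    print(f"active probe took {elapsed:.3f}s; "
          f"the passive counter grew by {after - before} bytes")
```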

The field of active internet measurement changed significantly with the introduction of Ookla’s Speedtest in 2006. For the first time, a tool crowdsourced internet measurement at scale by asking the public to test their connections and then pooling this data to make claims about the nature of connectivity in general. Speedtest started as a side project of the popular internet service provider Speakeasy and offered users a simple and interactive tool to test the upload and download speeds of their home internet connections.[9] On May 25, 2010, Ookla launched NetIndex, a website that aggregated the 1.5 billion tests conducted to that point into an interactive global map of internet speeds.

Crowdsourcing, at its best, offers a novel solution to the study of distributed systems like the internet. No one test, or even tests from one location, accurately describes the internet. Given the inability of any one vantage point to objectively assess the system’s performance, crowdsourcing observes systems at scale, turning to the public to study a public thing. To scale, these tools must run easily on home computers while adhering to standards that ensure tests scattered across the globe measure roughly the same part of the infrastructure. Popularity has been their strength, and many crowdsourced internet measurements have proven useful in understanding this changing infrastructure.

The Measurement Lab (M-Lab) is perhaps the best example of crowdsourced internet measurement. M-Lab is an international infrastructure of testing servers in standardized locations worldwide. The project has deployed over 130 servers at core internet exchange points in major cities. Every server is located off-net, meaning it is run independently of any ISP. More than servers, M-Lab is an open platform: anyone can deploy a measurement tool so long as they agree to release the code and the data to the public. The platform enables many kinds of crowdsourced tests, from speed tests to censorship detection. Its data is open to the public, making the project one of the few sources of public domain measurement data. In this way, M-Lab exemplifies best practices: it develops testing standards for infrastructure, maintains an open platform to encourage new tests, and makes its data open to independent analysis.[10]
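M-Lab’s openness means its archive can be queried directly. The sketch below uses Google’s BigQuery client, where M-Lab publishes its data; the dataset and column names follow M-Lab’s published NDT schema as I recall it and should be verified against current documentation, and running the query requires a Google Cloud project.

```python
# A hypothetical query against M-Lab's open data in BigQuery.
# Table and column names are assumptions based on M-Lab's NDT schema;
# check M-Lab's documentation before relying on them.
from google.cloud import bigquery

client = bigquery.Client()  # needs Google Cloud credentials and a project
query = """
    SELECT client.Geo.CountryCode AS country,
           AVG(a.MeanThroughputMbps) AS avg_download_mbps
    FROM `measurement-lab.ndt.unified_downloads`  -- assumed table name
    WHERE date = '2018-01-01'
    GROUP BY country
    ORDER BY avg_download_mbps DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.country, round(row.avg_download_mbps, 1), "Mbps")
```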

Around 2003, another important measurement tool, SamKnows, was developed in the United Kingdom by Sam Crawford. SamKnows has built a testing infrastructure similar to M-Lab’s (it often uses M-Lab testing servers), but it has also standardized the locations of the home testing points. SamKnows has developed its own whitebox, a small computer that runs at a customer’s premises and automatically tests the connection. Whiteboxes provide a more stable testbed, though they are more costly to deploy. Today, SamKnows is used by regulators in Canada, Europe, the United States, and the United Kingdom. Its success is an example of how regulators can deploy their own mediators for the internet.

These are just a few approaches to internet measurement. The field is too large to list every approach or tool here, but a few examples illustrate what kinds of tools could be used to study daemons’ packet inspection, queuing, and routing.

  1. Packet inspection: A “frankenflow” is a technique to analyze packet inspection, specifically how daemons classify packets. Frankenflows resemble a regular flow of packets and are constructed by copying an application’s packet flow and then changing data in the application layer of the copied packets. By changing specific parts of the packets, frankenflows reveal which bits a daemon reads to classify the packet. Studies using this technique have found that the mobile carrier T-Mobile uses the host header in HyperText Transfer Protocol (HTTP) traffic to zero-rate video traffic for certain American customers. Understanding these detection techniques allows for a better understanding of the types of daemons at work in a commercial infrastructure and whether Internet Service Providers (ISPs) are being forthright when describing their practices to the public (a minimal sketch of the technique follows this list).[11]
  2. Queuing: Glasnost, mentioned in chapter 7, uses techniques similar to frankenflows to detect how ISPs might vary transmission conditions for different networks. Glasnost sends samples of packets associated with different networks—such as Peer-to-Peer (P2P) networks like BitTorrent and eMule, Flash video, and more traditional networks like email and HTTP—and then compares the results. By comparing performance, Glasnost reveals whether an ISP gives preferential treatment to one network over another.[12] This detection tends to compare performance per protocol, whereas newer measurement tools can compare performance between apps. WeHe, developed by researchers at Northeastern University, the University of Massachusetts, and Stony Brook University, builds on the prior work on frankenflows. It detects violations of network neutrality by checking whether one app’s traffic performs better than another’s. Available for both Android and Apple phones, the app allows any user to detect whether their mobile service provider might be throttling Amazon, Netflix, Skype, Spotify, or YouTube.[13]
  3. Routing: The traceroute remains an important technique for testing local routing conditions, but it has also been used productively in crowdsourcing projects to understand larger patterns of internet routing. The IXmaps project asks the public to run traceroutes and upload them to its common database (a second sketch after this list shows this pattern). Researchers then aggregate and map these traceroutes to track how packets move across international borders and whether they pass through known surveillance points of the U.S. National Security Agency (NSA). IXmaps uses popular and government websites as the destinations for its traceroutes, so the results also reveal which services are likely to be caught up in bulk surveillance.[14]
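As a minimal sketch of the frankenflow idea (and of the comparison logic behind Glasnost and WeHe), the Python toy below sends two HTTP flows that are byte-identical except for the Host header and times them against the same server. The server address and host names are placeholders, and a real study replays full recorded application traces and applies statistical tests rather than single timings.

```python
# A toy frankenflow: vary only the Host header and compare timings.
# If a middlebox classifies traffic on that field, the two otherwise
# identical flows may receive different treatment.
import socket
import time

def timed_http_flow(server: str, host_header: str, port: int = 80) -> float:
    """Send a minimal HTTP request whose only varying bits are in the
    Host header, and time the full response."""
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host_header}\r\n"
        f"Connection: close\r\n\r\n"
    ).encode()
    start = time.monotonic()
    with socket.create_connection((server, port), timeout=5) as sock:
        sock.sendall(request)
        while sock.recv(4096):  # drain the response until the server closes
            pass
    return time.monotonic() - start

if __name__ == "__main__":
    server = "192.0.2.10"  # placeholder measurement server (TEST-NET address)
    baseline = timed_http_flow(server, "measurement.example")
    variant = timed_http_flow(server, "video.example")  # mimics video traffic
    print(f"baseline {baseline:.3f}s vs. variant {variant:.3f}s")
```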
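And as a sketch of the crowdsourcing pattern behind IXmaps, the toy below runs the system traceroute command, extracts the responding hops, and shows what a contribution to a shared database could look like. The upload endpoint is hypothetical; IXmaps has its own submission client and data format, and the traceroute binary must be installed.

```python
# Crowdsourced traceroute collection, in miniature. The upload URL is
# a placeholder, not IXmaps's real API.
import json
import subprocess
import urllib.request

def collect_traceroute(destination: str) -> list[str]:
    """Run the system traceroute (-n keeps output numeric) and return
    the IP address of each responding hop."""
    result = subprocess.run(
        ["traceroute", "-n", destination],
        capture_output=True, text=True, timeout=120,
    )
    hops = []
    for line in result.stdout.splitlines():
        fields = line.split()
        # Hop lines start with the hop number; "*" marks a silent router.
        if len(fields) > 1 and fields[0].isdigit() and fields[1] != "*":
            hops.append(fields[1])
    return hops

def upload(destination: str, hops: list[str]) -> None:
    """Post one traceroute to a shared database (placeholder endpoint)."""
    record = json.dumps({"destination": destination, "hops": hops}).encode()
    request = urllib.request.Request(
        "https://measurement.example/api/traceroutes",  # hypothetical URL
        data=record, headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    print("\n".join(collect_traceroute("example.com")))
```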

These different internet-measurement techniques represent a few of the tools that exist to reveal the hidden work of daemons. As of this writing, I have found no clear example of how to study daemons’ policy management or the distribution of rules between them.

These tools have yet to be widely adopted in regulatory contexts. Instead, most national broadband-measurement programs focus on measuring broadband speed (download and upload capacity, as well as latency). These studies provide important details about the digital divide, offering insights into the differences between peak and off-peak performance, as well as between ISPs and between regions. However, these programs tend to use HTTP performance as a proxy for overall web performance. Assuming that HTTP exemplifies average use might become untenable with the turn toward greater traffic differentiation. In response, reporting might have to provide per-protocol or per-application breakdowns. Further, the influx of new daemons in post-network neutrality regulatory contexts will require newer tools like WeHe, capable of understanding the changing modulations of flow control.

Beyond the telecommunications context, these tools may guide the study of other daemonic systems. Internet measurement could be a source of inspiration for other kinds of algorithmic audits and studies of black-box systems. In every case, mediators have to be developed that negotiate the problems of studying distributed and dynamic systems. The crowdsourcing approach, particularly M-Lab, offers a good course of action when done in the public interest.
