RETURNING to John’s (2013, 176) claim that “sharing is the fundamental and constitutive activity of Web 2.0,” it is important to note that later, he pushes this further. “It could even be argued that . . . the entire internet is fundamentally a sharing technology” (179), he writes, citing the importance of open source software and programming languages, and sharing economies of production, in the development of websites based on user-generated content. Likewise, Engin Isin and Evelyn Ruppert (2015, 89) claim that “the ubiquity of various uses of digital traces has made data sharing the norm.” While I’m also interested in sharing configured in this way, I want to slightly rephrase and shift the emphasis of these assertions to suggest that sharing can be conceived as the constitutive logic of the Internet. Rather than thinking about sharing primarily as something that users do on the Internet, then, I want to focus more on the idea that sharing operates at a protocological level. My use of this term here draws on Alexander Galloway’s (2004, 7) exposition of computer protocols as standards that “govern how specific technologies are agreed to, adopted, implemented, and ultimately used by people around the world.”
In arguing this point, I want to be clear that I am not supporting a utopian celebration of the Internet’s open, or free, origins. Galloway, among others, makes the error of such an assumption clear, as he characterizes the Internet as a technology marked by control and hierarchies of enclosure. Rather, in positing sharing as protocological, I want to imply simply that the Internet’s grain is, first and foremost, “stateless” in the sense that programmers intend: as a lack of stored inputs. In other words, the basic architecture of the Internet does not automatically keep a record of previous interactions, and so each interaction request is handled based only on the information that accompanies it. For example, the Internet’s fundamental method for sending data between computers, the Internet Protocol (IP), works by sending small chunks of data, “packets,” that travel independently of each other. These discrete packets are reassembled at an upper layer, by TCP, yet IP itself operates without state. We can also look to how the Web’s HTTP serves up requested pages but does not “remember” those requests. Such discrete communications mean that no continuity is recorded.
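The layering just described can be sketched in a few lines of Python. This is a toy illustration, not an implementation of IP or TCP: the lower layer emits self-contained “packets” that carry everything needed to handle them, while ordering, that is, state, is imposed only at an upper, TCP-like layer.

```python
import random


def packetize(message, size=4):
    """Lower, IP-like layer: split a message into independent,
    self-describing packets (sequence number plus payload).
    Each packet can be handled with no memory of the others."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]


def reassemble(packets):
    """Upper, TCP-like layer: ordering and continuity are
    imposed here, not in the layer below."""
    return "".join(payload for _, payload in sorted(packets))


packets = packetize("sharing is protocological")
random.shuffle(packets)        # packets may arrive in any order
print(reassemble(packets))     # prints "sharing is protocological"
```

The point of the sketch is simply where the statefulness lives: nothing in `packetize` remembers anything, and it is only `reassemble`, a separate and secondary layer, that restores continuity.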
As Tom Armitage points out, because the Internet’s default architecture is open or stateless, it is very good at sharing but not so good at privacy and ownership. By this he means, quite simply, that “implementing state, or privacy, or ownership, or a paywall, is effort” (T. Armitage, pers. comm., February 9, 2016). State is a secondary level, patched onto a stateless system. I have to stress that this is not to say that the development and design of the Internet were free from a proprietary impetus, nor that “default” architecture is anything but conscious and intentional. My point, rather, is that at a technical level, limiting connection and sharing on the Internet is something that has to be introduced in secondary layers and mechanisms. It also follows that tracking a user’s activity has to be imposed at a secondary level. Netscape, for example, introduced the cookie, a by-now ubiquitous text file that stores small amounts of data associated with a domain. For as long as the cookie has not expired, it will track the pages a user visits and help build a user profile (see Elmer 2003).  In its stateless formations, before the “effort” to impose statefulness, the Internet, then, can be conceptualized as a technology of stateless, borderless, always already sharing. I want to suggest that sharing (without tracking or remembering) in this instance is a rule conditioning the possibility of computers communicating with each other at all.
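The “effort” of patching state onto HTTP can be made concrete with a minimal sketch using Python’s standard `http.cookies` module. The cookie name and value below are illustrative, not drawn from any real service: the server attaches a `Set-Cookie` header, and the browser echoes it back with every subsequent request, letting an otherwise stateless exchange accumulate into a user profile.

```python
from http.cookies import SimpleCookie

# Server side: issue an identifier with an expiry.
# ("visitor_id" and "abc123" are hypothetical names for illustration.)
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 30   # expires in 30 days

# The Set-Cookie header sent alongside an ordinary HTTP response.
print(cookie.output())

# Browser side: until expiry, the identifier travels back with each
# request, so every "stateless" page view can be linked to one profile.
echoed = SimpleCookie("visitor_id=abc123")
print(echoed["visitor_id"].value)   # prints "abc123"
```

Note that nothing in HTTP itself changes: the cookie is carried as ordinary header text on top of the stateless request–response cycle, which is exactly the sense in which state is a secondary layer rather than a property of the protocol.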
This links protocol and profits. Illegal and legal entities want a share of our data. This would include hackers, should our data prove interesting or profitable enough and should they be able to overcome data loss prevention (DLP) software and systems, from firewalls to encryption. It would also include trackers utilized by Web publishers, such as DoubleClick, that log the data we create through our online activity to customize service and advertising and sell it to third parties. Such trackers rarely announce themselves to us unless we seek them out through antitracking browser extensions (such as Ghostery) or forensic examination of user agreements (which still do not list the specific trackers used). Many websites carry multiple trackers, both cookies and beacons. Ironically, even website publishers that employ trackers are themselves subject to “data leakage,” which “occurs when a brand, agency or ad tech company collects data about a website’s audience and subsequently uses that data without the initial publisher’s permission” (McDermott 2015). Such secretions, the unintentional “sharing” of already “shared” data, also highlight the difficulties of not sharing from a different perspective.
The idea of sharing as protocological is posited here to emphasize the fact that specific modes of sharing and not sharing, as well as the particular distribution of the (data) sensible, are determined by ideologically charged dispositifs. As Galloway (2004, 8) puts it, “protocol is how technological control exists after decentralization.” Crucially, the conditions of sharing/not sharing today inflect a subjectivity that makes a particular call on, and imposes a limitation to, the veillant and agential capacities of citizens.