Labs produce and assign value to people, both internally and for the outside world. That is, labs are about social production and the production of social relations inside and outside the lab, as idealized humanism in some cases and as critical reflective practice in others. This chapter pays particular attention to the management techniques of people in labs, beginning with Edison’s Menlo Park laboratory and continuing with the evolution of these techniques in the MIT Media Lab in the 1980s, which paves the way for the mainstream North American corporate innovation lab model that continues today. In combination with contemporary lab discourse, such techniques create the conditions for the high visibility and empowerment of some while simultaneously rendering others invisible and disempowered. Our aim with these case studies and examples is to consider how people are constantly produced in—as well as effaced or removed from—lab projects and discourses. Although this chapter outlines how the focus on directors emerges from existing lab cultures over the twentieth century and especially during the 1970s’ turn to managerialism, we are also interested in the impact of this tendency toward singular, “great man” narratives on the wider social context of the knowledge worker in the lab, with repercussions for questions of gender, sexuality, and ethnicity, including practices of racialization.
Our analysis of hybrid labs is very much informed by our current political moment, including our awareness of the intensive forms of direct and indirect violence toward particularly vulnerable groups of people of color, immigrants, and other targeted minorities, not least in the United States—all of which has been radically accelerated and exacerbated by the COVID-19 pandemic (a topic we return to in the book’s Conclusion). Another feature of the current political moment is a strategic focus on manipulating, even falsifying, scientific statements. We do not write the foregoing sentences lightly, but scientific discourse itself, not just lab discourse, is under direct attack from the highest, most powerful offices in the land. Even the conservative RAND Corporation argues that we are suffering from “truth decay”—“the diminishing role of facts and analysis” in public life.1 Under such conditions, it is difficult to be frank about the shortcomings of laboratories themselves when we so heavily rely on them to be major sources of facts and analyses.
This is not to say, though, that lab discourse itself has never employed hyperbole as a way to produce value in the public imagination—far from it. The long line of swindlers, hucksters, and snake oil salespeople extending back to ancient times includes plenty of lab denizens and lab directors. However, an odd thing happened over the course of the twentieth century in North America. The mutual desire of advertising for the legitimacy of the sciences and of the sciences for the persuasive power of advertising has resulted in our current situation, where it can be difficult to distinguish one from the other. In Fables of Abundance, Jackson Lears documents the history of the advertising industry in the twentieth century, locating the carnivalesque huckster at one end of its spectrum of possibilities, with P. T. Barnum as its epitome, and empirical market researchers like N. W. Ayer and the J. Walter Thompson company at the other. But in Lears’s account at least, advertising never quite escapes the snake oil because it is fundamentally engaged in the opposite of the production of facts:
Despite their drive toward professionalism, advertising executives could never cast aside their Barnumesque inheritance, could never make common cause with the clinicians of society whose ideology they emulated. Part of the problem was the limited nature of their authority: unlike doctors and lawyers, they claimed professional expertise but always bowed to the opinions of the client, however inexpert he might be. Yet a deeper difficulty was embedded in the very nature of the advertising business: it had always involved the clever orchestration of surface effects, in a fashion that undermined all pretensions to sincerity and claims to objective truth. Straining to stabilize meanings with resort to a managerial idiom of expertise, advertisers remained surrounded by the ambiguities of their trade.2
What Lears didn’t imagine was that parts of laboratory research—especially those parts that had to do with the development of modern media technologies—would be quite eager to meet advertising in the murky middle ground.
One of the places where early-twentieth-century industrial manufacturers, R&D labs, and media labs found common ground with advertising was in the notion of planned obsolescence. Many accounts attribute the relative beginnings of planned obsolescence to the bicycle industry in the last years of the nineteenth century, which began producing annual models that were aesthetically (if not mechanically) different from their predecessors and then depreciated those models at year-end sales.3 In 1924, the chairman of General Motors, Alfred Sloan, and his styling chief, Harley Earl, formulated the technique as “dynamic obsolescence” by pairing the annual model change with aggressive advertising that emphasized novelty and the GMAC Bank, which offered loans that allowed potential customers to buy cars on credit.4 However, on December 23, 1924, in a different industry altogether, representatives from the world’s major lightbulb manufacturers (including General Electric, Philips, Osram, and Compagnie des Lampes), representing hundreds of factories, met in Switzerland to form the Phoebus cartel, “a supervisory body that would carve up the worldwide incandescent lightbulb market, with each national and regional zone assigned its own manufacturers and production quotas.” More importantly, the cartel specified a shorter life span for the incandescent bulb, mandating that its members scale back from the fifteen hundred to two thousand hours common at the time to one thousand hours by 1925.5 This was a carefully produced engineering difference rather than a stylistic one. In his archival research on the subject, Markus Krajewski “found meticulous correspondence between the cartel’s factories and laboratories, which were researching how to modify the filament and other measures to shorten the life span of their bulbs.”6 By the 1930s, U.S. advertising and industry alike had embraced the notion that innovation was wedded to disposability.
As the example of the Phoebus cartel demonstrates, part of the job of R&D laboratories became not only developing new products but also ensuring their disposability. Both parties had become so eloquent and vocal about the idea that high consumption of novelty was the engine of U.S. prosperity that it became something like national ideology.7 This method of producing value by tying persistent, overblown discourse about the lab’s wonders to actual production has much to do with why labs such as the MIT Media Lab are synonymous, in the popular imagination, with innovation, entrepreneurialism, and profitability, regardless of whether or not their products are successfully monetized.
Regrettably, one of the outputs of modern and contemporary labs is hyperbolic discourse. When paired with particular management techniques and policy decisions, hyperbole has real effects and can make a lab powerful and seemingly successful. It is also part of the assemblage that drives the creation, construction, and manipulation of facts, which are, in turn, part of the structure upon which “history” and “the future” are built. This line of thinking comes close to the work of Bruno Latour, whose influence weaves its way through the entirety of this book. As he put it in “Why Has Critique Run Out of Steam?,” his earlier work—including his sociological account with Steve Woolgar in Laboratory Life of how science labs produce facts—was dedicated not to undermining the existence of facts altogether but to renewing empiricism. Latour hoped to spare “the public from prematurely naturalized objectified facts” by revealing the inner workings of labs, including how they produce facts for public consumption.8 Instead, he writes, “a certain form of critical spirit has sent us down the wrong path, encouraging us to fight the wrong enemies and, worst of all, to be considered as friends by the wrong sort of allies because of a little mistake in the definition of its main target. The question was never to get away from facts but closer to them.”9 “Why Has Critique Run Out of Steam?” is important because it conveys Latour’s horror at the recognition that arguing about the social construction of facts was co-opted as a technique by the political right and that it was time to correct course. It remains as important as ever to scrutinize the role that many kinds of labs continue to play in the creation of facts and, therefore, in the battle over who owns the past, present, and future of media technologies.
Because of its prominence in the hybrid lab landscape and its apparent dominance in shaping ownership of the past, present, and future, we are particularly interested in the MIT Media Lab.10 Major accounts of the MIT Media Lab’s Nicholas Negroponte tie him firmly not only to innovative engineering and technical knowledge but also to the carnivalesque end of the advertising and sales spectrum. In Stewart Brand’s reverential The Media Lab: Inventing the Future at MIT (1987), IBM senior scientist Nat Rochester describes Negroponte as one who “combines very great technical knowledge and creativity with . . . really world-class salesmanship.”11 Thomas A. Bass elaborated on this description of Negroponte in his 1995 Wired magazine profile: “His MIT colleagues sometimes dismiss him as the P. T. Barnum of science, someone who puts on a flashy show without much substance. ‘This is the red-light district of academia,’ jokes a young scientist at the lab. But 10 years after Negroponte began selling multimedia as rich terrain for scientific prospecting, he has been proved right. The human-computer interface that began as Negroponte’s promotional pitch is now the linchpin of an industry whose sales are pushing a trillion dollars a year.”12
The projects Negroponte developed while he was director of the Architecture Machine Group (AMG; founded in 1967) and the Media Lab (founded in 1985) demonstrate that he could be the very embodiment of the Barnumesque promotional wizard, playing to popular fantasies about the emancipatory powers of technology. In Negroponte’s strain of lab discourse, this fantasy is particularly effective when deployed in conjunction with the figure of the child in the developing world who lacks access to digital technologies. But of course, the resounding failure of Negroponte’s early 2000s One Laptop per Child (OLPC) project demonstrates the particular bankruptcy of the hyperbolic promotional culture of certain strands of innovation lab discourses. In this case, as we describe below, overblown, inaccurate statements about “reality” help to widen the gap between the powerful and the powerless, as well as the gap between the material, embodied, lived realities of particular technologies and people, on the one hand, and popular and academic discourse about those realities, on the other.
After we discuss the production of value by the MIT Media Lab, we delve into the particularities of management techniques at work in Edison’s Menlo Park laboratory, and then proceed with a discussion of their use at several of MIT’s best-known and most influential labs, particularly the Radiation Lab, the AMG, and the Media Lab. We then look at the history of the OLPC project as an example of liberal humanism in the corporate lab. In contrast, and in light of the recent intensive discussion about corporate sponsorship models and the MIT Media Lab’s affiliation with Jeffrey Epstein, we end this chapter by discussing a key example from the other legacy of critical media labs from the 1990s onward: the ACTLab. Founded in 1993 by media theorist and performance artist Allucquère Rosanne (Sandy) Stone, the ACTLab established a sense of people and a discursive space of projects very different from liberal humanist ones. It was also one of the first hybrid labs specifically dedicated to questions of radical politics of identity in contexts of technology, and to this day it offers a compelling way to describe radical lab work taking place inside a more conservative institution as “codeswitching.”
Producing Value in a Lab: Blurred Boundaries between Higher Education and Industry
Most American universities founded in the mid- to late nineteenth century were created in the spirit of entrepreneurialism. In Innovation and Entrepreneurship: Practice and Principles, Peter Drucker (who enshrined in mainstream consciousness ideas such as the centrality of marketing, the knowledge worker, and the emergence of the information society) points out that “no better text for a History of Entrepreneurship could be found than the creation and development of the modern university, especially the modern American university.”13 And in fact, MIT’s 1861 founding charter states that the fledgling polytechnic institute is being created “for the purpose of instituting and maintaining a society of arts, a museum of arts, and a school of industrial science, and aiding generally in . . . the advancement, development and practical application of science in connection with arts, agriculture, manufactures and commerce.”14 In a related document from the same year, the founders elaborate on how the need to keep pace with (if not outdo) the European economy hinges on a tight connection between “intelligent culture” and “industrial pursuits.”15 In today’s parlance, they were aiming for a more even balance between theory and practice, embodied by what would become the school’s official motto: Mens et Manus (mind and hand).16 Three years later, a committee provided a more detailed account of the “Scope and Plan” of one MIT school in particular: the School of Industrial Science. Amid the forty mentions of “practice” and “practical” peppered throughout this twenty-eight-page document are detailed instructions for the creation of four laboratories to bring together theory and practice via student training and “the prosecution of experiments and investigations . . . including the examination and testing of new machines and processes, and the conducting of original researches in the different departments of applied science.”17
The way early MIT labs emphasized experimental, mechanical research and strove to avoid the appearance of pure intellectuality anticipated the ethos of Edison’s Menlo Park laboratory, which opened in 1876. As we describe in chapter 1, Menlo Park was based on the importance of research, but only to the extent that it was “practical.” In contrast to “theoretical” science, which Edison believed was “pointless and slow,” he was adamant about always producing working prototypes.18 Because the lab was funded almost entirely by entrepreneurs and venture capitalists eager to see a return on their dollar, one of Edison’s most famous aphorisms was “You got to make the damn thing work.”19
Drucker argues that Edison was the first to understand “knowledge-based innovation” and the way it leads to control of the future. Crucially, innovation has little to do with novelty; instead, it capitalizes on iterating and improving ideas that are already in circulation: “Every other electrical inventor of the time began to work around 1860 or 1865 on what eventually became the light bulb. Edison waited ten years until the knowledge becomes available.”20 We know from numerous scholarly accounts that at Menlo Park Edison set the stage for the knowledge-based innovation that became a hallmark of the modern American university, with a style of management that was, at the time, cutting-edge in its strategic blending of craft-based and industrial-based management. According to Michael J. Gall:
Edison created a pre-industrial atmosphere at the lab facility, such as apprenticeships and education, irregular work hours and wages, breaks, bonds of mutuality, and limited autonomy, which was instrumental to an efficient process of invention. Invention entailed conception creation, planning and drafting, parts and tools fabrication, experimentation, development, testing, refinement, patent model manufacture, and patent application. Such a process required a malleable, skilled, and competent workforce capable of conducting a variety of tasks under Edison’s leadership and guidance. . . . This . . . culture was an essential part of the fabric of Edison’s management strategy and the success of his invention operation.21
However, between 1878 and 1880, as the lab expanded its workforce and the number of ongoing projects, Edison began to rely more heavily on conventional industrial management tactics, including “the creation of a worker hierarchy as well as moderate task subdivision and specialization.”22
Menlo Park is the most influential early instance of modern lab management practice. As the twentieth century progressed, its pairing of seemingly contradictory techniques became enshrined as common practice for media labs. At the same time as they were part of a more horizontal structure consisting of semi-autonomous clusters (e.g., experimental assistants; machine workers, mechanics, and apprentices; office workers, accountants, and bookkeepers), workers were also part of a hierarchical reporting structure that led directly to Edison himself. Further, workers had flexible and fluctuating work hours while still being accountable to Edison’s production schedule. Collaboration and camaraderie were encouraged, but the discourse of singular, individual achievement—usually, Edison’s achievement—dominated.
As David Noble notes in America by Design, once the industrial research laboratories based on Edison’s original design began to multiply and expand, “the role of the scientists within them came more and more to resemble that of the workmen on the production line and science became essentially a management problem.”23 Noble goes on to distinguish broadly between scientists in universities, who were “relatively free to chart [their] own paths and define [their] own problems,” and industrial researchers, who were “more commonly [soldiers] under management command, participating with others in a collective attack on scientific truth.”24 However, labs at MIT like the Research Laboratory of Applied Chemistry (founded in 1908), the Servomechanisms Laboratory (founded in 1940), and the Radiation Lab (also founded in 1940) are either a mixture of both or are aligned more with the latter than the former.
The Radiation Lab (Rad Lab), created during the late years of World War II to improve radar technology, was the first “largescale interdisciplinary and multifunction R&D organization set up at a university.”25 Given the way Edison managed Menlo Park, the Rad Lab was unusual but not unprecedented in the way it pulled together the research, development, and production of radar technologies into a single organization. Quoting an unpublished manuscript by Leroy Foster on “Sponsored Research at MIT,” Henry Etzkowitz paints a picture of a culture of rapid prototyping and production at the Rad Lab that later became a key hybrid lab technique: “‘Most of the knowledge was gained by building something as quickly as possible and trying it out. Theoretical knowledge generated pari passu to be plowed back into the work at a later date . . . Improvements were made, and the apparatus was tested again.’”26 While Etzkowitz characterizes the organization of the Rad Lab as “highly decentralized and flexible,” Henry Guerlac points out that although the lab started out as loose and informal, the Pearl Harbor attack instigated a shift to a management structure that was both horizontal and vertical to ensure it would be suitable for many kinds of projects:27
The combination of both vertical and horizontal organization brought together the groups working on related components in basic research into larger units called divisions and brought the related systems groups under a single divisional head. Above the divisions stood the Director’s Office and Steering Committee, comprised of the Director and the Associate Director and the heads of the technical divisions.28
Departing from Menlo Park’s single-minded focus on profit, the Rad Lab may have appeared as a business-like entity, but it operated without a budget, as most of its expenditures simply needed to be approved by the federal government based on their perceived contribution to the war effort.29
The combination of flexible management structures and rapid prototyping and production techniques laid some of the groundwork for the MIT Media Lab, which opened in 1985. Although the lab never directly declared itself the first of its kind, tech journalism quickly positioned it as occupying a “new niche in technical research, somewhere between industrial R&D . . . and the academic engineering sciences.”30 Predictably, Wired’s hyperbole is thick here: the history of higher education laboratories in the United States is defined by precisely this mix of the private and public sectors. Still, it is true that the MIT Media Lab was and continues to be successful in terms of its astonishingly large $75 million annual operating budget.31
Nearly all of the MIT Media Lab’s annual budget comes from corporate sponsorship. More than eighty companies are either “consortium lab members” or “consortium research lab members”; the former provides “access to all of the research conducted at the Lab, Lab-wide visiting privileges, invitations to semiannual member-only events, and full intellectual property rights,” while the latter provides “the added benefit of an employee-in-residence at the Lab.”32 The incentive the MIT Media Lab offers to these member companies is that students and faculty in the lab can conduct research that is “too costly or too ‘far out’ to be accommodated within a corporate environment. It is also an opportunity for corporations to bring their business challenges and concerns to the Lab to see the solutions our researchers present.”33 In return, the MIT Media Lab receives funding to pay for an astonishingly wide array of research projects, the ability to promote itself aggressively, and a clear pipeline for students that leads from the lab to industry.
The issue of financing is also connected to how other power structures are reproduced in terms of work, diversity, and, indeed, people. The MIT Media Lab will never be able to completely depart from its long history of being directed, populated, and dominated by certain racial and gender demographics; nor can it remain unaffected by funding that comes from U.S. tech companies with the same deeply ingrained racial and gender demographics, which in turn affects which projects are supported and implicitly promoted. As former MIT Media Lab faculty member and director of the Center for Civic Media Ethan Zuckerman stated to us in an interview:
I think [the MIT Media Lab] can be alarmingly insular. I think we tend to feel like we look at the world in a way that’s unique, and I’m not sure that it’s as unique as we think. . . . We have great international diversity . . . [but] when you look at underrepresented minorities, which is how MIT measures diversity, which is basically American populations that are usually underrepresented within universities—Native Americans, African Americans, Latino/Latinas, not Asian Americans—how are we doing? The answer is we’re doing dismally. . . . [However,] we have made a lot of progress on gender. I think when I was here, we were at about 2:1 male to female, and I think we’re closer to 60–40.34
We conducted this interview in early spring of 2017. In August 2019, Zuckerman declared his intention to move his work out of the MIT Media Lab by spring 2020 in light of revelations that in 2014, former director Joi Ito had accepted over $250,000 in funding for lab projects from known sex trafficker and pedophile Jeffrey Epstein. These revelations were followed by a press release from MIT later in August indicating that the university as a whole had accepted $800,000 in total from Epstein over a period of twenty years, all of which went either to the MIT Media Lab or to professor of mechanical engineering and physics Seth Lloyd.35 By early September, journalist Ronan Farrow revealed in the New Yorker that the lab had in fact knowingly accepted as much as $8 million in funding from Epstein over the years, marking his donations as if they were given anonymously and going against MIT’s decision to disqualify Epstein as a donor after he pleaded guilty in 2008 to charges of solicitation of minors for prostitution.36
What makes these revelations about the lab’s indiscriminate fund-raising practices more horrifying is that even in the face of MIT president L. Rafael Reif’s apology for the university’s lack of judgment about what funding to accept and what to refuse, and Ito’s apology and subsequent resignation, Negroponte continued to maintain his belief that taking funding from Epstein was justified and that he would still recommend to Ito (as he did in 2014) that the lab accept Epstein’s funding. “If you wind back the clock . . . I would still say, ‘Take it,’” he reportedly stated at an all-hands meeting at the Media Lab in September 2019.37 Angela Chen and Karen Hao of MIT Technology Review describe what unfolded toward the end of this same meeting:
Negroponte stood up, unprompted, and began to speak. He discussed . . . how he had used [his] privilege to break into the social circles of billionaires. It was these connections, he said, that had allowed the Media Lab to be the only place at MIT that could afford to charge no tuition, pay people full salaries, and allow researchers to keep their intellectual property. Negroponte said that he prided himself on knowing over 80% of the billionaires in the US on a first-name basis, and that through these circles he had come to spend time with Epstein.38
However, contrary to Negroponte’s justifications for accepting funding from the most abhorrent of sources, the lab’s funding does not support anything out of the ordinary for a university lab. First, many graduate programs in the United States, Canada, the UK, and Europe do not charge their students tuition in exchange for those students taking on research or teaching duties; second, as the lab’s own FAQs about sponsorship clearly outline, intellectual property at the MIT Media Lab is either owned wholly by sponsors or shared among students, faculty, and lab sponsors. Furthermore, even if we were to entertain the possibility that Negroponte’s justifications for taking funding from Epstein are not entirely rhetorical flourishes, money alone does not solve deeply entrenched problems around privilege and thereby diversity. The problem is more properly about the sources of and results from indiscriminate funding practices—who and what ends up being supported and made visible, and likewise who and what is made invisible and even dehumanized.
In the face of such stunningly amoral fund-raising practices and such an unabashed embrace of the wealth and power that come along with consorting with the world’s 1 percent, it makes sense to see the MIT Media Lab as an exceptionally egregious example of contemporary universities’ neoliberal strategies since the 1980s. Indeed, “demo or die” became one of the early mottos and techniques at the MIT Media Lab, embodying a spirit of progress and innovation, engineering and science, at the cost of “studies, surveys, or critiques.”39 However, we argue that even knowing what we know now about the lab’s affiliation with Epstein, coupled with Negroponte’s tone-deafness to the significance of the lab’s affiliation with a known sex trafficker and pedophile, the MIT Media Lab is just a more extreme version of a long lineage of entities dedicated to innovation and invention, driven by internal management techniques whose workings depend on collaboration between higher education and industry. But of course, we also now know the details about the girls and women whom Epstein preyed upon over many decades, most of whom were underage and economically disadvantaged. That the MIT Media Lab resembles other university-based entities whose fund-raising practices may not (yet) be implicated in anything of the size and scope of the Epstein scandal makes it no less implicated in perpetuating sexual violence and stark, generational economic, gendered, and racial inequities.
Lab Management Techniques and the Production of Value
The primacy of building markets and the discourse of innovation—both of which are driven by particular management techniques, and both of which also drive the Media Lab’s fund-raising practices—have persisted over many decades as core themes of the lab. Ito argues that in the MIT Media Lab “there is a ‘Lab Culture’ but each research group and each unit of staff has its own culture. Each group buys into some or all of the Lab Culture and interprets this in their own way. This creates a complex but very vibrant and, in the end, self-adapting system that allows the Lab to continue to evolve and move ‘forward’ without any one piece entirely understanding the whole of it or any one thing controlling all of it.”40 But despite this portrayal of the lab as nonhierarchical, flexible, and collaborative, in 2017 Ito responded in depth to a question about the lab’s organizational structure by outlining its inherent hierarchies, which culminate in Ito himself:
The Media Lab has a director, Joichi Ito, me. I am in charge of the operations which include the staff functions as well as the research which includes the Media Lab consortium that funds the majority of the work at the Media Lab. The Media Lab also has a number of initiatives and centers that also report to the director. The majority of the financial resources as well as the space allocation is managed by the director. The staff functions are roughly divided up by functional units that include network and IT systems (NecSys), human resources, academic administration, finance, facilities, communications and events. Most of these units have a director that reports to the director of the lab.
The lab has many research groups and each group is led by a professor or a principle [sic] research scientist. Each group admits students/RAs and also usually has staff members, post-docs and other researchers. Groups typically supported financially by funding from the consortium funding managed by the director as well as funds raised directly by the group.41
As we learned on our own research trip to the MIT Media Lab, despite its claims to transparency, supposedly embodied by the abundance of glass throughout the lab complex, it is difficult to figure out the precise contours of the everyday doings in the lab (exemplified by its strict rules forbidding photography of current projects). On the level of infrastructure, it is also difficult to discover the exact nature of the funding streams leading to the lab’s projects, prototypes, and patents from sponsors such as Google and Twitter (not to mention earlier sponsors such as the U.S. Army, the FBI, and individuals such as Epstein).
However, many such issues in and around the MIT Media Lab have existed for almost as long as media labs themselves have existed. Only a few decades after the heyday of Menlo Park, Thorstein Veblen became one of the first to offer what appears to be a critique of American universities’ appropriation of business management techniques to handle their administration but is more properly a critique of the way knowledge is produced by way of the “businesslike organization and control of the university.” As he put it in his 1918 book The Higher Learning in America: A Memorandum on the Conduct of Universities by Business Men, “In this view the university is conceived as a business house dealing in merchantable knowledge, placed under the governing hand of a captain of erudition, whose office it is to turn the means in hand to account in the largest feasible output.”42 While Veblen does not dwell on laboratories, he keenly understands that as long as universities seek to produce knowledge as efficiently and profitably as possible—as a factory churns out widgets as cheaply and rapidly as possible—their employees must also “be organized into a facile and orderly working force” and that “the faculty is conceived as a body of employees, hired to render certain services and turn out certain scheduled vendible results.”43 From Veblen’s account, it’s clear that the management of university workers had already become part and parcel of university life.
There have been abundant accounts of the history of neoliberalism and higher education.44 Suffice it to say that by the late 1960s and 1970s, as neoliberalism rose to its dominant position, the management of universities became a greater concern. Experts in business management increasingly saw universities and their role in producing education and technological innovation as one of the most important engines behind economic expansion. French journalist and center-right politician Jean-Jacques Servan-Schreiber declared in his massively best-selling 1968 book The American Challenge that “today the most important factors in economic expansion are education and technological innovation.”45 Writing during an extended stay in the United States and guided by the belief that Europe, especially France, was quickly losing an economic war to the United States, Servan-Schreiber continues by asserting that “the technological gap is misnamed. It is not so much a technological gap as it is a managerial gap. And the brain drain [from France to the United States] occurs not merely because we have more advanced technology here in the United States but rather because we have more modern and effective management” (emphasis added).46 By the time Servan-Schreiber was writing, “modern and effective management” was a key element of university discourse.
In his canonical 1973 work Management: Tasks, Responsibilities, Practices, Drucker collapses the distinction between the public and private realms to claim that “business enterprise is only one of the institutions of modern society, and business managers are by no means our only managers. Service institutions—government agencies; armed services; schools and universities; research laboratories . . . labor unions . . . are [all] equally institutions and, therefore, equally in need of management.”47 Having flatly stated that everything, including universities and labs, can and even ought to be under the purview of management, he goes on to say that “these public-service institutions . . . are the real growth sector of a modern society” because those employed by such institutions are a new breed of “knowledge worker” whose productivity and continual achievement is the basis upon which “every developed society” also becomes productive.48 Further, if everything is now under the purview of management, then managers are now responsible not only for creating conditions for productivity and profitability but also for protecting and even guiding the social good: “The fact remains that in modern society there is no other leadership group but managers. If the managers of our major institutions . . . do not take responsibility for the common good, no one else can or will.”49 By the 1970s, education and technological innovation are tightly paired as the real drivers of economic expansion; management should or does reign supreme over everything, including higher education and the common good; and, finally, management is also now responsible for anticipating and molding the future, through monitoring the social impact.50
Even MIT’s administration was forced to respond to the questions raised by social movements. In the wake of 1967’s student and faculty protests against military-related research, MIT president Howard W. Johnson delivered a commencement address in 1968 with the theme of “humane technology” in which he declared that “there is a disturbing gulf between our technological achievement and the quality of our living—our sense of community.”51 Focusing less on MIT than on the larger Boston community, Johnson asserted that “at first glance, the city personifies all of the problems faced by individuals and institutions with whom the professional must function.” In the face of “the magnitude of its ills,” where lies the solution other than in management? “[Boston] is amorphous and unmanageable and impersonal and cold. . . . It is a large system that does not work, but must.”52 It was in this context—of the coupling of management with “humane technology”—that Negroponte founded the AMG and began to transform Johnson’s avowed dedication to “humane technology” into “humanism” through technology. However, the particular use of “humanism” as a justifying trope for multiple AMG and MIT Media Lab projects was actually an early version of “solutionism”—of technological fixes for social issues—as well as a particularly troubling form of ignorance of the social complexities of race, gender, and other issues that many other labs raised in the wake of the MIT Media Lab scandals.53
This project-oriented technique of technological humanism is shot through with contradictions: “technology” is always a homogeneous, abstract, and neutral entity at the same time as it also (magically) shapes the present and the future. “Humans” are likewise homogeneous abstractions who are always inventive and in control of technology, and attentive to the present and the future—even though they are not in control of either. Once this particular brand of techno-humanism is aimed at those perceived as underprivileged, it morphs into a kind of digital colonialism.54 The next section focuses on one project started at the MIT Media Lab in the early 2000s as an example of the operations of technological humanism and its production of troubling assumptions about race and solutionism.
Production of Value outside the Lab: From the AMG to the OLPC
A lab organized to ensure profitability is also a lab that sells itself as dedicated to innovation. For decades, the popular and tech presses have positioned the MIT Media Lab as the first and the best known of its kind. Wired staff writer Fred Hapgood wrote in 1995 that “for a few years after it officially opened in 1985, the MIT Media Laboratory may have been the most celebrated research institute in the country, at least as measured by inches of newsprint or minutes of air time. Perhaps it still is.”55 Even in 2019, nearly twenty-five years later, CBS still ran stories about how the lab is a “future factory” where “tomorrow’s technology is born.”56 For a quarter of a century the lab has been synonymous with inventing the future, because of the central role it has historically played in the fields of wireless networks, field sensing, web browsers, and the web, as well as the role the lab is currently playing in neurobiology, biologically inspired fabrication, socially adaptive robots, emotive computing, and many other areas. Or at least this is a role the lab asserts it has played. The MIT Media Lab has become synonymous with the future largely because of a dogged thirty-year marketing campaign whose success we can measure by the fact that almost any discussion of the technology of tomorrow is a discussion about some project at the MIT Media Lab.57 But as Alison Hearn and Sarah Banet-Weiser remind us, writing at the intersection of the COVID-19 crisis and the slow collapse of institutions of higher education which are performing “an idea of the future that looks just like the past,” “access to the ‘future’ is a privilege—one that is differentially meted out to classes, races and genders of people around the globe.”58 And thus the question is, whose future is the MIT Media Lab writing, controlling, predetermining? In its future-oriented vision of “humanism through technology,” which humans are being raised up and which are being effaced?
The contradictions of a discourse that espouses humanism through technology as the primary mechanism for generating value outside a lab (and thus, again, controlling the future) were already evident in projects produced by AMG, founded by Negroponte in 1967. In a section of his 1970 book The Architecture Machine titled “Humanism through Intelligent Machines,” Negroponte lays out the need for “humanistic” machines that respond to users’ environments, analyze user behavior, and even anticipate possible future problems and solutions—a machine that does not so much “problem-solve” as it “problem-worries.”59 Negroponte seems to be thinking not only of the potential of such a machine in the form of a computer in the home but also of one specifically aimed at children, for “the computer utility will become a consumer item, and every child should have one”—precisely what the OLPC computer aspired to be thirty years later (6). Negroponte also seems to be responding to then–MIT president Johnson’s call for members of the university community to consider the social impact of their work to a greater degree. Negroponte asserts that the aim of such responsive architecture machines is “definitely humanization.” He continues: “It is simply untrue that ‘unpleasant as it may be to contemplate, what probably will come to be valued is that which the computer can cope with—that is, only certain kinds of solutions to social problems.’ We will attempt to disprove the pessimism of such comments” (7). In short, computers ought to make everyone more human, whatever that meant, not less (or other), and they ought to be able to anticipate and solve any social ill.
His example of what such an adaptive, responsive machine could look like as it actively attempts to “problem-worry” comes from an experiment that undergraduate Richard Hessdorfer undertook in the lab. Writes Negroponte: “Richard Hessdorfer is . . . constructing a machine conversationalist. . . . The machine tries to build a model of the user’s English and through this model build another model, one of his needs and desires. It is a consumer item . . . that might someday be able to talk to citizens via touch-tone picture phone, or interactive cable television.” To help build this machine conversationalist, Hessdorfer decided to bring teletype devices into a neighborhood in the south side of Boston that Negroponte calls “Boston’s ghetto area.” While the project description never mentions race, documentary photographs that appear in The Architecture Machine of residents using the machines feature a Black man (56). The following description implies other users might have been immigrants or simply non-English speakers:
Three inhabitants of the neighborhood were asked to converse with this machine about their local environment. Though the conversation was hampered by the necessity of typing English sentences, the chat was smooth enough to reveal two important results. First, the three residents had no qualms or suspicions about talking with a machine in English, about personal desires; they did not type uncalled-for remarks; instead, they immediately entered a discourse about slum landlords, highways, schools, and the like. Second, the three user-inhabitants said things to this machine they would probably not have said to another human, particularly a white planner or politician: to them the machine was not black, was not white, and surely had no prejudices. (The reader should know, as the three users did not, that this experiment was conducted over telephone lines with teletypes, with a human at the other end, not a machine.) (56–57)
While the project was supposed to be an example of what Negroponte calls an “adaptable machine” (and thus an architecture machine), it was an elaborate sleight of hand involving fairly standard telecommunications equipment for the time (7). As Orit Halpern notes, the experiment was at best performing “a future ideal” of interactive media communication.60 The passage also demonstrates the contradictions—and profound problems—of a humanism via technology that grants agency to (for Negroponte, “humanizes”) certain humans over others (presumably the white, remote observers over the racialized and/or immigrant humans who the narrator assumes will behave poorly and type “uncalled-for remarks”); and grants agency over technology by some rather than others. Moreover, under the guise of a laboratory experiment, it does so via a deception that presents the teletype machine to research subjects as somehow responsive to expressions of concern from racialized or immigrant people about living conditions.61 It also demonstrates what can happen when we believe so completely in the neutrality of the machine—or its assumed capacity to give us pure, unmediated access to reality—as to think it can be called on as a magical, mechanical solution to any human problem. As Halpern writes: AMG “attempted to turn the external traumas of American racism and economic crisis into an interactive simulation and to advance computing as a solution to these structural problems.”62
The second AMG project was just as problematic. This time the subjects in the experiment were not Black people and immigrants, but gerbils. The experiment, referred to as Seek, was part of a 1970 exhibition at the New York Jewish Museum called SOFTWARE. It consisted of a computer-controlled environment full of small blocks and gerbils, all of which was enclosed in Plexiglas. The gerbils were there to change the position of the blocks, and a robotic arm was then supposed to analyze the gerbils’ actions to try to complete the rearrangement according to what the machine thought the gerbils were trying to do. The catalog for the exhibition describes Seek as follows:
Seek metaphorically goes beyond the real-world situation, where machines cannot respond to the unpredictable nature of people (gerbils). Today’s machines are poor at handling sudden changes in context in an environment. This lack of adaptability is the problem Seek confronts in diminutive. If computers are to be our friends they must understand our metaphors. If they are to be responsive to changing, unpredictable, context-dependent human needs, they will need an artificial intelligence that can cope with complex contingencies in a sophisticated manner . . . much as Seek deals with elementary uncertainties in a simple-minded fashion.63
Not surprisingly, with its simultaneous lack of interest in the needs and particularities of gerbils and its equation of gerbils with “people,” the experiment was a disaster. As Halpern puts it, “The exhibition’s computers rarely functioned . . . the museum almost went bankrupt; and in what might be seen as an omen, the experiment’s gerbils confused the computer, wrought havoc on the blocks, turned on each other in aggression, and wound up sick. No one thought to ask, or could ask, whether gerbils wish to live in a block built micro-world.”64 Such experiments included multiple implicit assumptions about “people” (and other intelligent animals) as part of technological systems, but they were also very much part of the marketing traditions of technological spectacle.65 Like Claude Shannon’s maze for Theseus the mouse, it seems to be mostly about showmanship as a way to generate notoriety, funding, and, eventually, profitability. Even Negroponte later describes these early projects by the AMG as stunts rather than thoughtful interventions, in that they were “using museums and exhibits as a way to do sort of the outrageous ’cause you had a pass where you could do things and you don’t necessarily have to justify them in some scientific context and we were allowed to play.”66
In the years after Seek and before the opening of the MIT Media Lab in 1985, this pattern of promoting humanism via technology continued in a series of projects by Negroponte’s colleagues and collaborators. The projects were also instrumental to building the methodologies of “demo or die,” testing, and experimenting that became integrated as key lab techniques (see also chapter 6).67
What these projects and their techniques have in common is that they treat humans and technologies as abstractions, grant agency to some humans over others, and thereby grant agency over technology to some humans rather than others. Continuing the push Negroponte described in The Architecture Machine to develop a personal computer that was appealing to children, fellow faculty member, MIT Media Lab affiliate, and eventual cofounder of the OLPC project Alan Kay launched the “interim DynaBook” in 1972. Similar to the Hessdorfer Experiment’s assumption that teletype users would simply figure out how the machine worked, without any guidance or instruction, while simultaneously disempowering these same users by subjecting them to an elaborate deception at the hands of remote (white) researchers, Kay launched his project at the Association for Computing Machinery National Conference in Boston by declaring his belief that children “learn by doing” and that, unlike “the African child,” American children lack meaningful opportunities for learning by doing: “Unlike the African child whose play with bow and arrow INVOLVES him in future adult activity, the American child can either indulge in irrelevant imitation (the child in a nurse’s uniform taking care of a doll) or is forced to participate in activities which will not bear fruit for many years and will leave him alienated.”68 Given the vast variability of cultures across the continent of Africa, not to mention the vast variability in hunter-gatherer techniques and methods of teaching children, lab discourse once again utilizes an abstraction—“the African child”—to promote a personal computer.
Even stranger and more ironic is the fact that Kay seems to have believed he was going against the grain of MIT president Johnson’s exhortation to fix social problems with “humane technology” and Negroponte’s advocacy for “humanism through technology.” He writes: “For many years it has been a tradition to attempt to cure our society’s ills through technology. . . . Unfortunately, most of these ‘cures’ are no more than paint over rust; the sources of the initial problems still remain.”69
For Kay, the initial problem is the learning process that he assumes is informing most technological cures. Inspired by work by John Dewey, Jean Piaget, and Seymour Papert, he asserts that more attention ought to be paid to the child as “a ‘verb’ rather than a ‘noun,’ an actor rather than an object; he is not a scaled-up pigeon or rat” (or a gerbil).70 The DynaBook was designed, from the ground up, to allow this active, empowered child to learn “algorithmic thinking.” However, whether U.S. American or African, the figure of the child is yet another abstraction that exists outside of any specific socioeconomic context. While Kay may not have been making assertions about the neutrality of the machine as Negroponte did alongside later Media Lab directors, his reliance on the child as an abstraction points to his inability to recognize that bias is inevitably part of the design of any technology, including the DynaBook.
In Negroponte’s second volume on the work of the AMG, Soft Architecture Machines (1974), the leveraging of abstract, decontextualized versions of nonwhite, non-adult, and/or non-Western communities to promote particular digital technologies continues. In this book Negroponte draws on Bernard Rudofsky’s well-known Architecture without Architects (1964), which Rudofsky claims “attempts to break down our narrow concepts of the art of building by introducing the unfamiliar world of unpedigreed architecture” (i.e., building practices that are “vernacular, anonymous, spontaneous, indigenous, rural”).71 From this foundation, Negroponte extrapolates the idea of “the indigenous architect as an archetype.”72 This figure, according to Negroponte, lived in an environment that was “simple and comprehensible, punctuated with limited choices and decisions. He no more needed a professional architect than he needed a psychologist or legal counselor.”73 By contrast, he continues, “in our fast-moving societies our personal experiences are phenomenally varied. . . . This is why we need to consider a special type of architecture machine, one I will call a design amplifier.”74 According to Negroponte, his 1973 project Urban5 was an ideal example of a design amplifier. Essentially a more sophisticated version of the Hessdorfer Experiment and Seek, it was a computer-aided design program that allowed the user to manipulate virtual cubes with a light pen while engaging in dialogue with the program (which drew from a dictionary of five hundred possible options and questions) about their architectural wants and desires.75 On the surface, Urban5 sounds like a perfectly reasonable, early experiment in machine intelligence. 
However, once we know it is framed by the figure of “the indigenous architect”—whose fictionality becomes apparent via assertions about how “he” lives in a simple and comprehensible world lacking complexity, not to mention that it also lacks any kind of cultural and geographical specificity—Urban5 looks structurally identical to the Hessdorfer Experiment, Seek, and the DynaBook in creating problematic racialized figures in the context of primarily white elite corporate university practices, techniques, and experiments.76
By the late 1970s, the turn toward leveraging people of color, non-adults, and non-Westerners as a way to promote particular digital technologies shifts to another complex abstraction, namely, the “third world.” Following the massive success of The American Challenge, Servan-Schreiber went on to publish The World Challenge in 1981—the same year he founded the Centre mondial informatique et ressource humaine in Paris and appointed Negroponte as its founding director. This politician-turned-journalist-turned-tech-advocate describes a “renewed fraternal endeavor” that took place in Riyadh, Saudi Arabia. Its participants included oil ministers from Middle Eastern countries and unnamed, unspecified “pioneers” who we can only assume are tech entrepreneurs from the West.77 The point of the meeting, called the Taif seminar and chaired by Servan-Schreiber, was to discuss a secret report that was supposed to be made public by Sheikh Yamani on the twentieth anniversary of the founding of OPEC, but never was.78 According to Servan-Schreiber, this report “aims for nothing less than a new alliance between the Arabs and the peoples of the Third World against their traditional exploiters, the industrialized West. It is conceived as a warning, a challenge and finally a demand for a massive transfer of technology from the United States, Europe and Japan to the poor and needy.”79 While this sounds admirable enough, by the end of the book Servan-Schreiber slips into American-biased rhetoric about the conjunction of “Third World peoples,” telecommunications, microprocessors, and the importance of individual (read “American”) learners “learning how to learn”—a concept presumably gleaned from the same trio of thinkers influencing Alan Kay (Dewey, Piaget, and Papert). In short, “the peoples of Asia, Africa and Latin America should not have to repeat this process [of heavy industrialization]. Telecommunications, microprocessors and their tendency to converge in the new creative process should be placed freely and completely at the disposal of Third World peoples—so that they can become creators themselves.”80
By comparison with more egregious technology projects in the name of humanism, Servan-Schreiber’s account of the Taif seminar and of the technology transfer he believed ought to happen in the name of the “third world” is not nearly as problematic. However, it is noteworthy for its dependence once again on leveraging an abstraction, “Third World peoples,” to use rhetoric of technological innovation to, in turn, create new markets for products. It also paves the way for the OLPC project insofar as the first project Negroponte takes up in the early 1980s as founding director of the Centre mondial informatique et ressource humaine is the design of a pilot project for “computers-in-education for developing countries.”81 Negroponte and Seymour Papert worked to bring this pilot project to Pakistan, Colombia, and Senegal, where they installed several hundred Apple II computers donated by Steve Jobs. As Negroponte describes it in 2005, “for a time, these school kids commanded more computing power than did the central Senegalese government.”82 While the project only lasted for a year, Negroponte continued to launch similar computer campaigns, including a nationwide program to install computers in Costa Rica’s primary and secondary schools in the mid-1980s, and a program he launched with his son in 1999 to send fifty laptops to schools in a rural village in Cambodia.83 Negroponte’s tendency to rely on Cambodia to illustrate the power of his computer projects also involves frequent resort to the abstractive logic of humanism via technology, as his anecdotes always willfully ignore the particularities of individual cultures and communities for the sake of advocating for the mass exportation of technology projects.84
After traveling the world since at least the early 1980s to sell personal computers to developing nations, Negroponte announced in 2005 he had created a nonprofit organization, One Laptop per Child, to produce a one-hundred-dollar laptop “at scale.”85 In other words, the cost of the laptop could only be this low in the early 2000s if, according to Negroponte, they could amass orders for seven to ten million machines. Despite his oft-repeated statement that OLPC was not a laptop project but rather an education project, its essence was the same as the Hessdorfer Experiment and Seek: Got a poverty problem? Get a computer!86 A year after the program’s launch, Negroponte continued to rely on Cambodia as an example, touting how the children there learned “Google” and “Skype” as some of their first words while their parents “love the computers because they’re the brightest light source in the house.”87 He goes on to sell the project by declaring that its rightness and goodness is so unquestionable that “this laptop project is not something you have to test. The days of pilot projects are over. When people say well we’d like to do three or four thousand in our country to see how it works, screw you, go to the back of the line and someone else will do it, and then when you figure out this works you can join as well.”88 By 2012, studies were clearly indicating that whether they were used in Peru, Nepal, or Australia, the laptops made no measurable difference in reading and math test scores. Given the cavernous gap that exists between the rhetoric about OLPC and the realities surrounding the project, as Rayvon Fouché compellingly puts it, “OLPC demands belief in an altruistic illusion of American technology, African technological incapability, and its value neutrality to embark on a program to create a computer to change the lives of children in the developing world.”89
The pitch-perfect ending to this story is that in 2013, after selling millions of laptops to developing nations around the world, laptops that made no measurable improvement to anyone’s lives, Negroponte left OLPC and went on to chair the Global Literacy X Prize as part of the XPRIZE Foundation. However, the prize itself no longer seems to exist, and just a year later (2014) there was no record of his involvement with the organization. XPRIZE, however, does exist, and has set new records for the density of humanist sloganeering:
XPRIZE is an innovation engine. A facilitator of exponential change. A catalyst for the benefit of humanity. We believe in the power of competition. That it’s part of our DNA. Of humanity itself. That tapping into that indomitable spirit of competition brings about breakthroughs and solutions that once seemed unimaginable. Impossible. We believe that you get what you incentivize. . . . Rather than throw money at a problem, we incentivize the solution and challenge the world to solve it. . . . We believe that solutions can come from anyone, anywhere and that some of the greatest minds of our time remain untapped, ready to be engaged by a world that is in desperate need of help. Solutions. Change. And radical breakthroughs for the benefit of humanity. Call us crazy, but we believe.90
The board of XPRIZE includes every major corporate executive one can think of, and it appears they are not even aiming to produce things anymore, just “incentives.” In many respects it’s the logical conclusion of the trajectory toward abstraction we have outlined.
At the time of this writing, even in the wake of abundant revelations about how the OLPC was a resounding failure (most recently written about by Morgan G. Ames, who offers a deep look at the project’s “charismatic roots” along with a careful critique of charisma itself), and even in the wake of the Epstein scandal, the MIT Media Lab still exists.91 In spite of the lab’s tarnished image as a result of its affiliation with Epstein, it is busy churning out demos and products for consumers, corporations, and the military. It is also full of people—students, faculty, administrative staff, lab techs, and others—who were not privy to the benefits of its entrepreneurial showmanship, nor did they desire to be. Ito’s description of the MIT Media Lab’s culture as a complex, self-adapting system is interesting insofar as it tries to account for the existence of multiple projects and their very different stakes, ethics, and methods. But the situation at the lab still brings to the fore the internal contradictions in many large-scale hybrid labs that have to be acknowledged in order to understand how the practices, discourses, infrastructures, and policy contexts of labs are more complex than mission statements and manifestos lead one to believe. While a “follow-the-money” methodology might be useful in bringing out some of the most problematic sides of the current neoliberal university system, we also need to understand that hybrid labs are not reducible to stories about their (often white male) directors, and the narrative of their histories has to account for the complexities and alternative examples in these same labs. The complexity of hybrid labs is also why we want to turn to a different kind of media lab that emerges in the wake of the MIT Media Lab, one that engages with an alternative formulation of articulating people, institutions, and lab culture.
In addition to the consolidation of corporate university structures around the enthusiasm for innovation, the 1980s and 1990s were also about the birth of experimental media arts practices, groups, institutions, and exhibitions. Of course, these two trajectories were not entirely disconnected insofar as, in some larger institutions, the art-science focus was tied to the idea of artistic prototyping. However, a plethora of experimental practices came from a different place and used the term “media lab” in ways that departed from the corporate focus of the MIT Media Lab. For example, in Britain some of the early 1990s examples of media labs were grassroots emergent communities closer to an art and hacker ethos than to entrepreneurship.92 Some of the European net art scene, including the Nettime list community, was also wedded to the idea that emerging network practices were situated in ways that demanded physical interaction. Media labs were thought of as open sites and community spaces. As Josephine Bosma puts it: “The labs became concentrated exchange groups, in which artists, activists, and others learned not only about technology of the computer and the Internet, but also about the new social and cultural networks that were developing online. The media labs were a place of learning, inspiration and creation.”93 As Bosma emphasizes, these were also something different from the “high-tech media labs of media art institutions like the glossy spaces at ZKM in Karlsruhe or Ars Electronica in Linz.”94 The focus was less on products and marketing of new digital gadgets and more on “participatory culture” that put the emphasis on people in new ways.95
In Austin, Texas, the ACTLab was founded in 1993 by media theorist and performance artist Allucquère Rosanne (Sandy) Stone, who directed it until 2010. In the context of art, community, and experimental labs, the ACTLab was one of the first of its kind to establish a cross-disciplinary practice inside the university institution. It presented a case for the ethics of experimentation and served as an example of how to exist inside a larger institution, something Stone describes as codeswitching. As a way to protect the highly creative, flexible, situated, critical-minded experiments going on inside the ACTLab, rather than uncritically adopting management techniques that ostensibly ensure efficiency and profitability, Stone and the lab members performed these techniques for the world outside. Where the MIT Media Lab nested a corporate mind-set inside a veil of creativity, the ACTLab used a corporate veneer as a shield that allowed the non-utilitarian and the non-profit-minded to flourish within its spaces. Likewise, rather than producing a hyperbolic lab discourse designed to sell one particular version of the future, ACTLab produced what Stone calls “‘the Unnameable Discourse’—which was ‘unnameable’ because the language to describe it didn’t yet exist,” where “it” is anything that is open-ended and critical-minded, from a flamethrower to food, games, art, essays, and computer-mediated experiments.96
As we pointed out in the Introduction, hybrid labs and lab-like entities matter because they construct new “forces and realities” out of their materials. Those like Stone, who are “empowered to act as their credible interpreters,” mobilize these realities and forces in social programs. Rather than abandon the moniker “lab” because it has been evacuated of all meaning through profligate use, the ACTLab retained it as a paleonym, redeploying it in different terms. As Jacques Derrida notes, retaining old names risks falling back into the systems one is critiquing, but pretending that it is possible to vault outside of all their assembled meanings by changing a word or two is to ignore that while a lab may appear to be solid, stable, and ordered, it “is constantly being traversed by forces, and worked by the exteriority, that it represses.”97 The “lab” in ACTLab therefore offers a way of thinking about labs that suggests artists and humanists are in a position to defy the pressure to pursue the new and to rethink what it means to do twenty-first-century humanities work. In short, it is a prototype of what a hybrid lab can do.
Before the ACTLab, Stone worked in several contexts, including activism, technology, and science fiction. Her initial education led to work in sound engineering and collaborations with various figures of the 1960s rock scene, including Jimi Hendrix and Crosby, Stills, and Nash.98 Her interest in sound and performance found its way into the ACTLab, where it also nurtured a subtle understanding of space as another aspect of the lab’s theoretical activity. In the mid-1970s, Stone underwent gender reassignment, and from that point on her transgender identity has been central to her writing. In retrospect, Stone’s involvement in the 1970s with the “radical feminist lesbian separatist music collective” Olivia Records was a major formative experience in designing supportive spaces for gender identities and learning in pressured social situations.99
Stone’s key text for transgender studies, “The Empire Strikes Back: A Posttranssexual Manifesto,” came out in 1987. In part it is a response to attacks directed at Stone, most notably Janice Raymond’s The Transsexual Empire: The Making of the She-Male, which was part of a longer series of episodes of trans-focused hate speech.100 However, it is important to consider “The Empire Strikes Back” as part of a discussion about the materialities of embodiment in relation to critical epistemologies that emerged in the 1970s and the 1980s in fields such as transgender studies, feminist STS, and (later) critical posthumanism. Stone was heavily influenced in the 1980s by the work of Donna Haraway, author of “A Cyborg Manifesto” (a piece whose techniques of irony, the denial of closure, feminism, situatedness, and materialism Stone surely brought with her to UT Austin in 1993), partly because Haraway became Stone’s PhD supervisor.
Stone’s work was also influential in the establishment of the field of transgender studies, which emerges in “The Empire Strikes Back” as part of a productive opposition to forms of knowledge that demand stable positions, proposing instead a rejection of compulsory binary identities as the poles between which desire fluctuates. Gender as a product of medical, sexual, and related discourses is the obvious focus, but the essay already implies that this radical trans-position feeds into other forms of cultural inscription where the body is at stake.101 Resonating with Haraway’s “coyote” and other key conceptual figures of embodied and situated thought, Stone writes: “I am suggesting that in the transsexual’s erased history we can find a story disruptive to the accepted discourses of gender, which originates from within the gender minority itself and which can make common cause with other oppositional discourses.”102
The formal description of the ACTLab places the same emphasis on combining multiple stakeholders and traditions:
The UT ACTLab was a radical new kind of experimental program based on interactive, collaborative, student-centered learning created by a unique international and transdisciplinary group of artists, scholars, teachers, techies, and hackers. Founded in 1993 by Allucquere [sic] Rosanne (Sandy) Stone, our special qualities derived from courses and activities based on the ACTLab’s unique pedagogy; our custom multimodal studio specifically designed for ACTLab work; the enthusiasm and dedication of our community; the guiding vision of our directors, visiting artists and lecturers; and our students’ broad spectrum of interests.103
Stone often explains the ACTLab as an entity that translates radical pedagogical ideas into institutionally accepted forms. With a critical awareness of anti-hierarchical institutions such as Black Mountain College, as well as of contemporary media labs, the ACTLab bundled technology, educational forms, embodiment, and sexuality in ways that produced something much more radical than a normalized “interdisciplinarity.” As Brian Holmes has argued, the discourse of interdisciplinarity has itself become one particular marker of the industry of cognitive capitalism in academic work. Instead, we might consider ACTLab activities in terms of how they work inside institutional walls and yet speak to wider culture in useful ways. They connect to the legacies of modern avant-garde arts experimentation but also keep tabs on the particular ways that aesthetic play can easily be normalized as part of industry-oriented lab discourse in the university “production machine.”104
Stone did not consider academia to be the most creative of environments, but she was still aware that “the structure incumbent upon the academic project was important for developing critical thinking.”105 The ACTLab became a way to work with the tradition of critical theory that intersects with institutions while also attempting to transform them. In her talk “On Being Trans, and under the Radar,” Stone explains being trans as an experiential state connected to radical epistemologies in such a way that it evades certain forms of living. For example, she refers to Gloria Anzaldúa’s term “mestiza consciousness” as a “state of belonging fully to none of the possible categories.”106 Stone guides her audience swiftly through the postcolonial implications of such positions, then moves to a discussion of the ACTLab as a space of learning, theory, and experiments. There is no one answer as to why the ACTLab is a lab, but this talk clarifies the matter by describing its role as a facilitator of particular sorts of educational and research ideas.
Because of the predominance of lab discourse that champions the primacy of innovation, this is often the context in which the ACTLab is described: “From its early 1990’s virtual world research into the creation of collaborative spaces (both text-based and three-dimensional worlds) and behavioral research of the inhabitants to the early 2000’s exploration and development of peer-to-peer video streaming systems, and its most recent work with BarCamps and social-media based interaction phenomena, the ACTLab has long existed at the cutting edge of New Media.”107 Without ignoring this side of the lab, from our perspective it is interesting to consider the ACTLab as a technology lab for humanities theory and cultural studies. While engaging with new technologies, the ACTLab places the emphasis on the production of new forms of knowledge creation. In this way, the lab is a spatial and organizational contribution to the poststructuralist legacy that emerged in relation to what Rosi Braidotti has called “radical epistemologies,” and to transgender studies in general.108 Such radical epistemologies establish a different sense of the subject than that implied by corporate lab types, especially when it comes to their narratives of technological humanism and the racial politics those narratives imply, as we analyzed above.
A significant difference between the characteristic discourse of many studio-laboratories and media labs after the 1980s and the discourse around ACTLab is that the latter’s courses and philosophy are concept-driven. From Stone’s perspective, concepts are just one form of material to work with; ACTLab often contextualizes them in terms of making: “The basis for our class structure is that deep learning engages all the senses. We believe that theory flows from the act of making. We consider hermeneutics to be the basis of ACTLab philosophy: active, playful engagement, informed by individual effort and open to surprise.”109
While “making” has gradually come to refer to a wide range of academic and non-academic contexts in critical design and beyond, it’s important not to strip it of its more radical implications in theoretical and political discourses. The tactile, sensorial nature of conceptual and critical work is one part of what the studio environment in the lab is able to support. Such lab practices and discourses acknowledge that people, subjects, are embodied. At the same time as we remember the etymological relation of “laboratory” and “labor,” it is also useful to consider how the particular strain of cognitive and aesthetic knowledge-work at the ACTLab engages with the embodied aspects of using digital technologies. The use of “transdisciplinarity” and of words like “surprise” is not merely off-the-cuff; it is central to the way in which technology functions as part of the lab’s concept-driven pedagogy: “Our focus is primarily on creativity and secondarily on technology, on circuit bending rather than using prepackaged devices, on ripping up technology, reassembling it in unfamiliar forms, and making it do unexpected things.”110 Concepts entangle with social, cultural, aesthetic, and political contexts, and making stuff becomes a generative methodology for engaging with concepts and using critical thinking to produce unexpected directions.
Stone also articulates a useful way to visualize, design, and conceptualize the work of the ACTLab as part of an institution. She expresses this nested existence through the figure of the codeswitching umbrella, which we see as key to the particular actions and processes of hybrid labs in universities and beyond. Under the codeswitching umbrella, three principles inform all activities in the lab: a refusal of closure, an insistence on situation, and a search for multiplicity. But since institutionality itself forbids anything resembling these three principles, the lab and its activities must exist under protective cover.
The umbrella is opaque, hiding what’s beneath from what’s above and vice versa. The umbrella is porous to concepts, but it changes them as they pass through; thus “When’s lunch?” below the umbrella becomes “Lunch is at noon sharp” above the umbrella. In this conceptual model the lowest subbasement level you can descend to, epistemically speaking, is the ACTLab, and the highest level you can ascend to, epistemically speaking, is Texas. Your mileage (and geography) may vary. . . . The codeswitching umbrella translates experimental, Trans-ish language into blackboxed, institutional language. Thus when people below the umbrella engage in deliberately nonteleological activities, what people above the umbrella see is organized, ordered work. When people below the umbrella produce messy, inarticulate emergent work, people above the umbrella see tame, recognizable, salable projects. When people below the umbrella experience passion, people above the umbrella see structure.111
The umbrella works as a sort of transformer as well as an enabling device. Specifying it as a “codeswitching” umbrella alludes to the necessary fluidity inherent to trans-subjectivity: to be able to—and sometimes to be forced to—switch among roles, languages, positions, and performances. To paraphrase Stone, the ACTLab epitomizes the messy reality of the sort of informal, experimental, open-ended work that increasingly needs to be blackboxed so that university accreditation and management systems can recognize it.112 In sum, the lab is less a stable space and more a codeswitching mechanism.
Such self-reflective ideas emphasize that a lab is a place of situated practices that acknowledge the particularity of local knowledge. This emphasis also helps us to ask questions about where, with whom, and under what limits we engage with embodied experience in academic and para-academic culture.113 Rather than thinking of fixed places and reassuringly solid objects, we should be thinking of movements, transformational experiences, affects, and activities that (code)switch people and things. The ACTLab was an institution within an institution: nested but partly autonomous; constantly learning and adapting survival techniques; perpetually working through how to shelter the lab from the various storms of university culture; and constantly switching the work it was engaged in at the local level into particular units of administrative addressability.
However, some have raised critical observations about whether a shielded lab existence might lead to separatism. Patrik Svensson, former director of Humlab at Umeå University in Sweden, discusses the ACTLab in the context of digital humanities infrastructures and of his own site visit to the lab. Svensson frames the umbrella as an “oppositional stance” that might be so opaque to outsiders as to hinder institutional collaboration. He writes that “it would seem that an inside position—under the umbrella—is not easily compatible with changing or subverting what is outside (e.g. the rest of the university)—above the umbrella. This is a deliberate and justifiable strategy, of course, but nevertheless an important question is whether there could be mutual gains from a more permeable umbrella?”114 If one assumes labs are places of connectivism or trading zones, Svensson’s critique is understandable. But trade does not necessarily happen on equal terms, and the value of labs establishing ties across disciplinary boundaries and existing departmental categories needs to be weighed against other meaningful contexts.
Consider the issue from the perspective of feminist hackerspaces and other hybrid labs established by traditionally marginalized groups (including people of color and those in the LGBTQ+ communities). As Sophie Toupin argues, withdrawal, boundary making, and separation can be deployed in tactical ways to build an institutionally shielded existence. Quoting Faith Wilding and Critical Art Ensemble, Toupin reminds us there is “a distinct difference between using exclusion as a means to maintain structures of domination, and using it as a means to undermine them,” echoing bell hooks’s point that one may choose “marginalization as a space of radical openness.”115 The lessons of such asymmetrical institutional situations also need to be approached on their own terms. We need particularly sensitive cartographies of labs that do not merely function to connect, network, and produce profitability. Seeing the ACTLab in the context of generative tactical closure frames it in terms of survival in an intellectually difficult environment, one that at times questioned the principles of the lab and its director’s engagement with identity politics and theory. The umbrella, therefore, does not signal a hindering closure but rather a device that is able to sustain multiple realities in a para-institutional coexistence.
What if hybrid labs are indeed partly defensive structures, shelters, and membranes that can thrive in institutional settings? What if they are interfaces that open up to multiple worlds and harbor particular techniques of academic practice? Consider how labs allow double personalities, multiple fronts, and interfaces to exist as part of institutional infrastructures. Worlds of making and activity should include the sort of intellectual work that goes into thinking about how the particular form of the lab can both shelter and transform institutional structures, including how people are supported, connected, and recognized.
While it is hard to deny that in many instances labs are driven by metrics and closely connected to a wider set of economic policies, we must remain aware of the diversity of discourses and practices and be able to support these alternative voices. Besides interventions into disciplinary discussions, hybrid labs can be places where the gray work of institutional mediation means that funding can support research and activities that intentionally push against traditional boundaries.