2
Thought
Acceleration, Automated Thinking, and Uncertainty
Intelligent technics slip through the net . . . and out of control, into the harsh swarming of dynamic equilibria. At the end of history, no-one will be there to put the brakes on positive feedback systems.
—Steve Metcalf, “Killing Time/Strife Kolony/NeoFuturism”
Anyone trying to work out what they think about accelerationism better do so quickly. That’s the nature of the thing. It was already caught up with trends that seemed too fast to track when it began to become self-aware, decades ago. It has picked up a lot of speed since then.
—Nick Land, “A Quick-and-Dirty Introduction to Accelerationism”
Education governance studies have thoroughly analyzed and discussed new modes of accountability, performativity, and policy as numbers.1 Critiques of these developments, including our own, often focus on the instrumental rationality that underpins them, echoing Lyotard’s early definition of performativity as the pursuit of the best possible input/output relations and framing data-driven governance as the “calculative” pursuit of narrow ends.2 From this perspective, the introduction of data science and artificial intelligence (AI) in education governance appears as the latest expression of the desire to govern education systems according to an instrumental rationality. In this chapter, we propose that the introduction of AI as a technology of education governance can exceed instrumental desires and conjure new possibilities for the governance of education systems. We conceptualize the ways in which education governance now incorporates machinic forms of power, force, and control that shape human desires and values. We develop the concept of synthetic thought, and we argue that machine cognition can be more than a tool to advance educational efficiency, performativity, and accountability. If synthetic thought exceeds mere calculation by employing logical inference and critical reasoning, then we ask whether it can produce more than an intensification of the educational performativities and data-driven accountabilities that have saturated many educational systems and sectors globally.
Critical education studies exploring the early impact of AI on education tend to draw on the humanist values that have long underpinned educational thought. Critiques of the rapidly growing EdTech industry have focused on the limitations of applying AI, such as adaptive learning and personalization, when compared to human pedagogical relationships.3 Reports on the use of AI in public services such as education emphasize the potential risks and biases of “black-boxed” AI systems and the importance of the “human in the loop.”4 Significant investment is being made in human initiatives that focus on producing “explainable AI.” Calls for the development of twenty-first-century skills also emphasize the importance of noncognitive, embodied traits that cannot be readily automated.5 Each of these arguments, in different yet cognate domains, sustains a dichotomous view of humans and technology that can be all too easily read as positioning humans as both prey to, and guardians against, the negative effects of automation and AI.
Our aim in this chapter is to open up alternative theoretical perspectives premised on the role of synthetic thought in education governance. While recognizing the merits in the critical perspectives outlined above, we explore the potential, at least in theoretical terms, of focusing on synthetic thought to open new possibilities for understanding how AI may exceed instrumental rationality, as well as creating new, possibly unsettling, political rationalities in education. As Rieder posits, machine learning and associated approaches are not only opaque (the black box) but also “think” in ways that are “potentially strange and hard to fit into established categories.”6 As such, it is important to “consider the possibility that they challenge cultural modes and social institutions in . . . fundamental ways,” a perspective that we take up in relation to education governance.7
This chapter is, therefore, an experiment in creating new lines of development for thinking about education governance by conceptualizing the cooperation of human and machine cognition in ways that blur both their assumed and actual distinctions, and that simultaneously attempt to resist understandings that treat AI as a reduction of human cognition. We are interested in what it would mean to think expansively about agency and objects, and about the associations and relations of machines and humans in governance. These relations are central concerns for fields such as actor-network theory or new materialisms,8 and while our interests do cover similar ground, we do not locate our discussion in these territories. We take up a different set of theoretical tools to consider the systems of thought that operate across different agents in education governance—some of which are increasingly nonhuman—and the processes of object-becoming-subject and subject-becoming-object that these systems produce. To pursue this aim, we draw on work that examines how the development of AI has generated approaches that go beyond the calculation of clearly defined ends based on a set of premises. We begin with a brief overview of theories of technology and connect these theories to automation and governance. We show how technology is a key mediator of political rationalities and emphasize the need to move the discussion beyond notions of technical determinism. We suggest that while technology may not be “in control,” the locus of control does not reside with human “users” either.
Next, we extend this thinking into a review of accelerationism, which has been a marginal field of theoretical development in the humanities and social sciences over the past three decades. The emergence and development of accelerationism occurred at the same time as data-driven modes of performativity and accountability became prominent in education, yet it offers a very different perspective on these developments when compared to more-familiar critical positions in education policy studies. The group of scholars that forged the accelerationist perspective in the 1990s has also influenced, and contributed to, recent efforts to open new perspectives on computation and thinking. Accelerationism thus provides a good starting point for an alternative theorization of AI and its development in education. We trace the emergence of accelerationism and outline its main tenets. We adopt a history of ideas approach to illustrate the development of a line of thought that has given rise to recent work on computational reason. We argue that the core accelerationist position offers a strong critique of the influence that human agency can exert over the development of technocapitalist systems—that is, no one can put the brakes on the acceleration of these systems, as the epigraph above suggests. Drawing on this theoretical tradition, we then consider how critical philosophy of, or indeed as, synthetic thinking can offer new perspectives on the role that AI might play in governing education.
Automation, Technology, and Governance
Many critical perspectives on technology continue to center a human actor as a participant, user, or object of technological decision-making.9 We would, perhaps a little unfairly to some, locate much of this work in the instrumentalist view, where technology is seen as a tool serving human ends. Even approaches that treat AI and algorithms as sociotechnical assemblages can continue to treat these technologies as tools to be utilized, resisted, or ameliorated. This section provides an overview of theories of technology that move beyond the dichotomous view of humans and machines that, as noted above, tends to dominate current discussions of AI and algorithms in education governance. We turn to concepts that suggest “we can think, signify, make sense and represent who we are in part only because of technology.”10 We examine what this conceptualization means for understanding the links between technology, automation, and governance.
The view of human-technology relations that we take up is the “substantivist” position, according to which technology is a dynamic system that “is a decisive mediator of social actions and cultural values.”11 Stiegler, in a detailed theorization of “technics,” which he defines broadly to include the first Stone Age artifacts and the latest digital platforms, argues that technology and culture are irreducible to one another.12 Roberts outlines that, for Stiegler, culture can be understood as
the product of technics as the prosthetic relationship between the human and its “exteriorisation” in matter. Technics therefore does not have the instrumental sense of technology as a tool that the human makes use of but rather defines the human as no longer simply a biological being.13
There is constant movement between two nonautonomous agents in the imbrication of humans and machines. As Mackenzie notes, “The technical runs ahead of culture, but it is not alone. It enlists humans to power its instantiation. . . . It is not autonomous or intrinsically dynamic.” Technology has collective implications; it shapes the social, cultural, and political conditions of human life, but this does not imply technological determinism.14 As Roden argues, “While technology exerts a powerful influence on individuals, society and culture, this cannot be an ‘autonomous’ influence because there are not ends or purposes proper to it.”15
Stiegler’s emphasis on exteriorization provides a helpful way to theorize machines and governance, including exploring the emerging political rationalities of anticipation, prediction, and automation. For Stiegler, exteriorization describes the condensation of culture into “prosthetic” artifacts that augment human action (e.g., a hammer). An example of exteriorization is the significant shift from statistical governance to algorithmic governmentality, which results in “a certain type of (a)normative or (a)political rationality founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and pre-emptively affect possible behaviours.”16
The combination of automation and behaviorist approaches allows us to begin to examine how governance is increasingly exteriorized as aspects of the cognitive processes involved in governance become condensed into machines.17 Increasingly, governance is operating through machines that are changing the organization of life, and thus “the critique of the evolution of artificial networks must concentrate on their emergent powers of cognitive, somatic and economic synchronization.”18 In other words, treating AI, schools, classrooms, and even educational systems themselves as so-called black boxes inscribes autonomous interiorities in these objects. Thus, initiatives designed to produce “explainable AI,” for example, are attempts to expose the interiorities of AI, which for our purposes are the very same logics, discourses, and rationalities that have governed classrooms, schools, and educational systems for the past forty years and that have been exteriorized into new platforms.19 In contrast, we will emphasize the synthetic relationship between the exteriorization of human culture and its folding back into human cognition through our engagement with machines.
Central to our theorization of synthetic thought is what Roden calls “new substantivism” and its account of self-augmenting technical systems. This perspective does not collapse automation into autonomy or suggest that self-augmenting technical systems are self-aware. Self-augmenting techniques, and specifically the forms of AI and automation upon which we focus, are those that are technically abstract, but the “techniques do not determine how they are used.”20 Examples include the wide applications of machine learning, from medicine to streaming services and online shopping platforms. Recommender systems that anticipate user preferences are another example. While mostly used in business applications, these shape everything from what song is recommended next in a streaming service like Spotify to what problem should be undertaken in an adaptive math tutoring program. As Roden argues, “Techniques are more abstract the more they are available for reapplication or reconfiguration in disparate contexts.”21 We might suggest, therefore, that in education governance different applications will employ different algorithms, and that there is an abstract isomorphic relationship between the algorithmic approaches adopted across myriad contexts, from funding and resourcing to predictions of results.
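To make this abstraction concrete, the following minimal sketch (ours alone, not any vendor’s actual system; the users, items, and ratings are all hypothetical) shows a single nearest-neighbor recommendation technique being reapplied, unchanged, to two disparate contexts: music streaming and adaptive tutoring.

```python
from math import sqrt

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse interaction vectors."""
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(target: str, interactions: dict) -> str:
    """Suggest the unseen item favored by the user most similar to target."""
    others = [u for u in interactions if u != target]
    nearest = max(others, key=lambda u: cosine(interactions[target], interactions[u]))
    unseen = set(interactions[nearest]) - set(interactions[target])
    return max(unseen, key=lambda i: interactions[nearest][i])

# The same abstract technique, reconfigured for two disparate contexts:
listening = {"ana": {"song_a": 5, "song_b": 3},
             "ben": {"song_a": 4, "song_c": 5}}
tutoring = {"ana": {"fractions": 2, "algebra": 5},
            "ben": {"algebra": 4, "geometry": 5}}
print(recommend("ana", listening))  # -> song_c
print(recommend("ana", tutoring))   # -> geometry
```

The point of the sketch is Roden’s: nothing in the technique itself determines whether it recommends songs or math problems; the application is decided elsewhere.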
With the increasingly widespread use of algorithms and AI, we suggest self-augmenting technical systems are beginning to reshape not only what is knowable but also what is doable. A system of this kind “does not remove human agency but mediates it through networks where no single agent or collective is able to exercise decisive control over the technical system.”22 These technical systems act as catalysts for further action rather than determining this action. Machines are thus creating semiautonomous conditions of existence, and in doing so they are creating new conditions of possibility for what we can understand as action and control in education governance. As Savat posits,
any technology, or machine, opens up a specific form of action, recognising that thought too is a form of action. At the same time, any technology or machine may close off certain forms of action (and thought).23
Much of the work on technology and governance indicates that control is difficult if not impossible to ascertain, because “the order of a network is total and open, horizontal and distributive, inclusive and universal.”24 Therefore, it may be that we need to come to grips with an idea of technology and governance that is neither controllable nor in control. As Roden suggests,
we need not attribute agency or purposiveness to technology to explain why the evolution of technical systems eludes our control. If technology is “out of control” it does not follow that it is “in control” of us or under its own control.25
If technology is neither in nor under control, then we need to grapple with the possibility that new types of governance systems exteriorize thought by automating certain cognitive processes. However, automation, if we follow Roden, does not mean rendering governance fully autonomous. Yet automation does reinforce the centrality of digital data, which become the basis of algorithmic governmentality and which can change the possibilities for political agency and governance. As noted by Crogan, Stiegler posits that “algorithmic governmentality disrupts established material and institutional arrangements for producing and verifying truth.”26 Through forms of automated decision-making, governance takes the individual and fragments it, instantiating a central insight of Deleuze’s theorization of societies of control, which involve the transformation of populations—which were the object of governance in disciplinary societies—into “samples, data, markets, banks” and individuals into “dividuals,” or “numbered bodies of coded ‘dividual’ matter to be controlled.”27
Three aspects of exteriorization are important here. First, exteriorization enhances “the potential of technology as a source of contingency, rather than as a limit or a threat to it.”28 Consequently, rather than politics being understood as the outcome of contestation over interests and intentions, governance through automation destabilizes the subjects of governance. Second, exteriorization connects previously “hidden” and heterogeneous elements together within new educational surfaces, new data arrangements, new codes, and new infrastructure systems designed to govern through access to assembled forms of “dividualized” information. Third, exteriorization can be understood as disrupting our understandings of not only normative action but any form of human intentionality in governance. As Nealon explains, exteriority “posit[s] a discursive field or network in which no term can rule from a privileged place of interiority.”29 In other words, agency and intentionality are no longer interior or intrinsic to human actors, but rather desire is shaped by exteriorized forms of cognition in accelerating networked and self-augmenting technical systems.
Accelerationism
Accelerationism is a theory of the time compression produced when “commercialization and industrialization mutually excite each other in a runaway process, from which modernity draws its gradient.”30 Accelerationism emerged as an identifiable theoretical perspective in the 1990s at Warwick University, in the work of Nick Land, Sadie Plant, and the Cybernetic Culture Research Unit (CCRU). It was a marginal area of theoretical development within the humanities and social sciences until the early 2010s, when a new generation of thinkers engaged with the writings of Land and the CCRU, which had become even more powerfully explanatory of contemporary technological and cultural developments over the intervening years. By 2017, the ideas of the accelerationists had gained sufficient currency in the United Kingdom for The Guardian newspaper to publish a lengthy feature about the movement, which it described as a “fringe philosophy” that “predicted the future we live in.”31 Accelerationist ideas were also influential in a RAND Corporation foresight report published in 2018, which is titled Speed and Security: Promises, Perils, and Paradoxes of Accelerating Everything. The report begins from the assumption that “technological developments and social dynamics are working in tandem to shift society into hyperdrive.”32 The authors aim to draw insights for policy from explorations of future scenarios informed by sociological theory and accelerationism, which the report describes as an “outlier philosophy-turned-ideology.”33
Accelerationism has a longer, occulted history in the writings of thinkers concerned with modernity; indeed, it is first and foremost concerned with “the disorienting sensation that modernity is out of control.”34 This sensation found expression in the work of diverse thinkers, from Marx and Nietzsche to the Russian cosmists and the Italian futurists.35 Indeed, a central tenet of accelerationist thought—that regulation of dynamic systems is a secondary process that inhibits a primary process that would otherwise destroy the coherence of the system (the dynamic captured in Deleuze and Guattari’s concept of deterritorialization)—is already present in Marx’s claim regarding the destructive tendencies of free trade within capitalist systems:
In general, the protective system of our day is conservative, while the free trade system is destructive. It breaks up old nationalities and pushes the antagonism of the proletariat and the bourgeoisie to the extreme point. In a word, the free trade system hastens the social revolution. It is in this revolutionary sense alone, gentlemen, that I vote in favor of free trade.36
This emphasis on destruction is also evident in Nietzsche’s writings, particularly in a fragment from the Will to Power notebooks, with the claim that “until now, ‘education’ has had in view the needs of society: not the possible needs of the future, but the needs of the society today. One desired to produce ‘tools’ for it.”37 Nietzsche observes that modernity—and we would note the mass education systems that emerged to reinforce it—involved “the increasing dwarfing of man,” which created the conditions for a stronger, more excessive people through “a great process that cannot be obstructed: one should even hasten it.”38 Here, in both Nietzsche and Marx, we can see the seeds of the accelerationist notion that one ought to side with the primary processes underpinning capitalism and modernity, or at least lacks the agency to forestall them indefinitely, and thus that the only escape from this world system is through its more or less rapid destruction.
The development of accelerationism as a more explicit school of thought occurred in three waves, beginning in France in the wake of the events of May 1968.39 At this time, belief in the potential of collective anticapitalist politics was giving way to a sense of political exhaustion and the punk nihilism of the 1970s.40 Thinkers such as Lyotard, Baudrillard, and Deleuze were experimenting with “heretical” lines of thought, at least from some perspectives and understandings of Marxism. In a foundational and widely cited “accelerationist fragment,” Deleuze and Guattari ask whether efforts to resist the development of market capitalism were misdirected, and whether a revolutionary politics could instead be pursued through
the movement of the market, of decoding and deterritorialization? For perhaps the flows are not yet deterritorialized enough, not decoded enough, from the viewpoint of a theory and practice of a highly schizophrenic character. Not to withdraw from the process, but to go further, to “accelerate the process” as Nietzsche put it: in this matter, the truth is that we haven’t seen anything yet.41
Deleuze and Guattari argue that societies perform a regulatory function by codifying desire, but capitalist societies are distinctive insofar as they deterritorialize desire. However, this deterritorialization is unsustainable, and society’s institutions must reterritorialize desire, or repurpose it, for example, onto the family, the nation, or consumption. For Deleuze and Guattari, deterritorialization is the primary destructive process that capitalist society tries to make sustainable through secondary regulation. From this perspective, resisting capitalism contributes to the reterritorialization that capitalism needs to keep its explosive tendencies in check.
The second wave of development in accelerationist thought occurred in the 1990s, in the wake of the Cold War and during a heady moment of economic globalization and technological advances, most notably the emergence of the World Wide Web. At this time, the CCRU at Warwick University brought together a group of philosophers, cultural theorists, and artists around the controversial figure of Nick Land, whose idiosyncratic reading of Deleuze and Guattari has been pivotal in the development of accelerationist thought. Land’s writings during this period were characterized by an interest in cybernetics and AI, and an experimental cyberpunk style that has influenced recent speculative philosophy exploring the potentials of synthetic thought. Land’s work also has implications for thinking about politics and governance in capitalist societies. Land argues that
“acceleration” . . . describes the time-structure of capital accumulation. . . . [T]echnology and economics have only a limited, formal distinctiveness under historical conditions of ignited capital escalation. The indissolubly twin-dynamic is techonomic (cross-excited commercial industrialism). Acceleration is techonomic time.42
Put simply, profit is reinvested into technological development that begets more profit while disrupting and deterritorializing social systems. A clear example of this is the Silicon Valley business model, which disrupts existing economic and social processes (although the knowledge underpinning this disruption depends on the existing social systems of ongoing government funding for university-based computer science research). Land acknowledges that, inevitably, this dynamic will be perceived as a problem demanding regulation. We have seen many examples of this, from “slow” movements to data protection regulations and calls for the development of ethical and explainable AI. However, Land raises the question of whether the primary processes driving these developments are ultimately resistant to political intervention, or whether regulation can only temporarily limit their destructive tendencies. More recent analysts of “surveillance capitalism” share this reservation.43 For example, the regulation of AI can be resisted or worked around by technology companies as they frame technological development as contributing to a nebulous but nonetheless expansive notion of the social good.44 This period of accelerationist thought can be characterized as promotional in its celebration of the (destructive and disruptive) possibilities of AI.
The most recent developments of accelerationism have occurred over the past decade, following the 2008 global financial crisis and the AI spring spawned by Google’s early successes with machine learning and more recent research on deep learning.45 Seeking to combine the accelerationist sensibility with Marxist political strategy, Williams and Srnicek’s “#Accelerate: Manifesto for an Accelerationist Politics” renewed interest in a leftist politics that embraced technological disruption.46 The Laboria Cuboniks collective went further still, fusing accelerationism and gender politics in its important Xenofeminism: A Politics for Alienation. Both interventions were re-energizing in their rejection of a technophobic politics of sustainability, which they saw as ultimately conservative, reactive, and exclusive, in favor of embracing the world-making potential of new technologies. Gardiner provides an important clarification of this accelerationist school of thought, explaining,
Accelerationism is often accused of techno-fetishism, and a penchant for technocratic solutions. Williams and Srnicek do, however, make it clear that technosocial acceleration doesn’t simply “happen” of its own accord, or by dint of a self-styled “expertocracy.” It must always be under the aegis of collective political agency, and ultimately subordinated to continually evolving social ends: “The command of The Plan must be married to the improvised order of The Network.”47
These thinkers thus argue for the appropriation of technology to serve a different set of values from those driving its development in the commercial world.
The key insights of Landian/CCRU accelerationism have also been developed further in the form of “unconditional accelerationism,” which takes the prognosis from an obscure CCRU text as its basic tenet: “At the end of history, no-one will be there to put the brakes on positive feedback systems.”48 From this perspective, the answer to the question of what is to be done about acceleration, automation, and their potentially destructive effects is to let go of our assumption that something can and should be done because the primary process of deterritorialization cannot be subordinated to human regulation. This school of thought more readily embraces the view that acceleration happens of its own accord. This is ultimately a position of acceptance in relation to technological development and disruption.
This brief overview of accelerationism provides background and context to the theoretical disposition that animates our aim in this chapter, the book more broadly, and that we return to in the final chapter: the problematization of approaches to AI in education governance. Accelerationism offers a variation on the cybernetic view of technological development within financial-capitalist societies and embraces the creative potential that inheres in deterritorializing processes, rather than emphasizing human values, regulation, and sustainability. We draw on accelerationism to posit four contemporary theoretical and political responses that each imply a normative relation to AI (i.e., an answer to the question of what ought to be done with AI). We characterize the first three of these positions as (1) promotion (AI will make things better); (2) appropriation (if we use AI to serve particular values, then we can make things better); and (3) acceptance (AI is a part of evolutionary and cybernetic dynamics over which we have little control). These three responses can be seen as different acts in a Nietzschean drama, with the tension between good and evil conceptualizations of acceleration in the first two responses giving way to passive nihilism.
Before moving to the fourth position, we note that accelerationism thus describes a broad set of conflicting theoretical and political positions, but our focus here is not on arbitrating debates between right and left acceleration or evaluating the merits of these theories for undergirding political agendas. Rather, we extend the speculative theoretical disposition that has been a hallmark of accelerationist thought by examining recent work on algorithmic governmentality, unconscious cognition, and automated thinking. This leads us to a fourth theoretical and political position, problematization (AI is a condition for thought to become aware of being synthetic),49 which marks a shift to an active nihilism in which synthetic thought and its creative possibilities are embraced.
Nonconscious Cognition and Automated Thinking
We have argued that machines—and specifically accelerating, networked, and self-augmenting technical systems in which AI is embedded—can exhibit exteriorized forms of cognition. Our contention is that with the rapid growth of data infrastructures, big data, and AI, and the introduction of these elements into education governance, we have arrived at a precipice of automated cognition that has the potential to shape human thinking. Indeed, our conception of synthetic thought describes (1) this conjunction of human thinking and exteriorized cognition and (2) the creative potential of this development to go beyond the intensification of calculation and instrumental rationality. In this section, we take up the work of Katherine Hayles and Luciana Parisi to develop the concepts of nonconscious cognition and automated thinking that inform our notion of synthetic thought.
Cognition can be defined as an informational process that does not necessarily involve consciousness. Hayles distinguishes between thinking as a property of conscious entities and cognition as an informational process that occurs without consciousness but, nonetheless, with intention.50 Cognition differs from material processes (e.g., geology) insofar as it involves emergence, adaptation, or complexity, and it differs from animal thought insofar as it does not require consciousness. As Hayles explains, “We can say that all thinking is cognition, but not all cognition is thinking.”51 Nonconscious cognition “operates across and within the full spectrum of cognitive agents: humans, animals, and technical devices.”52 Moreover, when embedded in the networked technical devices that constitute infrastructure space, “the cognitive nonconscious also carries on complex acts of interpretation, which syncopate with conscious interpretations in a rich spectrum of possibilities.”53 Nonconscious cognition inhabits different temporalities from human thought, creating new possibilities for integration with, and exploitation of, our thinking. As Hayles writes, “One of the ways in which the cognitive nonconscious is affecting human systems . . . is opening up temporal regimes in which the costs of consciousness become more apparent and more systemically exploitable.”54 Consider, for example, the advantages gained by automated algorithms in stock market trading in comparison with human traders. As we automate more and more of the cognitive load of modern life, we inescapably and inexorably change the cognitive ecology in which human thinking evolves and functions.55 Hayles characterizes this process as “technogenesis, the idea that humans and technics coevolved together.”56
While Hayles provides a framework for conceiving of cognition as broadly distributed in technical systems, we still confront the view that these systems necessarily perform a reductive form of cognition that is limited to calculating outputs from inputs. In her recent work, Parisi pursues “an alternative approach to reasoning that accounts for the inferential potential of automated computation.”57 Parisi argues that the conflation of automation and capitalist logics in critiques of instrumental rationality blinds us to other possibilities of automation: “the critique of the instrumentalisation of reason according to which automation and the logic of capital are equivalent needs to be re-visited in view of rapid transformations of automation today.”58 The critique of instrumentalism has been influential in critical education studies and has led to the view that data-driven technologies are part of a capitalist logic that is detrimental to public education, particularly due to their association with neoliberalization and privatization. These instrumentalist critiques align with critical scholarship that explicitly or implicitly rejects the production and use of quantitative data to drive policy and practice because it undermines the values and purposes of humanist education.59
While recognizing the importance of these critiques, we suggest that other possibilities are also created by the algorithms that enable automation. Parisi contends that it is
the transformation of the logic of the technical machine itself and thus of a philosophy of computation that needs to be unpacked, disarticulated and reconstructed so as to allow for a critique of capital that is not immediately a negation of automation and its possibilities for thinking.60
In this context, Parisi posits that developments in the algorithms used for machine learning have created further conditions for automated thinking: “Since 2006, with deep learning algorithms, a new focus on how to compute unknown data has become central to the infrastructural evolution of artificial neural networks.”61 Historically, AI research has explored a number of different techniques, from good old-fashioned AI, which is a symbolic rule-based approach, to machine learning, in which systems are trained on large data sets.62 A recent meta-analysis of papers in the field of AI has shown substantial shifts toward machine learning, deep learning (machine learning using neural networks), and reinforcement learning (training neural networks using punishments and rewards).63 Reinforcement learning has received significant attention since the success of Google’s AlphaGo and AlphaZero algorithms. While deep learning approaches are likely to be replaced by another paradigm, just as previous approaches have been, they have taken over from knowledge-based approaches.64 Theories, logic, and rules are being replaced by data, networks, and learning.
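The shift from rules to learning can be made concrete with a minimal, entirely hypothetical sketch: the same task (flagging an “at risk” score) is addressed first by a rule that an expert writes down, in the spirit of symbolic AI, and then by a simple perceptron, a distant ancestor of today’s deep networks, that induces its own threshold from labeled examples.

```python
# A hand-written rule (symbolic, knowledge-based AI): the threshold is given.
def rule_based(score: float) -> int:
    return 1 if score < 0.5 else 0  # flag as "at risk" below a fixed cut-off

# A perceptron (minimal machine learning): the threshold is learned from data.
def train_perceptron(examples, epochs=100, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x  # nudge the weight toward the label
            b += lr * (y - pred)      # nudge the bias toward the label
    return w, b

data = [(0.30, 1), (0.45, 1), (0.55, 0), (0.80, 0)]  # (scaled score, at_risk)
w, b = train_perceptron(data)
print([rule_based(x) for x, _ in data])              # -> [1, 1, 0, 0]
print([1 if w * x + b > 0 else 0 for x, _ in data])  # -> [1, 1, 0, 0]
```

The rule encodes a theory in advance; the perceptron encodes nothing until it has seen data. The contrast is, of course, a drastic simplification of the deep and reinforcement learning paradigms discussed above, but the displacement it illustrates is the same.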
We are still a long way from general AI, which is the imaginary popularized in much science fiction and recent debates about the existential risks of AI. But another common view of AI is that it is simply inductive data processing, which unthinkingly produces generalizations from big data sets. This latter view, we think, misrepresents the action of these algorithms. It is exemplified in Lyotard’s influential description of the emergence of performativity within information systems, which he defines as the optimization of the relation between inputs and outputs.65 Parisi critiques the “automation of decision, where information processing, computational logic, and cybernetic feedbacks replace the very structure, language, and capacity of thinking beyond what is already known.”66 However, the introduction of machine learning approaches raises other questions about the possibilities for automated thinking to think beyond what we already know.
Machines are more than efficient tools for instrumental reasoning. Parisi suggests we can conceive machine cognition as
not simply a cybernetic form aiming at steering decisions towards the most optimal goals. Instead, operating systems are computational structures defined by a capacity to calculate infinities through a finite set of instructions, changing the rules of the game and extending their capacities for incorporating as much data as possible. These systems are not simply tools of or for calculation.67
Machine learning differs from earlier approaches to AI due to the increased volume and variety of data available to train algorithms today, and because in artificial neural networks “algorithms do not just learn from data, but also from other algorithms, establishing a sort of meta-learning from the hidden layers of the network.”68 While much of the political work on bias and AI has been concerned with the idea that bias is built into both data sets and algorithms,69 Parisi provides an alternative, and perhaps more tenuous, theorization of machine-based decision-making.
Artificial neural networks do not necessarily calculate all possibilities within a system and then make the optimal decisions. These algorithms can also make inferences about the optimal decision in a given situation while remaining ignorant of a larger set of indeterminate possibilities. Indeed, building more powerful cognitive machines only increases the horizon of indeterminacy.70 Machine learning thus entails a move beyond deduction and induction toward abduction: “the process of inferring certain facts and hypotheses to plausibly explain some situations and thereby also discover some new truths.”71 The possibilities of deduction and induction are already contained in premises or empirical cases, whereas abduction involves creative uncertainty in machine learning.
In contrast to rule-based machine cognition, such as IBM’s Deep Blue chess program, deep learning algorithms like AlphaZero are trained on the data they are fed and, in the process, can learn about this training:
Deep-learning algorithms do not just learn from use but learn to learn about content- and context-specific data (extracting content usage across class, gender, race, geographical location, emotional responses, social activities, sexual preferences, music trends, etc.). This already demarcates the tendency of machines to not just represent this or that known content, or distinguish this result from another. Instead, machine learning engenders its own form of knowing: namely, reasoning through and with uncertainty.72
The “knowing” described above references the relationship between deep learning algorithms and data; in deep learning approaches, the data is unstructured or not organized into a uniform format (e.g., video), and the algorithms provide this structure. Automated thinking of this kind is necessarily hypothetical and creative.
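As a toy illustration of the claim that “the algorithms provide this structure,” the following sketch (hypothetical data, and simple k-means clustering rather than a deep network, so only a loose stand-in for the representation learning described above) shows an algorithm inventing categories for unlabeled signals rather than applying categories given in advance.

```python
import random

def kmeans(points, k=2, steps=20):
    """Group unlabeled values into k clusters the algorithm invents itself."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    centers = random.sample(points, k)
    for _ in range(steps):
        # Assign each point to its nearest center: the emergent "structure."
        labels = [min(range(k), key=lambda c: abs(p - centers[c])) for p in points]
        # Move each center to the mean of the points assigned to it.
        for c in range(k):
            cluster = [p for p, label in zip(points, labels) if label == c]
            if cluster:
                centers[c] = sum(cluster) / len(cluster)
    return labels

engagement = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]  # raw, uncategorized signals
print(kmeans(engagement))  # two invented groups, e.g. [0, 0, 0, 1, 1, 1]
```

No category of “high” or “low” engagement exists in the data; the grouping, and any norm later attached to it, is produced by the algorithm.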
This perspective provides an alternative to critiques of datafication and automation that focus on instrumental rationalities and the detrimental effects of these rationalities for education. In subsequent chapters, we focus on the creative uncertainty of nonconscious cognition and its syncopation with human thinking. We aim to reopen “the question of how to think in terms of the means through which error, indeterminacy, randomness, and unknowns in general have become part of technoscientific knowledge and the reasoning of machines.”73 We do not deny the prevalence of instrumental reason in AI and its pernicious effects, but we do hold open the possibility that automated thinking, and its synthesis with human thought, can produce new knowledge, values, and decision-making processes. We do not argue that these developments are desirable or beneficial, but we do argue that it is necessary to understand how automated thinking, which is emerging with data science and machine learning, is beginning to change the possibilities for education policy and governance.
Uncertainty and New Norms
The indeterminacy and creativity of automated thinking are also the focus of Louise Amoore’s work on algorithms and uncertainty. Amoore offers important insights into machine learning as a probabilistic set of techniques and tasks that has productive capacities in both technical and political terms.
The technical aspect of Amoore’s argument derives from the claim that doubt is central to the mathematical underpinnings of machine learning and its application in what we have discussed as anticipatory and predictive governance. This is governance predicated on calculating the likelihood that a future event will occur and making decisions on that basis. Amoore emphasizes that probability is a form of doubt that embraces errors and inaccuracies in order for a machine learning algorithm to train and develop. Doubt is not what stops calculation; rather, it is central to machine learning algorithms that can be understood as an “arrangement of propositions that significantly generates what matters in the world.”74 Amoore proposes that what matters is an algorithm’s capacity to propose, or output, an optimal action.
Machine learning approaches like deep learning are inherently multiple and probabilistic.75 Amoore calls this multiplicity and probabilism “doubt.” However, this is not only a technical point about how machine learning works. Amoore also points to how multiple data sets and algorithms produce not only an output of calculation, or even an optimal output, but also a political action. Amoore suggests that the “condensing of multiplicity to a single output matters because it is the output that becomes actionable.”76
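A minimal sketch (the intervention categories and probabilities are invented) shows this condensation in its simplest form: a probabilistic output in which no option clearly dominates is collapsed, by taking the maximum, into the single output that becomes actionable.

```python
# A model's (hypothetical) distribution over possible interventions:
posterior = {"no_action": 0.36, "targeted_support": 0.33, "full_review": 0.31}

def decide(probs: dict) -> str:
    """Condense a distribution over outcomes into one actionable output."""
    return max(probs, key=probs.get)

print(decide(posterior))        # -> no_action: the single output that acts
print(max(posterior.values()))  # -> 0.36: the slim margin the output conceals
```

The three probabilities are nearly level, yet only one string leaves the function; everything downstream sees a decision, not a distribution.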
The use of machine learning to produce actionable outputs creates a paradox. The calculations are inherently doubtful, with some level of opaqueness even in supervised learning and in the use of approaches such as causal inference.77 Yet what is produced by machine learning is a decision that is often taken as beyond doubt. It is this that Amoore suggests can encourage us to become “indifferent to persistent doubt.”78 Doubt is collapsed into certainty because machine learning decisions are presented as authoritative, even though they are inherently uncertain. In governance contexts, this can involve the creation of certainty in decision-making from uncertain techniques. But it is not just the production of certainty from uncertain conditions that places algorithmic decision-making and automation in the same realm as other interpretations of causality and uncertainty in governance.
Rather, as Amoore posits, this is not a matter of reinforcing old norms but of creating new nonhuman ones: “Contemporary algorithms are not so much transgressing settled societal norms as establishing new patterns of good and bad, new thresholds of normality and abnormality, against which actions are calibrated.”79 Amoore is thus concerned with pushing us beyond ethics and improved automated decision-making to correct algorithmic “wrongs.” As Amoore proposes, algorithms are “implicated in new regimes of verification, new forms of identifying a wrong or of truth telling in the world. Understood in these terms, the algorithm already presents itself as an ethicopolitical arrangement of values, assumptions, and propositions about the world.”80 We need to therefore consider the worlds that algorithms are creating.
From Thought to Governance
In this chapter, we developed an argument for why we need to think about education governance as more than a discrete set of machine and human interactions. Drawing on a “new substantivist” position, we argued that technology can be understood as simultaneously cultural and technical, uncontrollable and yet not in control. We briefly surveyed the development of accelerationism as the backdrop against which contemporary thinkers such as Stiegler, Hayles, Parisi, and Amoore are pursuing new theorizations of cognition and its automation. While there are many variants of accelerationism, we have emphasized its key insight: that technological and economic development are becoming ever more autoproductive in positive feedback loops that exceed human control. Humans and machines are involved in a technogenesis that is as old as the human species and that blurs the distinction between the two, a relation long obscured by the marginalization of technics within Western metaphysics.
We argued that the incorporation of new computational techniques in education governance marks an intensification of this technogenesis but not a qualitative break with what has come before. After all, governance commonly describes the exercise of power, influence, or control over oneself or others. Humans govern themselves and their environments through rules and regulations, aligning desires and behaviors with ideals in pursuit of “good governance.” But a governor is not only a person with responsibility for managing an institution or society; a governor is also a device for regulating the speed of machines by increasing or decreasing the flow of inputs (e.g., fuel). The history of centrifugal governors predates the Industrial Revolution, but their early use is commonly associated with James Watt’s steam engine. The role of the governor in regulating engines gave rise to its metaphorical use as a concept in dynamical systems theory and cybernetics. Norbert Wiener coined the term “cybernetics” to describe a field of ideas that included “the study of messages as a means of controlling machinery and society, the development of computing machines and other automata.”81 Cybernetics shares an etymological root with the word “governor” and describes the study of communication and control.
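The governor’s logic can be stated in a few lines: a loop measures the deviation of output from a set point and feeds a correction back into the input, negative feedback in Wiener’s sense. The sketch below is a deliberate simplification (the constants and the linear engine response are hypothetical), but it is the same loop whether the regulated flow is steam or data.

```python
set_point, gain = 100.0, 0.4  # target speed; how strongly the governor reacts
fuel, speed = 0.0, 0.0

for step in range(20):
    error = set_point - speed  # measure the deviation from the set point
    fuel += gain * error       # the governor opens or closes the valve
    speed = 0.5 * fuel         # engine speed follows the regulated fuel flow
    if step % 5 == 4:
        print(f"step {step}: speed {speed:.1f}")
# printed speeds climb toward 100.0: negative feedback closes the gap it measures
```

Read against the chapter’s argument, the accelerationist claim is that the primary process behaves like positive feedback, while governors of every kind, mechanical or political, are secondary devices of this negative sort.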
In acknowledging the political and physical aspects of governing, we suggest that the introduction of AI into education governance represents a further exteriorization of cognition and decision-making that has changed, and will continue to change, governance processes, but not simply by intensifying instrumental rationalities. Instead, we propose the concept of synthetic thought to describe (1) the conjunction of human thinking and exteriorized cognition and (2) the creative potential of this thinking to go beyond the intensification of calculation and instrumental rationality in its machinic instantiations. In the following chapters, we use the concept of synthetic thought to underpin our notion of synthetic governance. As we explore, synthetic governance is not a different process from Anglo-governance, but a different perspective on this process—a perspective that emphasizes the indeterminate and potentially creative nature of new approaches to machine learning and other forms of AI and datafication that increasingly augment human thinking and decision-making in contexts of education governance.