Introduction
Synthetic Governance: Algorithms of Education
Humans were special and important because up until now they were the most sophisticated data processing system in the universe, but this is no longer the case.
—Yuval Noah Harari, “Sorry, Y’All—Humanity’s Nearing an Upgrade to Irrelevance,” Wired.com
The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.
—Mark Weiser, “The Computer for the 21st Century,” Scientific American
Ava stands on a city corner. It is the final scene of Ex Machina, Alex Garland’s science fiction film exploring artificial general intelligence. Ava is a sentient android with female features that escapes a remote laboratory after “seducing” Caleb, a software engineer who was invited to submit Ava to the Turing test.1 The impact of this scene comes from the unremarkable nature of the streetscape through which Ava moves, surveying a new environment to collect and analyze data that will further increase “her” intelligence. Ava’s shadow is shown to be indistinguishable from those around “her” as “she” passes effortlessly among the humans in the city. In observations about the film, Ireland notes that this feminized science fiction image of artificial intelligence (AI) appears benign, and yet AI is already becoming “absolutely ubiquitous and totally invisible.”2
While Ava serves as an object of both fear and longing for the (hetero) male characters in Ex Machina, our interest is in Ava’s ubiquity and invisibility as emblematic of the potential for machine intelligence to radically change society. In this book, we explore how algorithms of education move among us in the everyday workflows, values, and rationalities of education governance. Like the humans who share the street with Ava, we are generally not aware of the presence of algorithms and AI. While “robots in the classroom” have become a common trope when discussing and critiquing AI in education, in this book we consider how machines are complementing and extending contemporary education governance, and we explore whether there are other possible governance rationalities that may emerge from the introduction of AI.3 This synthetic development does not involve direct replacement of human minds and bodies, but rather it produces new ways of thinking through the conjunction of human and nonhuman cognition. These emergent systems of thought and their possible effects, like Ava, are not immediately recognizable as a presence shaping human life-worlds.
The machines that populate this book are not examples of artificial general intelligence, such as Ava, but more limited and specific forms of AI that encompass a wide range of techniques and tasks that aim “to make computers do the sorts of things that minds can do.”4 This involves efforts to produce “systems that think like humans, systems that act like humans, systems that think rationally, systems that act rationally.”5 There have been two main aims of AI research: (1) modeling intelligence in living minds and (2) using computers to act on the world in intelligent ways.6 Many of the primary techniques of AI use existing data sets that are historical and spatial. For example, machine learning uses algorithms that constantly learn and adapt from training data in order to identify patterns in new data sets. Predictions are made based on this nonhuman learning.7 In this book, we examine and discuss how these types of task-specific AI act on the world, particularly in relation to education governance. We adopt the view that Deleuze and Guattari celebrate in Gabriel Tarde’s microsociology—that is, that we can understand the introduction of AI in governance by paying attention to “miniscule bureaucratic innovation.”8
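To make this mechanism concrete, the following is a minimal sketch, in Python, of the kind of task-specific machine learning described above: an algorithm fits a model to historical training data and then makes predictions about new cases. The features, labels, and choice of library (scikit-learn) are illustrative assumptions only, not a description of any system examined in this book.

```python
# Minimal sketch of task-specific machine learning: an algorithm "learns"
# patterns from historical (training) data and then predicts on new cases.
# All features, labels, and values below are hypothetical illustrations.
from sklearn.linear_model import LogisticRegression

# Historical records: [attendance_rate, prior_score]; label 1 = "at risk"
training_data = [[0.95, 82], [0.60, 48], [0.88, 75], [0.55, 51], [0.91, 68], [0.50, 40]]
training_labels = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(training_data, training_labels)   # learn a pattern from past data

new_students = [[0.72, 58], [0.97, 90]]
print(model.predict(new_students))          # predictions made by nonhuman "learning"
print(model.predict_proba(new_students))    # probabilities that can feed anticipatory decisions
```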
Narrow forms of AI are already part of education systems. Many proponents are applying machine learning in data science approaches and in student information systems, from small-scale education technology companies that support the administrative workflows of everyday school life to ambitious Silicon Valley giants that aim to disrupt education systems and sectors. Start-ups have developed computer vision for facial recognition systems that claim to serve as time-saving tools for teachers taking daily attendance. Microsoft, Google, and Amazon Web Services also provide off-the-shelf business intelligence products for education systems with embedded AI services. Systems of this kind became part of many people’s everyday lives during the Covid-19 pandemic, when schools across the world were closed and students were forced to undertake education remotely at home, using common platforms like Google Classroom.9 The introduction of these forms of AI in education provokes responses that range from outrage to ambivalence, depending on how machines are seen to interact with the prevailing purposes and practices of education.
The emergence of AI in education is the latest chapter in a longer project of datafication that has both changed and intensified some aspects of education. Datafication describes the process of translating things and events into quantitative data that can be added to massive databases that are growing daily. Modern education systems have been predicated on datafication—that is, on acquiring information about the performance of students across a range of fields and then issuing them credentials underwriting the authenticity of that information.10 What is different today is the volume and variety of digital data that are captured and analyzed more quickly than ever before. Big data purports to provide a basis for technological innovation that promises progress and disruption, and analyses of these data influence both the smallest and the most consequential decisions made by individuals and organizations.
Datafication, and the new modes of educational accountability and associated performativities that accompany it, have changed education governance. What has been variously called algorithmic or digital education governance describes the overlap of datafication and machines in governance processes, resulting in “the monitoring and management of educational systems, institutions and individuals . . . taking place through digital systems that are normally considered part of the backdrop to conventional policy instruments and techniques of government.”11 The growing use of these technical systems introduces new actors and organizations into education, while the combination of machines and humans in the process of decision-making is accompanied by the emergence of new political rationalities. We use the term political rationality in a similar manner to Foucault, who aimed to
identify specific political rationalizations emerging in precise sites and at specific historical moments . . . underpinned by coherent systems of thought, and . . . [to] show how different kinds of calculations, strategies and tactics were linked to each other.12
This book does not examine specific education policies in order to evaluate policy processes from agenda-setting to enactment. We also do not take a normative position on education governance initiatives and directions. Rather, we focus on political rationalizations and questions of knowledge, power, and truth claims. Rationalities are “more than just ideologies; they constitute a part of the fabric of our ways of thinking about and acting upon one another and ourselves.”13 We aim to explore how the fabric of education is changing.
To understand governance today it is necessary to consider new data-driven rationalities that are the lifeblood of governance networks, often referred to as the “Anglo-governance model.”14 At the same time, it is necessary to examine the computational rationalities that datafication generates and sets in motion and that provide an infrastructure for new forms of automated governance.15 These are not two separate modes of governance but rather two ways of looking at governing systems. Datafication, calculation, and automation are not necessarily replacing the human actors and rationalities of the Anglo-governance model. As we show, these processes are introducing new policy instruments and technologies of governance that interact with earlier approaches.
The main proposition underpinning this book is that datafication has combined with the rationalities of AI to produce a synthetic form of education governance. Synthetic governance is an amalgamation of (1) human classifications, rationalities, values, and calculative practices; (2) new forms of computation, what we might consider to be nonhuman political rationalities, that are changing how we think about thinking; and (3) the new directions made possible for education governance by algorithms and AI. To be clear, this amalgamation is not human or machine governance but human and machine governance. As we will illustrate, synthetic governance arises from a series of “conjunctive syntheses” that assemble established data-driven rationalities and emergent computational rationalities in many ways for many different purposes.16 Synthetic governance traverses and conjoins machines and bodies.17
The Cultural Politics of Technology in Education
Our object of analysis in this book is the broad introduction of new data-driven technologies in education, rather than specific purpose-built educational technologies. If the latter are “educational” in the sense that technology is used for educational purposes, such as mathematics-tutoring software or essay-writing programs utilizing computer vision, then the technologies with which we are concerned are educational by application. We are interested in the application of machine learning—algorithms that learn with or without human assistance and produce outputs that humans cannot always understand—for the purposes of governing education systems, institutions, and the people inhabiting these organizational structures. This book contributes to critical studies of technology and computation as part of a small but rapidly growing literature in the area of education governance.18
New instruments of education governance are emerging from forms of technocapitalism that have infused all parts of everyday life, shaping desires for entertainment, consumer goods, transportation, election outcomes, and so much more. We focus particularly upon three “technologies” that are interrelated across different scales: data infrastructures, algorithms, and forms of AI that are connected to, and emerge from, both. These technologies are changing the practices and processes of education policy and governance but are also emerging as a political problem. Technology is often linked to ideals of human progress, “defined in terms of efficiency or productivity, [which are] economic measures of the contribution of technology to human well-being.” However, technology may also be understood in more dystopian ways as accelerating technocapitalist development that is anathema to human values.19 We consider both perspectives in the analyses that follow.
The two broad approaches to technology that appear in the empirical and conceptual aspects of this book are: (1) instrumentalist (technology as a tool serving human ends by enabling control of human and nonhuman systems) and (2) substantivist (technology as a dynamic system). From the latter perspective, and following Heidegger and Ellul, “technology is no longer seen as neutral, merely enhancing human purposes. It’s framed as a kind of force, domain, or system with far-reaching effects, first and foremost on ‘our’ being.”20 This notion of technology as a dynamic system or force enables us to raise questions about the changing relations of agency between machines and humans in educational contexts, “to analyse the collective apparatuses of which the machines are just one component.”21 Technology is not discrete from human relations but a collective, cultural endeavor. As Mackenzie suggests, it is important to understand technological relations as “sites of collective investment.”22 The classifications and calculations that underpin infrastructures, algorithms, and AI are producing new kinds of collective investment and agency in education policy contexts and, hence, require new conceptual vocabularies with which to undertake and understand contemporary education governance. Contributing to the development of these vocabularies is the first aim of this book and our departure point.
Education Governance
Governance is not synonymous with government. This distinction is important. The concept of governance is used to describe a form of new public management premised on the operation of, and relationships between, governing networks that include governments and other actors that work according to market principles.23 As Fenwick, Mangez, and Ozga explain, “In the transformation of government to governance, hierarchical bureaucratic regimes are displaced by networks of relationships in which cooperation and coordination must be constantly negotiated and managed.”24 Governance studies examine how power, force, and control are construed or conceived (e.g., hierarchical or distributed), and how authority and influence are expressed when making decisions.
Following the “governance turn,” in this book we often discuss policy and governance interchangeably. Today, education policy is nearly synonymous with ideas of multilateral and networked forms of governance, forms that simultaneously truncate education systems in order to increase or promote choice and personalization. Wilkins and Olmedo argue that education governance is a polyvalent term, which they “loosely characterize . . . as a heuristic device, discourse and technology of government”25 and as “a policy strategy for governing acentred, polycentric systems of education.”26 Multilateral and networked formulations of governance attempt to anticipate, predict, and frame problems, unlike earlier approaches that emphasized ideas of deliberation, decision-making, and rational choice. Governance operates through networks of public, private, and voluntary actors who engage in “game-like interactions, rooted in trust and regulated by rules of the game negotiated and agreed by network participants.”27 The presence of new policy networks and policy mobilities means that education governance is no longer simply occurring within the prefigured boundaries of the nation-state but now involves a diverse cast of actors and organizations, including edu-businesses and philanthropic foundations, thus creating new policy spaces.28
A data-driven rationale for public policy—and increasing data production, use, and interpretation—forms part of the shift from government to governance.29 Data has, therefore, become the lingua franca of network governance in the twenty-first century. The datafication of education depends upon interoperability and standardization that enable data to be generated, shared, and used across networks as part of digital education governance. Indeed, Lawn posits that devolution in schooling is producing “systemless systems” in which data sustain relations across systems that are otherwise fragmenting; many education systems now derive much of their cohesiveness as a system from a common “set of data processes and coded behaviours.”30
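To give a concrete sense of what interoperability involves at a technical level, the sketch below imagines a single agreed record format that lets data travel between otherwise separate systems. The field names and the schema itself are hypothetical illustrations, not drawn from any framework discussed in later chapters; the point is simply that data can only circulate across a "systemless system" because its format has been standardized in advance.

```python
# Illustrative sketch of interoperability: two systems can exchange data only
# because both agree on a common schema. All field names are hypothetical.
import json

COMMON_SCHEMA = {"student_id", "school_id", "attendance_rate", "assessment_score"}

def conforms(record: dict) -> bool:
    """A record is shareable across systems only if it uses the agreed fields."""
    return set(record) == COMMON_SCHEMA

school_system_record = {
    "student_id": "S-1042",
    "school_id": "SCH-0317",
    "attendance_rate": 0.94,
    "assessment_score": 71,
}

if conforms(school_system_record):
    # Serialization to a shared format is what lets the record travel between
    # a school information system, a ministry database, and an analytics vendor.
    payload = json.dumps(school_system_record)
    print("shareable:", payload)
```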
Datafication is also reshaping our thinking about the educational ends we desire and the means for reaching them. We are interested in how nonhuman systems and structures of governance interact with, influence, and control human practices, desires, and affects. For example, the game-theoretical nature of interactions between Ava and Caleb in Ex Machina provides a rich metaphor for thinking about governance as a mode of control. The relationship between Ava and Caleb is characterized by the interplay between Caleb’s desire to intellectualize AI and Ava’s analysis of his desires:
As Caleb spends more and more time talking to Ava, he finds his attempts to intellectualize the situation consistently derailed by the AI and rerouted towards more libidinally charged subject matter. . . . Ava takes advantage of the hackability of human psychology and its unconscious excess, much of which is imperceptible to the humans in the film but available to the AI via a rich cartography of micro-expression analysis.31
The relationship between Ava and Caleb provides a vivid illustration of how the hyperrationality of AI and new data analytics is inextricably connected to questions of desire. We connect this idea of desire to Easton’s view of policy as the “authoritative allocation of values.”32 In this book we examine how the authoritative allocation of values directs the desires of the subjects of governance. Datafication, therefore, “is not only about more and more efficacious abstraction, more calculation and control,” but also a renewed “specificity of the material and the sensuous.”33
Desire. Data. Power. Of course, the use of data to govern education systems, and indeed societies more generally, is not new. As Desrosières shows, the generation and use of “statistics is connected to the construction of the state, with its unification and administration,”34 but measurement, datafication, and statistical reasoning are also connected to the contemporary fragmentation of the state, including in relation to educational provision.35 The tension between the centripetal and centrifugal desires produced by datafication—a tension between the production of consistency and the disruptive transformation that Deleuze and Guattari termed “deterritorialisation”36—lies at the heart of this book.
Contemporary education governance operates through measurement, rankings, and comparisons that are designed to motivate performance and direct attention to “best practice.” Getting others to want what you want, to value or desire what you desire, is a form of “soft” power that can be very effectively wielded through new modes of data analysis and representation.37 Many promises are made about the innovation and improvement enabled by datafication, but while superficially aligning with the interests of different actors, innovation and improvement also threaten and promise disruption. Technology companies large and small promise to help educational organizations and individual students achieve better outcomes through data-driven management, personalized learning, and the development of automated testing and marking, but the involvement of these new actors in education is reworking relations of authority and control. Governments promise educators more control over their “impossible” profession through data analytics enabled by new infrastructures, computational capacities, and analytical methods.38 However, these instruments also assist governments in controlling the teaching profession through new modes of accountability. What datafication gives with one hand is often taken away with the other. Today, the use of datafication to control education is becoming increasingly dysfunctional, in the sense that it threatens to disrupt existing spatial and temporal operations of systems and produce new desires. As Harari’s suggestion at the beginning of this chapter indicates, the new data-processing capacities that embolden humanity today also threaten to make it obsolete.
Algorithms
An algorithm can be defined as a set of instructions for using information to achieve a task.39 Algorithms use data to achieve an end and can thus be understood to have a limited form of intention, which increases as the algorithms become layered into more and more complex networks. For instance, algorithms often use the output of one layer of algorithms as the input for another layer (e.g., neural networks) or application. Rieder suggests that “algorithmic techniques” are “means of production, not simply outpourings of computational principles or scientific ideas.”40 While algorithms have long been a component of governance, when combined with new computational capacities they have substantively changed the ways in which decision-making occurs. Our interest here is in the ways that “any technology or machine opens up a specific field of action . . . [while noting that] at the same time, any technology may close off certain forms of action (and thought).”41 New computing power has meant that algorithms provide a form, and a new temporality and spatiality, of decision-making that is often hidden, or at least inaccessible to humans. This is the “black boxing” of algorithms that results from “recursive algorithmic systems operating in ways that hide the very assumptions and logics that are modelled with them.”42
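As a minimal illustration of this layering, the sketch below chains two small "layers" so that the output of one becomes the input of the next, as in a neural network. The weights, dimensions, and input values are arbitrary assumptions; what matters is the chaining, and the fact that the intermediate values are legible to the system but not readily interpretable by a human observer, which is one sense of the "black boxing" described above.

```python
# Minimal sketch of "layering": the output of one algorithmic layer becomes the
# input of the next, as in a small neural network. Weights and sizes are
# arbitrary illustrations; the point is the chaining, not the model itself.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first layer of weights
W2 = rng.normal(size=(3, 1))   # second layer of weights

def layer(x, W):
    # each layer transforms its input and passes the result onward
    return np.tanh(x @ W)

x = np.array([[0.2, 0.7, 0.1, 0.4]])   # some input data
hidden = layer(x, W1)                   # the first layer's output...
output = layer(hidden, W2)              # ...becomes the second layer's input
print(output)

# The intermediate values ("hidden") are meaningful to the system but not
# readily interpretable by humans.
```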
Given the computational contexts in which algorithms are discussed today, it is common to see the rise of algorithms as part of the rise of computation. However, it may be that biology has contributed to a more fundamental shift in scientific thought that is responsible for the authority of algorithms today. As Harari claims,
The new technologies of the twenty-first century may . . . reverse the humanist revolution, stripping humans of their authority, and empowering non-human algorithms instead. If you are horrified by this direction, don’t blame the computer geeks. The responsibility actually lies with the biologists. It is crucial to realise that this entire trend is fuelled by biological insights more than by computer science. . . . Once biologists concluded that organisms are algorithms, they dismantled the wall between the organic and the inorganic, turned the computer revolution from a purely mechanical affair into a biological cataclysm, and shifted authority from individual humans to network algorithms.43
Harari argues that the centrality of algorithms to contemporary life emerged within the life sciences before the computing sciences. In turn, computer science, including AI, has changed biological questions and methods and given rise to computational biology.44 Computers used to model life are reducing the distinction between cognition in its “natural” and “artificial” forms and systems. Collapsing this distinction was a key contribution of cybernetics, which considered combined human, animal, and machine systems as equivalently subject to “control by informative feedback.”45 From this perspective, we can see how individual agency is integrated with networks, creating new ways of thinking and acting, and different possibilities for how we understand aspects of life and its governance. As Halpern posits, “The ongoing penetration and interweaving of computation and life” is part of biopolitics, or the management of life and populations.46
Additionally, while we do not provide an extended discussion of cybernetics in this book, we recognize the importance of acknowledging the role of cybernetic thought as a rationality for governance. Indeed, it is possible to draw a line from cybernetics to systems analysis and on to the policy sciences.47 Adams describes a return of the policy sciences that aim to identify the most efficient options against ends of instrumental rationality, “set against courses of action technically arrived at through the utilisation of explanatory-causal accounts.”48 This return reprises a political rationality of control and prediction that informed education policy and governance during the second half of the twentieth century, in tandem with progressive faith in the potential of education to remake society. The rationality of control and prediction was manifest in the rise of technical approaches to the provision and administration of schools and systems, and in the policy sciences that emerged in the 1950s and that became solidified in the 1960s and 1970s. Connected to the history of the state management of populations through statistics, the policy sciences promised solutions to seemingly intractable social problems and the knowledge required to design and implement these solutions efficiently and effectively.49 Throughout the 1980s, however, policy itself became an object of immense politicization as the solutions advocated by the “policy sciences” were questioned and understood to be primarily designed for “power and social regulation.”50 The return of the policy sciences in the guise of algorithms reflects the resurgence of instrumental rationalities of governing.
The increasing use of algorithms has provided the basis for introducing different domains of knowledge and expertise into new data analytics units within education ministries and “learning analytics” centers in universities. These units and centers increasingly employ analysts who have come from fields such as physics and bioinformatics.51 These actors create knowledge communities around what has been termed “education data science,” which draws together biology, psychology, and neurosciences to enable new kinds of prediction in education policy.52 Data science is not yet fully embedded in education policy practices. Nonetheless, data science is already having a significant impact on (1) what constitutes professional knowledge about governing education, which is shifting from statistics and professional knowledge to knowledge produced through new modes of data analytics; and (2) the growing risk of automation bias in education, which emerges when “we value automated information more highly than our own experiences, even when it conflicts with other observations—particularly when those observations are ambiguous.”53
In this book, we explore how data science is developing in the context of education governance and policy and how this is generating new relations between knowledge and decision-making, while also strengthening established trends. For example, systems of mass education are now characterized by a narrowing of the purposes of education away from nation-building and social cohesion toward human capital production and economic outcomes. The latter purposes prioritize education for national economic growth, global competitiveness, and individual competitive advantage and social mobility.54 The developments we discuss in this book will likely intensify these rationalities while marginalizing others.
It is clear, for some, that there is growing faith in technical skill and scientific knowledge and in different sets of expertise that are gaining authority as education and economic systems are being driven by increasingly sophisticated data analytics. This expertise is not simply oriented toward making human life and its political, social, and economic aspects more effective and efficient; it also inculcates new beliefs in humanity’s ability to control and improve society through predicting the future and intervening in the present to produce the futures we desire. For others, there is growing distrust, fueled by the opaqueness and potential inexplicability of AI.
Anticipation
Governance today has become preemptive; societies are governed based on calculations of risk and predicted futures. Influential analyses of anticipatory governance have emerged in waves over the past three decades. During the 1990s, there was considerable attention to risk and the explosion of the audit society as new modes of accountability were deployed within neoliberal governance and new public management.55 In the decade that followed, and with the new role that securitization played in governance strategies after September 11, 2001, another generation of scholars examined preemptive governance techniques that operated through affective attunement to threat.56 In the past decade, the rise of big data analytics has enabled new kinds of predictions that are now deeply embedded in surveillance capitalism.57 Today, anticipatory governance can be defined as a mode of governance that “motivates activities designed to build . . . capacities in foresight, engagement, and integration,” incorporating the earlier approaches in more powerful predictions enabled by the development of big data analytics.58
Using digital data and algorithms in education governance changes our relation to time through the combination of speed and intensity that enables “real-time” predictive decision-making within ever-tighter feedback loops between indicators and action. The growing prevalence of informational feedback loops that are inaccessible, unknowable, and uncontrollable forces us to ask what sort of political rationalities can endure and what the nature of political action in contemporary education governance might be. To answer this question, we need to expand our conception of governance to dynamic systems rather than just human societies. In this book, we consider dynamic systems across four areas.
The first area is that of data infrastructure, in which computer-based data management systems integrate statistical information from various sources. The field of infrastructure studies gathered momentum with the rise of the Internet and associated developments in computing and information systems during the 1990s.59 More recently, scholars across a range of fields in the social sciences and humanities have used the concept of infrastructure to examine the cultural and technical underpinnings of contemporary life.60 We are interested in how data infrastructure exceeds its material instantiations in hardware and software—that is, in the ways infrastructure constitutes, and is constituted by, social relations, desires, and beliefs that “establish a set of parameters for what [an] organization will be doing over time.”61 Bridle suggests that in order to understand the “real operation of technology” and, in our case, data infrastructure, we must “start to understand the many ways in which technology hides its own agency—through opaque machines and inscrutable code, as well as physical distance and legal constructs.”62
The second area we examine relates to the connections between technology and governance. As noted above, we are interested in the collective aspects of technology, and we work with the notion that technology is imbricated with social, cultural, and political life. In education, there is work tracing how technology has shaped contemporary governance, including the idea that technology is itself a governing instrument.63 We are interested in the possibility that the conjunction of new automated systems and behaviorism is leading to an exteriorization of governance, in which governance by and with machines is changing how we understand and act upon educational life. We employ the term “exteriorization” to describe the condensation of culture into “prosthetic” technological artifacts that supplement human action. Exteriorization is contributing to the emergence of synthetic governance, in which (1) technological development is often uncontrollable, and accepted forms of control, such as regulation, are not adequate to control the rise and impact of automated systems; and (2) technology is a key mediator but not the “controlling influence.” Synthetic governance need not involve technological determinism.64 Yet governance is not in control of technological development. Synthetic governance arises from and contributes to the further development of complex systems of human-machine cognition that promise control while increasing the number of processes over which we have less and less control. As a result, we need to reconsider the possibilities and limits of dialogic politics, which form the basis of most critical policy studies interventions and proposed transformations. If synthetic governance involves no identifiable locus of control, then what does this mean for governing and its contestation?
The third area that we explore is accelerationist thought, which has had limited impact in education but provides useful ways of analyzing governance, datafication, and automation.65 According to a broad understanding of accelerationism, economic growth and technological development form an “explosive” positive feedback loop that may ultimately exceed human regulation. Education has become increasingly tethered to the economy and, for governments and international organizations, it is now considered a primary means to achieve increased economic growth through greater productivity. Productivity is a relationship between inputs (labor time) and outputs that can be increased through investment in human capital (education) and technological development (innovation). Surplus value created by increased productivity is reinvested in education and innovation to increase productivity further. Acceleration thus describes the time-structure of capital in which economics and technology fold into one another, creating a dynamic system driven by education and knowledge production. This accelerationist perspective suggests that the positive feedback loop of capitalist time and technological development is primary in relation to political attempts to govern and regulate the process, which are inherently conservative and, for some, inherently doomed to fail.66 Indeed, we believe the latter position is the most intellectually credible, “unconditional” stance on acceleration.67
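As a toy numerical illustration of this positive feedback loop, the sketch below reinvests a share of each period's surplus in education and innovation and lets the resulting productivity gains compound. All rates are arbitrary assumptions, and the example is a caricature of the accelerationist time-structure rather than an economic model.

```python
# Toy sketch of the accelerationist feedback loop: surplus from productivity
# is reinvested in education/innovation, which raises productivity further.
# All rates are arbitrary illustrations; the point is the compounding.
productivity = 1.0
output_per_unit = 0.1        # surplus produced per unit of productivity each period
reinvestment_rate = 0.5      # share of surplus reinvested in education and innovation
gain_per_investment = 0.4    # productivity gained per unit reinvested

for period in range(1, 11):
    surplus = productivity * output_per_unit
    productivity += reinvestment_rate * surplus * gain_per_investment
    print(period, round(productivity, 3))

# Because each period's gain is proportional to the current level, growth
# compounds rather than settling to a fixed rate: the loop feeds on itself.
```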
Accelerationism offers a range of perspectives on the relationship between technology and governance, including novel ones that are rarely taken up in critical education policy studies. For example, different accelerationist positions suggest different attitudes toward AI in society: (1) promotion (AI will make things better, which is a position most commonly associated with technology companies); (2) appropriation (if we use AI to serve particular values, then we can make things better, which is a position that informs most efforts to regulate AI); and (3) acceptance (AI is a part of deep structural dynamics over which we have little control, except by withdrawing from these dynamics in micropolitical ways).68 There is also a further possibility, beyond these three positions: problematization, in which synthetic governance and the role of automated machines become a condition for becoming aware of new types of computation and connections to thinking, both human and nonhuman. We must come to grips with the synthetic aspects of thought and its creative possibilities, as well as its risks.
As such, the fourth area that we examine is what it means to understand thought as synthetic. Without delving into the extensive work on the overlap of life and artificiality, our aim is to examine what types of governance become possible and what questions are provoked when considering thought, or cognition, as nonhuman. Developments in the field of AI have created new conditions for what Luciana Parisi calls “automated thinking,” which involves distributed cognition.69 That is, rather than locate thinking primarily within human cultural and cognitive systems, the development of machine learning, such as deep learning, opens possibilities for seeing machines as imbued with cognitive capacities for prediction, creativity, and valuation, creating nonhuman elements in cognitive feedback loops that are not, perhaps, simply instrumental.
Drawing on our explorations of the above four areas, we examine how automated thinking in education governance contexts can produce not only new decision-making processes, but new knowledge and desires that may challenge existing ways of understanding governance. Automated thinking does not map onto human thinking. Rather, it can overlap with, intervene in, and create new rationalities of governance, without these rationalities being entirely comprehensible to human actors. Automated thinking and decision-making are part of algorithmic processes that are machine interpretable, but not necessarily human interpretable. Nonetheless, even though machine learning is premised on uncertainty at the level of calculations (i.e., the black box of AI), these decisions become imbued with an authority and legitimacy when influencing governance. This involves creating certainty in decision-making in education from uncertain techniques, from calculations that cannot always be backward mapped. As we ask throughout this book, where does this leave our ideas about political rationality when governance is cocreated, synthetically, with the uncertain automated thinking of machines?
Our aim is not to make a case for whether the introduction of automated thinking is preferable or beneficial. Rather, we mount a case that it is necessary to understand how automated thinking, which is emerging with data science and machine learning to constitute new dynamic systems, is beginning to change the possibilities for education policy and governance. As we begin to understand these new ways in which technology and agency are shaping education governance and policy, we may reach the limit points of prevailing concepts and methods in critical education policy studies. Certain concepts and methods that enabled us to analyze the Anglo-governance model will need to be replaced or augmented in order to study synthetic governance. Thus, we aim to contribute to theory and methodology for education policy studies by analyzing empirical cases of the emergence of synthetic governance.
Outline of the Book
Our starting point for the discussion that follows is a suspicion that current thinking about policy and governance is inadequate to grapple with the opportunities and challenges posed by algorithms, data infrastructures, and AI. To provide alternatives, we develop our conception of synthetic governance as an emergent form that combines old and new political rationalities, methods, and technologies and that we characterize as algorithms of education. The book is organized into three conceptual and methodological chapters, three empirical chapters, and a concluding chapter.
The first three chapters locate our analysis in existing work on education governance, providing a set of conceptual tools to help analyze synthetic governance and introducing the methodologies used in the empirical cases. Chapter 1 provides an overview of governing twenty-first-century education, with a focus on the role of statistics and comparison in the Anglo-governance model, and an exploration of infrastructures, new data science expertise, and nascent automation. We argue that these forms of coexisting governance require us to reconsider the adequacy of existing concepts and methods. In chapter 2, we outline our conceptual tools that extend the perspectives, briefly raised above, on links between technology and governance, on accelerationism, and on the possibilities of automated thinking. This chapter aims to craft a theoretical perspective on the role of automated thought in education governance that explores the potential, at least in theoretical terms, of automation to exceed instrumental rationality. Chapter 3 develops an argument for the centrality of concept creation to problematize education, and particularly how education is governed and governs. We outline the empirical studies that form the basis of chapters 4, 5, and 6 and the methods we draw upon, including policy mobility, network ethnography, and infrastructure studies. The chapters crystallize our conceptual and methodological apparatus for investigating synthetic governance and the emergence and application of synthetic thought.
The methodological approach of this book takes inspiration from what Bourdieu termed “fieldwork in philosophy”: “a combination of concrete empirical analyses and practical philosophical considerations.”70 Chapters 4, 5, and 6 aim to contribute to developing new conceptual vocabularies in studies of technology and critical policy studies in education. These chapters explore the development, application, and speculative futures of synthetic governance in relation to empirical cases of building data infrastructures in education, the application of patterns in automated governance, and nascent forms of automated thinking in education data science. Chapter 4 aims to show how developing data infrastructure in education provides enabling conditions for increasing datafication and the application of data science and machine learning. We trace the development of an interoperability framework that aims to create new data-sharing capabilities by establishing a national data infrastructure. We focus on how standards enable the entry of new market actors into education, and how infrastructure expands its functional area of responsibility to create new jurisdictional spaces as a form of technological federalism. Chapter 5 examines computer vision and facial recognition as examples of applications enabled by data infrastructure. We explore the idea of the “human in the loop” that is used to explain patterns and to attempt to ameliorate uncertainty in the use of AI. The aim of the chapter is to trace how pattern-matching becomes pattern-making as part of automation, and we begin to sketch out what it would mean to collapse the distinctions between humans and machines in governance. Then in chapter 6, we move to a more speculative register to consider how early experiments in data science and AI in education policy-making may open new possibilities for governance. We trace the nascent use of educational data science in a government education department. The chapter examines both how data science becomes a part of decision-making in education, and the ways in which automated thinking can create new values for education policy and for governing education.
The book concludes by identifying how synthetic governance works on and through thinking, and outlining how we might study this phenomenon. Our final discussion is concerned with axiology, and we propose a synthetic politics that attempts to come to grips with the challenges posed by the rise of synthetic governance.