7

Synthetic Politics

Responding to Algorithms of Education

If we give the machine a programme which results in it doing something interesting which we had not anticipated I should be inclined to say that the machine had originated something, rather than claim that its behaviour was implicit in the programme, and therefore that the originality lies entirely with us.

—Alan Turing, 1951, quoted in A. Jones, “Five 1951 BBC Broadcasts on Automatic Calculating Machines”

We have been assimilated, all too willingly, and there is probably no going back.

—Paul Edwards, 2018, “We Have Been Assimilated: Some Principles for Thinking about Algorithmic Systems”

In 2020, during the height of the Covid-19 pandemic, students in the United Kingdom could not sit their final matriculation exams, which are necessary for entry to university. Rather than relying on teacher assessments of students, the exam regulator used a simple sorting algorithm to predict students’ A-level grades.1 Although the A-level algorithm was represented as AI, it can more accurately be described as automated decision-making. It was designed to ensure fairness through standardization, but it ultimately penalized students who performed better than might be expected given their school context and student performance in previous years. The algorithmically produced grades were withdrawn after widespread student protests.2 The use of the algorithm promised to control for difference under conditions of uncertainty.
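
To make the mechanism concrete, the following minimal sketch illustrates the kind of standardization involved. The names and logic here are hypothetical and deliberately simplified; the regulator’s actual model was considerably more elaborate, also adjusting for each cohort’s prior attainment. The core idea is that grades are assigned by mapping a teacher-supplied ranking of students onto the school’s historical grade distribution, so a strong student at a historically weak school cannot receive a grade the school has rarely achieved.

```python
# Illustrative sketch only: hypothetical names and a deliberate
# simplification of the 2020 A-level standardization model, which
# also adjusted for each cohort's prior attainment.

def predict_grades(ranked_students, historical_distribution):
    """Map a teacher-supplied ranking onto a school's historical
    grade distribution.

    ranked_students: student identifiers, strongest first.
    historical_distribution: ordered mapping of grade -> share of
        past cohorts awarded that grade, e.g. {"A": 0.1, "B": 0.3}.
    """
    n = len(ranked_students)
    grades = {}
    cumulative = 0.0
    index = 0
    for grade, share in historical_distribution.items():
        cumulative += share
        cutoff = round(cumulative * n)  # students covered so far
        while index < min(cutoff, n):
            grades[ranked_students[index]] = grade
            index += 1
    # Rounding remainders fall to the lowest listed grade.
    lowest = list(historical_distribution)[-1]
    while index < n:
        grades[ranked_students[index]] = lowest
        index += 1
    return grades

# A school that historically awarded no A grades: its top-ranked
# student cannot receive an A, however strong their own record.
history = {"A": 0.0, "B": 0.4, "C": 0.4, "D": 0.2}
print(predict_grades(["s1", "s2", "s3", "s4", "s5"], history))
# -> {'s1': 'B', 's2': 'B', 's3': 'C', 's4': 'C', 's5': 'D'}
```

Run on this example distribution, the top-ranked student receives a B: individual performance cannot lift a student above the ceiling set by their school’s history, which is precisely the unfairness that provoked the protests.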

Historically, education policy has exerted control through disciplinary approaches to learning, teaching, and knowledge. These disciplinary approaches and the spaces of enclosure of schooling have given way to new forms of educational control. Today we have proliferating numerical codes of control that continuously modulate action. Deleuze, following Foucault, argued in the late twentieth century that we would need to think of power not only as the disciplining gaze, of containment and boundaries, but also as an open, modulating form of control. This latter perspective provides a way for us to understand the palimpsest of new forms of data-driven work, new technologies and machines, and sedimented earlier forms of power. As Deleuze notes, “We’re moving toward control societies that no longer operate by confining people but through continuous control and instant communication.”3 Nevertheless, we are not witnessing the end of disciplinary institutions such as education so much as the transformation of these institutions. In the twenty-first century, education continues to promise that learning and socialization can be controlled, and education systems can be made to serve nation building through economic and innovation agendas. In some instances, these institutions—and the forms of governance that operate across them—have become more intensely disciplinary precisely through control.4

What has emerged in control societies is a political rationality of prediction in education policy and governance—a rationality, or “policy scientificity,” congruent with the rise of technical approaches to the provision and administration of schools and systems, and with the policy sciences and systems thinking established in the 1950s, which continue today as a form of cybernetic technorationality in governance.5 We propose that this rationality can be understood as synthetic governance, an amalgamation of human classifications, rationalities, values, and calculative practices on the one hand, and new algorithms, data infrastructures, and AI on the other. This synthesis creates new potential for thought and action in education governance contexts. Synthetic governance is not human or machine governance, but human and machine governance. It arises from “conjunctive syntheses” that bring together and integrate data-driven human rationalities and computational rationalities, traversing both machines and bodies.6 As such, performance and administrative data are increasingly generated, collected, and analyzed in various configurations in order to govern synthetically.

We posit that synthetic governance is a new, invisible, and ubiquitous mode of education governance in which algorithms and pattern matching operate with us in the everyday workflows and rationalities of education. For the most part, we are not aware of the presence and impact of the algorithms that now shape the conditions in which decisions are made and the values that influence education governance. AI has not broken out of the mold, as Ava does in the film Ex Machina; rather, it draws us into new hybrid and networked modes of cognition that begin to pattern and shape—and eventually govern—our actions, thoughts, and memories.

However, the synthesis of human and machine rationalities—which is ubiquitous, invisible, hybrid, and networked—poses particular challenges to the idea that education is a controllable site of action and progress. We are now witness to a perpetual anticipation of control, or the desire for more information, evidence, and knowledge based on the view that it will allow us to “take control,” make the correct decisions, and govern the future. And yet as more and more systems are established to increase control, education becomes less controllable due to (1) a proliferation of behavioral feedback loops that can have unintended consequences; (2) the creation of new networks that incorporate diverse actors in governance, including platforms and algorithms that act as “black boxes”; and (3) the increasing messiness of steering at a distance through data infrastructures and the probabilistic rationalities and prediction enabled by the data sciences. Indeed, as various accelerationist theses posit, the desire to control these complex systems may be more of a fantasy than a possibility. This technical pursuit of control produces conditions that ultimately undermine control. Generating ever more complex information systems provides us with more information, but it reduces what we know about this information. As Bridle posits, “The more obsessively we attempt to compute the world, the more unknowably complex it appears.”7

A central feature of synthetic governance, therefore, is that the desire for more control paradoxically results in less control. With algorithms of education we may know more, but we also become aware that we can do less about it. While this is a rather pessimistic diagnosis, it does provide an opportunity to think differently about what might be done. Even though the locus of control is shifting as we create ever more complex prosthetic supports for cognition, Winner argues that the problem is not so much technological determinism but “what might be called technological somnambulism—how we so willingly sleepwalk through the process of reconstituting the conditions of human existence.”8 While the previous chapters have provided examples of the introduction of synthetic thought into education governance, this chapter asks what we might do about the changes wrought by algorithms, automation, data science, and data infrastructures in education. We offer some tentative answers to the following questions: What can we do with and about synthetic governance? And what would be a critical synthetic politics of education in this context?

We outline our answers to these questions over four sections in this chapter. The first is methodological, and we argue that synthetic politics must involve problematization—that is, using problematization as a method to think and act differently. The second section adopts an ontological perspective to summarize how synthetic governance works by drawing on our empirical chapters. We have revisited our previous empirical studies to develop new concepts, deliberately working in a speculative register. As Ross argues, “[Such] speculative approaches are aimed at envisioning or crafting futures or conditions which may not yet currently exist, to provoke new ways of thinking and bring particular ideas or issues into focus.”9 In the third section we adopt an epistemological perspective to identify how synthetic governance works on and through thinking. The final section is concerned with axiology, and we propose several ways in which we might respond to, and come to grips with, the new conditions of synthetic governance.

Problematization and Synthetic Politics

We developed the concept of synthetic governance with the aim of describing the transformations of thought/rationalities in education policy-making and analysis that we have discussed throughout this book. Synthetic governance is a continuation of earlier modes of governance, as well as an outcome and enabler of new developments in technology and governance. In particular, governance is becoming increasingly exteriorized as certain cognitive processes are devolved to machines, which both requires and further contributes to transformations of expertise, information, and practices.

We suggest that early examples of such developments—the building of data infrastructures, the use of facial recognition technologies in classrooms, and the use of data science in policy-making—will not only challenge common sense about the locus of control within governance, but also require us to rethink methodologies. To begin this task, we employed problematization as our primary methodological approach. We note Foucault’s reluctance to see problematization as part of social change, but we also note that Stengers’ reworking of problematization locates it in reference to the impossibility of being outside of practices, such as those linked to climate change. As such, Stengers maintains that problematization must “involve an experimentation with possibility.”10 In a congruent way, problematization encourages us to recognize that there is no outside of algorithms and AI in education governance, and that experimentation with what is possible given the conjunction of machines and humans in synthetic governance is necessary, alongside debate about the educational and political implications of this development.

                 Anglo-Governance Model    Automation
Epistemology     Rationality               Instrumentality
Normativity      Ethics                    Optimization
Agency           Human                     Machine learning

Figure 6. Human governance and automation as discrete categories

To that end, we propose that the distinction between machine and human governance—represented in Figure 6—is quickly being elided. The introduction of AI into education governance has already reshaped, and will continue to reshape, thinking, not only by changing how we think about the potential of AI to refashion governing practices, but by changing the broader cognitive conditions in which this task can be pursued.

Synthetic governance results from the collapse of these distinctions between Anglo-governance and automation, such that neither side provides a stable basis for action upon the other. Our turn to problematization arises from the claim that in societies of control we require “a more thoughtful engagement with technology coupled with a radically different understanding of what it is possible to think and know about the world.”11 Engagements with synthetic governance, therefore, should not focus solely on promoting or resisting the rationalities, values, or entities from either column, but must come to terms with their conjunction. At times there may be some awareness of what is being created; at others there may be little awareness of what is being lost. For example, as we discussed in chapter 5, when a facial recognition system is used as a management system and a learning tool, it reconfigures the agency of a teacher. On the one hand, we can see that it displaces the observations of a teacher by privileging those of machine vision. On the other hand, we are not exactly aware of how the neural networks convert observed behaviors into a metric of learning. A focus on the first displacement still leaves us disoriented in relation to the impact of the second process and may blind us to its creative possibilities, however slim these may be at present.

This perspective has implications for how we might study new modes of power in education and create new types of politics adequate to the changes wrought by the emergence of synthetic governance. Our focus in these concluding remarks, then, is not on proposals to regulate algorithms or machines as new policy actors—while recognizing that such proposals are vitally important—but on thinking about the developments mapped in this book from the perspective of what can be done with the convergence between automated technologies and human thought and practices.

Synthetic Governance and the Social

How does synthetic governance work? In this book, we mapped the changing power relations produced by introducing algorithms into education governance. We have shown how synthetic governance relates to network governance, with a focus on the changing role of the state in the move from government to governance, and the shift from vertical hierarchies to networks of calculation and comparison. This latter shift involves devolved and then recentralized school systems being reshaped by, and reshaping, governance in conjunction with new private and nongovernment actors. We argue that new kinds of state formations are emerging from data infrastructures, while network governance sustains a continued role for the state and its decaying institutions. Indeed, one explicit call from Silicon Valley is to actively accelerate this decay—particularly of state-administered education—through the creation of “disruptive” technologies. In network governance, infrastructure becomes not only a space of governance, but a governing space or form that shapes what can be done. It is thus important to ask: Where, and what, is the state in the new data infrastructures that underpin synthetic governance? And where and how can we intervene in these infrastructures?

The role of the state has changed with the combination of network governance and what Easterling describes as infrastructural “extrastatecraft.” Infrastructure creates new sites of control, such that “far removed from familiar legislative processes, dynamic systems of space, information, and power generate de-facto forms of polity faster than even quasi-official forms of governance can legislate them.”12 Educational governance is changing as digital governance and new relations of power create new spaces of extrastatecraft that intertwine with those of conventional statecraft. Schools and education bureaucracies are not being replaced by infrastructures like the National Schools Interoperability Program, but the emergence of this infrastructure creates the possibility of schools and systems becoming spaces in which existing relations are transformed by data flows and algorithmic processes. Infrastructure is infinitely expandable because it extends its own limits, absorbing previously human functions into new forms of automation. As a form of synthetic governance, infrastructure as we have conceived it allows new relationships and additional networks to be assembled, broadening its reach; it is a continuously updating platform that changes the state and its operations.

Technical aspects of governing have become the avenues for embedding corporate involvement in the spaces of extrastatecraft. However, important distinctions must be made between the roles of large and small education technology companies in education governance. Companies may operate as proprietary providers of governance capabilities within emerging and established data infrastructures, as with the use of business intelligence platforms and facial recognition technologies in schooling. Both examples relate to products that were developed for noneducational purposes but that are enabled by new digital infrastructure to become part of education governance. In other cases, companies may not be the providers of infrastructure and platforms but may shape these infrastructures through participation in the creation of standards and regulatory frameworks.

Distinctions between public and private actors are blurred in data infrastructure spaces, as private companies become legitimate actors in “open” infrastructures spanning schooling sectors and linking education into other infrastructures, notably health/medical, financial, and corporate. Having status as a legitimate actor also changes the kinds of expertise that have authority in infrastructures. Data infrastructures valorize new kinds of authority connected to forms of technical (e.g., standards) and computational (e.g., data science) expertise. Many policy makers place faith in technical skill, scientific knowledge, and the different sets of expertise that orient educational and economic systems around data use. Expertise is not just about making human life and its political, social, and economic aspects more effective and efficient. It is also about new kinds of technorationality that promise a level of certainty over the reshaping of human endeavors, and about new ways of understanding how the future can be managed through data.

Synthetic governance is thus characterized by the expansion of its own limits and capacities, absorbing some of the features of network governance while replacing some of its actors with new ones, and creating new spaces of and conditions for policy-making in the process. With its particular infrastructural supports, synthetic governance facilitates control that is fabricated in different networks, with simultaneous temporalities and in ways that do not necessarily require, or that even actively elude, human oversight. Infrastructure begins to create the conditions of possibility for new types of algorithmic governance in education, and highlights the relatively mundane ways in which AI is already making a difference. But, just as importantly, network governance and digital governance are not merely “social” networks; they are new kinds of human and machine networks interacting in recursive ways. The challenge becomes identifying the ways in which these networks can reinforce ossified ways of thinking about education governing, and the ways in which they might produce new forms of thought and decision-making that interrupt processes and practices of education that many hold to be problematic.

Synthetic Governance and Thought

Optimization, efficiency, and instrumentality are rationalities commonly attributed to machine-based governance. These overlap with other rationalities already established within education governance, such as performative accountability. The confluence of these rationalities is enabled in part by the fact that governance is now both amenable to, and increasingly performed via, computation. Synthetic governance can accelerate and intensify existing values of choice, quality, and efficiency in education; indeed, the fact that most current applications of AI are narrowly instrumental leads to this conclusion. However, the conjunction of rationalities may also lead to the creation of new policy values and desires. In this book, we have drawn upon Easton’s definition of policy as the “authoritative allocation of values” to emphasize the ways in which policy directs desires among subjects of governance.13 The shaping or generation of new desires through synthetic governance is not only an extension of human governance; these new desires also involve convergences between human and machine cognition. This synthetic development has its dangers and opportunities. It is entirely possible to imagine forms of governance—based on existing inequities and practices such as sexism and racism that are embedded in training data and algorithms—that will have deleterious effects for some humans (e.g., BIPOC, women), or even for humanity altogether. Conversely, is it possible to conceive of human and nonhuman conjunctions that do not reproduce historical injustices but rather generate new ways of thinking and new problematics that open new lines of intervention?

The synthetic nature of governing systems becomes evident as we cross a threshold in the balance between human thinking and machine cognition. This threshold is apparent in the use of machine learning approaches such as deep learning in education governance, with key elements of governance increasingly eluding human understanding. A “radical otherness” is being created as “machine-built systems use machine logic, not human logic.”14 There are two parts to this radical otherness. First, some machine learning approaches employ techniques whose outputs escape understanding, because the analysis may be machine readable yet not human interpretable. This also means these systems of thought are not immediately recognizable as a presence in human life-worlds, resulting in a degree of autonomy. The second part of this radical otherness is the proliferation of what Parisi calls the “inhuman functions of decision making.”15 This is an iterative process, not just in the calculations of machine learning but in the conjunction with human thinking that reinforces particular values, desires, and knowledge. As Amoore posits, algorithms are “generating a whole new world of their own unanticipated and unreasonable actions.”16
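
The first point can be made concrete with a small illustration. This is a minimal sketch under assumed tooling: scikit-learn and the synthetic data below are our own illustrative choices, not an example drawn from the governance systems discussed in this book. The point is simply that every parameter of a trained neural network can be read and printed, yet the numbers do not disclose the rationale for any particular classification.

```python
# A minimal sketch (assumed setup, using scikit-learn) of the claim
# that a trained model is machine readable yet not human interpretable:
# every parameter can be printed, but the numbers do not explain any
# decision in human terms.
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 4))                            # e.g., four synthetic student metrics
y = (X @ [0.4, -0.2, 0.3, 0.1] > 0.3).astype(int)   # synthetic labels

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# Fully inspectable, yet opaque: the weight matrices carry the entire
# "rationale" of the classifier, but reading them tells a human nothing
# about why any particular case was flagged.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weights:\n", weights)
```

Nothing here is hidden in a proprietary sense; the opacity is intrinsic to the representation itself, which is the sense in which such systems elude human understanding even when fully open to inspection.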

The complex, networked systems comprising education governance may begin to produce new norms by tacitly shaping the conditions for thought and decision-making. There is a risk that these new desires will reinforce existing knowledge and practices that are already felt to be problematic. A clear example is the biopolitical aspect of facial recognition, which is heavily racialized, and through which discredited ideas from phrenology and physiognomy, such as connections between skull structure and mental traits, or between a person’s facial expressions and character, make their way back into governance technologies.17 It is conceivable that, as with the use of facial recognition in law enforcement, this racialized problem will be seen as less important than the efficiency gains of the automated system. In this way, automated thinking will likely create new conditions for contestation over how education policy should be made, and for what purposes, and the new norms that emerge may appear “unreasonable” from the perspective of human values and desires.

Synthetic Politics

Many technical and political responses are being proposed to deal with the impact of datafication, infrastructures, and nascent AI in education. Reviewing developments across diverse contexts, amid rapidly changing technical and regulatory landscapes, is beyond the scope of this book. However, we do want to conclude with some more general remarks on the politics of synthetic governance. Specifically, we ask what can be done about the synthesis of machine and human in governing if it is changing the conditions for agency upon which education politics and policy have previously been based. How should we respond to the new conditions of synthetic governance? We present three proposals for synthetic politics organized around the accelerationist positions identified in chapter 2: promotion, appropriation, and acceptance. These positions have been rehearsed both in education and beyond. We conclude with a fourth proposal: problematization as a strategy for synthetic politics.

Promotion

Promotion describes the boosterism evident in much discussion about the role of technology in governance, according to which the introduction of AI and data science will optimize responses to policy problems. This rationality has its historical roots in overlapping discourses of education, progress, and technological innovation. Examples include the turn to data science based on the view that it can be used to better predict, and hence provide insights into, enduring problems in education such as student attrition. Today this position is most commonly associated with technology companies and organizations that promote AI for the social good, without questioning the design and operation of the technology itself. Additionally, promotion has led to the widespread incorporation of teachers as technology company trainers within schools. As the Covid-19 pandemic showed, AI has been promoted as a desirable and inevitable aspect of the growth of digital platforms within schooling.

Appropriation

This position argues that technology can be directed toward socially progressive ends by utilizing existing regulatory and legislative tools.18 Zuboff describes this as a strategy of “taming” the digital platforms that constitute “surveillance capitalism.”19 One version of this response is pursued by in-house ethics panels in technology companies, which aim to provide an industry forum within which future innovation can be shaped. The all-too-clear limitations of such industry-based voluntary regulation have led to calls for more external regulation.

As such, a taming strategy can include the introduction of legal instruments such as the European Union’s General Data Protection Regulation and a draft EU proposal for a legal framework for AI.20 Regulation has been the primary mode of politics for technology in many social areas, from bans on facial recognition to issues of bias and privacy. However, it is not easy to tame large and powerful companies. What might be different with AI and platforms is that we will need to consider how to regulate not only technology companies in education, but also companies such as Google that are often located both inside and outside education.21

A focus on regulation locates action firmly within a desire for technology to continue to be part of the Enlightenment project of political and social progress. This approach sits most comfortably with leftist politics of technology, or what Zuboff calls the strategy of indignation, which argues that surveillance capitalist utilizations of data-driven technologies are not inevitable. For Zuboff, indignation can arise when dissatisfaction with the mediation of our lives through digital platforms, and with the rendition of our experience into data, “teaches us how we do not want to live.”22 Various proposals for a more democratic approach to AI and emerging technologies have been put forward.23 Sadowski highlights the need to “democratize innovation” to create alternative technology, which includes broader participation alongside “ensuring intelligent systems are also intelligible to the public.”24 Similar proposals have been made by groups such as AI Now, who have lobbied for algorithmic openness and for banning the use of opaque algorithms in public services.25

At the level of product development there are suggestions that AI for education applications should always be codeveloped with educators, or at least involve educators as consultants on the technology.26 Examples include using natural language processing as part of automated marking to support formative teacher feedback, or encouraging students to develop new ideas for using AI in “socially just” ways without going beyond the question of how best to apply AI.27

Acceptance

The acceptance position differs significantly from the previous two options. Acceptance involves recognizing that AI is part of evolutionary, cybernetic dynamics over which we have little control, except perhaps to withdraw from these dynamics in micropolitical ways. For example, we can choose to “hide” from the gaze of surveillance capital by disconnecting from digital platforms and masking our identities and activities.28 Hiding involves at least implicitly accepting the presence of the technology from which we are hiding, and often involves the use of other platforms and software, such as virtual private networks and other encryption tools, to protect us.

Nonetheless, hiding is a legitimate micropolitical response, as Parisi argues, following Galloway and the Tiqqun collective, in relation to the politics of networks: “Through the diffused devising of tactics of opaqueness, experimenting with a fog-like micropolitics, it is possible to counter-act the networked regime of visibility with the impersonal, the neutral and the invisible.”29 Sadowski argues that we need tactical sabotage in the vein of the Luddites, not as an antitechnology stance but as a targeted response to certain forms and uses of technology. As Sadowski notes, the Luddites did not smash machines indiscriminately; rather, they were selective: “By smashing machines, Luddites were targeting the tech that made their lives more miserable, and the engineers and owners who held power over them.”30

Toward Problematization

The three options above may each offer viable responses to the new challenges created by synthetic governance, depending on the context. Moreover, each would be comfortably familiar to those in the field of critical policy studies. Certainly, there is need for a more robust discussion about regulation, data privacy, and the monetization of data generated in education. However, each of the above positions depends on an assumed distinction between human and machine that we have sought to trouble, locating political agency in a human subject who promotes, enhances, tames, regulates, hides from, or even destroys machines. Yet synthetic governance as we have characterized it is built upon a network infrastructure that is the “medium of contemporary power, and yet no single subject or group absolutely controls a network.”31

Between the abstraction of data, the materiality of information systems, the emergence of new forms of thinking, and changes in the visceral terrain of political life, we can identify new governing relations in which humans are in the loop, but not as independent arbiters of algorithmic decision-making. Even more strikingly, machine cognition is a product of human thinking that feeds back into human culture, and we must consider the view that governance has never been anything other than a cybernetic feedback loop. Resisting the deleterious impacts of automation will not be a matter of simply regulating or resisting the use of AI. Rejecting AI is untenable: rejection will not stop its development and use, nor does such a position reckon with the genealogy of the present moment and the long history of statistical reasoning in education governance that has brought us to our current position.

While this may seem somewhat fatalistic, we believe that we can develop a politics that is not premised on a dichotomous view of human and machine and that works with the uncertainty that is the departure point and destination of new data-driven technologies. While the instrumental, market rationalities underpinning education today will likely be reinforced by machines, new rationalities and techniques might also create other ways of thinking about the problems of education governance, disrupting long-standing problems and solutions in education.

We need a politics that is critical but not antitechnology, a politics that does not formulate the problem poorly by opposing human values, agency, and interests on the one hand, and technology on the other. We argue that a politics adequate to synthetic governance would not juxtapose human agency and technological determinism; rather, we would need to consider how to more consciously navigate this reconstitution. It will not always be possible to break open the black box of AI and digital platforms, because this assumes a form of separation between human and machine that is no longer tenable. Rather, we might ask: What kinds of worlds are being created by algorithms, and how will we respond to the new truths that are being created? What are the limits of rectification, of appeal and regulation? This perspective challenges some of the primary approaches to automation, and the technical and political solutions to issues of bias and black-boxing, that reflect a desire for a human in the loop as a corrective or safeguard.

We thus need to rethink political agency in synthetic governance. The epigraph from Edwards that opens this chapter suggests it is entirely likely that we have willingly been assimilated into new relations with technologies and that this process cannot be reversed. We could go further by suggesting that the assimilation Edwards describes would not have been particularly susceptible to our willing otherwise. This does not mean that politics is impossible, but it will perhaps not be a politics of the deliberative variety. We think it vitally important that we develop a critical synthetic politics that responds not so much to fears that technology will get away from us (the singularity) as to the politics of networks that become so diffuse as to resist meaningful intervention.

A synthetic politics begins from the premise that there is no outside of algorithmic decision-making and automated thinking. We must think with and through our imbrications with other modes of cognition as a kind of “co-learning” with automated systems.32 A particular rationality is needed—to be open to the co-adaptation of humans and machines by recognizing that machine learning is the latest iteration in a longer history of thought that has never been limited to the human. Education is a site where we can embrace synthetic thought with a carefully articulated view of the risks, rather than reacting against it or embracing it uncritically. Education is a site in which we can remain open to the uncertainties, risks, and possibilities of synthetic governance.
