Algorithms of Education: 1


1

Governing

Networks, Artificial Intelligence, and Anticipation

Power is local because it is never global, but it is not local or localized because it is diffuse.

—Gilles Deleuze, Foucault

Science and technology multiply around us. To an increasing extent they dictate the languages in which we speak and think. We either use those languages or we remain mute.

—J. G. Ballard, Crash

The future is already here, it’s just not very evenly distributed.

—William Gibson

Contemporary forms of calculation, computation, and data analytics have radically transformed education governance.1 This transformation includes accelerated reterritorialization of knowledge, practice, and expertise within the education sector. Moreover, these recent transformations are closely tied to the reconfiguring of relations between trust, discretionary judgment, and personal and systemic accountability in education. During the middle of the twentieth century, education accountability in liberal democracies was premised on trust in professionals. During the late 1980s, education policy pivoted to performative modes of accountability, or “living up to” the quantitative “evidence” about professional and systemic performances. Today, education governance stresses ideas of anticipation and prediction, or using evidence from the past to create claims about how things will perform in various educational futures.2 In this chapter, we map the changing relations between professional trust and quantitative evidence that have shaped contemporary education policy-making and enactment over recent decades. We outline how anticipation, prediction, and automation have become the ascendant rationalities in governing education as “a new world of state-craft emerges.”3

This chapter outlines developments of the “Anglo-governance model,” characterized by data-driven rationalities underpinned by the primacy of numbers and comparison within policy networks. Governing new state formations is, therefore, part of reconfigured power relations that “superficially ‘stabilize’ elements of complexity within interoperable, complementary systems of signification and quantification, thus helping to secure the always unstable and provisional as navigable and calculable sites.”4 We examine the capacity to create calculable sites through the emergence of data infrastructures in education, digital governance, and new types of expertise, in which the tools of calculation, “rather than simply as aids to decision-making . . . are themselves becoming the process of governing.”5 We then aim to show how policy by numbers, emerging data infrastructures, and data science expertise create the conditions for intensified modes of anticipatory governance and increased forms of automation in education. We conclude by positing that as governance becomes synthetic, we may need to reconsider the adequacy of existing concepts and methods used in critical policy studies in education.

Statistics, Computation, and New Governing Expertise

Statistics became established in bureaucratization processes in the nineteenth century, which introduced wide-scale use of numbers and shared information to manage populations.6 This management is what Foucault calls biopolitics: “the entry of phenomena peculiar to the life of the human species into the order of knowledge and power, into the sphere of political techniques.”7 According to Lemke, Foucault’s “notion of biopolitics refers to the emergence of a specific political knowledge and new disciplines such as statistics, demography, epidemiology, and biology.”8 These disciplines make it possible to analyze processes of life on the level of populations and to “govern individuals and collectives by practices of correction, exclusion, normalization, disciplining, therapeutics, and optimization.”9 The most pernicious outcome of these statistical and management practices was eugenics, a future-oriented biopolitical logic of evaluation and optimization.10

Statistics were thus key to the development of modern government and the creation of novel fields of expertise and application. Statistics has “the capacity to represent reality in terms of quantifiable and manipulable domains, [and thus] the technology of statistics creates the capacity to relate to reality as a field of government.”11 The use of statistics has a number of functions, including underpinning efforts to address poverty and the management of mass schooling systems in the mid-nineteenth century.12 The latter development continued into the twentieth century when “the measurement of education became a defining element of the governing of education.”13 Governing through statistical expertise created “new sites of truth” in education.14 The history of mass education, and pervading contemporary practices around the normalization of standardization and separation, makes it possible to understand, as Ball posits, that education policy operates as biopolitics through a variety of governance mechanisms intended to control bodies.15 These mechanisms include intelligence testing to assess the “ability levels” of a population, apportioning different potentials to different gendered and racialized groups, and rationing educational experiences and funding based on flawed ontological assumptions.16

Statistics continue to be central to governing education. However, in this section we want to tell a story that is not often told—a story not of the numbers themselves but rather the connection between statistics, knowledge, and expertise, and more importantly, a story about the ways education policy is made and analyzed through these rationalities. Our starting point, therefore, is that numbers, statistics, and mathematics have a political purpose (and are always politicized), with political decisions central to selecting what is to be measured, how this measurement will be represented, and how these measurements will be analyzed and otherwise used (e.g., comparisons, calculations, regressions). Paradoxically, numbers also depoliticize domains by serving as technical mechanisms that claim an ostensible neutrality while carrying substantial political weight.17

In the arena of policy studies, an important historical juncture is the middle of the twentieth century, when the “policy sciences” were established. The policy sciences emerged in the United States and Europe from the realms of politics, law, sociology, economics, and psychology, underpinned by statistical expertise.18 The premise of the policy sciences was that policy was instrumentalist or technorationalist. Policy was a solution to a particular problem that could be carried out in a series of steps from analysis to the provision of the “best options,” to implementation. As such, the policy sciences, while promoted as comprising objective approaches, were infused with values and judgment, framed within the “political arithmetic” of supporting social-democratic reforms.19 Social science knowledge, as policy science, was presented as being important in service of government: “The intractable problems . . . [governments] faced could only be solved through the rigorous application of research knowledge and techniques developed by social scientists.”20

What began to happen in the late 1960s and continued into the 1970s was a challenge to the Keynesian welfare state, to the efficacy and legitimacy of a social democratic policy science, as well as to the notion that the nation-state is the primary policy-making unit. The Keynesian welfare state was superseded by the market-based philosophy and political rationality of what became entrenched as neoliberalism, with Milton Friedman a key influence in education policy.21 From the 1980s onwards, “an ensemble of generic policies, which have global currency in the reform of education,” gave primacy to individualism and choice.22 These generic policies intensified accountability and performativity regimes in education, solidifying a move from professional to performative accountability.23 Contemporary forms of accountability are part of what Power calls the “audit explosion,” in which measurement became the primary, indeed almost unassailable, means of managing education systems.24 Numbers and statistics in accountability regimes now enable “close, intimate knowledge and control of individuals and organisations.”25

Alongside the change away from the social welfare state and deliberative forums that set policy agendas, toward neoliberalism and forms of marketization to govern education, the field of social democratic policy studies came under attack, as it was deemed to have failed to provide “reliable, generalizable and predictable policy knowledge.”26 The instrumentalism of statistics and the technorationality of the policy sciences were derided by critical scholars working with poststructuralism, critical theories of race, and the various waves of feminism, who sought to challenge the claims of the policy sciences and their apparent neutrality.27 The legitimacy of policy sciences to support decision-making was changed by this transformation of political context, expertise, and knowledge.28

The Computational Transformation of Governing Knowledge

The period from the early 1990s to the early 2000s saw the prevalence and growth of critical policy studies, which provided epistemological challenges to policy sciences emphasizing ad-hocness over causality, uncertainty over prediction. Concomitantly, there was a reshaping of political economy through neoliberalism, or what Brown calls the “economization of everything.”29 However, by the mid to late 2010s there was a new multidisciplinary field emerging—education data science—that built on the development of what was seen as “evidence-based policy making” in the 1990s, in which the uncertainties emphasized by critical policy scholars were seen to lack application and hence ignored by policy makers.30 The evidence-based policy-making focus coincided with the datafication of education and supported its development into the new field of education data science. This new field is an updated version of the policy sciences and has multidisciplinary origins, bringing together biology, psychology, and the neurosciences. Education data science is enabled by the rise of new computational capacities and reconfigured relations between humans and machines.31 Data science also claims some of the features of the policy sciences, especially the political rationality of technical solutions to educational and social problems.

Data science is broadly defined as “the study of the generalizable extraction of knowledge from data.”32 In data science, the focus is on data management, analytics and visualization, computational statistics, and high-performance computing and algorithms to derive patterns from large data sets.33 The field attempts to create, like the policy sciences, “actionable insights” that can be used to solve social and political problems.34 What sets education data science apart from earlier forms of statistical reasoning in education is the growing speed and scope of analysis, associated with the use of computer-based information infrastructures to integrate and analyze statistical information from various sources.

The interdisciplinary reasoning of data science is informed by probabilistic mathematics and computer science, and is enabled, as Lyotard predicted nearly forty years ago, by “computerization” through which

the nature of knowledge cannot survive unchanged. It can fit into . . . new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language.35

Within education data science, knowledge communities include information managers, data scientists, and learning analysts who are part of the production and management of data, and their work is underpinned by a political economy of data science that combines institutional funding and investment capital.36 The conversion of knowledge into digital information creates shifts in determining the value of particular forms of knowledge, who determines this value (data scientists, machines, algorithms), and modes of educational accountability, and it indicates the changing spatiality of governing knowledge. The production and application of digital information has meant that information increasingly is being produced about and for education from outside the profession or education systems. For example, the shift to new accountability systems at state and national levels has changed local and ad-hoc forms of accountability occurring at the school level. Examples of school forms of accountability include rankings, such as school league tables generated by governments in the United Kingdom or by free-market think tanks like the Fraser Institute in Canada, or evaluations such as No Child Left Behind in the United States. Rankings change how schools are assessed and which knowledge is deemed important. When analyzing these changes it is important to ask, “Who controls the field of judgement[?] . . . Who is it that determines what is to count as a valuable, effective or satisfactory performance and what measures or indicators are considered valid?”37 Data science continues the trend to provide knowledge and analysis, or “the field of judgement,” from outside education.
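As a toy illustration of how a league table reduces schools to a single comparable figure, consider the following sketch (the school names and scores are invented for illustration, not drawn from any ranking system mentioned above):

```python
# Toy illustration of a league-table ranking: each school is collapsed into
# one aggregate number and ordered by it. All names and scores are invented.
schools = {
    "School A": [62, 71, 58],  # e.g., mean test scores across three subjects
    "School B": [78, 80, 75],
    "School C": [55, 60, 49],
}

# Collapse each school's results into a single figure of "performance."
averages = {name: sum(scores) / len(scores) for name, scores in schools.items()}

# The league table: an ordering that invites comparison and competition.
league_table = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, avg) in enumerate(league_table, start=1):
    print(f"{rank}. {name}: {avg:.1f}")
```

The point is not the arithmetic but the ordering it licenses: once schools are collapsed into one number, the "field of judgement" shifts to whoever chose what to measure and how to aggregate it.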

As Harari contends, “Meaning and authority always go hand in hand.”38 The introduction of data science and datafication may be changing what kinds of questions are, and can be, asked of education. It is possible that growing use of algorithms in education “will lead to a recursive state where data analysis begins to produce educational settings, as much as educational settings produce data.”39 The recursive aspect of computation in governance, therefore, may demand new questions, new kinds of expertise, and new authorities to determine legitimate governing knowledge. Longer standing logics of comparison, and the development of new policy networks in education, have made this more recent development possible.

Network Governance and the Rationalities of Comparison

The use of statistics was initially part of the management of national populations. However, comparison and governance in education gained traction within globalized spaces from the middle of the twentieth century. What we now experience as globalized forms of comparison, enabled by the production and multidirectional flows and interoperability of data, has, in education, been a long project of building calculable spaces. Numbers have been extended from the management of state spaces to the creation of global spaces of policy through measurement.40

Accountabilities that were initially part of national and state-level policy realms are now part of an intensification of global comparisons in education. Contemporary comparison and measurement in education are predominantly connected to the development and deployment of international large-scale assessments (ILSAs). In the wake of World War II, new global institutions were charged with the rebuilding of economies and focused on connecting labor needs and skill development on an international scale. In parallel, national responses involved investment in technological and educational capability, including in the United States following the launch of the first satellite, Sputnik, by the Soviet Union in 1957. A knowledge-based Cold War had implications across universities and schools as education became a site in which to protect and promote national security.41 It is against this backdrop that international comparisons expanded, linking employment, economic, and educational statistics.42 The latter became institutionalized with the creation of the International Association for the Evaluation of Educational Achievement in the late 1950s. However, the initial focus on international comparison waned in education, with skepticism about the value of international performance indicators.43

With the end of the Cold War, comparing education systems became closely linked to global competitiveness and human capital intensification. Education became imbricated with economic policy, such that measuring learning outcomes became a proxy for monitoring economic development and potential.44 Quantifying education outcomes was an important objective enabled by ILSAs that estimate achievement using standardized tests.45 Examples of contemporary ILSAs include the Program for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), and the Progress in International Reading Literacy Study (PIRLS).

ILSAs demonstrate how the combination of numbers and comparison contributes to national and global policy-making. International organizations such as the Organisation for Economic Co-operation and Development (OECD) can use the results from ILSAs to have an impact on national policies via comparison. The OECD’s education work, and particularly PISA, has had widespread influence in local and national contexts.46 The OECD is constituted by member nations, whose funding sustains an organization that exerts soft power to shape national education policy. The OECD and PISA provide an example of the ways ILSAs are used to exercise power that changes national education policies and practices toward a focus on achievement in these global assessments. This includes changing how areas of teaching and learning are framed, in which curriculum and pedagogy are narrowed to aspects of improving test performance or matching national political agendas.47 Nonetheless, as the nation is the unit most frequently used for global comparisons of educational performance, national spaces continue to matter even as policy is made on a global scale.48

The rise of ILSAs, and the interaction between testing and comparison, provides a general background to our identification of new policy rationalities in education and new phenomena of governing.

The Shift to Governing Networks

The local, national, and global scales of education policy have tended to be flattened through what has been termed “network governance.”49 Rhodes describes governance as “a change in the meaning of government referring to a new process of governing” through “self-organising, interorganisational networks characterised by interdependence, resource exchange, rules of the game and significant autonomy from the state.”50 However, as August argues, while we can see network governance as a contemporary phenomenon, driven in part by globalization and neoliberalism, it is also possible to see the antecedents of network governance in the field of cybernetics. This is particularly the case with the focus in cybernetics on the “remodelling of ‘governance’ from linear and hierarchical models to circular and reflexive self-regulation.”51

Network governance has not simply replaced previous hierarchical modes of government. Network governance integrates with hierarchical structures of government, creating more diffuse links within policy processes. It is not that governance replaces government, but rather that it provides new ways in which policy actors, organizations, interests, and rationalities cooperate and compete. In education, the state is re-created through a heterogeneous mix of public, private, and philanthropic agencies that are brought into collaboration and create new sources of authority around policy problems, including local and national governments, international organizations, and edu-businesses.52 Wilkins and Olmedo argue that “power is not confined to the state or to the market but is exercised through a plethora of networks, partnerships and policy communities.”53 Additionally, it is not only the multiplicity of governing entities that matters. New types of power relations have emerged, in which power relations are “not so much positioned in space or extended across it” but rather “compose the spaces of which they are a part.”54 That is, network governance is not just a practice of new connections but is also creating new governable spaces. What allows this creation of new spaces is the dependence on digital information as a contemporary reworking of “steering at a distance.”55 It is to the materiality of digital governance that we turn next.

Digital Governance and Data Infrastructures

Schools still largely appear similar to those of a hundred years ago, with the same features such as buildings, administrators, classrooms, teachers, and students. What is less obvious is the way that digital data use has radically transformed how schools link to each other and to larger organizational structures such as school districts or local government authorities. This transformation is what Lawn describes as “systemless systems” in which data flows sustain relations across systems that are fragmenting. Many education systems now derive much of their cohesiveness as systems from standardized data, interoperable software, and the embeddedness of metrics across administrative and performance areas of schooling.

Metrics are embedded at the level of governance and in practice in the everyday lives of teachers and students in schools, including data walls, student information systems, and so forth.56 In schools and systems the focus on data is reflected by the proliferation of new lexicons of education—“data harvesting,” “data capture,” “trend data,” “data-driven decision-making”—and assumptions about the ways in which change can occur. These assumptions “rest . . . in part on the auratic appeal of computation, on its presumed neutrality, objectivity, and, perhaps most of all, its certainty.”57 The tools of comparison, or “regulatory technologies” of accountability and performativity, operationalize these assumptions and

include hardware- and software-testing regimes, travelling qualifications systems, performance indicators and benchmarks, standards, league tables and the networks of experts and consultants who construct and promote these instruments globally.58

We locate these regulatory technologies in the emergence of new types of data infrastructures in education.59 We understand infrastructure as “pervasive enabling resources in network form,” within which the rhetorical function of data enables “evidence-based” policy-making and data-driven rationalities in education.60 Data infrastructure in education can be understood as an assemblage of material, semiotic, and social flows or practices that (1) enables the translation of things into numbers (“datafication”); (2) enables the storage, transmission, analysis, and representation of data using algorithmic logics and computational technologies; (3) embeds data usage into a range of other practices; (4) produces new topological spaces through practices of classification, measurement, and comparison and new operations of power through the production of these spaces; and (5) contributes to new social practices, new problematizations of the social, and new forms of governance.61 Far from being a neutral form of technology, data infrastructures count people, networks, algorithms, and computational capacities as central.

Data infrastructures reconfigure education practices and processes. Hartong suggests that infrastructures in education “are increasingly reaching beyond and across traditional policy ‘entities,’ scales and geographies, while simultaneously transforming those entities (and also the individuals within) when being enacted.”62 These infrastructures have now become an important element in the governance of schooling, which increasingly operates in network modalities that connect a range of organizations and actors, from schools and local school boards to state and provincial education ministries, commercial providers of education products and services, national education departments, and international organizations, such as the OECD. While infrastructures may be “centrally owned by nation-states or corporations . . . at their edges they are imagined, arranged, and adopted in different ways by people or ‘end users.’”63

Data infrastructures are, therefore, central to “digital education governance,” in which statistical data are connected to local, national, and global data collection, analysis, and decision-making in education, premised on the interconnection of “new software developments, data companies and data analysis instruments.”64 Digital governance creates new linkages between, and expands the decisions that are based upon, data, and it enables the management of systems, schools, and individuals to be increasingly performed as “backroom” functions by new private actors.65 An important, and distinct, aspect to digital governance and new data infrastructures is thus the opening up of opportunities for privatizations of various kinds. Private actors in education have been the focus of sustained work over the past twenty years.66 As Ball has outlined in two key works on privatization in the United Kingdom, and as part of global policy networks, new policy networks are constituted by and enable diverse private interests—interests that are significantly different from earlier forms of privatization in education.67 With the development of data-driven schooling systems, we are seeing the potential for privatization of schooling and the enhanced involvement of edu-businesses as part of the economization of the state.68 Indeed, as Fourcade and Gordon claim, “When the state defines itself as a statistical authority, an open data portal, or a provider of digital services, it opens itself up to competition from private alternatives that may command equal or greater legitimacy on these terms.”69

Anticipation, Governance, and Artificial Intelligence

Lawn posits that “when the future can no longer be organized through meaningful projects by government, numerical data becomes a useful substitute for ideas.”70 Toward the end of the twentieth century and into the twenty-first century, policy-making became increasingly promoted as “evidence-based, data-intensive decision making,” with the aim of having predictive capacity. This move is analogous to well-established uses of data, and the desire for prediction, in other areas of public policy.71 Patton, Sawicki, and Clark describe policy practices that construct images of possible futures as either “predictive policy” or “prescriptive policy,” where the former “refers to the projection of future states resulting from adopting particular alternatives” and the latter “refers to analysis that recommends actions because they will bring about a particular result.”72 For instance, China, Brazil, Canada, the European Commission, Germany, Japan, the OECD, and the United Kingdom have all developed offices with mandates to anticipate futures through different predictive and prescriptive policy initiatives.73 Activities that these offices might engage in include crowdsourcing maps for natural disasters, forecasting battlefield casualties, anticipating terrorism, and predicting gang-related crimes, or “predictive policing.”
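Patton, Sawicki, and Clark’s “predictive policy,” the projection of future states from past observations, can be reduced to a deliberately simple sketch (the enrollment figures and the linear trend below are hypothetical illustrations, not a method attributed to any of the offices listed above):

```python
# Toy "predictive policy" projection: extrapolating a future state
# (here, invented enrollment counts) from past observations.
import numpy as np

years = np.array([2016, 2017, 2018, 2019, 2020])
enrollment = np.array([1000, 1040, 1085, 1120, 1165])  # hypothetical counts

# Fit a linear trend to the historical data and project it forward.
slope, intercept = np.polyfit(years, enrollment, deg=1)
forecast_2025 = slope * 2025 + intercept  # the projected future state
```

Real anticipatory systems are vastly more elaborate, but they share this basic move: an extrapolated future is treated as an object for decision-making in the present.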

These forms of calculation and forecasting attempt to create new policy settlements about, and for, the future, by making claims about the present. As Anderson suggests, “Common to all forms of anticipatory action is a seemingly paradoxical process whereby a future becomes cause and justification for some form of action in the here and now.”74 This paradox underpins education policy, where taking action in the present is always about future making as part of the rationality of progress upon which mass education depends. Amsler and Facer argue that the contemporary focus of anticipation in education concerns “the organization of the future as a site of anxiety and control.”75 Anticipatory education actions include, for example, the focus on predicting what kinds of knowledge and skills should matter (e.g., twenty-first-century skills, life-long learning) and who should be able to learn those skills.

Anticipation is built into the nascent automation of education governance and applications of AI. Williamson and Eynon suggest that the growth of AI in education is an outcome of four converging threads: “(1) several decades of AIEd research and development in academic centres and labs, (2) the growth of the commercial education technology (edtech) industry, (3) the influence of global technology corporations on education, and (4) the emergence of data-driven policy and governance.”76 However, the volume of writing about the promise of AI in education, especially by education technology companies and the popular scientific press, far outweighs detailed descriptions about the impact AI has on the education sector. Part of the issue is that AI is both a useful and obfuscating term, because it covers a wide variety of techniques and tasks.

AI in education governance is built into everything from business intelligence platforms to real-time online testing and is used within the education sector in various capacities. Figure 1 outlines some of the key areas and their applications in education. For instance, some organizations use AI to mine data to track student behavior, attendance, and assignments, and some organizations use intelligent tutoring systems (ITSs). ITSs are increasingly common in gaming or game-based learning. At the time of writing, the Covid-19 global pandemic has forced schools across the world to halt face-to-face teaching, generating intensified emphasis by technology companies on using AI to support online learning, such as the proctoring of exams.77 Even as schools reopen, the pandemic will have enabled AI-supported proprietary products, or what Williamson calls “meta-EdTech,” to become more rapidly embedded in pedagogy, curriculum, and assessment.78

Much of the AI currently used in education is a variation of machine learning, which includes areas such as natural language processing, computer vision, decision networks, and neural networks. Machine learning is now widely applied in many areas of society, including facial recognition (e.g., opening an iPhone and passport control), natural language processing (e.g., email prediction in Gmail), and recommender systems (e.g., Amazon, Netflix). Machine learning makes new associations between data sets (e.g., demographic data as representations of human characteristics). As Elish and boyd note, “This process is not about a search for meaning, but about the construction and depiction of statistical models.”79

Common Term for AI Techniques/Technologies | Function | Applications in Education
Predictive analysis | Assess probability from data sets | Student selection
Machine learning | Perform pattern recognition in large data sets (includes areas of natural language processing and computer vision) | School inspection, allocation of grades
Deep learning | Recognize objects, descriptions, and people | Student surveillance in facial recognition systems
Neural networks | Identify patterns and behaviors | School discipline and student monitoring
Expert systems | Build systems based on direct human input | Scheduling and timetables

Figure 1. Summary of common terms for AI techniques and technologies and applications to education. Modified from S. Leaton Gray, “Artificial Intelligence in Schools: Towards a Democratic Future,” London Review of Education 18, no. 2 (2020): 165.

Machine learning is premised on probability theory, which allows for the modeling of uncertainty in random systems, or “a measure of our belief in how likely that outcome is.”80 Machine learning is good at pattern matching, whether as supervised learning, unsupervised learning, or reinforcement learning. In supervised learning an algorithm is trained on data (input) and predefined output, and “the assumption . . . is that the training data [patterns] reflects sufficiently well the characteristics of the underlying task, so a model that works accurately on the training data can be said to have learned the task.”81 Alternatively, an algorithm can learn to identify patterns on its own (unsupervised learning), which involves finding groups in data sets through a form of cluster analysis in which “there is no predefined output, and hence no . . . supervisor; we have only the input data. The aim in unsupervised learning is to find the regularities in the input, to see what normally happens.”82 Reinforcement learning has shot to public fame as the approach underpinning DeepMind’s AlphaGo, the first machine learning system to beat a professional human player at the ancient game of Go (see chapter 6 for more detail). Reinforcement learning couples machine learning with behaviorism to create a feedback system of learning through reward and punishment.83
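The supervised case described above can be illustrated with a minimal sketch: a one-nearest-neighbour classifier that labels a new input according to its closest training example. All of the data, the attribute names, and the pass/fail framing below are invented for illustration and do not come from any system discussed in this chapter.

```python
def nearest_neighbour(train, query):
    """Return the label of the training example closest to the query,
    using squared Euclidean distance over the feature tuples."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        dist = sum((f - q) ** 2 for f, q in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical training data: (attendance rate, assignment score) -> label
train = [
    ((0.90, 0.80), "pass"),
    ((0.95, 0.70), "pass"),
    ((0.40, 0.30), "fail"),
    ((0.50, 0.20), "fail"),
]

# A new input near the "pass" examples is labeled "pass"
print(nearest_neighbour(train, (0.85, 0.75)))
```

The point of the sketch is the assumption the quotation above names: the classifier works only to the extent that its training patterns reflect “sufficiently well the characteristics of the underlying task.”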

Machine learning is underpinned by particular rationalities. For example, the pursuit of optimization in decision-making has become embedded in the techniques of machine learning, such as through the use of cluster analysis to save time in data labeling and categorization. Algorithms are now used to create “groupings of similar inputs (perhaps facial expressions) that are found to occur together. The goal of cluster analysis is to reduce the total variation in a dataset to a more manageable number.”84 Approaches like cluster analysis are also premised on the uncertainty of the techniques that are employed. In some of these approaches, the analysis may be machine readable yet not human interpretable. That is, in some deep learning approaches there are algorithms that “aren’t coded by human beings.”85 Therefore, how some forms of deep learning arrive at their outputs can be beyond human comprehension, unlike the techniques of human expert systems. Edwards calls this

the principle of radical complexity. This principle says that large, interactive algorithmic systems produce emergent behaviour that we cannot anticipate. Even if we can comprehend every individual component, scaling up highly interactive systems ultimately translates into cognitive opacity.86
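The cluster analysis described above, which finds groupings in data without any predefined output, can be sketched as a minimal one-dimensional k-means. The “engagement score” framing and all values below are invented assumptions for illustration.

```python
def kmeans_1d(values, centroids, iterations=10):
    """Minimal 1-D k-means: repeatedly assign each value to its nearest
    centroid, then move each centroid to the mean of its assigned values."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        # An empty cluster keeps its old centroid
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(centroids)

# Hypothetical "engagement scores" with two visible groupings
scores = [0.1, 0.15, 0.2, 0.25, 0.75, 0.8, 0.85, 0.9]
print(kmeans_1d(scores, centroids=[0.0, 1.0]))
```

There is no supervisor here: the algorithm only reduces the total variation in the data to a small number of groupings, which is what makes clustering attractive as an optimization of labeling and categorization.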

Machine learning is not only about representing existing patterns but also about filling gaps through pattern recognition and creation; that is, the prediction of patterns. Prediction models work when “algorithms couple modern statistical methods with powerful knowledge representation languages to generate rich prediction models from data and allow for the generation of new or inferred knowledge.”87 In education, for example, Zeide notes that “predictive analytics ‘cannot literally predict [student] life’ because they cannot incorporate the impact of outside circumstances and student agency.”88 However, this example also points to the problem of multiple divergent conceptions of prediction. In data science, prediction is not literal; it refers to probability and to identifying patterns in data using classification rules. It is not about the future as much as it is about what is missing in data from the past, and thus “it is best to think of prediction patterns as predicting the missing value of an attribute rather than as predicting the future.”89 For data science, prediction and incompleteness are built into the calculations.
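The data-science sense of prediction as filling in a missing value, rather than foretelling the future, can be sketched as a simple classification rule: predict a record’s missing attribute from the majority value among past records that match it. The records, attribute names, and majority-vote rule below are hypothetical assumptions, not any particular system’s method.

```python
def predict_missing(records, known):
    """Predict a record's missing 'outcome' attribute by majority vote
    among complete past records sharing its known attribute values."""
    matches = [r["outcome"] for r in records
               if all(r.get(k) == v for k, v in known.items())]
    if not matches:
        return None  # no matching pattern in the past data
    return max(set(matches), key=matches.count)

# Complete past records (all values invented for illustration)
records = [
    {"region": "north", "program": "A", "outcome": "completed"},
    {"region": "north", "program": "A", "outcome": "completed"},
    {"region": "north", "program": "A", "outcome": "withdrew"},
    {"region": "south", "program": "B", "outcome": "withdrew"},
]

# "Prediction" here fills a gap in past data, not the future
print(predict_missing(records, {"region": "north", "program": "A"}))
```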

The rationalities underpinning prediction in data science and machine learning have contributed to the emphasis on prediction in education governance. Any form of governance that uses data-driven and data-based reasoning, including machine learning, will be reliant upon representation, modeling, and simulation, with attendant problems such as the coding of demographic differences like race and gender. Andrejevic posits that “the promise of perfect pre-emptive prediction . . . [is] an impossible one—but the fact of its impossibility does not hinder the way in which it is mobilized to legitimate increasingly comprehensive forms of data collection and processing.” Andrejevic identifies this promise as part of a series of cascading logics—from automating data collection, to data processing and automating responses—that comprise a bias toward automation.90 Moreover, the bias toward automation can easily be linked to predictive rationalities premised on ideas of incompleteness, leading to the development of systems to anticipate additional data that may provide missing values and (preferred) educational attributes. In other words, education is increasingly governed in the contradictory and automated spaces of impossible predictions.

Predictive approaches extend computational approaches to broadly defined “computational models that aim to model, or include some modelling of social processes.”91 Models are used in areas of policy design and appraisal, and in the evaluation of policy objectives. This system of thought is a refrain of the policy sciences noted above, a link that is clearly established if we accept the connection of the policy sciences to systems thinking and cybernetics. According to Fay, the policy sciences were “a type of ‘policy engineering’: the ‘policy engineer’ . . . is one who seeks the most technically correct answer to political problems in terms of available social scientific knowledge.”92 Fay described the policy sciences as a “set of procedures which enables one to determine the technically best course of action to adopt in order to implement a decision or achieve a goal.”93 While the policy sciences were located in the industrial age, the introduction of automation in the information age is quite distinct. Andrejevic posits that industrial automation replaced manual work, whereas in the information age automation is designed to generate data and “to pre-empt agency, spontaneity, and risk: to map out possible futures before they happen so objectionable ones can be foreclosed and the desirable ones selected.”94

Governing the Future and Feedback Loops

The anticipation and mapping of possible futures, and the links between anticipation and governance, involve the rise of particular forms of expertise to deal with uncertainty, as the future becomes the object of governance. Rose and Abi-Rached posit:

Today we are surrounded by multiple experts of the future, utilizing a range of technologies of anticipation—horizon scanning, foresight, scenario planning, cost-benefit analyses, and many more—that imagine those possible futures in different ways, seeking to bring some aspects about and to avoid others. . . . In the face of such futures, authorities have now the obligation, not merely to “govern the present” but to “govern the future.”95

Anticipating and governing the future mobilize knowledge that is generated by both policy makers and machines, in which “algorithmic rules now generate or construct patterns from the re-assemblage of data.”96 New computing capacity allows for larger data sets to be searched, for new analyses to be undertaken, and for the trope of real-time analytics to become common not only in consumer but also in governance parlance.97 What is significant about the combination of anticipation and AI is the extensive range of applications produced by the intensification of new computing power and the availability of big data, including the capacity of AI to analyze education data autonomously. Also significant is the inclusion of anticipation in the use of machine learning and natural language processing in “tutoring assistants,” the latest iteration of intelligent tutoring systems.98 This type of forecasting exemplifies how mundane areas of education are being influenced by AI. Other areas include student enrollment, both retention and future enrollments, and identifying potential bullying and violence in playgrounds, often under the purview of “data science for social good” programs.99

Automation is beginning to change policy-making, bringing new capacities for prediction and optimization through real-time feedback loops. Williamson suggests that while comparison has been the primary form of governance, it may be that

as a mode of governance comparison may be supplemented with prediction. Performance indicators will be augmented by predicted outcomes, with systems such as . . . [Learning Analytics] calibrated to identify students “at risk” of failure or to automatically intervene to pre-empt deviations from ideal or projected future outcomes.100

Interventions entail situations where the coupling of models, automation, and feedback loops leads to experimentation with, and manipulation of, student behavior as part of the new behavioral focus, or steering, in governance.101 What we are seeing in education is a rationality of anticipation and intervention that combines the behaviorism of machine learning approaches such as reinforcement learning, real-time feedback, and new iterations of behaviorism based in behavioral economics and commonly referred to as “nudging.”102 We thus need to consider what to do about concepts such as agency, intervention, and diagnosis. What does it mean for governing education when inaccessible and unknowable feedback loops are part of, and generated within, probabilistic machine learning and applied to influence or nudge behavior?
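A purely hypothetical sketch of such a loop: a toy “risk” score triggers an automated nudge when it crosses a threshold, and the nudge alters the very data the next prediction is made from. The threshold, the scores, and the assumption that a nudge raises engagement are all invented for illustration.

```python
def risk_score(engagement):
    """Toy model: lower engagement is read as higher 'risk' of failure."""
    return max(0.0, min(1.0, 1.0 - engagement))

def feedback_loop(engagement, threshold=0.5, steps=3, nudge=0.2):
    """At each step, score the student; if risk exceeds the threshold,
    apply an automated 'nudge' assumed to raise engagement. The
    intervention feeds back into the data used for the next prediction."""
    history = []
    for _ in range(steps):
        risk = risk_score(engagement)
        intervened = risk > threshold
        if intervened:
            engagement = min(1.0, engagement + nudge)
        history.append((round(risk, 2), intervened))
    return history

print(feedback_loop(engagement=0.3))
```

Even in this toy version, the second prediction is made against data the system itself has already altered, which is the sense in which such loops pre-empt rather than merely observe behavior.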

Critical Policy Studies, Computation, and Concepts

Contemporary education governance is an intensification of previous forms of classification and calculation, and these intensifications portend the emergence of different political rationalities. Our discussion in this chapter has sought to map out the state of the Anglo-governance model and where it begins to verge into synthetic governance, linked to the changing role of the state through the introduction of comparison, globalization, and networks. The role of the state extends beyond statistical reasoning and is now both globally dispersed and locally intensified within data infrastructures and new forms of automation that work as the premise for, and enabler of, anticipatory governance. This role of the state is congruent with what Cuéllar calls “cyberdelegation” as “the delegation of administrative agency decisions to artificially intelligent systems.”103 This development creates new roles for technology companies that provide AI products as a result of the overlap between network governance and new forms of digitalized data use.104 As Edwards states, “We now live in a world governed not by algorithmic systems per se, but rather by interacting ecologies of algorithmic systems, human individuals, social groups, cultures, and organizations.”105

There is also something more for us to consider. As Galloway and Thacker posit, networks are both dispersed and corporately centralized, and they comprise new forms of control. This is an important point for our elaboration of synthetic governance as a hybrid of human governance and machines, for “the nonhuman quality of networks is precisely what makes them so difficult to grasp. They are . . . a medium of contemporary power, and yet no single subject or group absolutely controls a network.”106 How do we begin to grasp the nonhuman quality of contemporary education governance?

In responding to this challenge, we began to ask whether the conceptual and methodological tools we are so familiar with—those in the field of critical policy studies in education, which is often called policy sociology in the United Kingdom and Australia—are adequate.107 We are beginning to see new areas of policy sociology, such as “digital policy sociology,” and growing attention to the connection between automation and policy.108 Nonetheless, what has become obvious through mapping changes from government to governance, from bureaucracy to anticipation, and from statistics to automation is the need for both descriptions of new digital formations and practices, and new or reinvigorated concepts to study and analyze emergent phenomena.109 We need new research questions and agendas that pertain to the computational turn in social science research, “a turn that necessitates an entirely new way of thinking.”110

The University of Minnesota Press gratefully acknowledges support for the open-access edition of this book from the University of Sydney, the Australian Research Council, and the Social Sciences and Humanities Research Council (SSHRC) of Canada.

A different version of chapter 2 was previously published as Sam Sellar, “Acceleration, Automation, and Pedagogy: How the Prospect of Technological Unemployment Creates New Conditions for Educational Thought,” in Education and Technological Unemployment, ed. M. A. Peters, P. Jandric, and A. J. Means, 131–44 (Dordrecht: Springer, 2019). A different version of chapter 4 was previously published as Kalervo N. Gulson and Sam Sellar, “Emerging Data Infrastructures and the New Topologies of Education Policy,” Environment and Planning D: Society and Space 37, no. 2 (2019): 350–66; and as Sam Sellar and Kalervo N. Gulson, “Dispositions and Situations of Education Governance: The Example of Data Infrastructure in Australian Schooling,” in Education Governance and Social Theory: Interdisciplinary Approaches to Research, ed. A. Wilkins and A. Olmedo, 63–79 (London: Bloomsbury Academic, 2018); Bloomsbury Academic is an imprint of Bloomsbury Publishing PLC. A different version of chapter 6 was published as Sam Sellar and Kalervo N. Gulson, “Becoming Information Centric: The Emergence of New Cognitive Infrastructures in Education Policy,” Journal of Education Policy 36, no. 3 (2021): 309–26, available at https://www.tandfonline.com.

Copyright 2022 by the Regents of the University of Minnesota