
5

Patterns

Facial Recognition and the Human in the Loop

In William Gibson’s Zero History, the character Milgrim describes the desire of Bigend, part advertising man, part futurist:

“But not all secrets are information people are trying to conceal. Some secrets are information that’s there, but people can’t have it.”

“There where?”

“It just is, in the world. I’d asked him what piece of information he’d most want to have, that he didn’t have, if he could learn any secret. And he said that he’d want something nobody had ever been able to have.”

“Yes?”

“The next day’s order flow. Or really the next hour’s, or the next minute’s.”

“But what is it?”

“It’s the aggregate of all the orders in the market. Everything anyone is about to buy or sell, all of it. Stocks, bonds, gold, anything. If I understood him, that information exists, at any given moment, but there’s no aggregator. It exists, constantly, but is unknowable. If someone were able to aggregate that, the market would cease to be real.”

“Why?” . . .

“Because the market is the inability to aggregate the order flow at any given moment.”1

Bigend’s advertising company, Blue Ant, utilizes pattern recognition to anticipate and create the next trend. Bigend desires aggregated data to not only match patterns but to predict them in advance and eliminate chance. Zero History is speculative fiction, where futures are predicated on plausible trajectories emanating from the present. Nonetheless, we suggest that Gibson’s speculative future is already evident in the emerging use of artificial intelligence (AI) in education governance. While the previous chapter outlined how data infrastructures are built and deployed to change educational governance, in this chapter we discuss some of the effects of automation implemented within these data infrastructures, and the implications of eliding distinctions between humans and machines in governance.

This chapter thus examines the application of automation, specifically machine learning and pattern recognition, and the governance effects of this application. We do not discuss issues of privacy or data ownership; rather, we investigate what gets done when machines begin to do education governance. We are interested in how governance is changing through pattern recognition technologies. These AI-supported products, such as facial recognition, are part of nascent automated education governance and used in attempts to anticipate tomorrow by aggregating today’s data. Our focus is on the recursive aspects of facial recognition in governance, and we aim to illustrate how automated technologies such as facial recognition can not only capture behaviors but also “model, anticipate and pre-emptively affect possible behaviors.”2

The first part of the chapter provides an overview of facial recognition in education, a nascent yet already controversial area of education technology. We then consider the role of facial recognition in digital and synthetic governance through an example of facial recognition in Chinese classrooms—the Class Care System—which operates as a governance technology. We begin to explore what kind of governance is produced when machine learning not only identifies but also generates new patterns—that is, when pattern matching becomes pattern making as part of automation. We discuss the problems of misrecognition in pattern recognition technologies and the ways in which this introduces a technical and political role for uncertainty in algorithmic calculations and governance informed by these calculations. We then problematize the idea that the “human in the loop” can explain patterns and ameliorate uncertainty in automated systems, drawing on interviews with computer scientists who research and build AI. We conclude the chapter by asking, In what ways do pattern matching technologies and techniques both supplement existing forms of educational governance and introduce new ones?

Facial Recognition in Schools

Facial recognition systems are part of the computer vision field, including aspects such as image recognition and machine vision, some of which is supported by machine learning. Facial recognition uses algorithms to create a “facial signature” from photos in databases, and “a learning program captures the pattern specific to that person and checks for that pattern in a given image.”3 This is both pattern matching, where there are predetermined patterns, and pattern recognition, which is looking for likely patterns in data. Amoore suggests that “with the growing abundance of digital images and cloud data for training machine learning algorithms, the process of learning shifts from recognition via classification rules to recognition via input data clusters.”4 A facial recognition system codes what we perceive as facial features based on

positioning and distancing between sets of geometric coordinates (for example, the centre of each pupil, the bridge of a nose, the ends of an eyebrow). Given the unique nature of every person’s “face-print,” when the geometric properties of a captured image are compared against a database of pre-existing personally identifiable images, the system should be able to make a match with a specific individual.5
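
To make the geometric description above concrete, the following is a minimal, illustrative sketch of how such a “face-print” comparison could be computed. It is our own simplification, not the implementation of any particular system: the landmark names, the distance measure, and the matching threshold are all assumptions.

```python
import numpy as np

# Hypothetical landmark set; the names are illustrative only.
LANDMARKS = ["left_pupil", "right_pupil", "nose_bridge",
             "left_brow_end", "right_brow_end"]

def face_signature(landmarks: dict) -> np.ndarray:
    """Encode a face as the pairwise distances between landmark points,
    normalized by the inter-pupil distance so the signature is scale invariant."""
    points = np.array([landmarks[name] for name in LANDMARKS], dtype=float)
    diffs = points[:, None, :] - points[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)
    upper = np.triu_indices(len(LANDMARKS), k=1)     # unique landmark pairs
    scale = np.linalg.norm(points[0] - points[1])    # inter-pupil distance
    return distances[upper] / scale

def best_match(probe: dict, database: dict, threshold: float = 0.05):
    """Compare a captured face against a database of stored signatures and
    return the closest identity if it falls within the (assumed) threshold."""
    probe_sig = face_signature(probe)
    scores = {name: float(np.linalg.norm(probe_sig - sig))
              for name, sig in database.items()}
    name, score = min(scores.items(), key=lambda kv: kv[1])
    return name if score < threshold else None
```

In practice, as the Amoore quotation above notes, contemporary systems increasingly learn such signatures from clusters of training images rather than from hand-specified coordinates, but the underlying move of matching a captured signature against a pre-existing database is the same.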

Facial recognition technologies are being widely applied, “from apparently harmless uses such as tagging a photo on social media to ethically dubious applications such as racial or sexual profiling.”6 Computer vision that is underpinned by machine learning is a mobile technology, one that can be shifted easily from application to application—from policing and the military to schooling. Applications in the United States, United Kingdom, and Australia include office-entry systems, factory workforce “management,” and airport immigration.7 Myriad potential uses for facial recognition in education are increasingly being put forward. These include school responses to violence in the United States, where facial recognition is the latest surveillance technology introduced ostensibly for safety reasons, as well as authentication for examinations and online learning.8

A focus on facial recognition takes us in an admittedly speculative direction, given its currently limited use in education. Nevertheless, it ought not be dismissed, because facial recognition is easily incorporated into existing data infrastructures, such as student information systems (SIS), which are a key part of school-based data infrastructures, covering everything from managing and aggregating testing and administrative data to providing parent portals.9 SIS have become a key part of data-based decision-making in schools and a way of connecting central education departments and schools through interoperable hub-and-spoke infrastructures. These systems reinforce the idea that schools are not only producers and users of data but also generators of analysis.10

Some applications of facial recognition, such as attendance taking, are easily integrated into SIS and existing data infrastructures in education. This integration is possible because facial recognition is a

technology that fits neatly with established school practices, processes and infrastructures. Crucially, schools have long traditions of routinely collecting and maintaining photographic records of students’ faces. Facial recognition systems are therefore able to appropriate existing name-and-face photographic databases.11

While it is possible to build systems in house (that is, with the expertise of school employees), schools and educational systems often introduce facial recognition via proprietary systems. There are three main ways through which proprietary facial recognition, and associated applications, is being introduced into education governance. The first is through computer vision capacities within off-the-shelf platforms that have embedded AI services.12 For example, Microsoft Azure includes MS Face and emotional AI software, and Amazon Web Services (AWS) includes Rekognition.13 The second way in which facial recognition is being introduced is through small-scale pilots.14 The companies selling facial recognition products to schools are often small start-ups. However, like other computer vision companies providing facial recognition to services from police to medical imaging, these start-ups typically rely on enterprise-level services and infrastructure provided by the likes of Microsoft (with Azure) and Amazon (with AWS).15 The third way that facial recognition can be introduced is through national level industry and research policies that support pilot projects, such as the Class Care System, which is the example we focus on in this chapter. Launched in December 2017, Class Care is part of an overarching national level AI program. In July 2017, China’s highest governmental body, the State Council, released a policy initiative called the “Next Generation Artificial Intelligence Development Plan” that has economic, political, and social dimensions.16 While this plan has a focus on introducing knowledge about AI into curricula, Class Care is related to these broad aims of building the AI industry in China.17 As such, “educational AI . . . [is] understood as occupying a special place among the entrepreneurial cultures of China’s technology sector.”18

Facial recognition’s introduction into an array of social policy areas, especially areas such as policing, has been very controversial. Indeed, companies such as IBM have stated they will no longer produce facial recognition products, and other companies have put a moratorium on development.19 Additionally, facial recognition is being challenged upon introduction into education. In 2019, a Swedish school district trialed a “Future Classroom” program using facial recognition technology developed by Tieto, a Swedish technology company.20 A class of twenty-two students used facial recognition for roll call. The data were photographs and full names stored on a local area network. The school board claimed that 17,280 hours would be saved per year, based on roll call taking 10 minutes for each teacher each day. The school aimed to use this technology in all classes. The Swedish data protection authority, under the European Union’s General Data Protection Regulation, stopped the trial and fined the school board. Among a variety of reasons for the adverse finding, the authority judged that there was no compelling use case for facial recognition and that “registering attendance in class can be made in less intrusive ways, meaning that the use of facial recognition was disproportionate to the purpose.”21

Despite the lack of a compelling use case in this Swedish example and more broadly, we suggest that facial recognition is still being introduced as part of the ongoing automating of education governance. The political rationality underlying this development, rather than the detail of individual cases, is the focus of our analysis. Following Andrejevic’s work on media and automation, we can see that facial recognition may well be part of “a cascading process of automation [in which] large databases require automated information processing, which, in turn, leads to automated decision-making processes.”22 We do not suggest that this “cascade” is a form of technological determinism, but we are suggesting that facial recognition is made possible by and accelerates datafication, shaping the conditions of possibility for practices in education to be deemed legitimate and desirable. As such, in what follows we examine how the introduction of facial recognition requires us to not only deal with the problems associated with this technology, but also to continue to interrogate broader governance shifts influenced by growing automation.

Facial Recognition, Patterns, and Algorithmic Governance

This section begins to examine how the Class Care System, an SIS supported by facial recognition, creates interventions in the physical world of the classroom. We argue that these interventions are exteriorizations that augment governance but also recuperate behaviorist approaches; change what it means to be a student, teacher, and administrator; and disrupt normative actions and forms of human intentionality in decision-making. We look at the ways pattern matching and recognition in computer vision become transformed into modes of governance, or indeed transform governance, using the example of the Class Care System.

For students and teachers in seven Chinese schools, each day involves being recorded, measured, and rendered into digital data through the Class Care System. Class Care works in five steps (a schematic sketch of the pipeline follows this list):

  1. Capture. Using cameras in a classroom, the system takes a photo of the entire class once every second. This image is sent to a local server elsewhere in the school. It is not a cloud service, but this does not mean that the data are not used to train other products in the Hanwang range. The Class Care System is made by Hanwang Education, a subsidiary of Hanwang Technology, a “pattern matching” company that also produces surveillance products used by China’s Ministry of Public Security.23
  2. Scan. Software analyzes the footage and identifies each student’s face.
  3. Store. The analysis of the facial data is stored on the server.
  4. Classification. Using an assessment of each face and student body position, the software uses deep learning neural networks to analyze the images according to behavioral categories: listening, writing, sleeping, answering.
  5. Scoring. Deep learning algorithms pattern-match these categories to translate images and behavioral cues into a score between 0 and 100 for the week. These scores are provided to teachers, parents, and school administrators. The data are captured in real time but analyzed and reported weekly.24
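
To make the flow of these five steps easier to follow, the sketch below condenses them into a single, deliberately simplified pipeline. It is hypothetical rather than a description of Hanwang’s code: the function names, the categories treated as “on task,” and the scoring rule are our own assumptions for illustration.

```python
from collections import defaultdict

# Behavioral categories reported for step 4 of the Class Care System.
CATEGORIES = ["listening", "writing", "sleeping", "answering"]

def classify_behavior(face_crop):
    """Placeholder for the deep learning classifier in step 4; a real system
    would run a trained neural network over the cropped face image."""
    raise NotImplementedError("assumed model, not implemented here")

def weekly_scores(frames, recognize_faces, classify=classify_behavior):
    """Steps 1-5 in miniature: for each frame captured (one per second),
    identify each student's face, classify the behavior, and aggregate a
    0-100 score per student for the week (the scoring rule is assumed)."""
    counts = defaultdict(lambda: defaultdict(int))            # student -> category -> count
    for frame in frames:                                      # step 1: capture
        for student_id, face_crop in recognize_faces(frame):  # step 2: scan
            category = classify(face_crop)                    # step 4: classification
            counts[student_id][category] += 1                 # step 3: store (in memory here)
    scores = {}
    for student_id, observed in counts.items():               # step 5: scoring
        total = sum(observed.values())
        on_task = sum(observed[c] for c in ("listening", "writing", "answering"))
        scores[student_id] = round(100 * on_task / total) if total else 0
    return scores
```

Even in this reduced form, the sketch makes visible the point developed below: the move from steps 1 through 3 to steps 4 and 5 is where identification becomes evaluation, with the classifier’s labels standing in for judgments about learning.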

The promise of the Class Care System is that school administrators will know more about the daily practices of their classrooms by receiving ongoing data and analysis on students and teachers. While there are many interesting areas to delve into about this product and the extensive literature on AI and learning, our interest is in the ways these five steps not only involve calculations that evaluate students but also illustrate the political rationalities of algorithmic governance. These rationalities are congruent with what we are framing as synthetic governance, and they become evident in what Crampton posits as the role of facial recognition in algorithmic governance. This role is played by (1) turning facial data into statistical data (e.g., Class Care’s turning students’ faces into administrative and performance data); (2) producing knowledge from this data through automation (in Class Care, the use of neural networks to produce a score from the labeled expressions [happy, etc.]); and (3) involving anticipation through behaviorism: “action . . . taken on behaviors: the goal here is to anticipate and, if necessary, modify individuals’ behaviors.”25

The third aspect, of anticipation, looks somewhat different when facial recognition is used in education, as it takes place within carefully articulated and already existing regimes of anticipation based on previous knowledge (e.g., tests, attendance, behavior records). For example, while Class Care provides predictions of how students are learning, and ways of intervening, it also only provides a report once a week—hence, the real-time data is perhaps less about anticipation and more about providing new forms of certainty about behavioral intervention and control. The pursuit of certainty is evident in steps 4 and 5 of Class Care—from classification to scoring—where pattern matching becomes pattern recognition. That is, it extends the connecting of an identified face with a name (the move from facial detection to recognition) to assumptions about what else can be known from the face, aside from the student’s identity. The system aims to add to name verification by recognizing and continuing to learn what a particular facial expression means (e.g., if closing the eyes is sleeping rather than thinking deeply), what score should be attributed to this expression, what sort of behavior is being exhibited, and what can be done to modify or support this behavior.

Thus, facial recognition technology enters the realm of governance in the simple act of automating and combining, indeed conflating, administrative roles with learning. In the Class Care System, the function of recording attendance (through being captured by the camera) becomes crucial for the second aspect, the focus on “time on task,” understood in relation to scores. Visual data are used to know how much time in each period students spend focused on the work at hand, or whether they are responding positively to the material being presented.

The Class Care System aims to make long-standing governing practices within compulsory schooling more efficient. However, it also extends the biopolitical function of governing into the realm of biometrics, which “is predicated on the technological reading and measurement of the body.”26 While biopolitics and biometrics have long been entwined, what is different in this case is how facial detection technologies “scan and analyse facial expressions in order to infer people’s moods, emotions and affective states.”27 In the case of emotive AI, or the scoring approach in the Class Care System, there is a claim about correspondence between faces and internal states. This is the work of facial coding, based on assumptions that emotions are “evolutionary, biological and not learnt. Importantly, the simplicity and universality of a basic emotions worldview mean that this readily translates to capture technologies, not least the cameras employed in facial coding practices.”28 This becomes problematic if emotions are seen as having a social nature. Additionally, there is now extensive work criticizing the claims made for facial recognition and the interventions based on it, and emerging research is divided over whether inferring internal states from faces is possible at all.29

Exteriority is reified in the use of empathic AI, in which, to ostensibly better understand our internal states (or those of students), neural networks identify and create patterns of understanding between gestures and pedagogical and governing categories. In Class Care there is a correspondence between the system’s learning about which behaviors indicate student learning and a “biopolitical strategy of the calculation and transformation of the emotional life of the [student].”30 As Celis Bueno posits, following Deleuze and Guattari,

the face is always political or, to put it differently, . . . the question regarding the relation between politics and the face has to focus on those concrete circumstances which trigger the social production of the face.31

Facial cues and expressions (e.g., attention) are part of “facial coding” in which an “affective approach uses computer vision techniques to code combinations of movement to arrive at interpretations of emotional categories and states.”32 Emotive AI requires a set of assumptions regarding the possibility of equating personality traits (e.g., the big five personality traits) with facial expressions through machine learning.33 This allows evaluations of facial expressions to be scored, through neural networks, as proxies for emotions, which are then interpreted to inform interventions with students.

Facial recognition and machine learning—and extended applications such as emotion identification—when used in schools assume that a machine can access students’ cognitive processes through observation. Facial recognition, when used in education governance, accentuates notions of individuality while sublimating all the problems and conflicts associated with this technology. While the notion of “empathic AI” has been a key part of the development of adaptive or “affective” computing, in the Class Care System it is not personalization in the sense of the technology interacting with individual students.34 Rather, it is a biopolitical strategy for managing the school population; it is about creating learning and governing classifications based on facial signatures.

Misrecognizing (Technical and Political) Patterns

In the case of Class Care, decisions about interventions to support students can end up being reduced to a chain of assumptions and connections made across the steps of a probabilistic system: from coding a facial expression, to what the expression means for learning, to decisions about whether the student attached to a particular face needs intervention. These assessments are supplementary to the actions of teachers but can carry a sense of certainty into weekly summative assessments of learning in the classroom. The example of Class Care helps us to explore the range of issues with facial recognition that have arisen in its applications elsewhere. In this section we focus on the issue of misrecognition and discuss its technical and political aspects.

A key technical aspect of misrecognition relates to bias, which is a feature inherent to machine learning. Amoore contends,

When deep neural network algorithms learn . . . they adjust themselves in relation to the features of their environment. . . . Notwithstanding the widespread societal calls for algorithms to be rendered free of bias or to have their assumptions extracted, they categorically require bias and assumptions to function in the world.35

Nonetheless, the operation of facial recognition, and the concerns around its bias, remain framed by the idea that bias is an error rather than a function. Conversely, we might see the calls for algorithms to be “free” of bias as calls for algorithms to be “differently biased.” This matters if, for example, a facial recognition system is trained on data from white people only, or when the outcomes of automated systems are based on “historical patterns of discrimination and classification, which often construct harmful representations of people based on perceived differences, . . . reflected in the assumptions and data that inform AI systems, often resulting in allocative harms.”36

A consequence of bias is that a lack of accuracy in facial recognition systems means some populations—such as Black, Indigenous, and People of Color (BIPOC)—are being unfairly locked out of systems, such as office building access.37 A more accurate system could be built by including BIPOC in the training data, thereby capturing all faces more reliably. Our interest here is that such certainty does not necessarily mean a better outcome, and building robust (that is, accurate) models can still have pernicious consequences.

As numerous works on census data and classification, statistics, and judgment have shown, there is no easy way out of the essentializing function of classification.38 As Hacking has argued about race and statistics, “Classification and judgement are seldom separable. Racial classification is evaluation.”39 This issue of classification is evident when looking at facial recognition applications.40 A more technically accurate system means that populations that are already under heavy surveillance just become even more accurately monitored. As the system is made more inclusive, this becomes a double-edged sword in relation to race, visibility, and hypervisibility, where “inclusion is no straightforward good but is often a form of unwanted exposure.”41 It is easy to imagine how in education more accurate facial recognition could exacerbate the surveillance and punishment of BIPOC students.

Explaining Patterns: Humans in the Loop

While Class Care is a novel application, it is also one of many machine learning–based systems that are being introduced throughout education. Class Care exteriorizes learning, and behaviorism informs the decision-making (e.g., scoring). Furthermore, the system provides a new set of internal operations, not as human cognitive processes but as mathematical classifications. We can map the steps of Class Care as described above, but we cannot map the decision-making processes of its neural networks. As such, with the use of machine learning, new questions are arising about what now constitutes governance in schools, including how we identify and evaluate the veracity of decisions made by machines.

While this is not a new question, it is relatively new for education. The issue of how to trust autonomous systems is central to the field of AI. The demand to explain a system was a key part of expert systems, when AI research was primarily located in systems engineering and scheduling was the primary form of applied AI. A focus of expert systems was being able to explain the rules that were used. What is different now is that the move from rule-based systems to machine learning has introduced opaque calculations, or what is vernacularly known as the problem of opening the black box of machine learning. A Canadian computer scientist who has worked in the AI field for over forty years outlined some of the issues that have arisen with autonomous systems that use neural networks, such as Class Care:

The main concern started with the fact that Deep Learning systems are black boxes, right? When it makes a decision, you can’t say, “Why did you make that decision?” . . . I think now it’s sort of expanded to just trust. How do you trust an autonomous system?

This aim to develop trustworthy AI has been part of the explainable AI field, which incorporates both computer science and the social sciences to identify and evaluate design decisions.42 This design approach is encapsulated in the notion of the “human in the loop,” which describes the aim to involve people in a feedback loop of training, tuning, and testing a particular algorithm. This can include humans labeling data, “tuning an algorithm,” and finally testing a model based on its outputs—that is, verifying a decision at the point of application. For example, in the case of Class Care, this could include cross-referencing the weekly scores of students’ progress with the teacher’s assessment.
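
As a purely illustrative sketch of this design approach, the following shows how a machine-generated weekly score might only lead to an intervention after a teacher has reviewed it; the workflow, names, and threshold are hypothetical and are not drawn from Class Care’s actual interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    student_id: str
    machine_score: int             # weekly score produced by the model
    teacher_score: Optional[int]   # teacher's own assessment, if provided
    approved: bool                 # whether the teacher confirmed the output

def decide_intervention(review: Review, threshold: int = 50) -> bool:
    """Human-in-the-loop check: the model flags a student, but an intervention
    only proceeds if the teacher has reviewed the case, and the teacher's own
    assessment (when given) takes precedence over the machine score."""
    score = review.teacher_score if review.teacher_score is not None else review.machine_score
    return review.approved and score < threshold

# Example: the model flags a student, the teacher disagrees, so no intervention.
example = Review(student_id="s-042", machine_score=38, teacher_score=72, approved=True)
assert decide_intervention(example) is False
```

Even here, the teacher’s correction is assumed to be reliable and the machine’s score intelligible, which is precisely what the discussion that follows calls into question.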

Another design approach for including humans in the loop is to develop an algorithm to show what is happening in a “black box.” A German natural language processing computer scientist explained the opacity of the relationship between inputs and outputs:

In the abstract, like theoretically, because this and that happened and there must be some, you know, non-linear combinations of inputs that combine . . . but . . . that doesn’t really let me understand it. . . . So I cannot say, why is this happening? . . . What would I have to change in the world, in the machine, in order to get a different output? . . . How do we take this black box and rip the lid off and make things interpretable? To some extent, what we’re doing is, we’re now hooking up a second, well, transparent box that learns to take the inner workings of a black box and translate that into human readable, or interpretable things.

While an explanatory algorithm may help us to open the black box, as Amoore argues, it is difficult to ascertain what is identifiably human in machine learning, where the calculations are “never authored by a clearly identifiable human, but rather from a composite of algorithm designers, frontline officers, the experimental models of the mathematical and physical sciences, a training dataset, and the generative capacities of machine-learning classifiers working on entities and events.”43 This idea of a composite “human” is evident in facial recognition. For example, the work of humans labeling data for low wages in the Global South underpins facial recognition software.44 If a company is not a global technology company, it is most likely using one of these freely available training sets, with all their physiognomic associations between label and face (e.g., BIPOC pictures labeled in multiple categories, including criminal). While Noble addresses this in relation to the Google search engine, where these associations can be corrected through oversight (e.g., in-house anthropologists pointing out these problems), the proliferation of “off-the-shelf” applications means that facial recognition can be developed without any sense of these problems.45

A nondesign issue regarding humans in the loop is the question of “which” humans are involved in the development of AI. Unsurprisingly, computer science is a mostly White and male field. The need to focus on who develops AI is premised on computation as the outcome of human relations, or as Campolo and coauthors posit, “Those who design, develop, and maintain AI systems will shape such systems within their own understanding of the world.”46 As such, this is a politics of recognition for algorithms, in that “we must do more than ask whether humans are in the loop—a phrase commonly used in the AI community to refer to AI systems that operate under the guidance of human decision makers—but which humans are in the loop.”47 This problem can be illustrated by extending the example above to include teacher assessments to verify Class Care’s score of student progress. In cases where a student receives a low score, any number of factors may lead a teacher to reinforce or contradict that score, for the fallibility of teacher judgments is well known, especially in the areas of race and ethnicity.48

Humans in the loop, therefore, both proliferate problems and provide technical corrections. As Katzenbach and Ulbricht posit,

While earlier approaches conceived of algorithms as either augmenting or reducing human agency, it has become clear that the interaction between human and machine agents is complex and needs more differentiation. While typologies and debates typically construct a binary distinction between humans-in-the-loop vs. humans-out-of-the-loop, this dichotomy does not hold for in-depth analyses of the manifold realities of human-computer-interaction.49

We argue, therefore, that there are challenges for education governance concerning the authority and legitimacy humans are given to provide correction for automated systems. Amoore provides an insight into this issue by asking where we would locate a human in the loop:

Where would one locate the account of a first-person subject amid the limitless feedback loops and back propagation of the machine learning algorithm? . . . Who precisely is the human in the loop? The human with a definite article, the human, stands in for a more plural and indefinite life, where humans who are already multiple generate emergent effects in communion with algorithms.50

Amoore identifies a more fundamental problem of automation: the question of whether authority lies with the machine or with a human and, if the latter, which human it is and where this human is involved in the design process. The idea of the human in the loop is premised on a clear demarcation between the inside and outside of calculation. That is, the problem is not solved just by inserting humans into an algorithmic process or correcting for a lack of diversity among developers. Human-in-the-loop can be considered a continuation of the “self-determining modern subject (thinking, acting and living autonomously from the instrument he uses) foreclosing the possibility of reinventing what an instrumental subject can be beyond the dominance of the servo-mechanic models of machines.”51 As such, the human in the loop is inside the decision-making but outside the machine—and responses to AI systems remain similarly external, in things such as ensuring the ethics, trustworthiness, or explainability of the system.

Already Coded: Pattern Recognition and Pattern Making

Facial recognition is congruent with the existing rationalities of schooling, in which desires for personalization run up against dwindling resources for teaching and professional development.52 Nonetheless, it seems clear that an area of AI such as facial recognition in education will continue both to be treated with suspicion and to be subject to overinflated ambit claims. It is possible that by the time of publication, facial recognition will be banned in schools as part of broader political moves, despite ameliorative responses such as introducing humans in the loop and correcting for issues of misrecognition. It is likely that there will be continued pressure to ask whether the substantive benefits of pattern recognition in schools outweigh the risks. As Andrejevic and Selwyn suggest, “A strong case can be made that any ‘added value’ or gained ‘efficiencies’ are outweighed by the consequences of automated sorting and classification for students.”53

Nonetheless, regardless of the question about a compelling use case, we think the substantive questions about governance and pattern recognition technologies will continue, because computer vision in classrooms is already part of hybrid control systems (e.g., enterprise-level systems such as Azure) that focus on augmenting administrative functions to modulate student and teacher behaviors. As we have outlined, the nascent introduction of pattern matching and recognition into education governance settings allows us to begin to explore the consequences of the application of automation in governance and its recursive features. Much like the data that Bigend desires, educational information already exists as actors (teachers, students, administrators, policy makers) and sets of rules based around knowledge outcomes, administrative procedures and policies, and established practices (e.g., curriculum, assessment, and pedagogy).

In this chapter, we have highlighted both a bifurcating and a conflating of the inside and outside of human and machine in the ways in which automated systems are introduced. In the case of Class Care, on the one hand, facial recognition systems pattern-match to transform points on a face into identity decisions and evaluations about learning. This matching forms the basis of decision-making within existing systems such as SIS. These decisions can ostensibly be made understandable by introducing a human in the loop, either as designer or user. However, the notion of the human in the loop is irrelevant to our notion of synthetic governance. As Amoore highlights, there is recursiveness built into automated technologies, and in ways that maintain humans in recursive relations with machines and each other. Humans are always in the loop, but this does not provide a clear basis for controlling and correcting the decisions of automated systems. Furthermore, reinscribing the human as coherent and essential, and as dichotomous with machine rationality, does not get us past classification and representation problems and avoids acknowledgment that humans, and in our case education governance, are already coded.

On the other hand, the same process of coded transformation collapses the distinction between human and machine, and in the case of neural networks, produces pattern recognition that is only machine readable and is then converted into human-interpretable decisions through scores. However, the calculation that leads to this human interpretability, as Celis Bueno notes, “operates through what Paglen . . . has called ‘invisible images,’ that is, images created by machines and for machines, and which remain ‘invisible’ to the human eye.”54

The production of exteriority using facial recognition in education governance indicates that there may also be another way of entering this discussion. It is messier and more tentative, and it involves probing the ruptures in our understandings of human and machine interpretability and normativity. It is an approach predicated on the impossibility of assuming either that technoscience and humans are discrete or that the latter has ever been stable.55 In the next chapter, we unpack the link between the automated collection and analysis of data, intervention (or behavior), and the role of human and machine agency in this process. However, and this is far more precarious as algorithmic decision-making becomes widespread, we must come to grips with the idea that what is machine interpretable about patterns is not necessarily, or perhaps by necessity cannot be, human interpretable. This idea may help us to understand why a machine can do something a human cannot. That is, “contemporary algorithms are not so much transgressing settled societal norms as establishing new patterns of good and bad, new thresholds of normality and abnormality, against which actions are calibrated.”56 This is the arranging aspect of algorithms, where machine learning is not just pattern matching and recognition but pattern creating.57
