7. A Turing Intermezzo
Alan Turing has unfortunately become the hero of our era. He did pioneer the mathematical formalizations that helped build computers after the Second World War. He was certainly neither the only nor the first researcher to work in that direction at the time; but, in a series of lectures, reports, and public interventions made between 1947 and 1952, Turing structured the conceptual debate around artificial intelligence in a way that is largely maintained today, especially, but not only, within the engineering community. His argumentation is built into the current reality of AI, although the “automatic computing machines” we use are much more powerful than the ones Turing himself helped to perfect.
The first fundamental element of this phraseology is its false separation between human and mechanical brains. Throughout his essays, Turing regularly opposes the “human computer” to the “digital computer,” thereby acting as if all wet brains could be said to function in one and the same way and as if machines were completely separate from the “intelligence” of their designers—all assumptions I refute.1 Yet this strongly marked divide is immediately voided, as digital computers are assembled according to “the analogy with the human brain,” which is “used as a guiding principle,” so that “they could appropriately be described as brains.”2 Turing, it is true, also hints at potential divergences despite the reciprocal analogy; we shall consider them a bit later, but, in fact, they do not alter the fundamental affinity between the mind of automatic calculators and that of humans. Everything depends on “storage capacity,” a phrase Turing constantly invokes when explaining disappointment with computers.3 The limitations of the machines of the 1940s, he says, are actually due to their “very little storage”; when “a storage capacity of about 10⁹” is reached in the future, everything will change.4 This view, explicitly rejected by other AI pioneers such as Claude Shannon, animates the twenty-first-century proponents of very large language models.5 (In a widely read paper first made available online in 2019, engineer Rich Sutton could still present as “the biggest lesson . . . from 70 years of AI research” that “breakthrough progress eventually arrives” through “increased computation.”)6 In sum, the categorical separation between Homo sapiens and “the” machine (in turn based on a certain comprehension of “the human brain”) is a prelude to their hypostasis.
While Turing’s work favored the advent of cognitive science, the psychological theory he relies upon remains behaviorist. This is why he focuses so much on parlor games in the different iterations of the tests he outlines. We are only concerned with the appearance of a similarity, because we suspend any judgment on interiority. And, just as happened with behaviorism in general, an originally sound epistemological principle (since one can only evaluate external traits, we should refrain from making conjectures about inner functions) turned into its own conclusion (since we can explain the mind through its external behavior, there is nothing of interest within). In his project for an “education” of the “universal machine,” Turing spells out that the “training of the human child depends largely on a system of rewards and punishments,” which he immediately adapts into a method to improve computers through “two interfering inputs” with “reward (R)” and “punishment (P).”7 Behaviorists showed that conditioning exists, but it would seem difficult to admit today that education is only, or even “largely,” an effect of conditioning. Yet manual interventions (“reinforcement”) in the training of our LLMs rely on this same system, which also happens to be an essential vector of the ongoing reprogramming of humans through social media. Quite logically, then, the Turing tests assess the faculty of imitation. Imitation is all that is needed: “to show intelligent behavior” equals success in “the imitation game.”8 Through its legacy, Turing’s multifold reduction of the problem, anchored in a conception of the animal mind both outdated and inaccurate, has led to entire lines of research within contemporary AI whose overarching goal is the passing of “the test,” independently of its theoretical relevance and of the supposed intelligence of the machine. To be sure, some versions of the test can be passed convincingly, especially by the GPTs that have been engineered precisely to do that (imitation with variability, in order to avoid both randomness and complete predictability). For the record, I also note that, for Turing, the main option for an analysis of the mind escaping behaviorism is . . . parapsychology, the only “argument” that seems “strong” enough to the scientist’s “mind” to potentially displace the final equation of the brain and the computer.9 Barring that, the few differences that Turing enunciates (wit, free will, surprise, humor) may ultimately be ascribed to storage size (once again), to the illusions of consciousness (for liberum arbitrium), or to the role of randomness.
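Turing’s reward-and-punishment scheme can be rendered concrete with a minimal sketch, under assumptions of my own: a toy agent whose possible responses each carry a weight, multiplicatively amplified by a reward signal (R) and dampened by a punishment signal (P). The agent, its weights, and its learning rate are hypothetical constructs for illustration, not Turing’s actual circuitry and not the “reinforcement” pipeline of any existing LLM.

```python
import random

# A toy rendering of conditioning via "two interfering inputs": each possible
# response carries a weight; reward (R) amplifies it, punishment (P) dampens it.
# Hypothetical sketch only -- neither Turing's design nor a real RLHF pipeline.
class ConditionedAgent:
    def __init__(self, responses, rate=0.5):
        self.weights = {r: 1.0 for r in responses}  # initial indifference
        self.rate = rate

    def act(self):
        # Pick a response with probability proportional to its current weight.
        options = list(self.weights)
        return random.choices(options, weights=[self.weights[o] for o in options])[0]

    def reward(self, response):   # input R: reinforce the behavior
        self.weights[response] *= 1 + self.rate

    def punish(self, response):   # input P: suppress the behavior
        self.weights[response] *= 1 - self.rate


agent = ConditionedAgent(["imitate", "deviate"])
for _ in range(20):
    choice = agent.act()
    if choice == "imitate":   # the trainer rewards imitation...
        agent.reward(choice)
    else:                     # ...and punishes deviation
        agent.punish(choice)
print(agent.weights)  # "imitate" now dominates
```

The poverty of the sketch is the point: nothing in this loop understands anything; behavior is merely shaped toward what the trainer recognizes.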
This implied metaphysics involves a political organization where human agents could be retrained—in line with Walden Two, the 1948 novel written by the American pope of behaviorism B. F. Skinner—but through “games” with computers. In his 1947 “Lecture on the Automatic Computing Engine” (ACE), the English mathematician assigns those who “work with the ACE” to one of two classes: the “masters,” who build the machine, and the “servants,” who feed it with information. Turing adds that “as time goes on the calculator itself will take over the functions both of masters and servants.” “It may happen however that the masters will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. . . . I think that a reaction of this kind is a real danger.”10 A few years later, speaking to a larger audience on BBC radio, Turing tones down his rhetoric and no longer mentions “servants,” insisting instead on the figure of “the intellectuals who were afraid of being put out of a job” but “would be mistaken about this.”11 It is patent that the anxiety of “the intellectuals,” a concern we find voiced again nowadays, is justified if “the calculator” can actually “take over the functions” (which, I argue, is not always the case and depends on a series of dubious assumptions about the mind), or if the powers that be believe in this fantasy (in a self-fulfilling prophecy mainly derived from Turing), or if we have abandoned our own “mastery” over our own supposed destiny (through our further standardization, for instance). In this respect, we could all become “servants,” no matter what. This would at least grant us the merit, in Turing’s eyes, of not treating computers “as slaves.”12 What should scholars do in the time of AI, according to Turing? “Trying to understand what the machines were trying to say,” a brilliant idea that, at the very last minute, the scientist added to the typescript of his 1951 radio talk.13 Of course, the machines are not trying to say anything; they simply have a go at saying things we would recognize.
Undeniably, the crux of the matter is not the power of technique but our own assessment of mental capabilities. Tellingly, Turing, like so many before and after him, is committed to reducing the expanse of the noetic. On the one hand, he writes as if the “mechanical brains” were not operating on the basis of encoded instructions but “interpret[ing]” a “sort of symbolic language,” implying that reading is the same as decoding, a misconception reinforced by his practice of war cryptography.14 On the other hand, he essentially denies the existence of anything mental besides algorithmic operations (or telepathy). Ada Lovelace, Lord Byron’s daughter, published a series of observations about Charles Babbage’s “analytical engine” in 1843. Most of them are mathematical, but, in her last note, Lovelace attempts to eschew what she sees as the two tendencies vis-à-vis novelties: that is, “to overrate” and “to undervalue.” To defeat “exaggerated ideas,” she advances that this early computer “has no pretensions whatever to originate any thing.”15 This remark is too negative for Turing, who turns it into “Lady Lovelace’s objection” (an objection it was not), one admitting of several “variants.”16 Ultimately, his response is: “Who can be certain that ‘original work’ that he [sic] has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known principles.” The question is so rhetorical that it does not even end with a question mark. Turing’s “solution” consists in denying the existence of the event in humans. He asks whether there is “anything really new” among us and answers no, of course. In underlining that the computer was unable to “originate any thing,” Lovelace was insisting on the machine’s execution of set procedures. By attacking his nineteenth-century predecessor, Turing is therefore implying that all human ideation is tied to the following of orders, commands, and “well-known principles.” The description becomes a prescription.17
A. Please write me a sonnet on the subject of the Forth Bridge.
B. Count me out on this one. I never could write poetry.18
Notes
1. Alan Turing, The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, Plus the Secrets of Enigma, ed. B. Jack Copeland (Oxford: Oxford University Press, 2004), 391. It might be useful to say that, originally, a “computer” was a person doing mathematical calculations.
2. Turing, Essential Turing, 431, 482.
3. For instance, Turing, 453, 457, 458, 462, 464.
4. Turing, 393, 449.
5. C. E. Shannon and J. McCarthy, eds., Automata Studies (Princeton, N.J.: Princeton University Press, 1956), vi.
6. Rich Sutton, “The Bitter Lesson,” Incomplete Ideas (blog), March 13, 2019, http://www.incompleteideas.net/IncIdeas/BitterLesson.html.
7. Turing, Essential Turing, 425.
8. Turing, 410, 441.
9. Turing, 458.
10. Turing, 392.
11. Turing, 475.
12. Turing, 393.
13. Turing, 475.
14. Turing, 392.
15. Ada Augusta Lovelace, in her translator’s notes to L. F. Menabrea, “Sketch of the Analytical Engine Invented by Charles Babbage,” Scientific Memoirs 3 (1843): 689.
16. Turing, Essential Turing, 455, for this quote as well as all others appearing in the rest of this paragraph.
17. An anonymous reviewer for this book manuscript was quite upset by what I wrote about Turing. Their report put in bold (so that I would definitely understand, I assume) that my point of arrival at the end of this paragraph is “simply false,” that Turing was a subtle thinker, as his “biography” would show, and so on. But, here, I do not care about Turing’s possible belief in reincarnation or extrasensory perception, nor do I wish to shed tears over his tragic destiny or his love of Snow White. I am making two points. First, Turing very intentionally calls Lovelace’s remark about the early computer an “objection,” a tactical move allowing him to turn a factual statement into a polemical argument, whose value would therefore be disputable. Second, in his specific response to this made-up “objection,” and within the particular article I am quoting, Turing “implies” (my term, twice: im-plicare) that, after all, any ideation might be reduced to a combination of transmission (“seed” planting through “teaching”) and rule implementation (“well-known principles”)—he does not state this is so (hence the pseudo-interrogative form he is using), and he may not even believe it (in his own mind, so to speak). Regardless, his counterargument on the topic remains both superficial and rhetorical. This weakness did not stop several generations of thinkers after him from echoing his response, almost always in a more assertoric tone.
18. Fictional interaction between a human speaker (A) and a computer (B), based on Turing, Essential Turing, 442.