“4. We Are Not AI” in “Humanities in the Time of AI”
4. We Are Not AI
Our complicity with AI is no equality. We are related—but different. As illusory as the claim of total decoupling is the insistent, and false, promise of the functional identity of all minds, organic or not. The brain as a Turing, or as a prediction, machine; the so-called neural networks of a computer; the large language models . . .1 Nevertheless, beyond even our biological fact, we are not computers. Many arguments on the topic have been put forward. At the level of lacks, AI, as we know it, is said to have no emotion, no consciousness, no experience of the world, no judgment of what is right or wrong. This might well be an accurate diagnosis, even for the most advanced transformer—in spite of GPT’s ability, when prompted to provide an outline for this chapter, to summarize for me those very ideas that have been voiced often and markedly repeated in the latest moment of media hype about computerized text generators. I do not intend to dispute this list of inabilities, even though I will note that LLMs are nevertheless quite apt at speaking the language of affect, that we have no scientific or theoretical agreement on what could be called “consciousness” in humans and other animals, that we also experience the world through culture (and therefore texts), and that the existence of inborn ethics still remains to be proven.
Another set of distinct features is tied, more specifically, to the current level of generative AI. To anyone who has interacted with the new tools, four tendencies quickly appear, which were noted early on and contrast with what we came to expect from human text writers: past a certain threshold, there is a performance collapse (the fragments no longer cohere, entire quotes are recycled as filler, non sequiturs appear, etc.); biases are pervasive (originally, the ones coming from the database; now, in their stead or rather in addition to them, those coming from the multiple human interventions during the training session and through built-in rules); events, dates, ideas, and quotes are sometimes made up, in what techies call “hallucinations”; and catastrophic forgetting, when the acquisition of new tasks leads to the loss of previous training, has not been completely overcome. The answer from the makers consists in saying that trillions of parameters, in the future, will take care of most issues. (The same answer is sometimes addressed to the first series of critiques, and we know how emergence is invoked to wager that, in some mythical tomorrow, the right size will be reached, allowing artificial consciousness, or general intelligence, or Ray Kurzweil’s “singularity.”2 It is even less convincing.) At any rate, hallucination, collapse, and bias are not exactly unknown to human speakers, as a brief exploration of social media would amply confirm. Another, more fundamental, aspect could be added. Obtaining just one word, musical note, or series of pixels requires a very heavy computational load from the deep neural network. The amount of necessary data, as well as the layers of calculation, might look disproportionate if compared to human neural activity. The gap would not be a scale issue but a sign that the cognitive strategies at play are indeed divergent.3
Without losing sight of all these criteria, or of some additional questions about privacy and security, I would personally concentrate on a series of seven other features of generative AI. They are interlinked, and I will appeal to them in the rest of the book. First, the LLMs, and anything of the kind, are anchored in the past. The training corpus strictly defines what will be produced, subjugating in advance any future production to what has been said. Novelty is fundamentally based on composite variations. Second, the generation model is concurrently probabilistic, conferring a higher value on the ordinary. Third, in this context, neither plurality nor what I call “singularity” is performed through texts, images, films, or sounds: impersonal expression is drawn from the reassembling of majority trends in the corpus and the additional supervision of desubjectivized human players. Fourth, integration—such as the consolidation of a general design, the ongoing reinterpretation of the text as it is written, or the overarching trajectory of a theory—is missing from the incremental, token-by-token approach, even when there is no collapse. Fifth, in whatever form, and beyond the specific status of large language models, AI is intrinsically algorithmic.4 Decades of research on cognition in animals certainly support the existence of an algorithmic brain in living agents. But I have proposed, in The Intellective Space and beyond, that automated cognition is exceeded or bypassed through its performance. The creative, intellective capability we can display is not independent from the cognitive ordinary—it overrides it. This is the exception to the norm, the norm to which AI confines us. Sixth, there is, in “us,” the potentiality for a concurring plurality of noetic regimes, of which AI only retains one. Finally, a text does not contain meaning; it only provokes it in the reader’s mind—and may have shaped it in the writer’s brain.
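The first, second, and fourth features named above (anchoring in the past, probabilistic preference for the ordinary, and token-by-token incrementality) can be made concrete with a deliberately toy sketch. This is an illustrative simplification only, not the architecture of any actual LLM: the bigram table and its probabilities are invented for the example.

```python
import random

# Toy bigram "language model": next-token probabilities, here invented,
# stand in for frequencies estimated from a training corpus. The most
# frequent continuation dominates: sampling favors the ordinary.
bigram_probs = {
    "the": {"cat": 0.6, "storm": 0.3, "quasar": 0.1},
    "cat": {"sat": 0.7, "sang": 0.2, "dissolved": 0.1},
}

def next_token(context, rng):
    """Pick one continuation at random, weighted by corpus frequency."""
    options = bigram_probs[context]
    return rng.choices(list(options), weights=list(options.values()))[0]

def generate(start, length, seed=0):
    """Incremental, token-by-token generation with no overall design:
    each step sees only the previous token, never the whole text, and
    can never produce a word absent from the training table."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        if tokens[-1] not in bigram_probs:
            break  # nothing was learned past this token
        tokens.append(next_token(tokens[-1], rng))
    return " ".join(tokens)

print(generate("the", 3))
```

However crude, the sketch makes the anchoring visible: whatever the random seed, the output recombines the training pairs and nothing else, and novelty can only be a composite variation on what the corpus already said.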
As for signification, it is locally enacted, while significance is experienced. The text generator merely deduces items on the basis of discursive word distribution. The attribution of signification—which is more than mapping, predicting, or extracting vectors—is currently outside of what AI can do. (We, in turn, could interpret the computerized utterances we are being presented with, and one could imagine subsections of future literature departments, for instance, that would be devoted to that task; the novelty should quickly fade.)
On all these dimensions, it is easy to grasp how the intellectual activity performed in the humanities could be far from what AI is giving us. At the same time, it is not much more difficult to think of countless counterexamples: bland dissertations, lectures, published articles, and books that do not do much more than a transformer. Formulaic prose, descriptions of complex ideas with little interpretation, repetitive patterns of argumentation, the absence of a scholarly voice, applications of previous theories, self-affirmations trapped in ready-made rhetoric, disciplinary tessellation, or irrelevant efficacy are regrettably widespread. Outside of the simple, though valid, biological differentiation between our animal species and our computers, and beyond the manifest divergence in terms of implementation, the reason why we are not AI could only be a consequence of what we do—or do not—create.
In its outline for this chapter, ChatGPT invites me to reach a culminating point it calls “Embracing Our Humanity” (our words through its own).
A. Write the chapter entitled “We Are Not AI”
B. Chapter: We Are Not AI
Introduction: As artificial intelligence (AI) continues to advance, it is important to emphasize that despite the impressive capabilities of AI systems, they are fundamentally distinct from human beings. This chapter delves into the unique qualities that set humans apart from AI and explores the ways in which our humanity, creativity, and consciousness cannot be replicated by machines.
. . .
Embracing Our Humanity:
- Reflecting on the unique qualities that make us human and the value of preserving our human identity.
- Emphasizing the importance of maintaining a balance between technological advancements and our human essence.
- Discussing the role of AI as a tool to augment and support human endeavors rather than replace or redefine our humanity.
Conclusion: While AI systems continue to evolve and display remarkable capabilities, they can never fully capture the essence of human experience, consciousness, and creativity. We possess a depth of emotions, a capacity for moral reasoning, and a richness of lived experiences that distinguishes us from machines. By embracing and celebrating our humanity, we can ensure that AI remains a tool that serves our needs and aspirations, while we continue to thrive as uniquely human beings.5
I clearly hear the parlance of orthodoxy filtered through the electronic loudspeaker. How far away I wish we were standing from that buzz. We do not have a “humanity” or an “essence” or an “identity” to “embrace” or to “preserve” from autotelic and external “technological advancements.” We should not be “maintaining a balance” and attempting to avoid “redefining” ourselves. Quite the contrary, because “AI” is also the unexpected name for a particular way of seeing the human—as an organized, “unique” “essence” to protect from its outside—we can understand that our mental “tools” and all that comes with them are also a provocation for redefining our noetic expression in, and through, the humanities.
In my critique of the generic humanism parroted by an automated generator, I agree with Catherine Malabou that the eventual “danger of machines is human” and that we cannot neatly oppose “ourselves” to our computers.6 Here ends my convergence with the theses the philosopher introduces in her Morphing Intelligence. After years of metaphysical claims about “the brain,” Malabou, to her credit, finally understood how ineffective her overinvestment in a biological function of the central nervous system (neural plasticity) had been. Alas, this moment of self-reflection served to install a new phraseology. There is a possible relation between “creativity and routine,” as Malabou now sees it, but understanding this link first requires a tiered theorization of innovation. Instead, just as the industry does, Malabou uses creativity very loosely to refer to strategies for playing go (as evaluated by “experts”) along with apparently any production in “music, painting, literature.” This conceptual fuzziness culminates in an assessment of computerized forms that would not sound exotic in a tech seminar sponsored by Meta: “Their creativity is boundless.” Of course, “they” are pretty heavily bounded by their programs. Then, declaring that “algorithmic calculation . . . is capable of simulating noncalculation” is either nonsensical or disingenuous. The incomputable is simply not computable by a computer, and no “simulation” can alter this. Thus, thinking may be unequal to calculating, and there is no “automatic creation,” strictly speaking, but, at best, automated invention. Moreover, I would not exemplify “qualitative” “noncalculation” with “spontaneity” and “the directness of emotions,” as these are cognitive processes.
In the end, Malabou enchains the noetic to the figure of “automatism,” further stating that “the reorganization after the breakdown or interruption reinforces the efficacy of the automatism,” which paves the way to “the reduction of human genius to a series of algorithms.” Except that there was no equation such as brain plasticity equals human genius equals inventiveness equals qualitative equals spontaneity to begin with. As for mental routines, they are also routinely reorganized, and it is arbitrary to suppose that their pruning and consolidation (plasticity) are necessarily a “dialectical” effect of creativity. The last word given to the Aufhebung in “automatism” clearly situates Malabou’s plane of intervention, a gauche postmodern humanism being replaced by the diffuse conflation of all intelligences, under the watch of “international ethical and legal regulation.” This will not be where I’ll go.
Notes
1. Andy Clark has been very successful recently in his rebranding of the mind as a “predicting machine,” a category he explains in different places but crucially in “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science,” Behavioral and Brain Sciences 36, no. 3 (2013): 181–204, and in The Experience Machine: How Our Minds Predict and Shape Reality (New York: Pantheon, 2023). I had more sympathy for the previous formulation of the hypothesis, under the name of the “extended mind.”
2. Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Viking, 2005).
3. Overall, a good source for the standard list of critiques of current AI is Gary Marcus, who has been very active, and vocal, on different platforms. See, for instance, the book he coauthored with Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust (New York: Pantheon, 2019), and his new book Taming Silicon Valley: How We Can Ensure That AI Works for Us (Cambridge, Mass.: MIT Press, 2024). Some years ago, Jean-Louis Dessalles offered (in French) a robust and encompassing rebuttal in Des intelligences très artificielles (Paris: Odile Jacob, 2019). It should be noted that the industry favors disclosing some limitations over others; see, for instance, the many pages devoted to hallucination, biases, and risks of “harm” in OpenAI et al., “GPT-4 Technical Report,” arXiv, revised version March 4, 2024, https://arxiv.org/abs/2303.08774, esp. 10–14, 44–60, 80–100. Regarding the computational load, ongoing research efforts try to downsize the number of parameters or to distill the LLMs into smaller versions; see, for instance, Cheng-Yu Hsieh et al., “Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes,” arXiv, revised version July 5, 2023, https://arxiv.org/abs/2305.02301.
4. My friend and colleague Morten Christiansen disagrees with that characterization (email message to author, October 10, 2023). He considers that the self-learning process ends up evading the common definition of an algorithm (as a hierarchical set of rules). It is true that, when writing poetry, GPT, for instance, differs from the Eureka machine built in 1845 by John Clark, which output Latin hexameters (and similarly differs from the many software programs that generated poetry in recent decades). It is not programmed for a specific task. This question is intricate, not completely new, and I cannot address it in full here. I will simply say that, in my view, the absence of specificity and the architecture of LLMs do not annul, as such, the prerequisite algorithmic structure of their learning, notwithstanding the additional fact that the heavy human-made reinforcement procedures that are found everywhere in current generative AI are also based on simple sets of commands.
5. Prompt A by the author. Text B generated by ChatGPT-3.5, OpenAI, July 2023, edited for brevity.
6. All quotations in this paragraph from Catherine Malabou, Morphing Intelligence: From IQ Measurement to Artificial Brains, trans. Carolyn Shread (New York: Columbia University Press, 2019), 150–51, except “the reorganization after the breakdown or interruption reinforces the efficacy of the automatism” (119) and “international ethical and legal regulation” (158).