What was it about cybernetics, that remarkably cross-disciplinary body of research of the 1940s and 1950s, that so appealed to Silvan Tomkins? Not just Tomkins, of course. Cybernetics fascinated many thinkers in the years immediately following World War II and in the longer postwar moment: mathematicians, engineers, neurophysiologists, anthropologists, sociologists, psychiatrists, philosophers, writers, artists, and musicians of the psychedelic/cybernetic 1960s. A recent upswing in history and criticism has begun to unearth the relevance of cybernetics (and its close cousin, information theory) for the postwar moment, especially for those thinkers associated with canonical French theory (Derrida, Foucault, Lacan, Lévi-Strauss, and others) as it has come to be known and taught in the North American academy. From the perspective of these histories, mid-century structuralism appears to have been so thoroughly imbued with cybernetics and information theory that one writer has suggested that “a great deal of what we now call French theory was already a translation of American theory” (Liu, 291) while another has proposed the term “cybernetic structuralism” (Geoghegan, 111). Tomkins contributed to this transatlantic conversation. As we noted in chapter 1, he presented his understanding of the affect system at the Fourteenth International Congress of Psychology in a paper that was published in French in a collection edited by Jacques Lacan. “Le modèle que nous présenterions,” wrote Tomkins’s translator, “serait un système d’intercommunication qui reçoit, transmet, traduit et transforme les messages conscients et inconscients. Quels sont les interlocuteurs et de quoi parlent-ils? Voilà la question.” (We present a model of a communications system that receives, transmits, translates, and transforms conscious and unconscious messages. Who or what is communicating and what is being talked about? That is the question.)
In Tomkins’s model, the human being becomes a loose, complex assemblage, structured and motivated by information flows and feedback between numerous mechanisms. Lacan must have appreciated how, in this model, the circulation of messages (the letter) becomes the reality of the psyche.
In this section, we draw on our previous discussions of Tomkins’s use of cybernetic ideas (chapters 4 and 7) to unfold his concept of the central assembly. At the same time, we are curious about the broader, somewhat contradictory epistemic and political fates of cybernetics. On one hand, cybernetics appears to be a universalizing, imperializing “Manichean science” (Galison, 232) that evolved directly out of the context of war and is still imprinted with this context. On the other hand, cybernetics is “a form of life” (Pickering, 9), radically open ended and forward looking, characterized by protean application rather than utterly determined by its military origins. By the mid-1960s (that is, just after the publication of the first two volumes of AIC), as enthusiasm ebbed and funding structures disappeared, cybernetics as such became marginalized in the natural and social sciences, while its most significant ideas were integrated into or dispersed among other fields. No doubt, the ongoing difficulty of integrating Tomkins’s work into the theoretical humanities has something to do with his belated commitment to terminology and ideas that came to have a highly ambivalent political and epistemological status. Cybernetics continues to hover in the background of so much discourse on the posthuman, a crucial, formative element in the genealogy of our present moment whose role is just beginning to be understood.
Tomkins was no orthodox cybernetician, if such a thing ever existed. From the mid-1930s to the late 1940s, he worked at the Harvard Psychological Clinic (see chapter 11), just up the avenue from Norbert Wiener at MIT, whose book Cybernetics; or, Control and Communication in the Animal and the Machine (1948) defined the field and its many applications. Neither mathematician (like Wiener and John von Neumann), engineer (like Julian Bigelow), nor neurophysiologist (like Arturo Rosenblueth and Warren McCulloch), Tomkins would have fit more comfortably with the second cluster of participants at the Macy Conferences on Cybernetics (1946–53), those psychologists and social scientists (including Gregory Bateson, Margaret Mead, and others) interested in the value of cybernetic ideas for the human sciences. These ideas included a redescription of goal-directed or purposive behavior in engineering terms and a commitment to a model of circular causality, in particular, the central role of negative feedback in self-correction. Seemingly applicable across enormous domains, these cybernetic ideas raised hopes that a formal, computational approach to complex, reflexive aspects of human phenomena and behavior could be developed. No less functionalist than structuralist, cybernetics engaged the psychologists, psychiatrists, and psychoanalysts who participated in the Macy Conferences by bringing together various psychological approaches to mind and brain: discarding any unified notion of will or intention, cyberneticians spoke the language of behaviorism, yet, at the same time, emphasized unconscious purposes or goals. 
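The model of circular causality and negative feedback at the center of these discussions can be sketched in a few lines of code. This is our own minimal, purely illustrative example (the function name, gain constant, and goal value are inventions for the sketch, not anything drawn from Wiener or Tomkins): a system senses its deviation from a goal and corrects against that deviation, and the repeated loop of sensing and correcting, rather than any inner "will," is what produces goal-directed behavior.

```python
def feedback_step(state, goal, gain=0.5):
    """One cycle of negative feedback: sense the error, then correct against it."""
    error = goal - state          # deviation from the goal, as sensed
    return state + gain * error   # the correction opposes the error

# Purposive behavior, cybernetically redescribed: iterated self-correction
# steers the state toward the goal without any central "intention."
state = 0.0
for _ in range(20):
    state = feedback_step(state, goal=10.0)
# state is now very close to 10.0
```

The design point the cyberneticians stressed is visible here: the loop's "purpose" is nowhere represented inside the system; it emerges from the circular structure of output feeding back into input.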
As Evelyn Fox Keller puts it, cybernetics “aimed at the mechanical implementation of exactly the kind of purposive organization of which Kant had written and that was so vividly exemplified by biological organisms; in other words, a science that would repudiate the very distinction between organism and machine on which the concept of self-organization was originally predicated” (65). Such a non-Kantian, broadly monist approach that spanned the human and natural sciences certainly appealed to Tomkins, whose commitment to nondualist thinking and disposition toward organized complexity (a term coined by Warren Weaver) led him to integrate cybernetic ideas into his theoretical apparatus.
First, and perhaps most significantly, cybernetics assisted Tomkins in conceptualizing the human being as a loose assemblage of interrelated systems. “From the outset,” wrote Tomkins not long after retiring from his university teaching, “I have supposed the person to be a bio-psycho-social entity at the intersect of both more complex higher social systems and lower biological systems” (“Quest,” 308). These distinct systems (biological, psychological, sociological) are not reducible to one another but rather exist in relations of dependence on, as well as independence from, one another. Tomkins insists on a looseness of fit between and within systems, at all scales, especially the biological. His uses of evolutionary theory (see chapter 3 and the interlude on Darwin) undergird this understanding:
The critical point is that the human being has evolved as a multimechanism system in which each mechanism is at once incomplete but essential to the functioning of the system as a whole. The affect mechanism is distinct from the sensory, motor, memory, cognitive, pain, and drive mechanisms as all of these are distinct from the heart, circulatory, respiratory, liver, kidney, and other parts of the general homeostatic system. (319–20)
Note that the human as “multimechanism system” is by no means the perfected creature that we hear about in so many encomiums to the design skills of natural selection. It is something distinctly more hodgepodge, the outcome of “multiple criteria” for adaptation that produce what Tomkins calls “play,” that is, “a very loose fit in the match between one mechanism and every other mechanism, between the system as a whole and its various environments, and reproductive success” (320). While play within and between systems is crucial, it is nevertheless limited by criteria of survival and reproduction: “although the principle of ‘play’ cautions against the possibility of an ideal fit, the second principle argues for sufficient limitation of mismatch to meet a satisficing criterion, that the system as a whole is good enough to reproduce itself” (320).
The “good enough” assemblage may be one of Tomkins’s most durable ideas. It characterizes not only the human assemblage as a whole but various mechanisms or subsystems as well. Affect, in particular, “is a loosely matched mechanism evolved to play a number of parts in continually changing assemblies of mechanisms” (320). In this context, Tomkins offers, once again, a familiar structuralist metaphor: “It [affect] is in some respects like a letter of an alphabet in a language, changing in significance as it is assembled with varying other letters to form different words, sentences, paragraphs. Further, the system has no single ‘output.’ ‘Behavior’ is of neither more nor less importance than feeling” (320–21). While it has been common to represent the turn to affect in the 1990s as a response to an exclusive emphasis in the theoretical humanities on linguistic signification, it strikes us that any too-rigid opposition between affect, on one hand, and code, language, or signifying system, on the other, has not yet fully taken into account the cybernetic context for structuralism.
By his own account, Tomkins’s conception of the assemblage took shape via the “fantasy of a machine, fearfully and wonderfully made in the image of man . . . no less human than automated” (308–9), which prompted an extended thought experiment (see our brief discussion in chapter 4). Wiener’s writings offered Tomkins “the concept of multiple assemblies of varying degrees of independence, dependence, interdependence, and control and transformation of one by another” (309), which led to his understanding of affect as an amplifying “co-assembly” (309). Again, the image of the human that one gets reading Tomkins is not the streamlined, tightly organized, perfected cyborg but rather “an integrated automaton—with microscopic and telescopic lenses and sonar ears, with atomic powered arms and legs, with a complex feedback circuitry powered by a generalizing intelligence obeying equally general motives having the characteristics of human affects” (1:119). This monstrous “generalizing intelligence” differs from the chess-playing artificial intelligence that would come into the historical foreground just as cybernetics faded into the background. Tomkins turned to the computer as a tool to model personality, not intelligence. In his contribution to the edited volume Computer Simulation of Personality (1963), Tomkins assesses various attitudes toward the computer in seeking a middle ground between those who “love and worship only a machine, because they are alienated from themselves as they are from others” and those who reject the machine, a rejection “based on alienation of the individual from that part of nature which is impersonal” (“Computer Simulation,” 5). Two decades before computers entered the home, Tomkins sought “to be at home with the computer” (7), “neither [to] derogate nor idealize himself or the computer” (7), and recognized the enormous possibilities of automated computation as well as its limits. 
The computer, he suggests, is “a complexity amplifier” (7) that is conceptually neutral; that encourages creative, constructive thought; and that (perhaps most significantly) “places a premium on clarity. The computer is sufficiently concrete minded, sufficiently moronic, so that the theorist must be meticulous, certain and detailed in how he instructs the computer, whose favorite response seems to be ‘huh?’” (8). Computer simulation, Tomkins argues, is less an instrumental criterion (of intelligence, say) than it is expressive of theory or a vehicle for ideas.
Although Tomkins’s research program did not directly involve the new computers (as far as we know), it did rely on the powerful idea of automated computation and the accompanying cybernetic understanding of communication as control in the human animal. In the last volume of AIC, subtitled Cognition: Duplication and Transformation of Information, Tomkins unfolds what he calls “the second half of human being theory” (4:1), the cognitive system in complex interaction with the motivational mechanisms, the affects and drives (see chapter 14). In a brief preface, Tomkins explains that he wrote most of this volume in 1955 but was distracted by the birth of his child and surprised by “the unexpected riches of affect” (xv), which became the focus of the first two volumes of AIC. He warns, “The contemporary reader may find the bulk of it both new and unfamiliar and old and dated. It was written 40 years ago, and I found little reason to change it. In some quarters it will be as persuasive or unpersuasive as it would have been in 1955” (xv). This last (or is it first?) volume of AIC did little to assist Tomkins’s reputation when it was published in the early 1990s. We are curious about its possible reception now, when ever more embedded digital technologies and exponentially increasing automation capacities are bringing questions of minded machines into the foreground. It’s not difficult to imagine the fictional designers in the HBO television show Westworld, say, consulting AIC as they script personalities for their lifelike, conscious androids.
Consciousness, of course, is the third major term in the title of AIC. Tomkins tackles the topic directly in a chapter on “The Central Assembly: The Limited Channel of Consciousness,” which begins with an evolutionary understanding of consciousness as connected with motility: “We find consciousness in animals who move about in space but not in organisms rooted in the earth” (4:288). The problem, as Tomkins puts it in information theoretical terms, is
the magnitude of new information necessary from moment to moment as the world changed, as the organism moved. The solution to this problem consisted in receptors that were capable of registering the constantly changing state of the environment, transmission lines that carried this information to a central site for analysis, and above all, a transformation of these messages into conscious form so that the animal “knew” what was going on and could govern his behavior by this information. (4:289)
Tomkins defines consciousness in terms of a particular kind of information duplication that he calls transmutation, “a unique type of duplication by which some aspects of the world reveal themselves to another part of the same world” (4:290). Interestingly, Tomkins assumes that this process, by which an unconscious message is transformed into a conscious report, is “biophysical or biochemical in nature and that it will eventually be possible to synthesize this process” (4:290)—consciousness as a biological phenomenon that can, in principle, be fabricated. “Fabricating consciousness is, of course, a very different matter from constructing ‘thinking’ machines. These, we assume, are intelligent but nonconscious” (4:290): Tomkins bypasses the tradition in AI and philosophy of mind that conceives of intelligence solely in relation to complex symbol manipulation in favor of a biological theory of consciousness.
In his (quite technical) review of the neurophysiological literature of the 1950s on central inhibition of sensory information, Tomkins pays particular attention to the cognitive psychologist George A. Miller’s famous paper “The Magic Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” (1956). As engaged as he is by the empirical data, he is not persuaded by the idea of an inherent channel capacity in human information processing. Instead, Tomkins proposes what he calls the “central assembly,” a collection of conscious reports that are functionally related to a central matching mechanism (see our discussion of imagery in chapter 7). What is admitted to the central assembly is “a compromise between centrally retrieved information and sensory input [in which] the relative contribution of sensory and central information is presumed to vary” (4:306). That is, rather than any inherent channel capacity (we are only ever consciously aware of approximately seven discrete objects), Tomkins proposes a highly changeable awareness dependent on competing, multiple variables: “As this assembly is disassembled and reassembled from competing sources, then conscious reports continually change from moment to moment” (4:306). Consciousness, for Tomkins, is a “semistable psychological structure” (4:306) that is constantly being (dis)assembled through a process of central matching. The individual’s awareness is of “centrally emitted imagery” that matches either sensory or memory input or, most commonly, some combination of the two. The changeability of the central assembly is crucial: there is no single channel, no unchanging self that is conscious or that an individual is always conscious of. Instead, the key question becomes, “What are the principles by which a person seeks or avoids information or selects or excludes it?” (4:307). This question of selective attention becomes a special case of motivated behavior.
We can see how intertwined Tomkins’s cybernetic, information-processing account of consciousness is with his affect theory. We can also see the importance of Freud, once again, who, as Tomkins puts it, “revolutionized the theory of awareness by explaining the process as a derivative [of] motivation” (4:312). While he disagreed with the Freudian premise that unconscious wishes underlie all behavior, or even, for that matter, all dreams (many of which he considered to be confrontations with unsolved problems or unfinished business [4:310]), Tomkins nonetheless insisted that “a general theory must bring back to the problem of consciousness the nonmotivational factors that the revolution minimized but without surrendering the gains won by Freud” (4:313). In the 1950s, it was the cyberneticians (the “neurophysiologists and automata designers” [4:313]) who had the potential to bring psychoanalytic and behaviorist-cognitive insights together (“cybernetic bedfellows,” as he calls them [Perspectives in Personality, 153]). This alliance made sense of the role of consciousness, in Freud’s understanding, as “a sensory organ for perceiving psychic qualities” (Interpretation, 407) and, in Tomkins’s cybernetic understanding, as it emerges from the transmutation of selected information. Perhaps the ongoing promise of cybernetic theory, which was also the promise of structuralism in some of its incarnations, lies in how it suspends the opposition between biological, psychological, and sociological explanations for what is (and is not) selected to become conscious. Of course, in suspending these oppositions, cybernetics also risks the imperializing tendencies of a science-of-everything that translates philosophical issues into engineering or design problems. These risks are only more relevant today than they were sixty years ago.
Tomkins discusses the significance of Wiener’s writing on cybernetics for his initial development of affect theory in “The Quest for Primary Motives: Biography and Autobiography of an Idea.” Only an abstract of the conference paper in which he presents an early formulation of these ideas, “Consciousness and the Unconscious in a Model of the Human Being,” has been preserved in Proceedings of the 14th International Congress of Psychology. The paper itself, translated by Muriel Cahen, appeared as “La conscience et l’inconscient représentés dans un modèle de l’être humain” in La Psychanalyse (1956), edited by Lacan. This material was revised for inclusion in various chapters of AIC1. Tomkins’s interest in computation appears across all the volumes of AIC but most explicitly in two chapters of AIC4, “The Central Assembly: The Limited Channel of Consciousness” (chapter 13) and “The Feedback Mechanism: Consciousness, the Image, and the Motoric” (chapter 14). For more on the computers of the 1960s, see his introduction to Computer Simulation of Personality (1963), a volume he coedited with Samuel Messick; see also his commentary on essays by Gerald Blum and A. R. Luria that appear in Perspectives in Personality Research (1960), edited by Henry P. David and J. C. Brengelmann.
The scholarly literature on cybernetics has been accumulating in recent years. On French translations of “American theory,” see Lydia Liu’s “The Cybernetic Unconscious: Rethinking Lacan, Poe, and French Theory.” On cybernetic structuralism, see Bernard Dionysius Geoghegan’s helpful genealogy “From Information Theory to French Theory: Jakobson, Lévi-Strauss, and the Cybernetic Apparatus.” On the military origins of cybernetics in Wiener’s work on antiaircraft guidance systems and the goal of predicting the behavior of an intelligent adversary, see Peter Galison’s “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” And on cybernetics as an open-ended “form of life” and its many political and aesthetic manifestations, see Andrew Pickering’s The Cybernetic Brain: Sketches of Another Future. For a detailed history of the Macy Conferences and their participants, see Steve Joshua Heims’s The Cybernetics Group. We consulted several other works, including Céline Lafontaine’s “The Cybernetic Matrix of ‘French Theory,’” Evelyn Fox Keller’s “Organisms, Machines, and Thunderstorms: A History of Self-Organization, Part One” and “Part Two,” Heather A. Love’s “Cybernetic Modernism and the Feedback Loop: Ezra Pound’s Poetics of Transmission,” and Christopher Johnson’s “‘French’ Cybernetics.”
For an analysis of the role of affect and intersubjectivity in early artificial intelligence, see Elizabeth A. Wilson’s Affect and Artificial Intelligence. It strikes us that Tomkins’s biological theory of consciousness shares some intellectual filiation with Gerald Edelman’s, especially in its emphasis on neural reentry. See The Remembered Present and Bright Air, Brilliant Fire.