A contemporary reader of Tomkins may find it difficult to reconcile the posthumanist perspectives everywhere on offer in his writing with his blatant humanism. On one hand, we read an account of the human being as feedback mechanism and complex set of interdependent communication systems; on the other, we read bits of biblical exegesis and psychobiographies of the great Russian writers. Nowhere do these apparently contradictory discourses clash more resoundingly than in an early chapter of AIC, “Freedom of the Will and the Structure of the Affect System,” where Tomkins routes the traditional philosophical problem of free will through “more recent developments in the theory of automata” (1:108). Presenting an elaborate thought experiment concerning the design of what we would now call cyborgs or androids, Tomkins develops a concept of affect freedom fundamentally defined in reference to machinic automaticity. In this chapter, we discuss his key idea of affect freedom and, along the way, offer some contexts for understanding the challenge it poses both to conventional humanist and contemporary posthumanist theory.
To begin, it is often helpful to remember that Tomkins was trained in philosophy. He received his doctorate from the University of Pennsylvania in 1934 with a thesis on eighteenth-century ethics (“Conscience, Self Love and Benevolence in the System of Bishop Butler,” written under the supervision of Lewis Flaccus, a philosopher of aesthetics) before pursuing postdoctoral studies at Harvard with the logician W. V. O. Quine, among others. Tomkins moved from philosophy to psychology when he joined the Harvard Psychological Clinic under the directorship of Henry Murray (for more on Tomkins’s work at Harvard, see chapter 11). This transition was not the marked disciplinary shift it would later become. Much early twentieth-century philosophy sought to resolve or dissolve traditional metaphysical problems using the tools and techniques of the sciences. In this context, Tomkins’s move to psychology should be understood as his adoption of a more thoroughgoing naturalistic account of the human. But even at its most empirical, his writing remains animated by speculative concerns and can best be thought of as a staging ground for encounters between philosophy, academic psychology, psychoanalysis, and the mid-century sciences of cybernetics and information theory.
Tomkins was particularly excited by the work of Norbert Wiener, whose Cybernetics; or, Control and Communication in the Animal and the Machine (1948) and The Human Use of Human Beings: Cybernetics and Society (1950) offered ideas that applied across disciplinary divides as well as ontological ones. Cybernetics appealed to Tomkins (and other thinkers) because it offered tools to translate metaphysical problems of human being into engineering ones. Specifically, it modeled purposive behavior without idealized notions of will or intention. Evelyn Fox Keller has located cybernetics as a consequence, in part, of “the intense concentration of technical efforts in World War II” from which there emerged “a science based on principles of feedback and circular causality, and aimed at the mechanical implementation of exactly the kind of purposive organization . . . that was so vividly exemplified by biological organisms” (65). She points out that what had been, in the previous century, a productive analogy between biological and mechanical self-regulation became in Wiener’s work a homology or even an identity. Machines and animals enter a new, and highly charged, deconstructive relation, with the mid-century sciences of organized complexity exerting strong pressure on the category of the human and prompting scientists to pose questions like, How can we model human behavior and experience in terms of feedback relations between complex systems? What kind of machine, or aggregate of machines, is the human being? (See chapter 12 for more on cybernetics.)
Like that of other cyberneticians, Tomkins’s thinking is characterized by a strong commitment to complexity. Consider how he begins his discussion of the “pseudo problem of the freedom of the will” by tackling “the conventional concept of causality, which . . . assumed that the relationship between events was essentially two-valued, either determinate or capricious, and that man’s will was therefore either slavishly determined or capriciously free” (1:109). Turning away from this notion of linear causality associated with eighteenth-century mechanism, Tomkins introduces the “complexity or degrees-of-freedom principle,” a formal concept based on statistical mechanics: “By complexity, we mean, after Gibbs, the number of independently variable states of a system” (1:110). In the second, revised edition of The Human Use of Human Beings (1954), Wiener frames the contributions of cybernetics by way of the work of Willard Gibbs and “the impact of the Gibbsian point of view on modern life” (11). A nineteenth-century U.S. mathematician and physicist, Gibbs developed influential mathematical treatments of the laws of thermodynamics that would be taken up by Claude Shannon in his theory of communication. Gibbs also worked on the mathematization of particle distributions, permitting more adequate representations of an observer’s contingent and uncertain (i.e., probabilistic) knowledge. The “Gibbsian point of view,” then, contrasts with the Laplacian worldview in which an observer can, in principle, predict outcomes based on certain knowledge of initial conditions.
Rather than argue against causality or determination as such (as do many voices in contemporary posthumanist theory), Tomkins turns to statistical mechanics to uncouple questions of determination from those of freedom: “Two systems may be equally determined, but one . . . more free than the other” (1:110). Of “two chess programs, the one which considers more possibilities before it decides on each move is the freer general strategy” (1:110). In Tomkins’s redefinition, freedom becomes an index of the complexity of a system, that is, of the range and variety of its possible responses to environmental conditions. While each of these responses is itself determined, it is not necessarily predictable (a consequence of the complexity of the system). The tools of information theory permit Tomkins to pluralize and to relativize the traditional philosophical dilemma (“The problem of free will can be translated into the problem of the relative degrees of freedom of the human being” [1:110]) and to operationalize the notion of freedom. He proposes to measure the freedom of any given feedback system in terms of “the product of the complexity of its ‘aims’ and the frequency of their attainment” (1:110) and concludes that “a human being thus becomes freer as his wants grow and as his capacities to satisfy them grow. Restriction either of his wants or abilities to achieve them represents a loss of freedom” (1:111).
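Tomkins’s operational measure lends itself to a toy sketch. The following is our own illustration, not Tomkins’s notation: the function name, its inputs, and the scoring are hypothetical conveniences for rendering “the product of the complexity of its ‘aims’ and the frequency of their attainment” (1:110) in executable form.

```python
def freedom(aims, attained):
    """Toy index of a system's freedom, after Tomkins (1:110).

    aims: the set of distinct goals the system can pursue (its 'complexity').
    attained: the sequence of goals actually reached over some observation period.
    Returns complexity-of-aims multiplied by frequency of attainment.
    """
    complexity = len(aims)  # number of independently variable aims
    if not attained:
        return 0.0
    frequency = sum(1 for goal in attained if goal in aims) / len(attained)
    return complexity * frequency

# A system with more wants, and equal success in satisfying them, scores as freer:
narrow = freedom({"food"}, ["food", "food"])                  # one aim, always attained
broad = freedom({"food", "play", "talk"}, ["food", "play"])   # three aims, always attained
```

On this sketch, restricting either the set of wants (`aims`) or the ability to achieve them (`frequency`) lowers the score, matching Tomkins’s conclusion that “restriction either of his wants or abilities to achieve them represents a loss of freedom” (1:111).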
Philosophically, Tomkins’s understanding of freedom in terms of expanded capacities resembles Spinoza’s writing in the Ethics on affect and the capacity for action (see our first interlude). At the same time, it is difficult not to hear Tomkins’s discussion in its sociopolitical context as an American response to postwar existentialism and the global expansion of consumerism. (It is interesting to note how his writing in the 1970s and 1980s offers alternatives to his earlier emphasis on what he calls the “mini-maximizing strategies of power” [3:243] that underlie this earlier notion of freedom.) Tomkins’s liberal-sounding humanism may be one reason why his writing continues to be difficult to access in a contemporary theoretical scene that often rejects perceived liberalisms. While it would be easy to assimilate his politics to Wiener’s, who tended to oscillate between extremes of optimism and pessimism in promoting a technocratic, cybernetic vision of both self and society, in fact Tomkins was much less worried about the coherence and autonomy of self and less committed to totalizing theories of society. His theory of value is open ended and pluralist in the extreme (“It is our theory of value that for human subjects value is any object of human affect” [1:329]). As a consequence, his humanism is capacious rather than prescriptive, exemplary of a scientific humanism that accepts the species designation for descriptive and investigative purposes.
It is his emphatic insistence on the crucial role for affect in understanding freedom that defines Tomkins’s approach to human being:
The human being is the most complex system in nature; his superiority over other animals is as much a consequence of his more complex affect system as it is of his more complex analytical capacities. Out of the marriage of reason with affect there issues clarity with passion. Reason without affect would be impotent, affect without reason would be blind. The combination of affect and reason guarantees man’s high degree of freedom. (1:112)
We will return to Tomkins’s understanding of the relation between affect and cognition in our last chapter (chapter 14). It is one of his most significant, and least understood, contributions. Far from being seamlessly integrated with cognition, the affect system is initially independent of the other elements of the feedback system, and it is this independence that accounts for its role in expanding capacities for action. His argument concerns the relation between the affect system and the “transmuting mechanism” that defines consciousness (see chapter 7). Recall that, according to Tomkins, the infant’s distress, anger, and other affects are general motives that do not lead the infant to take any specific, goal-oriented actions. By contrast with the drives, which involve programs that are instrumentally connected to the human feedback system (for example, the infant’s hunger triggers salivation and sucking), the affects “will remain independent of the feedback system until the infant discovers that something can be done about such vital matters” (1:113). This independence means that “most human beings never attain great precision of control of their affects” (1:114). It is the “ambiguity and blindness” (1:114) of the affect system, a consequence of its “imperfect integration” (1:114) into the human being’s feedback system, that paradoxically secures greater degrees of freedom by creating possibilities for learning.
Tomkins’s fundamental point concerns error, both cognitive and motivational: “Cognitive strides are limited by the motives which urge them. Cognitive error, which is essential to cognitive learning, can be made only by one capable of committing motivational error, i.e. being wrong about his own wishes, their causes and outcomes” (1:114). Once again, Tomkins recasts psychoanalytic ideas (here repetition compulsion) in cybernetic terms: “the residues of past human learning, our habits, are essentially stored neurological programs which may be run off with a minimum of learning” (1:114). Human beings (and, presumably, other animals) automatize what has been learned. This permits them to adapt to changing environmental conditions, but at the same time it interferes with their ability to change:
Part of the power of the human organism and its adaptability lies in the fact that in addition to innate neurological programs the human being has the capacity to lay down new programs of great complexity on the basis of risk taking, error and achievement—programs designed to deal with contingencies not necessarily universally valid but valid for his individual life. This capacity to make automatic or nearly automatic what was once voluntary, conscious and learned frees consciousness, or the transmuting mechanism, for new learning. But just as the freedom to learn involves freedom for cognitive and motivational error, so the ability to develop new neurological programs, that is, the ability to use what was learned with little or no conscious monitoring, involves the ability to automatize, and make unavailable to consciousness, both errors and contingencies which were once appropriate but which are no longer appropriate. (1:114–15)
It can be difficult to alter old habits precisely because they once worked so well. This is as true of an idiosyncratic piano technique as of the patterns of our loves. “The essential quality of man as we see it is not in the amount of information he possesses but in the mechanism which enables him constantly to increase his freedom” (1:115), and yet the same automatizing mechanism that increases our capacity for action by permitting us to adapt to an environment also interferes with our ability to perceive and respond to a new one. Thus, for Tomkins, human freedom is defined in terms of an automaticity that everywhere both enables and undermines it.
Tomkins’s commitment to understanding human automaticity emerges most clearly in an elaborate thought experiment on “the design of human-like automata” (1:115). Unlike the ideally rational chess-playing machines of the nascent mid-century field of artificial intelligence, Tomkins sought to imagine a full-blooded automaton that “would represent not the disembodied intelligence of an auxiliary brain but a mechanical intelligence intimately wed to the automaton’s own complex purposes” (1:119), a cyborg that resembles those in Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968). There would be much to say about Tomkins’s entertaining, maternal alternative to conventional AI, ranging from his criticism of the automaton designer as “an overprotective, overdemanding parent who is too pleased with precocity in his creations” (1:116) to his fantasy of how humanlike automata would reproduce and create societies. For our purposes, we would briefly point to the crucial place of the affect system in these automata: “there must be built into such a machine a number of responses which have self-rewarding and self-punishing characteristics. . . . These are essentially aesthetic characteristics of the affective responses” (1:117). Tomkins insists that these aesthetic qualities not be defined “in terms of the immediate behavioral responses to it, since it is the gap between these affective responses and instrumental responses which is necessary if it is to function like a human motivational response” (1:118). There are a number of significant gaps in this automaton’s affect system: “There must be introduced into the machine a critical gap between the conditions which instigate the self-rewarding or self-punishing responses, which maintain them and which turn them off, and the ‘knowledge’ of these conditions and the further response to the knowledge of these conditions” (1:118).
These various gaps (between the conditions of affect activation/maintenance/deactivation, the automaton’s awareness of these conditions, and its ability to respond once it has become aware) create considerable play in the feedback system as a whole, making it possible for the automaton to make mistakes and to learn. These gaps, at once conjunctive and disjunctive, are conditions for the generality of affect and the particular freedoms of the affect system. (For a discussion of Tomkins’s emphasis on gaps in relation to evolution, see chapter 3.)
We have been selectively summarizing the first half of a long chapter from AIC1, the second half of which consists of a discussion of the varieties of freedom in the affect system, including freedoms of time, intensity, and density; freedom of object; freedom of coassembly; freedom of consummatory site; and others. We find particularly fruitful those of Tomkins’s ideas that seek to update the early grounding of psychoanalysis in nineteenth-century thermodynamics by way of the mid-twentieth-century sciences of cybernetics and information theory. For example, he suggests that whereas the drives function primarily via homeostatic mechanisms that regulate internal environments, “the affect system of man operates . . . within a much more uncertain and variable environment” (1:124) characterized by an abundance of information. He offers several emendations of classical psychoanalysis, proposing that “had Freud not smuggled some of the properties of the affect system into his conception of the drives, his system would have been of much less interest than it was” (1:127). Gesturing toward a revision of the theory of sexual development (the oral, anal, and genital stages), he states that Freud’s emphasis on the sexuality of the oedipus complex “obscured the significance of the family romance as an expression of the more general wishes to be both the mother and father, and to possess both of them, quite apart from the fear which might be generated by a jealous sexual rival” (1:127). Tomkins’s basic point about the transformability of the affects (“it is the affects, not the drives, which are transformable” [1:143]) is part of a reconsideration of sublimation (1:141–43). Most of these emendations are consequences of Tomkins’s emphatic commitment to the freedom of object: “There is literally no kind of object which has not historically been linked to one or another of the affects” (1:133). 
His discussion of “affect–object reciprocity” (1:133–35) explores phenomena that psychoanalysis describes in terms of the defenses of projection and introjection; here he proposes that the “somewhat fluid relationship between affects and their objects” (1:134) is necessary for knowledge projects of all kinds.
It is fitting that this chapter on the freedom of the affect system ends with a discussion of the restrictions on freedom inherent in the affect system. Tomkins notes that affective responses seem, phenomenologically, to be “the primitive gods within the individual” (1:144) over which humans have little control. He describes this lack of control in information-theoretic terms of high redundancy (“If one end of the continuum of complexity is freedom of choice of alternatives, then the other end is redundancy” [1:143]) and speculates about the sources of this high redundancy. These include the evolutionary relation between the affects and the primary drive deficit states (1:144–45), the “syndrome characteristic” of affect (the innervation of all parts at once in affective response; 1:146), the contagion of affect (affect arousal itself arouses more affect; 1:146), and other redundancies. Tomkins’s goal is to sketch a model of the human being that leads to a realistic assessment of how persons can change. But this model is not static: because the conditions for change themselves change, and because humans are so complex, it is impossible to predict how or when conditions may alter, leading to fundamental shifts in relation to freedom. Tomkins’s commitment to automaticity, then, is everywhere accompanied by a commitment to the biological contingency of the human animal. Both commitments are exemplary of his scientific humanism.
Our discussion is largely based on “Freedom of the Will and the Structure of the Affect System” (chapter 4 of AIC1). We also consulted some scholarship on cybernetics. For more on the complex category self-organization, see Evelyn Fox Keller’s “Organisms, Machines, and Thunderstorms: A History of Self-Organization, Part One.” On Norbert Wiener and mid-century liberal subjectivity, see N. Katherine Hayles’s “Liberal Subjectivity Imperiled: Norbert Wiener and Cybernetic Anxiety” in How We Became Posthuman (1999). On the relations between cybernetics and deconstruction, see Christopher Johnson’s System and Writing in the Philosophy of Jacques Derrida (1993), and for more on Tomkins’s involvement in the computer simulation of personality, and its broader relations to research in the field of artificial intelligence, see Elizabeth A. Wilson’s Affect and Artificial Intelligence (2010).
For a detailed clinical case history of the affective dynamics involved in the repetition compulsion, see Virginia Demos’s The Affect Theory of Silvan Tomkins for Psychoanalysis and Psychotherapy (chapter 7).
Degrees of freedom (d.f.) is a formal statistical quantity, commonly used in psychology experiments, that represents the number of values in a data set that are free to vary independently. A larger data set has more degrees of freedom than a smaller one. Tomkins is using the term to indicate how complexity varies in different systems: some systems (e.g., human beings) have more degrees of freedom (more opportunities for variation) than other systems (e.g., amoebas).
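The statistical sense of the term can be made concrete with a minimal sketch of our own (not drawn from Tomkins): in the standard sample-variance calculation, once the mean has been computed, the deviations from it must sum to zero, so only n − 1 of them can vary independently; hence the divisor n − 1.

```python
def sample_variance(data):
    """Sample variance with n - 1 degrees of freedom (Bessel's correction)."""
    n = len(data)
    mean = sum(data) / n
    # The n deviations (x - mean) sum to zero, so only n - 1 vary independently.
    return sum((x - mean) ** 2 for x in data) / (n - 1)

data = [2.0, 4.0, 6.0]
# Deviations are (-2, 0, 2): fixing any two determines the third,
# so this data set has 3 - 1 = 2 degrees of freedom.
variance = sample_variance(data)
```

In Tomkins’s borrowed, systems-level sense, the analogy runs the other way: the more independently variable states a system has, the more degrees of freedom, and so the freer the system.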