In 2008 the Immortality Institute initiated an online forum to discuss the possibility of a name change. The dilemma was over the word “immortality.” On the one hand, it was said, the word connected the project to an ancient human dream, lending it universal appeal. It also made a bold statement in the cautious world of science, proclaiming that this is not about tacking on a few paltry years, as gerontologists would want, but about stopping aging and conquering death. On the other hand, the term also evoked false religious promises of existence after death.
It was a perfectly secular bind: what was being sought was immortality in a physical sense, that is, within the accepted parameters of a materialist scientific understanding; yet the idea itself seemed to float outside of acceptable scientific and secular parameters, indicating claims and possibilities about the self, consciousness, personhood, matter, and humans that seemed to have become irrevocably bound up with that thing called religion.
The link between immortality and religion had moved another group to initiate a name-change vote a little earlier. In July 2006 the nonprofit Immortalist Society, a close collaborator of the Cryonics Institute, wrote to its members asking them to vote on a name change. After living with the term for over thirty years, the directors contemplated a change because, as they explained it, “the word ‘Immortalist’ has too many religious connotations [a few are acceptable?] and is a public relations liability.”1
In fact, the use of the word “immortal” has been a mild source of controversy from the beginning of cryonics in the 1960s. Upon its founding, Alcor decided very consciously to forgo reference to immortality and attach the phrase “Life Extension” to its official title: the Alcor Life Extension Foundation. Other groups called themselves by such names as CryoVita and Transtime, focusing on concepts of life and time. Nevertheless, terms like “immortality research,” “immortality science,” and “physical immortality” were commonly used, as exemplified by the Immortalist Society itself and its publication The Immortalist. Besides, the book credited with founding the cryonics movement, Dr. Robert Ettinger’s The Prospect of Immortality, wore the word on its sleeve with pride. Similarly, in 1969 Alan Harrington published a popular book of polemics called The Immortalist, making an unabashed and defiant case for the word because, he argued, the true goal in human history was to transcend our biological limitations, the ultimate limit being death, in order to preserve and extend human consciousness. Later, a number of science fiction novels and nonfiction books featuring cryonic suspension and reanimation, chief among them The First Immortal by James Halperin, became successful in the cryonics and life-extension communities. Many older immortalists told me that The First Immortal either introduced them to the concepts behind cryonics or influenced their decision to sign up with a cryonics organization.2
But as public exposure grew in the new millennium, so did self-consciousness about the public terms by which the various groups would be known. Immortalist projects were beginning to make headlines in more serious places than the “weird news” columns. Public relations became a major focus as media and outside authors began to write about some of these issues in the late 1990s, frequently using the term as a sensational titling strategy (e.g., science writer Stephen Hall’s Merchants of Immortality). In response, immortalists began to consciously avoid the word. In many discussions and meetings, I heard “immortality” referred to only half-jokingly as the “I” word—the word that could not be spoken to the outside world. The Immortality Institute’s collection of essays, for example, was called The Scientific Conquest of Death: Essays on Infinite Lifespans. Aubrey de Grey started out with the Methuselah Foundation, named after the long-lived biblical figure, before eventually shifting to the more mechanical-sounding “Strategies for Engineered Negligible Senescence” (SENS).
In interviews and online testimonies, many members shared how reluctant they were in their personal and professional lives to associate themselves with the term because of a real fear of stigma and ostracism, including in scientific or secular settings. One member told me, “I don’t mention the word ‘immortality’ around my family.” Another testified, “I’m not stupid enough to use the word ‘immortality’ about anything I do. I prefer not to be seen as a certified wacko.” These were typical accounts, and the apprehension was by and large justified, as the two following stories from email listservs show:
When I mention the name Immortality Institute to my friends (people who are computer science or biotech majors . . . who are really interested in life extension and science) they get turned off immediately because they think it’s some sort of new age group, or a cult, or a pseudoscience group.
I have only mentioned the Institute on one occasion to my colleagues. My colleagues are well-published and recognized in the field of aging. Let’s use Ass/Prof. G.L. as an example. He has published a book [on the medicine of aging] as well as having close to 100 published papers to his name. Before I could even explain what the mission was, the room was full of laughter. I won’t be doing that again in a hurry.
“Immortal” was a catchall, recognizable term, but one with the disadvantage of inviting ridicule or scaring people away. Scientists and regular people were much more open to other terms, such as “negligible senescence,” “longevity science,” “anti-aging research,” or “life extension.” Foremost was “the religious problem”: the word brought up all the religious connotations the very secular and atheist Immortality Institute was trying to avoid; it scared away mainstream folks, made the serious scientists snicker, and instead attracted the kinds of people that life-extension advocates regularly referred to as “the mystical types.” Because immortalist communities want to claim that their project is scientific and materialist, and not dependent on imagined worlds or the “whackadoo,”3 religion becomes the line they draw to separate their projects from that other sort of immortality project.
The Immortality Institute’s online poll and discussion on a name change offered numerous alternatives—over 220, actually. In the end, though, none seemed to win the day and the members voted to retain the name Immortality Institute, but added a subtitle: Advocacy and Research for Unlimited Lifespans.
At the time the general consensus seemed to be that “immortality works fine.” It defined a broad enough concern inclusive of people who were focused not only on a biological approach to immortality but also on an informational one. Equally important, as the director at the time, who goes by the handle Mind, explained to me in an email, the sense was that Immortality Institute had already “staked out a spot in the memespace.” Mind emphasized, “we own ‘immortality’ on Google.”4 Who could ask for more?
“Immortality” survived—but only for another year until the online forum changed its website name to Longecity.
As for the Immortalist Society, its earlier effort had ended in a similar outcome. The two-thirds majority required for the name change did not materialize either. Instead, the directors agreed to at least change the name of their publication, The Immortalist. In the first issue of the magazine now called Long Life, the directors wrote:
Yes, the magazine you are looking at was formerly called THE IMMORTALIST . . . after very extensive arguing the I.S. Directors agreed on the magazine name change.5
The ambiguities infusing these two episodes are indicative, on the one hand, of the ways in which immortality has served as a marker in the separation of the religious and secular realms, and on the other, of the extent to which the line between religion and the secular is itself blurry and unstable.
It is often assumed that with the rise of science, scientific materialism, and evolutionary explanations of the origins of life, immortality was simply discarded as a scientific topic. But in fact immortality was a topic of rigorous debate and research from the 1880s through to the early part of the twentieth century within the sciences, resurfacing recently. In this chapter I will review the ways in which immortality historically emerged and became constituted as a tense object in and around the sciences, including biology, the social sciences, and psychology, sometimes an object of study and fascination, sometimes anathema.6 I focus on the ways in which the technoscientific and wider cultural contexts informed each other, and show how this object was uncomfortably stretched between finitude and infinitude, marking the possibility of infinite openness against the maddening limits of finitude and death. This is not a comprehensive history or genealogy; rather, I take a selective approach with several emblematic periods and ideas, especially as they might relate to immortalism today. I move from the early European materialist elimination of the immortal and “primitive” soul to the rise of modern psychology in the United States in the 1880s and 1890s; I examine the biological sciences in the period from the 1880s to the mid-1930s, when immortality was indeed a viable research project, with a brief review of its decline upon acceptance of the Hayflick limit, at which point decay again became understood as an inevitable part of normative biology; and I detail the banning of cryonics and immortality research from the American Society for Cryobiology as a way to explore scientific boundary work (Gieryn 1983). As science and technology studies scholars have consistently shown, part of the key work of official science and scientists has been the boundary-maintenance work of keeping vitalism, mysticism, supernaturalism, and other versions of the incredible, unpalatable, primitive, and whackadoo out of their domain. Immortality, as the space of whackadoo par excellence, where the boundary between credible and incredible, acceptable and taboo, constantly dissolves, is also the space of science’s boundary-maintenance work.
For secularists, the concept of immortality has served as one of the strongest markers of religiosity from the beginnings of materialism in Europe in the eighteenth century. Under the general category of immortality, secularists lump together a diverse set of concepts, entities, and practices—from ghosts, to resurrection, to rebirth, to possession, to afterlife, to research on planarian worms—whose main unifying relationship is not their relationship to each other as a class but to a materialist ontology that undergirds the space and procedures of secularity, including the teleology that leads from a unique birth to a final death. As Sharon Kaufman and Lynn Morgan have written, “The broad topics of reincarnation and resurrection, along with the particular practices of exhumation and reburial, pose a challenge to our terms beginning and end, and to the discrete, linear, Eurocentric trajectory these terms imply” (2005, 320). Even within the history of Christianity in the West, immortality has taken many different forms, as Caroline Walker Bynum (1995) documents, from detailed debates about resurrection of the body to the immortality of the immaterial soul to their reunification. In Japan, large cross-sections of the population seem to believe in an afterlife without accepting the existence of a transcendent deity (Lock 2002); in the United States, many younger people who do not believe in the existence of God and many others who do not identify with a particular religion nevertheless believe in some sort of life after death, including a paradisiacal world beyond this one (Pew Forum on Religion and Public Life 2010; also see Gallup and Proctor 1983). Platonic arguments about the immortality of the soul were not “religious” arguments insofar as they did not depend on deities or spirits, though they did posit a world of ideals. Nor does every single set of practices and beliefs that has been categorized under the label of religion place much emphasis on afterlife belief systems. The most common examples are forms of Judaism and Confucianism (A. Segal 2004), but as mentioned earlier there are also non-monotheistic hunter-gatherers like the Hadza who tend to dispense with the usual hullabaloo over burial rites as well as beliefs in continuity of the person after death (Woodburn 1982). Even where there was a structure of judgment after death, the assessment did not necessarily involve a specific deity, as can be seen in the principles of karma in Hinduism or their equivalents in gnosticism or Orphism (Brandon 1967). In other words, the links between afterlife beliefs and theistic beliefs are neither universal nor necessary. The fact that afterlife beliefs7 and religion have been yoked together, even though they do not harbor a necessary connection, only emphasizes the point that the category of religion itself is problematic, as it seems to be cut up in arbitrary ways to fit various assumptions or agendas (Cantwell Smith 1991; Asad 1993).
Similarly, the elimination of the doctrines of the soul and of immortality, far from being a mere side effect of the elimination of a belief in God, was in itself an independent and crucial step on the way to establishing a materialist or “immanent frame” (C. Taylor 2007), and later a secular space devoid not only of religious dogma but of metaphysics altogether. The series of discussions in England and France that gave rise to a robust materialist conception of the world in the eighteenth century had as much to do with the existence of God as with a separate debate over the concept of the soul and its survival after death (R. Martin and Barresi 2006; Hecht 2003).
Materialist doctrines, broadly speaking, had their strongest foothold in the French Enlightenment and the coterie of philosophers, mathematicians, and writers revolving around Diderot and his Encyclopédie project, culminating in the most renowned early materialist exposition, Système de la nature, written by Baron d’Holbach (1835), a friend of both Diderot’s and Hume’s.8 What came to be known as French materialism was driven by two goals: first, to explain the world based on simple materialist premises of sense perception and cause and effect; and second, equally important and overtly political, to refute religious doctrines as such, in particular the existence of God and an immaterial soul, and by extension its immortality. In other words, it was a question not only of presenting a materialist argument about the composition of the world but also of preparing that materialism to effectively counter some of the doctrines posited by religion and “spiritualism,” including of the Cartesian sort, which separated mind from its material basis.
Jennifer Michael Hecht’s (2003) detailed account of the Society for Mutual Autopsy, a group of French materialists who were dedicated to dissecting brains after death, illustrates the extent to which early materialist agendas were driven by an obsession with the concept of the soul. On the one hand, the society’s mission was to show that there were correlations between the physical features of a brain and that person’s mental characteristics. On the other, that objective was determined by an ideological goal: to prove the nonexistence of the soul. Hecht suggests that this “deconsecrating project guided a host of endeavours at the dawn of professional science” (2003, 254). Materialist scientists and anthropologists wanted to discredit the sacred and transform it into the profane by taking on specific aspects of what was considered religious or supernatural: religious ecstasy, demonic possession, faith cures, and the soul’s immateriality. The crucial criteria for scientific work included methodological rigor, repeatability, and verifiability, but they also included the production of such knowledge as would “lock out philosophy and religion” (2003, 50). Against the Catholic notion of the sanctity of the whole body, the integrity of which was crucial to resurrection, and against the noumenal soul, they publicly denigrated their own bodies after death, calling them “rotting garbage” or “an assemblage of decomposing matter,” positions that were also articulated as anticlerical. Because of their materialist commitments, and therefore their aversion to anything resembling a soul, atheists, freethinkers, and materialists eventually took aim at Plato, Kant, and Descartes, that is, at the metaphysicians or rationalists. Grappling with and proving the nonexistence of souls or soullike concepts—whether or not sourced in religion—has been a boundary-maintenance activity of modern secular authority. At the same time, Hecht shows, they “agonized over the end of the soul and its consequences for humanity” (2003, 2). The creation of the Society for Mutual Autopsy was an attempt to make sense of their soulless death by offering up their own bodies and brains to science and progress, giving them a chance to connect their existence to eternity, or at least posterity.
In early United States history, materialism had significant influence in some circles, around Jefferson for example (Richardson 2005),9 but most commonly it was presented as a threat, a prelude to total moral dissolution, a French corruption that could only be countered by reemphasizing the immortality of the soul. In reaction, antimaterialist and millenarian tracts proliferated through the nineteenth century, including during what historians of American religion call the second and third great awakenings—often blaming the evil on the French. For example, in Immortality of the Soul and Destiny of the Wicked, Rev. N. L. Rice (2005) attacks materialism as a French invention and the greatest evil ever thought up: “In making man a mere material organism and denying his free agency, they furnish the best apology for all wickedness.” It was a common argument that sin afflicted every individual and could only be battled with help from theology and the church. The church and its discipline prepared the individual for a proper life, and immortality was a promise delivered upon the completion of a good Christian life. Without that threat and promise, humans would turn to the wickedness inherent to this world.
Among less reactionary religious groups and institutions, there was a frenzy of attempts to examine and grapple with the effects of science on doctrines of immortality. A number of symposia were organized at the turn of the century, including the “Christian Register Symposium on Science and Immortality” in 1887 and another one in Chicago in 1902 called “The Proof of Life after Death: A Twentieth Century Symposium.” In the discussions of these symposia, as in a number of books and articles, it is easy to see the now familiar acrobatics of religion attempting to adjust to a scientific worldview: to either attempt to reconcile the Christian view of immortality with a scientific view of the world by suggesting that their concepts did not contradict each other, that they were merely given to different domains, that science had nothing to say about the spiritual; or else to fall back upon inner experience—including its modern, secular vocabularies—which in the case of immortality meant that a “natural and not a miraculous” order of existence was acknowledged, in the words of Rev. Theodore Munger, since an inner sense of immortality could nevertheless be experienced within the self through the self’s consciousness of Christ (Munger quoted in Leuba 1921, 150–51). Liberal theologians like Friedrich Schleiermacher wanted to reconcile theology with Enlightenment critiques by authenticating religion as a personal and empirical experience (see Proudfoot 1985, xiii). A similar defensiveness plagues current theological accounts of the afterlife that tend to justify immortality doctrines with reference to the increasing authority of doctrines of secular immanence and the biological body. They argue, for example, that belief is justified until disproved (Zaleski 1996), or else they justify belief through indirect evidence, by marshaling data from scientific experiments on near-death experiences and the irreducibility of the mind to matter in order to posit the possibility, even within science, of the existence of a soul (Hick 1994; Zaleski 1996). Even for the religious, discussions about the afterlife and immortality are already entangled in secular discussions about their impossibility.
The naturalist or materialist view on death was not the only or the predominant view, but from the end of the eighteenth century onward, a growing number of people in Western Europe and North America came to live with the fear that both their lives and their selves could or would come to an absolute end with their death (Sloane 1991; Aries 1975). Matthew Arnold, the British poet and school inspector, said of his own teachers that they “purged my faith,” and one result was a new relationship to living and dying: “What doest thou in this living tomb?” he queried of humanity, signaling a moment when living in the face of the finality of death was suddenly being experienced as a form of dying, the person—“thou”—trapped in the living tomb of the body. Similarly, Wordsworth, in one of the first secular explorations of memory, nationality, and personal identity, The Prelude, laments that all we’re doing here on Earth is “unprofitably traveling toward the grave.” Narratives of people who had “lost” their faith or been “purged” of it through secular proselytization proliferated. The possibility that there was no afterlife became, if not a certainty, at least a definite consideration that in itself began to shape a particular orientation toward life imagined now as a long death, a living toward dying.
The materialization of the soul was never properly completed. Instead, there arose new concepts that tried to capture or explain various aspects of what once had been delineated by the concept of the soul (see Flanagan 1991). And these concepts were developed through (and in turn generated) new disciplines and modes of knowledge, starting with the social sciences as they separated themselves from the humanities and theology on the basis of the secular differentiation of spheres of value and expertise (Calhoun et al. 2011, 4), followed by new studies of the “mind,” mainly psychology. Descartes was the first to use “mind” in this sense. Though he believed in God and his epistemology ultimately depended on God’s existence, Descartes started using mens, “mind,” in order to avoid using “soul” as his referent and still refer to some of the same capacities and attributes. Søren Kierkegaard turned spirit into self, and William James, who studied the physiology of the brain, developed the term “spiritual self” to denote “inner subjective being.” James approached religious experience as an irrefutable if individual empirical event rather than as a doctrinal or metaphysical proposition; he thus provided religion with an authentic, real, and autonomous domain based on mental states that could be called mystical or religious (also see Coon 2000). This pragmatism did little to resolve the problems faced by materialism, however, and mind and matter remained irreducible to each other. One might cite later attempts to occupy the space left by the elimination of the soul: concepts of self, person, and personality in sociology, anthropology, and psychology; and the cluster of concepts glossed as consciousness (awareness, autonomy, subjectivity, capacity to feel pain, etc.) in political science, law, philosophy, critical theory, and neuroscience.
Dilemmas over the proper way to think of persons and their extensions beyond the body (and so beyond death) are caught up in the secular–religious, modern–primitive divides over beliefs, and so it is no accident that the problems of death and immortality appear as crucial pivots in the founding of anthropological theories of culture as well as psychology just when materialism was shaping its own view of a world without a prime mover, without souls, and with a final death as the central fact of life itself. Originally, in eighteenth-century France, psychologie was considered to denote doctrines about the soul, that aspect of being human without which one would not be human. But as psychology began to make claims as a scientific discipline, “soul” was slowly extradited to the domain of religion and there was a clear effort to steer away from such formulations (N. Rose 1996). Prior to 1880 there were no academic departments of psychology in American universities (Fuller 2001). Classes in psychology, if there were any, were usually taught as part of theology or philosophy courses, often as an extension of Protestant moral theology (Fuller 2001, 125). The brain, for its part, was entirely under the purview of physiology departments. Within a decade, however, almost every American university had established a separate department of psychology—to teach a topic now being referred to as “psychology without a soul.” The history of early psychology in the United States clearly traces a shift from the study of mind as soul to a study of the mind without soul—in other words, as a subject within the purview of science. In this sense, the “mind” is a secular concept; it is one of the specifically designated nonreligious placeholders for the discarded idea of soul.
That first generation of American professional psychologists, themselves the sons of Protestant ministers, saw their main task as the separation of the study of the mind from theological concerns (Fuller 2001, 125). It is to one of them, Professor James Leuba, that we owe the first figures recording American attitudes toward an afterlife, dating back to the turn of the twentieth century. A relatively influential member of the first generation of American psychologists and the son of religious parents, Leuba had anticipated a career in the ministry but in a crisis of faith realized that he could not reconcile his new scientific understanding of the world and the difficulties of modern pluralism with his traditional religious teachings (Leuba 1921). His “conversion” to psychology was a means of studying human nature more scientifically—and, in his case, of refuting religious thought (Fuller 2001, 197). Taking on the soul and immortality was one way to achieve this and, indeed, a crucial one for a discipline like psychology. The nonexistence of the soul was thus not just a fact about reality for early psychology but an elimination project to be carried out.
Leuba sent out a questionnaire to a thousand scientists chosen at random from the publication American Men of Science, the majority of them college and university professors. About half of the respondents said they disbelieved in immortality, had doubts about it, or were agnostic. Leuba presented the same questionnaire to lists culled from the membership of the American Historical Association, the American Psychological Association, and the American Sociological Association, obtaining similar results. Interestingly, of those who said they did not believe in immortality, 45 percent said they nevertheless desired it (1921, 261), and overall, a larger percentage said they believed in immortality than said they believed in God, a result that seems consistent with contemporary results.10 Leuba’s own conclusions were twofold. First was a version of secularization theory: Christianity faced a problem of belief as more and more people were abandoning the two most important tenets of the faith—the existence of God and the immortality of the soul, which, it must be noted, were held independently. Second, he theorized sociologically in a Durkheimian vein that the condition was related to the rise of individualism and the attendant loss of institutional authority, specifically Christian institutional authority.
If psychology’s genealogy runs through immortality via the secularization of a Christian soul, one might say that anthropology’s runs through the secularization of the souls of others. Anthropology begins with E. B. Tylor’s well-known and influential schema of cultural evolution (1958), where the animism of “savages”—the belief that a whole range of objects had spirits or souls—is transformed through progressive stages of abstraction into supposedly more complex monotheistic concepts of an immaterial soul and doctrines of retribution. Having conceived of this schema of cultural evolution, based on a Spencerian social evolution model, Tylor sought to explain how a belief in nonphysical spirits could have arisen to begin with. As a materialist, he took it for granted that these entities do not in reality exist because they cannot be observed through sense perception in the same way that objects of science could. As a believer in the psychic unity of mankind, he had to explain the phenomenon by referring to reasoning and experience that could translate across time and culture. He speculated that the idea of spirit arose from dreams and memories of dead relatives—instances in which a primitive mind might take literally a vision of something that, Tylor says, does not exist outside that vision. Apparently confused by the goings-on inside their own minds, “ancient savage philosophers” (12) concluded, according to Tylor, that every person also has a “phantom” or an image. According to this view (again popular today in the cognitive anthropology of religion, not to speak of the world of selfies), the entire apparatus we call religion arose through and stands on an original cognitive error related to death and the absence of the person.
In 1911, James Frazer, whose more famous work connected “savage” doctrines to Christian ones through the category of sacrifice, turned his focus to immortality in a series of lectures at the University of St. Andrews, later published as the first volume of The Belief in Immortality and the Worship of the Dead. Before foraging through his usual sources of travelers, missionaries, proto-ethnologists, colonial administrators, and other Western mythmakers to catalog the range of beliefs, Frazer, who influenced one of the founders of cryonics, dedicates a few sections to speculating on the theme of immortality in the constitution of humankind as a whole. By Frazer’s time, the Tylorean immortality story, with origins in a sort of “primitive” illusion refined by the Greeks and Christians and superseded by modern science, had already become a common explanation of the stubborn fact of religiosity in light of empirical facts and theories of human rationality. Frazer (1913, 29) himself admitted: “It is perhaps the commonest and most familiar that has yet been propounded.” Frazer’s goal was to argue that beliefs were not consistent across cultures and, more, that Christians and moderns still harbor illusions of continuity. He stated his central question thus: “How does it happen that men in all countries and at all stages of ignorance or knowledge so commonly suppose that when they die their consciousness will still persist for an indefinite time after the decay of the body?” (1913, 26). That the opposition of the illusions of the mind and putrefaction of the body should be the binary through which the question of immortality, of life, death, and the afterlife, gets raised is itself a modern way of thinking about the issue—after all, in many places putrefaction is only a part of the material trajectory of the body on its way to clean skeletal remains (Bloch 1971; Seremetakis 1991) and so is considered part of the continuity, rather than rupture, of the relations between the living and the dead, part of the process of engaging with the reality of death and the ensuing grief.
At any rate, Frazer’s answer to his own question regarding immortality was like Tylor’s regarding religion: it is some sort of cognitive error based on the illusion that dead people are not dead because they appear in dreams. This not only requires the assumption that people, mainly nonwhite people, confused dreams with reality, but is also a way of drawing and monitoring the boundary of the unreal and the real and designating who belonged on which side, with rational white Euro-Americans having the most proper access to reality. Between Tylor and Frazer, immortality had become racialized as a key marker of primitivism for moderns—and so, again, nineteenth-century anthropologists prove themselves better as mythmakers of modernity than as chroniclers of other peoples in other places.11
It was Durkheim who, in his distaste for psychological explanations, first pointed out that the argument about dreams and the dead presupposes what it sets out to explain, that is, the mysterious idea of the spirit or deity: How did these memories and dreams get transformed into concepts like spirits or gods? As Durkheim asked in his critique of Tylor, couldn’t they just have been viewed simply as dreams or memories? Or did the dreamers have what Durkheim called “the faculty . . . of adding something to the real” (1995, 469)? Though Durkheim did not take on immortality directly, he suggested that it was the endurance of society or social forms, preceding and surviving the individual, that materially or literally represented immortality. Society was the soul.
Durkheim’s student Robert Hertz (1960) further developed this sociological hermeneutics of afterlife and immortality. While examining what he saw as the mystery of the double burial, Hertz argued that the practical reasons of hygiene are insufficient to explain a second burial. The belief that the soul is free to depart the body only once the bones, and nothing but the bones, are left would not necessitate a second burial, especially one including such pomp and cost. Death and the afterlife must be understood not in terms of beliefs but as a process dealing with the social person. The person is never simply a person, but a member of a group, formed by the group, invested in by the group. “The society of which that individual was a member formed him by means of true rites of consecration, and has put to work energies proportionate to the social status of the deceased: his destruction is tantamount to sacrilege” (1960, 77). Thus, he concluded, “when a man dies, society loses in him much more than a unit; it is stricken in the very principle of its life, in the faith it has in itself. . . . Because it believes in itself, a healthy society cannot admit that an individual who was part of its own substance, and on whom it has set its mark, shall be lost forever” (79).
Hertz suggests that a person’s mortality is to be reckoned with as neither a biological nor an individual psychological event but as one that threatens the continuity of the social group. Consequently, he comes to understand burial rites as a process addressing these two problems: the gradual replacement of the person as a social member within a permanent social structure, and the healing of the wound in the image society has of itself as something lasting or permanent. After a death, society must “regenerate” itself. Hertz’s argument, therefore, is that “in establishing a society of the dead, the society of the living recreates itself” (1960, 72).12 He concludes: “Society imparts its own character of permanence to the individuals who compose it: because it feels itself immortal and wants to be so, it cannot normally believe that its members, above all those in whom it incarnates itself and with whom it identifies itself, should be fated to die” (77). In moving away from false belief to explain the problem of death and immortality, Durkheimian functionalism reified society as a transcendent agent (it feels, it incarnates, it believes).
As the afterlife was transformed into and theorized as social continuity, as the psychology of grief (as opposed to the ontology of ghosts), or as the survival of one’s “works” or life project (Aries 1975; Walter 1997; Lifton 1975), the affective and practical ways in which contact between the dead and the living persists within secular settings came to be generally ignored or sequestered. Tony Walter (2005) has astutely outlined some of these persisting influences in the modern secular world between “the world of the living and the world of the dead.” Contrary to the common assertion that “the western mourner has little to do but feel the grief and reconstruct a life without the dead” and that in the “modern West” “the dead are cut off from the living, with no traffic between the two worlds,” he writes, there is “considerable traffic, and several professions make a living out of directing the traffic, or perhaps more accurately, as messengers or telephonists, bringing information from the dead to the living” (2005, 407). The traffic is managed formally and institutionally by the army of counselors, clergy, and psychologists ministering to mourners after a death, as well as networks of organ procurers and recipients, coroners and medical examiners, lawyers, pathologists, and even police officers and spiritualist mediums. These specialists, Walter suggests, are mobilized to “mediate” between the living and the dead, an activity opposed to the dogmas of both the church and secularism, for which there is nothing after death to mediate with (2005, 406). In fact, Walter argues, these specializations may have arisen precisely as a response to the structural separation of the living and the dead in a modernized, urban setting, where most dying takes place away from home and in hospitals.
The externalization of aspects of personhood onto digital media—digital personhood—has opened up new secular “afterlife” practices and analyses, even new immortality regimes, concerned with the legal and personal dilemmas of continuity. Questions raised by the maintenance of Facebook profiles after death, avatars that can continue tweets on your behalf, the management of digital estates and identities, and a gamut of other recent afterlife practices have brought these questions to the fore (e.g., Kneese 2018), producing impure domains in which continuity of the person after death cannot automatically be relegated to the dustbin of religious illusion or humanist metaphorization.
Despite the range of psychological, anthropological, and sociological engagements with notions of immortality and forms of continuity, in a secular immanent world in which the biological body appears as the fundamental locus of existence—a biocentric world in which the meaning of life is reducible to the condition of the body (Wynter 2003; Anidjar 2011; Jasanoff 2018)—questions of immortality rebound inevitably back to biology, at different scales, from the cell to the body to the population (Palladino 2016). Biological immortality was a live debate in early evolutionary biology in the 1880s and emerged as a productive scientific object through the 1930s. On the one hand, the body itself, resolutely biological and finite, was filled with time via measures of cellular duration and turned into a timekeeping device through which biological time became a concept with its own measures aside from chronological time. That is, time was transformed from a notion in physics and metaphysics to a process in biology. On the other, rather than making death a condition of evolution and a necessity of biology, evolutionary theory opened up the possibility that biological death was not inevitable or natural and that biological entities could live on indefinitely. Biological finitude and infinitude thus rose in tandem from the same horizon, the limits of the former constantly conjuring hopes of the latter.
A German evolutionary biologist following fast in the footsteps of Darwin, August Weismann was among the most famous and controversial figures to posit that death was a contingent effect of natural selection, “secondarily acquired as an adaptation,” and thus not an ontological necessity. Making the evolutionary argument, however, triggered “the most difficult problems in the whole range of physiology—the question of the origin of death” (1891, 20). If it was not in the nature of living things to die, then why would biological death come to exist at all? In a famous 1881 lecture called “The Duration of Life,” Weismann concluded that “life is endowed with a fixed duration, not because it is contrary to its nature to be unlimited, but because the unlimited existence of individuals would be a luxury without any corresponding advantage” (1891, 25). For evidence he marshaled examples from “low organisms” that “do not die” but may be “destroyed.” The crucial example came from protozoa that multiply by dividing, not reproducing. Division gives rise to new individuals but does not result in the death of any, prompting him to muse: “This process cannot be truly called death. Where is the dead body? what is it that dies? Nothing dies; the body of the animal only divides into two similar parts, possessing the same constitution” (26). In a lecture a decade later, the biologist Alfred Binet rewrote that protozoan proposition into the more felicitous formulation: “In multiplication by division there are no corpses” (1890, 22).
Weismann’s “most difficult” question regarding the origins of death then arises in this manner: How did complex species lose the capacity to not die or the capacity for unlimited reproduction? Weismann’s answer, based also on the work of other biologists on reproduction,13 depended on the remarkably prescient idea that differentiation necessitated a separation between germ cells and somatic cells, such that germ plasm or reproductive cells continue the work of reproduction, whereas the others take on the work of specialization. Somatic cells lost this capacity to reproduce indefinitely in order to be able to develop into specific things, to gain distinct and complex capacities. The trade-off meant that their reproductive capacities got limited to a “fixed number.” As a result, the organism’s specialized parts could get damaged in the long run, and the accumulation of damage is not advantageous. “Normal death could not take place among unicellular organisms, because the individual and the reproductive cell are one and the same: on the other hand, normal death is possible, and as we see, has made its appearance, among multicellular organisms in which the somatic and reproductive cells are distinct” (1891, 29).
Weismann’s theories of division and the continuity of germ plasm in time, beyond individual deaths, were controversial. Other biologists kept trying to refute or prove them, and a series of experiments on unicellular organisms and their ability to persist in time ensued from the 1880s on (see Binet 1890; Calkins 1914). There was no final agreement, with most experiments into the 1920s apparently showing that protozoa would eventually die, though after hundreds of divisions, and at least one showing that they would continue indefinitely if the nutrient fluid was cleaned and replenished regularly. Reviewing the empirical evidence, Binet assessed Weismann’s theory based on a philosophical notion of identity: the concept of continuity is warranted only if “in division a very small number of elements is replaced.” By this criterion, Binet concluded, Weismann’s theory could neither be proved nor refuted according to the “observed facts” (1890, 37).
Nevertheless, from Weismann on, immortality as a concept and research object in biology got established in terms of the contingency or nonnecessity of death. In arguing that death was not to be explained by internal cellular and physiological mechanisms, that it was not inherent in biology as such, Weismann was also pointing to a crucial and ongoing problem in biology. In the proliferation and continuation of life, what unit of analysis was paramount: the population or species, the individual organism, or the germ cells? (See Palladino 2016 for an important discussion.)
Weismann’s ideas on immortality resonated beyond the realm of biology, getting the attention of people such as Sigmund Freud as well as James Frazer. In his work on immortality discussed above, Frazer makes the astounding suggestion that in what he called “savage” culture, humans were all naturally immortal but were subject to sorcerous intentions or accidents that could be ascribed to such intentions. Immortality, then, was the original human idea, with the necessity of death only added on later. He disapprovingly links that mode of thought to modern evolutionary ideas. Liberally quoting Weismann and referring to Alfred Wallace’s view that death is not a “natural necessity,” Frazer writes wryly:
Thus it appears that two of the most eminent biologists of our time agree with savages in thinking that death is by no means a natural necessity for all living beings. They only differ from savages in this, that whereas savages look upon death as the result of a deplorable accident, our men of science regard it as a beneficent reform instituted by nature as a means of adjusting the numbers of living beings to the quantity of the food supply, and so tending to the improvement and therefore on the whole to the happiness of the species. (1913, 86)
Frazer’s humanist consternation must be understood in a context where mortality and immortality were heated topics of debate and research, as science was transforming ideas about the living organism, through theories like Weismann’s in one vein and, in another, through what Martin Pernick (1988) calls the “decentralization of the person.” By the early twentieth century, work on organs as separate, replaceable entities was well on its way, and Jacques Loeb’s conceptualization of biology through an engineering ideal (Pauly 1987) had gained major ground. The first corneal transplant came as early as 1905, and by the 1920s the biologist, eugenicist, and Nobel Prize winner Alexis Carrel was preserving organs outside bodies using a perfusion pump that would become the model later used for preserving organs before transplant. The pump was designed by and coproduced with Carrel’s partner at the Rockefeller Institute, Charles Lindbergh, the first man to fly solo across the Atlantic. It was their common interest in “immortality” (and eugenics) that provided the grounds for their collaboration.
Indeed, Carrel’s most famous experiments consisted of his work with “immortal” chicken heart cells he isolated and maintained for twenty years in a flask, claiming they divided and reproduced as long as they were given the necessary nutrients. The chicken’s cells thus outlived the chicken. The chicken tissue experiments were an attempt to prove Carrel’s hunch that immortal cells did exist in nature and that rejuvenation was possible (Landecker 2007; Friedman 2007). In a Time magazine article from 1925 we read this intentionally jarring juxtaposition in the headline “Science: Physical Immortality.” The article is about Professor Carrel’s experiments as reported to the magazine by Professor Green returning home to Manhattan after a stay in Europe, where he examined Carrel’s experiment with chicken heart tissues. Professor Green was reportedly so excited that “he could not keep talking with a ship-news reporter,” exclaiming, “Dr. Carrel introduces immortality in a physical sense. It is there before your eyes, and so long as this tissue is nurtured and irrigated it will live. It cannot die.”14
The mainstream press was effusive about the prospects of immortality. The New York Times headline called it a “Miracle” that “Points Way to Avert Old Age,” while The World’s Work was more symbolic: “Flesh That Is Immortal.” Carrel’s experiments led to an outpouring of applause from colleagues in the field of tissue culture and inspired a series of—what else—science fiction ruminations, another case of science fused with science fiction.
In this mechanization of the body what was also at stake was time, and Carrel’s concepts were influenced by Henri Bergson’s ruminations on time. Time was being linked to the body, but by making time an interior unfolding, Bergson did not intend to bring it into the realm of natural laws or of subjectivity. To the contrary, he was working explicitly against the Kantian locked-in syndrome, where the exterior world of matter was “forbidden” to the interior world of perception and mental goings-on. Duration and the processes that gave rise to it were not an effect of mind, of a closed-off perception; they were accessible by mind, by a kind of “introspection”—which Bergson termed “intellectual auscultation” (2007), the latter being an older medical term for the action of listening to internal organs via a stethoscope. Bergson actually wanted to give time over to the realm of biology, which he seemed to think of as a special kind of matter. His biology was not the usual biology of mechanistic parts and measures, of a finite, determinable causality. His kind of biology, which would be derogated as vitalism, had as its object living forms whose main characteristic was that they were not determined, that they continually gave rise to new possibilities. This meant that the future of biological forms was unknowable, unpredictable, not contained in the prior moment or event. Biology was not a series of states fixed by their past or by human-derived physical laws; its states were “constantly becoming” and “not amenable to measure” (1950, 231). Freedom and measure were antagonists, as were duration and time, the latter being subject to divisions that were measured, the former being an indivisible flow.
The terms of Bergson’s distinction between duration and successive time, interior motions and exterior measurements, are mutually exclusive—one was fabricated, the other real; one was full of happenings, rhythmicity, and interconnected pasts, the other was “homogeneous” (in his own, rather than in Benjamin’s later, words). Despite disagreements with his contemporaries, famously with Einstein, Bergson was symptomatic of his age: in a period in which time was no longer unified and relativity was just being validated in physics, his opposition between duration and time reproduced an already-existing bifurcation, a secular disconnect between individual finite time and an ongoing universal time (a bifurcation explored fully in chapter 4).
Another revealing and crucial figure in the multiple ways in which biology, death, and time became entangled, and in how the debates about the naturalness of death or not dying got framed, was Raymond Pearl, an American friend of Carrel, a zoologist, biometrician, statistician, geneticist, and longtime professor at Johns Hopkins, where he also founded the important journal Human Biology. A eugenicist initially interested in heredity and population genetics, Pearl started his work with planarian worms and later on the egg-laying capacities of chickens. But it was his work on the scientific study of population growth and statistical and experimental biology that got him noticed. In 1919 he joined the newly established School of Hygiene and Public Health at Johns Hopkins as professor of biometry and vital statistics, where he began his pioneering work on rates of life and death and the prolongation of life. He is the originator of what has come to be known as the “rate of living” hypothesis—the theory that life duration is a function of energy expenditure, or, put in Victorian social and moral terms, the faster you live, the sooner you die. Among the range of positions he held was a two-year stint (1934–36) as president of the American Association of Physical Anthropologists; he corresponded with Franz Boas, Alfred Kroeber, and Bronisław Malinowski.
For Pearl, the prolongation of life was a fundamental concern of all medical science and biology. “The fundamental purpose of learning the underlying principles of vital processes,” he writes in The Biology of Death, is “that it might ultimately be possible to stretch the length of each individual’s life on earth to the greatest attainable degree” (1922, 17). What the “greatest attainable degree” might be was the key unanswered question. Was there an internal limit to life, to “life duration”? More precisely, was there a final upper limit to the duration of an individual body? Or was biology plastic?
Answering that was Pearl’s lifework. He carried out one of the first comprehensive longitudinal studies of longevity in humans and worked on life tables, survivorship (he did the first study linking cigarettes and early death), risk, and ultimately life duration—which he preferred over the term “longevity.”15 Synthesizing Weismann’s theories and other new work on germ cells, the work of his first mentor, H. S. Jennings, on “infusoria,”16 and aspects of evolutionary theory, Pearl noted not only that germ cells had to be “immortal” but also that—contrary to what Weismann held—regular cells could potentially live forever too. His conclusion came out of the hypothesis that “the processes of mortality are essentially physico-chemical in nature, and follow physico-chemical laws” (1922, 51). In part these formulations were ways for Pearl the eugenicist to counter the claims of social reformists who attributed everything to social events and thus advocated for social remedies to illness (Ramsden 2002). But he also was making a philosophical point: both immortality and mortality could be conceived of within nature, according to natural laws, and need not be taken as processes and possibilities belonging to theology and animated by a mysterious vitalism or some notion of the soul, against which he would soon rail.
Although his main efforts were directed at the finitude of biology—inherent vitality, he would write, is of the “nature of a constant for the individual” (1928, 127), that is, the upper limit of the length of life for a species and an organism was relatively fixed barring external factors—Pearl did not consider death an inherent function of nature. Drawing on Weismann, he made of life an abstraction unified over time by an overarching process:
A break or discontinuity in its progression has never occurred since its first appearance. Discontinuity of existence appertains not to life, but only to one part of the makeup of a portion of one large class of living things. This is certain, from the facts already presented. Natural death is a new thing which has appeared in the course of evolution, and its appearance is concomitant with, and evidently in a broad sense, caused by that relatively early evolutionary specialization which set apart and differentiated certain cells of the organism for the exclusive business of carrying on all functions of the body other than reproduction. We are able to free ourselves, once and for all, of the notion that death is a necessary attribute or inevitable consequence of life. It is nothing of the sort. Life can and does all the time go on without death. The somatic death of higher multicellular organisms is simply the price they pay for the privilege of enjoying those higher specializations of structure and function which have been added on as a side line to the main business of living things, which is to pass on in unbroken continuity the never-dimmed fire of life itself. (1922, 42)
Particular life-forms (an organism or a species) are painted in as formal manifestations and as specialized portions of life itself, whose fire has not been extinguished since its first appearance on Earth! Underlying Pearl’s focus on life duration one can excavate an explicitly secular set of tensions between the materialist finitude of the organism and the possibility of breaking or denying those limits. Relying on a transcendent abstraction, life as a form, he nevertheless had to be careful to ritually ward off mysticism from his version of continuity in the undying infinitude he called life. The first few pages of The Biology of Death are dedicated to a fiery secular sermon against the soul and spiritualism. Humans have always tried to prolong life by natural and supernatural means, Pearl starts. But advances made by the former would always appear slight—on the order of days or years, at best, which is an excruciatingly narrow result. “When conceived in any historical sense,” that is, compared to the depth of biological time and the infinite historical timeline, “man’s body”—that is, the body without the soul—“plainly and palpably returns to dust, after the briefest of intervals, measured in terms of cosmic evolution” (1922, 17). No wonder, then, that humans devised another means for “infinite continuation” by conjuring up that “impalpable portion of man’s being which is called the soul.” This illusion “has permitted many millions of people to derive a real comfort of soul in sorrow, and a fairly abiding tranquility of mind in general from the belief that immortality is a reality.” But this also had the “evil” effect of “opening the way” to “recurring mental epidemics of that intimate mixture of hyper-credulity, hyper-knavery, and mysticism” (18). Pearl felt his era, far from being a secular turning point, was suffering “the most violent and destructive epidemic of this sort which has ever occurred.”
As a book, and perhaps as a science, the biology of death necessitated a salvo against the theology of the soul, because grappling with infinitude, immortality, and continuity, even on supposedly biological grounds, risked bringing science too close to the transcendence it was trying to shed. The elimination of the notion of the immortality of the soul from the minds of men, the Enlightenment’s desouling project (Flanagan 2002), was a key part of the development of modern science; thus any Methuselah would have to be created via the internal or finite goings-on of biology. That these biologists concerned with time, aside from being secularists, were also avid eugenicists should not be a surprise (Pearl later abandoned his eugenicist views, but Carrel went on to work alongside Vichy France on biological superiority). Evolution had embedded within it a notion of time in its expositions on descent and generation. In the progressivist vision, changes were improvements that were linked in the European mind to race. White Europeans were considered the most advanced race and thereby had a claim on the future, being at the tip of the arrow of biological time. In other words, staking a claim on the future via biology was precisely what eugenics was about, joining the progress of history to the progress of biology. Even as science has changed, it is important to keep this in mind when thinking about immortality, the ways in which claims are made on the future, and who makes those claims, a topic I will deal with in the final chapter.
Once Carrel’s experiments seemed to scaffold Weismann’s theoretical ideas, biologists happily took up the assumption that cell immortality was a fact and continued attempting to produce immortal cell lines. Cell immortality remained an exciting dogma, accepted to such an extent that when cells did die in labs, as they often did, their death was attributed to mistakes in the scientists’ technique, not to the mortality of the cells (Hall 2003, 26). In the end, the opposite turned out to be the case. It was Carrel’s cells that had remained “alive” due to experimental sloppiness: by feeding the cell cultures with nutrients taken from freshly killed chicken embryos, Carrel’s lab was accidentally supplying fresh new cells to the culture, creating the illusion of immortality. This “sociology of mass delusion,” as Hall (2003) calls it, held sway for almost half a century.
At the very end of the 1950s, the biologist Leonard Hayflick showed that cells divide a finite, not infinite, number of times. Hayflick was convinced that cell death was not a function of experimental sloppiness but intrinsic to cell biology itself, or “senescence at the cellular level.” His first attempt at publishing these observations was famously rejected, but after his paper appeared in 1961, it was Hayflick’s conclusions that eventually became accepted in cell biology (Palladino 2016; M. Cooper 2006)—“we overthrough [sic] that dogma,” Hayflick exclaimed triumphantly later (Hayflick 1998). Hayflick showed that most normal cells—cancer and germline cells being the exceptions—divide up to about sixty times, after which they stop dividing and wither away. This limit is known as the Hayflick limit. Hayflick’s discovery put an end to the dream that cells were inherently immortal, and the common understanding became that immortality “was the hallmark of the pathological cell line” (M. Cooper 2006, 1), such that aging and its outcome, death, were inevitable processes of normative biology. The cell immortality myth dissolved; research into immortality turned into research into cell death and eventually into the mechanisms of “molecular disorder” (Hayflick 1998) and what came to be known as apoptosis, or cell suicide, all quite contrary to the enthusiasm over immortality.
Nevertheless, the isolation and reproduction of cells outside the body and inside a laboratory indicated at one and the same time the technological ability to intervene in cells and the concomitant transformation of cells into technologies, of life into artifice and back, suggesting that the difference between the two was itself artificial. Cellular life was open, potentially subject to manipulations that could extend its functions indefinitely—a possibility that gave rise to biogerontology, which, unlike regular gerontology’s soft and social approach, focused on senescence at the cellular level. As Melinda Cooper (2006) documents, tissue engineering, the key vision of biogerontology, led around the turn of the twenty-first century to the reinvention of immortality through embryonic stem-cell research, making the category of potentiality, with all its undetermined and open future possibilities, the central idea of the research—as well as its capitalization. In a way, this echoed Weismann’s idea that specialization had introduced death into the natural world, whereas the pluripotency of embryonic stem cells prior to specialization could help revolutionize biology, and especially rejuvenation techniques. The promise was that death could again be delayed indefinitely if pluripotent cells could repair the damage accrued by specialized cells. Thus, techniques that demonstrated pluripotency at the cellular level promoted the transformation of the present into an imagined or desired but not inevitable future form, resurrecting an older immortalist imaginary.
Whackadoo Science and Boundary Work
Immortality as a concept and a research object has had a fraught contemporary history—at times being stigmatized, at others carrying forth a central hope, at times denoting primitive error, at others carrying modern promises, at times bearing a secular promise, at others threatening it with atavism. Because of its blurry status, the boundary work of science has remained vigilant around immortality and its associated terms.
The tensions between cryonics, marginalized as a “taboo science” (Frickel et al. 2010), and cryobiology, the cold science, provide an interesting case of such boundary work. The principle that cold preserved flesh had been posited at least since Francis Bacon was suddenly struck by a bolt of inspiration while traveling with the king’s physician on a snowy day. He jumped out of the coach, found a woman who sold chickens, bought and disemboweled one, and stuffed it with snow to see how it would fare in the long run. The chicken may have kept well, but it has been said that the cold gave Bacon a bad case of pneumonia, which soon thereafter killed him, a martyr to his own scientific methodology. Other seventeenth-century scientists became fascinated not by preservation for consumption but by the effect low temperatures seemed to have on the life and death of organisms. In 1670 Henry Power reported success in freezing and reviving vinegar eels, and a decade later Robert Boyle, apparently inspired by Bacon’s experiment, noted similar success in experiments on frogs and fish (Thomson 1964; Parry 2004). At the time, Boyle and others noted two problems with freezing. The first was their inability to control, reproduce, or maintain low temperatures artificially. The second was that certain frozen goods, such as plants, fruit, vegetables, organs, and organisms, seemed to be damaged after thawing (Parry 2004; Leibo 2005).
The first problem would be solved in the nineteenth century with the development of refrigeration, used for storage and transportation, particularly of meats. The liquefaction of gases achieved by several chemists at the end of the nineteenth century would change the landscape of low-temperature experiments and usher in the era of cryogenics proper, meaning the study of the behavior of matter at ultralow temperatures. Among these chemists was a Scot, James Dewar, who liquefied hydrogen and developed a vacuum-insulated storage capsule that could contain liquefied gases without allowing them to rapidly evaporate (Parry 2004; Leibo 2005). Today’s cryogenic storage devices are called dewars.
Although experiments on the effects of cooling on biology continued to take place in the early twentieth century, the research was not formalized until a Roman Catholic priest began his experiments on the survival rates of frog spermatozoa and yeast in the 1930s (Schmidt 2006; Parry 2004). Born in a Swiss mountain village, Basil Luyet immigrated to the United States in 1929 with doctorates in biology and physics from Geneva (Schmidt 2006), later taking up a position at St. Louis University and setting up the American Foundation for Biological Research, a laboratory designed to study low-temperature phenomena, where he conducted the majority of his work on cryopreservation (Leibo 2005; Schmidt 2006). As it was for some of his seventeenth-century counterparts, for Luyet cryogenics was a way into a larger existential question that arose at the intersection of his scientific and priestly duties: What is life? In the modern era that question has been scientifically, philosophically, and anthropologically as generative as it has been confusing. For Luyet, death was the way in: “To study and to know life I started to study death since death is the destruction of life,” he said in an interview (Mulvenna 1974).
Tacking back and forth between the metaphysical or existential questions and the material experimentation, Luyet observed that life could indeed be restored in some cases, that animation could cease and restart, and so he introduced a new in-between concept into the study of life, “latent life,” and the prospect of restoring “animation” to this state. The problem Luyet faced was the main problem of cryogenics, namely, the damage done to cells as a result of ice formation during the cooling process. This destroyed a great majority of cells (though not all), and as a result tissue and organs did not thaw viably. Luyet speculated on some solutions: rapid cooling, dehydration, and the use of certain materials as “protectants,” all of which he mentioned in his comprehensive 1940 study of cryogenics, Life and Death at Low Temperatures, a book that established him as the “father of cryobiology.” Although Luyet’s proposed solutions would prove to be in principle the right ones, in practice he himself never solved the problem of freezing injury and cell damage.
An experimental mishap involving chicken semen helped solve it. Spermatozoa had been a focus of low-temperature research since the 1930s, when two biologists experimented extensively with rabbit semen. In the late 1940s the Frenchman Jean Rostand used glycerol to freeze and thaw frog sperm but did not test its subsequent fertility (Leibo 2005). The main interested parties in these techniques were the various food industries, which understood the potential for controlled fertilization and artificial insemination. In 1947 the poultry industry in the UK charged Dr. Christopher Polge with the task of developing sperm preservation and artificial insemination techniques for fowl (Leibo 2005, 357). The team Polge worked with at Mill Hill included Alan Parkes, who would coin the term “cryobiology,” and Audrey Smith, who would come to be known as the “mother of cryobiology.” They were later joined by James Lovelock, who is credited with accidentally inventing the microwave oven and with developing what is today known as the Gaia hypothesis. Polge had his chemicals shipped to him at the Mill Hill labs and went about freezing chicken semen using what he had always used in such experiments as a protectant: fructose. Previous experiments with semen had yielded very poor results: after thawing, no more than 5 percent of the cells remained viable. Yet all of a sudden, Polge and his colleagues were repeatedly obtaining a 50 percent rate of viability. Having exhausted the old supply of fructose, the team turned to a brand-new supply. When the rate immediately dropped back down to the expected 5 percent, they deduced that something must have happened to the old bottles of fructose. The last few milliliters remaining in an old bottle were sent to a lab, where analysis showed they had been filled with glycerol, not fructose. The bottles had been mislabeled. By accident, glycerol became the first effective cryoprotectant.
The accident revolutionized the practice of storing and reviving cells. These could now be frozen and stored at ultracold temperatures for very extended periods before being revived. Today’s tissue and sperm banks, along with practices of artificial insemination, embryo preservation, and assisted reproduction, owe their existence to these developments in cryogenic science. So aside from being the main experimental “subjects” of cryobiology, spermatozoa also have been the main beneficiaries of low-temperature research, as shown by the proliferation of human sperm banks and the extensive use of cryopreserved semen in animal husbandry.
In the 1950s, Audrey Smith, James Lovelock, and Alan Parkes turned their attention to freezing and reanimating small mammals. Using glycerol as a cryoprotectant, they cooled twenty golden hamsters down to a colonic temperature of -5°C for fifty to seventy minutes before rapidly thawing them with an apparatus Lovelock reassembled from the components of a military radar. Of the twenty frozen hamsters, seventeen survived the thawing process: seven of these died within twenty-four hours of reanimation, two within ten days, and eight lived out the rest of their days close to their species’ normal life span (Parry 2004). Kept at a colonic temperature of only -5°C for no longer than an hour or so, the animals were not frozen through and through. That is, not all the water in their bodies had turned to ice—intracellular ice formation having been identified by then as the clear culprit in causing cell damage during the freezing process. In subsequent experiments in which they tried to keep animals frozen for more than seventy minutes, the animals rarely recovered. They also discovered that animals frozen at ultralow temperatures did not survive. They concluded that “suspended animation” (their term), or the long-term storage of animals and their subsequent revival, had no prospects (Parry 2004, 404)!
But these experiments were enough to convince Robert Ettinger, a high school physics teacher, that suspending human life was in theory possible, the difference being a matter of complexity and scale, not of principle. Indeed, when Ettinger circulated his first manuscript on the topic, it convinced the pioneering cryobiologist Jean Rostand, who contributed an optimistic preface to Ettinger’s book The Prospect of Immortality. “We don’t have long to wait before we shall know how to freeze the human organism without injuring it,” Rostand wrote. “When that happens, we shall have to replace cemeteries by dormitories, so that each of us may have the chance for immortality that the present state of knowledge seems to promise” (Rostand 1965, 9).
Rostand was an exception in his unabashed enthusiasm for cryonics, but the budding fields of cryobiology and transplant surgery had, at the very least, a warm relationship with cryonics in the early days. The new fields yielded some cryonics advisers and advocates who served well into the 1970s, one of them even claiming that he was on his way to the full freezing and reanimation of canines.17 A few cryobiologists and transplant surgeons advised cryonics pioneers and even served on cryonics organization boards, and at least one cryonics presentation was made at an annual Society for Cryobiology meeting. The first exchanges of letters with the Society for Cryobiology appear to have been cordial. Later, some cryobiologists accepted research grants from members of cryonics organizations.18
Over time, however, cryobiology’s official position hardened into an unreceptive stance. According to one account, by the 1970s, cryobiologists who were serving on cryonics scientific advisory boards had been “approached by one or more of their colleagues in the Society for Cryobiology and pressured to resign their positions” (M. Darwin 1991a, 7). By the 1980s, on the heels of a series of cryonics public-relations disasters, there was open hostility against cryonics at the society’s annual gatherings. Mike Darwin, a leading cryonics pioneer and Alcor member, called this a “cold war” between cryobiology and cryonics, blaming the Society for Cryobiology for taking steps “to destroy cryonics.”
In 1982 the Society for Cryobiology passed a new bylaw explicitly denying membership to anyone engaged in “any practice or application of freezing deceased persons in anticipation of their reanimation.”19 The original proposal for the bylaw was much more harshly worded, specifically citing cryonics and calling it a fraud.20 But the most revealing aspect was the board’s preamble justifying the bylaw. It set up a familiar opposition between cryonics research and real science by placing the former in a religious domain, classifying what it called “cadaver freezing” as “an exercise of faith, not of science.” Yet based on the board’s own description, it is hard to understand the difference between the research agendas of cryobiology and those of cryonics:
The Board also recognizes that the goals of cryobiology include not only achieving an understanding of freezing injury and its avoidance but also applying this knowledge to the preservation of cells, tissues, organs, and organisms. A future achievement may well be successful mammalian cryopreservation. However complex the social consequences of such a development might be, this is no basis for discouraging research in cryobiology. The cryopreservation of biological systems remains a legitimate scientific endeavor which the Society for Cryobiology is chartered to support.
. . . There is no confirmed report of successful cryopreservation of an intact animal organ. It can be stated unequivocally that mammalian cryopreservation cannot be achieved by current technology.
Nonetheless, certain organizations and individuals are advocating that persons be frozen subsequent to death on the premise that science may ultimately develop the capability both to reverse the injury of freezing and to revive the cadaver. The Board does not choose to involve itself in a discussion of the degree of remoteness of this possibility. The Board does, however, take the position that cadaver freezing is not science. Freezing and indefinitely storing a cadaver is not an experimental procedure from which anything can be learned. The knowledge necessary for the revival of whole animals following freezing and for reviving the dead will come not by freezing cadavers but from conscientious and patient research in cryobiology, biology, chemistry, and medicine. The sole motivation for freezing cadavers today is the remote hope on the part of individuals that this may be a means of avoiding death.21
It concluded: “The Board finds human cadaver freezing to be at this time a practice devoid of scientific or social value and inconsistent with the ethical and scientific standards of the Society. The Board recommends to the Society that membership be denied to organizations or individuals actively engaged in this practice.”
The board claimed that “successful mammalian cryopreservation” is a “legitimate scientific endeavor,” and at the same time it asserted that since “mammalian cryopreservation cannot be achieved by current technology,”22 research into human cryopreservation is not scientific. But logic is not what pushed the argument through. The argument was driven by several other assertions in the text outlining secular and social taboos: that cryonics is faith, not science; that cryonics is a fraudulent activity because it accepts money; and that cryonics is simply a means for individuals to avoid death. Each of these assertions appeals to a particular history. The first assertion, linking cryonics and faith and opposing them to science, plays into the history whereby immortality, death, and related fields have been largely given over to the domain of religion. The second assertion plays on cryonics’ history of public-relations disasters and the mishandling of cases. The third assertion, that cryonics is an attempt at “death denial,” plays into the death-acceptance movement emerging at the time.
To trace the history of cryonics and immortalism in general is also to follow the ebb and flow of their status as both a social taboo and a taboo science, or what some scholars in science and technology studies have called “undone science” (Hess 2007; Frickel et al. 2010), referring to scientific research that is deliberately left out of the agenda. Undone or taboo science arises when a particular topic of research, the production of a particular kind of knowledge, is marginalized or sequestered. In the era of private funding, the confluence of elite institutions and corporate interests, rather than the public interest, determines what sort of research is considered fundable or doable, and those determinations often defend established markets despite risk factors or better alternatives. For instance, over the past century the chemical industry has repeatedly sidelined research into nonchlorine alternatives to chlorine use and production, despite the damage to the environment and to the health of people exposed to chlorine documented by some researchers (Frickel et al. 2010). Social movements and public organizations often try to direct attention to or away from these abandoned or taboo areas, as is the case with research on animals, which has been limited by the animal-rights movement, and stem-cell research, which was banned or limited in the United States thanks to a coalition of movements.
In the case of cryonics, the Society for Cryobiology, riding the wave of social taboos, explicitly produced a taboo research zone. On at least one occasion during my fieldwork, cryonics organizations approached a university to collaborate on a research agenda funded by cryonicists themselves. The program would examine some aspect of research relevant to cryonics and would be part of existing departments, not a separate program on cryonics. Though the university began the conversation, it also ended it relatively quickly, uncharacteristically turning down the offer of money. After poking around, a legal advocate involved in the negotiations made this claim to me: “I found out that they turned it down because a couple of professors that they talked to did not want to be associated with cryonics because they don’t think that it’s real science, and I think that that’s a shame. I always heard that university campuses were a place where real research can be done regardless how crazy the idea is but I’m wrong. I know that.”
Mishandled cases in the early years, along with the gory public reputation of cryonics, have had much to do with its sequestration. But organ transplantation was in a similar position in its early days, with many naysayers and appalled populations. In its later days it was beset by numerous cases of fraud, lack of consent, and abuse within established medical settings—from the gigantic case involving thousands of organ-stripped and discarded corpses at UCLA (Nelkin 1998), to organ black markets in the United States and abroad (Scheper-Hughes 2000), to the trade between organ procurers and coroners for cornea removal and tissue extraction (Timmermans 2006, 232–44). Although trading organs or tissue is a crime, the National Organ Transplant Act allows for a “handling and processing fee.” By 2006 handling and processing had burgeoned into a $500-million-a-year tissue trade (Timmermans 2006, 243).
Organ transplantation involves many of the same scientific, medical, and cultural processes that are involved in cryonics, from the most general one of “redefining death”—successfully achieved, in the case of organ transplant—to the specifics of organ cryopreservation and transportation. While some rising cryobiologists who have made contributions to the field and are also avid cryonicists have maintained a public distance from their cryonics activities for fear of jeopardizing the credibility of their lab-based cryobiology research,23 cryobiologists with long involvement in cryonics have developed what appears to be one of the more advanced and least toxic cryoprotectant solutions, M22, at a research lab called 21st Century Medicine (21CM), funded in part by cryonics advocates. M22 is currently used both in mainstream organ preservation and in cryonic preservation at Alcor (Fahy et al. 2004). Dr. Greg Fahy, who has been interested in cryonics since his high school days, has been a bridge between cryonics and cryobiology. Additionally, over the years 21CM has developed a number of other products (e.g., ice-blocking agents), some of the research for which has been published in the journal Cryobiology (Wowk et al. 2000). Back in 1981, Fahy was also one of the first cryobiologists to offer “vitrification”24 as a solution to the problem of ice formation (M. Darwin 1981). Given the interests and backgrounds of the people involved, these products were in a very direct sense the results of cryonics research—which can therefore be said to have contributed to cryobiology at large and, by extension, to general medical science.
The Eternal Returns of the “I” Word
My goal is not to advocate for the legitimacy or desirability of cryonics but to show how cultural notions of immortality, especially immortality’s persistent relation to religion, have historically influenced the scientific research agenda specifically and orientations toward continuity more generally. Materialism buried the soul and its immortality, thereby creating a marker for the divide between religion and the properly scientific. That boundary is largely managed within what I have been outlining as a secular tension: immortality is science, immortality is faith. In that divide, other categories have been implicated, such as mind and person, human and nonhuman, as well as cells and species and society.
Immortality flows and eddies across domains because it runs through forms of continuity and projected futures that are unstable and sometimes ephemeral and for which materialism must constantly invent new procedures and measures, as well as new (soulless) entities as the key, often abstract, often transcendent, units of continuity. Immortality has to be warded off, but it keeps coming back, as much in science (the realm of the nonmetaphorical!) as in social and cultural domains. It keeps being reinvented through the practices of science and the concepts that drive them, through materialism itself, through the unsettled tensions between it and rationalism, which does not assume the material reducibility of mind, and finally through questions about continuity: What is it that has, in fact, continued in time (life itself, but not individual species, according to Pearl and Weismann, e.g.)? What is it that must be continued? The self? Life? Society? Nation? Human species? Human consciousness? DNA? Genetic information? Progeny? Great works? Pyramids? Ghosts? You? What is your unit of continuity?