Introduction

Evolutionary Architecture—Genetic Algorithm—Generator—Evolutionary Computation—Complexity Architecture—Architectural Genetics—Embryological House—Generative Components—Generative Design—Breeding Architecture—Genetic Architectures—Architectural DNA—Morphogenetic Design—Morpho-Ecologies—Architecture of Emergence—Self-Organizing Architecture—Autopoiesis of Architecture—Parametric Design—Protocell Architecture—Generative Architecture

The above list, drawn from architectural history of the past fifty years, offers a composite overview of the field referred to most commonly as “generative architecture.” The latter phrase has gained sway owing to its general reference to the use of digital computational tools to generate architectural form. Although digital tools have been integral to and inseparable from the founding and development of generative architecture, other modes of using digital technologies in architecture exist beyond those associated with generativity. Generative architecture is thus a subset, albeit a prominent one, of what architectural historian Mario Carpo terms, in the title of the collection he edited, The Digital Turn in Architecture, 1992–2012.[1] What binds the above disparate architectural approaches into an identifiable genre, therefore, is most often a certain computational approach. Yet frequently, the final products share a common aesthetic, one that entails an interconnected proliferation of component-based forms that morph through different curvatures, resulting in a stylized organic appearance.[2] Two well-recognized multimillion-dollar built structures created using generative techniques are the Beijing National Stadium (“The Bird’s Nest,” $423 million USD) and the Beijing National Aquatics Center (“The Water Cube,” $140 million USD), both from the 2008 Olympics (Figure I.1).

The use of computational tools and algorithmic structures to generate solutions to complex problems and to create forms is not unique to the discipline of architecture. While arising out of cybernetics and then computer science, generative techniques are used as well by scientists, engineers, linguists, musicians, and artists. For comparison, art historian Philip Galanter defines “generative art” as “‘any art practice where the artist uses a system, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is set into motion with some degree of autonomy contributing to or resulting in a completed work of art.’ The key element in generative art is the use of an external system to which the artist cedes partial or total subsequent control.”[3] Architects using generative software—such as Generative Components or genetic algorithm–based plug-ins like the Galápagos feature in Grasshopper with Rhino—also cede partial control to their computers. After establishing the basic parameters, fitness criteria, and approach to a particular problem, they sit back and wait for the computer to generate a population of solutions, from which the architect then selects and refines chosen designs. Often, computers generate solutions that an architect would not have imagined, so generative design is frequently viewed as human–computer collaboration.
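
To make this division of labor concrete, the following minimal sketch (in Python, not tied to Galápagos or any particular plug-in) mimics that workflow: the designer defines parameter bounds and a fitness function, the machine proposes and ranks a population of candidates, and the designer selects from a shortlist to refine by hand. The parameter names (span, overhang, glazing_ratio) and the scoring rule are invented for illustration.

```python
import random

# Hypothetical design parameters for a candidate roof module; names and
# bounds are invented for illustration, not taken from any real project.
PARAM_BOUNDS = {"span": (6.0, 24.0), "overhang": (0.5, 3.0), "glazing_ratio": (0.1, 0.9)}

def random_candidate():
    """Sample one candidate design within the designer-set parameter bounds."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in PARAM_BOUNDS.items()}

def fitness(candidate):
    """Designer-defined performance metric: a toy score that rewards a high
    glazing ratio and penalizes spans far from 15 m. A real tool would run a
    daylight or structural simulation instead."""
    return candidate["glazing_ratio"] * 10 - abs(candidate["span"] - 15) * 0.5

def propose(population_size=200, shortlist=5):
    """The 'ceded' step: the computer generates and ranks a population; the
    architect only sees the shortlist, then selects and refines by hand."""
    population = [random_candidate() for _ in range(population_size)]
    population.sort(key=fitness, reverse=True)
    return population[:shortlist]

for rank, design in enumerate(propose(), start=1):
    print(rank, {k: round(v, 2) for k, v in design.items()}, round(fitness(design), 2))
```

Nothing in the loop requires the architect's judgment until the shortlist is returned, which is the sense in which partial control is ceded to the machine.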

Figure I.1. Beijing National Aquatics Center, Beijing, China. Building designed by PTW Architects with CSCEC, 2008. Photograph by Daniel Case (CC BY-SA 3.0). Beijing National Stadium. Building designed by Herzog & de Meuron, 2008. Photograph by David Dong (CC BY 2.0). By mimicking the geometry of soap bubbles and the structure of a bird's nest, while using ETFE plastic and tons of steel, these buildings reference ideas of “self-organizing” pattern formation in nature while relying heavily on advanced technologies of computation and engineering.

But as the opening list clearly demonstrates, much of the terminology associated with generative architecture sounds very biological. While this is partially due to the “gen-” root in all variations of “generative” and “genetic,” the commonalities between computation and biology in fact run deep. After all, in computer science this overlap is the conceptual origin for the pursuit of artificial life, as well as the root of techniques of evolutionary computation and genetic algorithms, which generate solutions based on the principles of neo-Darwinian biological evolution. Yet, as architect Karl Chu points out, “The meaning of both terms, genetics and gene, are sufficiently abstract and general enough to be used as concepts that have logical implications for architecture without being anchored too explicitly to biology. Implicit within the concept of genetics is the idea of replication of heritable units based on some rule inherent within the genetic code,” he writes. “Embedded within the mechanism for replication is a generative function: the self-referential logic of recursion. Recursion is a function or rule that repeatedly calls itself or its preceding stage by applying the same rule successively, thereby generating a self-referential propagation of a sequence or a series of transformations. It is this logic encoded within an internal principle, which constitutes the autonomy of the generative that lies at the heart of computation.”[4]
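
Chu's “self-referential logic of recursion” can be made concrete with a small sketch: a single rule that calls itself on its own output, here repeatedly subdividing a length into parts. The subdivision ratio and depth are arbitrary illustrative choices, not drawn from any architect's practice.

```python
def subdivide(length, depth, ratio=0.4):
    """A rule that repeatedly calls itself: each stage applies the same
    transformation (split a segment at `ratio`) to the output of the
    preceding stage, propagating a series of derived segments."""
    if depth == 0:
        return [round(length, 3)]
    first = length * ratio
    # The same rule is applied again to both pieces produced at this stage.
    return subdivide(first, depth - 1, ratio) + subdivide(length - first, depth - 1, ratio)

# One seed segment of 10 units yields 2**3 = 8 segments after three recursions.
print(subdivide(10.0, 3))
```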

Chu’s explanation of the generative leans heavily toward the computational realm while abstractly relying on principles of biology. Other architects affiliated with generative architecture, however, are promoting “genetic architectures,” which in this case refer not just to techniques of evolutionary computation but also to the literal use of genetic engineering to grow living buildings. Alberto Estévez, director of the Genetic Architectures graduate program at the Escuela Arquitectura (ESARQ) of the International University of Catalunya in Barcelona, describes his goal as “the fusion of cybernetic–digital resources with genetics, to continuously join the zeros and ones from the architectural drawing with those from the robotized manipulation of DNA, in order to organize the necessary genetic information that governs a habitable living being’s natural growth, according to the designs previously prepared on the computer.”[5] While such a vision’s practical realization is debatable, it reveals a fundamental conflation and coordination of the technologies used in both architecture and synthetic biology. Estévez boldly states, “The architect of the future will no longer direct masons but genetic engineers.”[6] Others, like Marcos Cruz at the Bartlett School of Architecture, University College London, look to integrating the techniques of tissue engineering. They point to living works like Victimless Leather (2007) by artists Oron Catts and Ionat Zurr as prototypes for architecture of the future (Plate 1). As these examples show, the lines between computation, architecture, and biology begin to blur, for all three disciplines address potentially overlapping aspects of generativity.

One final thread interwoven into the above list of terms associated with generative architecture is the language of complexity theory, including self-organization, emergence, and autopoiesis. While complexity may seem like an outlier to the nexus of computation, biology, and architecture, in fact it is integral to current understandings of the generation of pattern and structure in inorganic, organic, and cultural dynamical systems. Furthermore, the historical development of complexity theory overlaps significantly with that of cybernetics and generative architecture, arising in the late 1950s and early 1960s. Complexity theory offers a means to understand and simulate the organization of matter from disorder or chaos into order, using the mathematics of nonlinear nonequilibrium dynamic systems. Such systems include the weather, traffic, the economy, the growth of the internet, social behavior patterns such as flocking and swarming, and life itself, for organisms are open systems that continually exchange energy and materials with their environment to fend off equilibrium, which is death. Architects are adapting the process of self-organization of natural systems as a means for pattern and form generation in architecture. This framework helps theorize the proliferating interconnected component aesthetic of generative architecture that has resulted from the limited bed size of 3-D printers, although recently, the scale and types of materials used for 3-D printing are increasing. For example, Michael Hansmeyer’s and Benjamin Dillenburger’s Digital Grotesque II (2017) for the Centre Pompidou was printed using additive manufacturing in synthetic sandstone at a scale of over fifty cubic meters.[7] Although it was printed in sections (maximum eight cubic meters per print), they scaled the sections to pallets for transportation and the carrying capacity of four humans. The increasing capacity of 3-D printers makes the modular design of large numbers of components less obligatory moving forward, a prospect that will likely shift the general aesthetics of generative architecture. Yet when this limitation held, generative architects explained the component-based approach by turning to the definition of self-organization, which posits that multiple components interacting with one another locally according to rules, without reference to a central control, produce emergent patterns at the next higher level of their organization. Architect Michael Weinstock, in his article “Morphogenesis and the Mathematics of Emergence” (2004), urges architects to integrate these mathematical processes into architectural and urban systems design, so that architecture more quickly becomes “intelligent” with responsive emergent forms and behaviors that demonstrate higher levels of complexity.[8]
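
The definition of self-organization invoked here (many components updating by local rules, with no central controller, yielding pattern at a higher level) can be illustrated with a toy one-dimensional cellular automaton, a generic example from complexity science rather than a reconstruction of any architect's workflow. Each cell consults only itself and its two neighbors, yet a global pattern develops across the rows; the rule number (Wolfram's Rule 110) and grid size are arbitrary choices.

```python
def step(cells, rule=110):
    """Update every cell from only its left neighbor, itself, and its right
    neighbor: a purely local rule with no central control."""
    n = len(cells)
    nxt = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        nxt.append((rule >> neighborhood) & 1)  # look up the rule's output bit
    return nxt

# A single 'on' component in a field of 'off' ones; print twenty generations.
cells = [0] * 40 + [1] + [0] * 40
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The triangular motifs that appear across the rows are specified nowhere in the rule itself; they arise only at the level of the whole field of components, which is the sense of “emergence” examined throughout the book.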

Given these major themes in generative architecture, this book critically examines and unravels this complicated nexus of architecture, computation, biology, and complexity. In doing so, it offers a conceptual scaffold for parsing various goals of those working under the rubric of generative design, which is often confusing because of the overlapping terminology across these disciplines. As the title Toward a Living Architecture? implies, its overall narrative moves from the computational toward the biological and from current practice toward visionary futures. It addresses architects’ dreams of generating buildings from living cells or “protocells,” demystifying the scientific advances necessary to shift these dreams from the realm of science fiction to reality, finding that for many reasons their visions are unlikely to be realized. To what ends, then, is this rhetorical biologization of architecture working, besides serving as a gloss of “sustainability” over high-tech avant-garde architecture-as-usual? Sustainability, in its most basic definition, implies maintaining ecological balance and functionality for generations to come. The book ultimately positions generative architecture as one of many arenas today where “complexism” as our current scientific ideology has become a player affecting the broader debates over design, production, and consumption and the economic and environmental effects of this cycle.

How complexism is functioning now as an ideological foundation for generative architecture serves in some ways as a contemporary, scientifically updated parallel to the argument that my earlier work made about how eugenics functioned ideologically in relation to streamline design. As I argued in Eugenic Design: Streamlining America in the 1930s (2004), eugenics functioned as the reigning scientific ideology at work across many cultural domains.[9] It influenced and in turn was promoted by the material and rhetorical strategies used by designers to justify their architectural and design style as appropriately modern. Today, complexism offers generative architecture a naturalizing scientific framework that, reciprocally, is then reified by generative approaches across disciplines, not just in architecture, utilizing its concepts of self-organization, emergence, contingency, and the ever-onward march of increasing hierarchy and complexity. Yet, while Eugenic Design was primarily historical, this current project on complexism and generative architecture is more contemporary criticism than history, more science studies than history of science. Only one very short section of this book addresses interwoven historical threads at the origins of cybernetics, systems theory, and generative architecture that placed them early on in conversation with each other (see the Appendix). Let me be clear that my intent has never been to write a history of generative architecture. The majority of this book focuses on predominant themes that arise from the intersections of contemporary science, computation, and generative architecture in the last fifteen to twenty years, with special emphasis on the scientific portion owing to three of the book's major themes. These critically examine (1) the roles of complexism in discourses of generative architecture, (2) some generative architects' claims to be promoting environmental “sustainability” in their work, and (3) others' claims to be moving architecture toward newly designed and engineered living materializations.

Appropriately then, my current methodology integrates a new approach to design studies and criticism, one that uses common practices and insights from science and technology studies (STS) alongside my earlier interdisciplinary approach merging archival research and wide reading across disciplines combined with visual and rhetorical analysis. In particular, STS methods utilizing participant observation as an “ethnographic” approach at multiple sites for studying and working alongside both scientists and architects came naturally for this project. I put “ethnographic” in quotes here because doing design ethnography is not my primary intent, but just one of the general modes of research I used for this project. This stemmed directly from the type of funding I received to pursue the bulk of the research. Two Mellon Foundation fellowships—a Penn Humanities Forum Postdoctoral Fellowship at the University of Pennsylvania (2008–9) and a New Directions Fellowship (2011–13)—supported twenty-five months of interdisciplinary postdoctoral study with release from teaching, in the fields of generative architecture, physics (self-organization and complexity), philosophy of science (emergence), epigenetics, and evolutionary biology.

In many cases, I was privileged to study with the very scientists and architects whose work I wanted to better understand. For example, at Penn I participated in Jenny Sabin’s and Peter Lloyd Jones’s LabStudio seminar “Nonlinear Biological Systems and Design,” where I was a student alongside the others, being introduced for the first time to generative software and scripting, theories of epigenetics, and lab protocols. While I did not join one of the groups for the studio projects, in every other way I studied under Sabin and Jones, even if at the same time I critically observed and questioned the underlying assumptions at play in the course. At the same time, conversations outside the classroom over the nine months clarified the course material and LabStudio’s broader research aims; these served as a second major source of information about their collaborative work. A few years later in 2011, I became a graduate student and participant observer for one term in the Emergent Technologies and Design program at the Architectural Association in London, including the preliminary Boot Camp, Michael Weinstock’s Emergence seminar, and George Jeronimidis’s Biomimicry studio. Upon returning to UC Davis in January 2012, I participated in physicist and mathematician James Crutchfield’s two-quarter graduate seminar “Natural Computation and Self-Organization: The Physics of Information Processing in Complex Systems,” as well as in philosopher of science James Griesemer’s seminar on “Philosophy of Emergence.” I co-led a faculty and graduate student discussion group on “Self-Organization and Evolutionary Biology” with members from physics, evolution and ecology, entomology, philosophy, cultural studies, and anthropology. Finally, I spent time in independent study the following year with Eva Jablonka, University of Tel Aviv, studying epigenetics, and with Evelyn Fox Keller, Massachusetts Institute of Technology, studying self-organization. In all these situations, while I was occupying the role of a student, I was also still professor, historian, and critic, so the positions I occupied in relation to those with whom I studied were multifaceted—studying under but also studying across.

This educational mode of research does not quite fit the standard rubric for STS scholars, who are often trained in the social sciences and versed in the scientific theories and practices prior to being on-site at one or more laboratories. But this experiential, observational approach poses a new method for design studies scholars and critics, affording the chance to learn not only about the intersections of design with contemporary science, but also about the material and rhetorical practices of design within the classroom and the studio. Few precedents for design ethnographies exist, although versions influenced primarily by anthropology are becoming increasingly common. Recent examples include the anthology Design Anthropology: Theory and Practice (2013) and Swedish Design: An Ethnography by Keith Murphy (2015), as well as the project Experimental Collaborations: Ethnography beyond Participant Observation, headed by Adolfo Estalella, Andrea Gaspar, and Tomás Sánchez Criado.[10] Even fewer design ethnographies are influenced by STS rather than anthropology, one of which is Albena Yaneva’s recent book Made by the Office for Metropolitan Architecture: An Ethnography of Design (2009).[11]

STS approaches matter for understanding aspects of generative architecture not only because practitioners claim the terminology of the “laboratory” for the studio but also because of their heavy reliance on different scientific theories and practices. STS approaches matter as well for design studies that engage with issues of design in relation to science, the environment, and “sustainability.” These methods and the knowledge they have produced offer means to better understand the science and weigh the environmental impact of different modes of design production. Like most academic disciplines, STS has transformed significantly since its inception in the 1970s from “first generation” STS to “second-” and now “third-generation” approaches. A short review of these is useful for helping those in design studies consider a variety of STS approaches and some of their implications.

First-generation STS scholars, under the influence of Bruno Latour and Steven Woolgar's pathbreaking Laboratory Life, stressed ethnographic observation within a single laboratory to reveal, as their subtitle stated, The Construction of Scientific Facts (1979).[12] While this approach carried the powerful punch of destabilizing the authority of scientific fact-hood as objective knowledge of reality, its focus within a single laboratory ignored the influence of larger social forces outside the laboratory on scientific knowledge-making. Furthermore, the entrée of social constructivism into the realm of science, as distinct from the arts and humanities, brought with it certain challenging implications. For example, if scientific knowledge is a social construction, then why should the educated public believe scientists over creationists on such topics as evolution and global warming? Second-generation STS scholars of the 1990s and 2000s (including George Marcus, Jan Golinski, Annemarie Mol, Stefan Helmreich, and Christine Hine, among others), under the influence of comparative anthropology, began to practice multisite ethnographies focused on more than one scientific laboratory, in part to engage social forces outside of and in between laboratories as active participants in the making of scientific knowledge. This approach forced researchers to pay closer attention to their own relations to their subjects, which differed from site to site. This self-consciousness resulted in the idea of not studying up or studying down but rather studying across. Self-conscious research under this model, which simultaneously examined different “worlds” and spaces and the social forces at play within and between them, often did so to a particular end, engaging an activist cause addressing particular concerns. This brought STS scholarship into closer engagement with problems affecting different groups, a major one of which is environmental devastation.

I have already alluded to the influence of this approach in my own research in terms of my multiple roles as student, observer, peer, professor, historian, and critic at different academic institutional sites. The idea of studying across applies to more than just one’s positional relations as an observer, however. It can also be a self-conscious mode of analysis that treats theory as transversal, as socially constructed concepts that cut across different lines of thought and different worlds while producing measurable material effects. In this study, I approach complexity theory by studying across in this latter sense. I recognize complexity as a scientific theory of natural processes of organization, but I also analyze it as an ideology of complexism that is infusing many fields, including generative architecture. It thereby influences disciplinary practices that produce real material and environmental effects.

One example of this mode of studying across from STS scholarship that is particularly useful as a parallel approach to my own study is Helmreich’s essay from 2011, “Nature/Culture/Seawater.” He explores seawater theoretically and materially: as the spatial divider of “nature” and “culture” historically under colonialism as that which falls in between during travel; as the flux and flow between concepts and materializations of nature and culture, some of which are leading to rising sea levels; and as the site of major oil spills, understood both through scientific data and computer simulations. He uses what he calls working “athwart theory,” a conceptual approach of studying across, thinking of “theory neither as set above the empirical nor as simply deriving from it but, rather, as crossing the empirical transversely.”[13] With regard to seawater, he writes, “I am interested in how simultaneously to employ water as a theory machine, when useful, and to treat both water and theories as things in the world,” drawing attention to their materializations. “I think of this approach as operating athwart theory: tacking back and forth between seeing theories as explanatory tools and taking them as phenomena to be examined. Such an account does not separate meaning and materiality, since such sequestering only reinstalls a pre-analytic nature/culture.”[14] He considers “theory (and, for that matter, seawater)” to be both “at once an abstraction as well as a thing in the world; theories constantly cut across and complicate our descriptive paths as we navigate forward in the ‘real’ world.”[15] He thus accepts and interprets seawater as both a “theory machine” and as known materially through the tools and methods of science.

Like seawater, complex systems are positioned both at and beyond the nature/culture binary. Human “cultural” systems like urban traffic, the Occupy movement, or generative architecture are interpreted as complex systems just as often as are physical and biological “natural” systems, such as the weather or an ant colony. Yet all of these are “natural” material systems that produce transformative, measurable environmental effects. At the same time, complexity functions as a “theory machine” driving scientific research questions and experiments as well as ideologically shaping how academics in many fields interpret the systems they study or create. My study of generative architecture and complexity theory therefore works “athwart theory,” tacking back and forth between complexity as an explanatory tool and ideology and as a material phenomenon to be explained, one whose environmental effects are as real as those of an oil spill in seawater.

How do we know about the material effects of oil spills, seawater, or complex systems, if scientific knowledge is a social construction? Third-generation STS scholars—including Helmreich, Harry Collins, and Paul Edwards, among others—are carrying forward second-generation activist concerns while facing directly the difficulties posed by first-generation scientific social constructivism. They pragmatically differentiate scientific theory-making from political and environmental policy-making. While scientific theories are necessarily socially constructed, scientific infrastructure also has a material basis that tends to reify past accepted knowledge. Yet, they argue, the knowledge that mainstream scientists provide is the “best knowledge” we have about the material world.[16] Pointing out the social construction of science was not intended to be “anti-science,” Collins states, but rather was meant to serve as a caution to scientists to not promise more than they could deliver.[17] So when it comes to policy-making, political and environmental leaders should rely, Collins and Edwards argue, on mainstream scientific knowledge generated by the largest group of respected scientists or “experts,” rather than on knowledge promulgated by fringe theories.[18]

Edwards’s book A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming (2010) specifically tackles the profound difficulties of acquiring knowledge about climate change and global warming. One process he explores is how historical climate data, which originally was only national and not global and was based on earlier scientific technological infrastructures, must be revisited again and again upon the advent of new knowledge-making tools and theories in order to incorporate its information into current climate modeling. Yet, this process of revisiting data and method again and again, in combination with looking at scientific evidence of climate change from multidisciplinary perspectives including those the earth and environmental sciences, along with the stringent thoroughness and integrity of the Intergovernmental Panel on Climate Change, leads to a growing certainty of the probabilistic likelihood that our knowledge of climate change is the “best” it can be. Edwards concludes his book, “We have few good reasons to doubt these facts and many reasons to trust their validity. The climate’s past and its future shimmer [probabilistically] before us, but neither one is a mirage. This is the best knowledge we are going to get. We had better get busy putting it to work.”[19]

In our current era of climate change and global warming, design studies scholars can benefit from the methods and knowledge produced by STS as they observe and critique efforts in design and architecture aiming for “sustainability.” STS methods can help design studies move beyond decades of exploring design predominantly as social construction, to add to this understanding insight into how theory inspires creative practice while also moving transversely across domains, producing real material effects. Design production, hand in hand with consumption, utilizes vast amounts of the earth’s materials and energy, effecting not only climate change but also direct environmental devastation. Architecture does, too; a common statistic is that buildings in the United States consume approximately 48 percent of the energy the nation produces and release about 45 percent of the nation’s carbon dioxide emissions.[20] It is thus from the vantages of second- and third-generation STS insights and scholarship that important contributions for design studies arise. Potentially useful methods, some of which are already beginning, include participant observation and ethnographic study of studios and classrooms; moving beyond the nature/culture binary; multisite analysis; careful consideration of one’s relationship to different sites; studying across rather than up and down; working “athwart theory” to consider theories—scientific or otherwise—as explanatory tools and also as shapers of material effects; directing studies with an activist’s eye for change; and integrating into policy and design ideation and production the “best” scientific knowledge we have.

These approaches lay a methodological foundation for this particular study in a number of ways, as explained above and as demonstrated throughout the book. One criterion by which I evaluate generative architects' claims of sustainability is life cycle analysis and embedded energy, a strategy that environmental architecture and design educators and analysts have used but that apparently has largely gone by the wayside, having never quite caught hold. I am unwilling to let it go, for it offers arguably the best tool for considering the environmental and (socio)material effects of design and architecture, which depend on the life cycle of the materials included in all forms of design ideation and production. With regard to a specific life cycle analysis of generative architecture, the closest this study comes is a brief synopsis of the life cycle of transistors, silicon chips, and computers at the end of chapter 2, as well as brief mention of the greenhouse gas output and embedded energy in some materials used in well-known buildings. I use this tool because, like Edwards, I accept that climate change science is some of the best and most important knowledge we have, and I believe that most, maybe even all, forms of architecture should integrate this knowledge into architectural curricula as well as all phases of design ideation and production. Yet I am not an absolutist, and I accept that architecture has critical significance beyond its environmental impacts.

I therefore will clarify that I do not consider environmental pragmatism to be the end-all and be-all of architecture and design or design studies, exclusive of its aesthetic and cultural representations and creative expressions. Architecture functions holistically: materially, environmentally, culturally, socially, aesthetically, economically, politically, et cetera. It can be celebrated and critiqued for functioning well but doing so only partially. It is when some architects directly claim that their work promotes “sustainability” that they specifically invite themselves into the arena of life cycle analysis. A number of historical vernacular examples demonstrate that it is possible to use relatively low amounts of energy in design production with relatively low hazardous environmental outputs while also aptly expressing culture and aesthetics. The true challenge here lies not in architecture alone, per se, but in its practitioners’ and educators’ acceptance of the capitalist modes in which architecture functions in the global economy: promoting continual economic growth through continual harvesting and processing of new materials, largely disregarding reuse and historical preservation, choosing highly processed rather than minimally processed, high-embedded-energy rather than low-embedded-energy materials, and continuing to relish the aura of and economically reward avant-garde “starchitects” rather than those making more holistically considered decisions. I therefore choose to evaluate generative architecture in relation to the sciences that its practitioners reference—complexity theory in general, including self-organization, emergence, and natural computation, complex biological systems, genetic and tissue engineering, and synthetic biology; as well as in relation to the sciences that some of them ignore, namely, epigenetics, climate change, and global warming. This valuation is based on my acceptance of what I think is the best knowledge we have.

In addition to posing an approach to design studies informed by STS methodology and insights, this book also offers the first significant critical interrogation of the major scientific themes at play in generative architecture. These include, in addition to complexity, self-organization, and emergence: natural and material computation; morphogenesis, evolutionary development, epigenetics, and evolutionary computation; biosynthesis in biological systems design; tissue and genetic engineering; and synthetic biology in its two forms, as “bottom-up” protocell research and as “top-down” bioengineering. The book therefore functions in part as a primer on contemporary biological sciences for those interested in architecture who may know less about biology. I historically contextualize contemporary theories by placing them in relation to predecessor theories of the twentieth century: Lamarckian and neo-Darwinian evolution, eugenics and early genetics, the modern synthesis, and Richard Dawkins’s selfish-gene theory. This foundation provides a knowledge base for readers—and for prospective students of generative architecture—to better assess the claims of generative architects with regard to scientific approaches and their visions of the near future. It also clarifies the confusing discourses of generative architecture caused by overlapping terminologies from the disciplines of complexity theory, biology, computer science, and architecture. Words like “gene,” “DNA,” and “biocomputation,” which mean one thing in the discipline where they first arise, do not necessarily translate into other disciplines carrying the same meaning. For example, the words “gene” and “DNA” in the writings of architects almost never refer to the molecular substances in cells. Yet such language has wooed more than one aspiring graduate student into an architectural movement that was far less biological and more computational than he or she had expected. This book therefore demystifies the terminology of generative architecture by identifying, as best as possible, when architects use terminology to refer to computational versus biological processes.

Besides these contributions, two other major reasons motivate this study. The first is to ascertain what practitioners mean when they claim some version of “sustainability” as an effect of their biologically inspired approaches. Does simply referencing some aspect of biology—like coining the word “morpho-ecologies” to describe organic-looking, high-performance, high-tech, big-data but generally small structures—suffice to deem a process and its resulting structures “sustainable”?[21] Achim Menges, who coined the word “morpho-ecologies” in 2004, and his collaborator Michael Hensel both decry the shallowness and small-mindedness of most “sustainable” or “green” architecture. They aim instead for a “more developed paradigm” integrating form and function and connecting the structure to its environment. To evaluate how successful they are in comparison to, say, approaches in vernacular architecture utilizing traditional layers, screens, and semipermeable shading structures, the simple concept of “life cycle analysis” offers a guide for critique. Life cycle analysis is useful to designers and people who care about doing less harm to the environment. It helps them choose materials and production processes that require less energy input, produce fewer toxic outputs, and can more quickly and readily decompose and return to full ecological use at the end of life. It is difficult to deny that the actual energetic and material formations of architecture, biology, and computation can be very different and can produce very different environmental effects. This is another reason why clarifying the terminology matters. Judged by this measure, most digital technologies and works of generative architecture—for that matter, most works of contemporary architecture—do not fare very well.
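
As a rough illustration of how such a critique can be quantified, the sketch below totals embodied (embedded) energy for two hypothetical bills of materials. The coefficients and masses are placeholders invented for the example, not sourced figures; an actual life cycle analysis would draw them from published inventories and would also account for toxicity, transport, and end-of-life decomposition.

```python
# Placeholder embodied-energy coefficients in MJ/kg -- illustrative only,
# NOT sourced values; a real study would use a published inventory.
EMBODIED_ENERGY_MJ_PER_KG = {"steel": 25.0, "concrete": 1.0, "timber": 8.0}

def embodied_energy_gj(bill_of_materials):
    """Sum mass * coefficient over a bill of materials, reported in GJ."""
    total_mj = sum(mass_kg * EMBODIED_ENERGY_MJ_PER_KG[material]
                   for material, mass_kg in bill_of_materials.items())
    return total_mj / 1000.0

# Two invented schemes for the same small pavilion (masses in kg).
scheme_a = {"steel": 40_000, "concrete": 150_000}
scheme_b = {"timber": 60_000, "concrete": 50_000}

for name, bom in [("steel-and-concrete", scheme_a), ("timber-heavy", scheme_b)]:
    print(f"{name}: {embodied_energy_gj(bom):,.0f} GJ embodied energy")
```

Comparing such totals across design alternatives, alongside toxicity and end-of-life measures, is the kind of accounting this book asks of work that claims sustainability.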

Consider another example of “sustainable” generative architecture. Looking to the future, architect Rachel Armstrong aspires to create “genuinely sustainable homes and cities” using what she calls “protocells,” which are actually pre-protocells since the first protocell has not yet been created.[22] These pre-protocells are tiny sphere-like droplets with semipermeable lipid membranes that coagulate in a beaker at the boundary between olive oil and water, with the addition of a few other chemicals. Architect Philip Beesley, at the University of Waterloo, crafts exquisite digitally designed and manufactured container systems as installations for their display (Plates 2 and 14). Pre-protocells have been created by origin of life and artificial life researchers trying to re-create how cells may have first formed in the earth’s early marine environment. Some pre-protocells can precipitate tiny calcium carbonate particles by taking carbon dioxide out of the air, so Armstrong imagines “protocell” cities made with calcium carbonate built from “the bottom up” serving as carbon sequestration zones.[23] Given the scale differential and other feasibility issues between pre-protocell secretions in a beaker and an actual city, her claims sound fantastic, yet her 2009 TED talk “Architecture That Repairs Itself?” as of this writing, has been viewed more than a million times.

Only Michael Weinstock and Alberto Estévez openly scoff at the concept of “sustainability.” Estévez sports the familiar architect-as-god / genetic engineer mentality, writing, “The new architect creates nature itself. Therefore, there is no point in being environmentally friendly since we are about to recreate the environment anew.”[24] Weinstock more interestingly bases his rejection of sustainability on his deep belief in complex systems theory, which posits processes where abrupt phase changes reorganize systems far from equilibrium into ever more complex systems. Weinstock thinks “sustainable” architecture is just a “Band-Aid” on a complex system on the verge of phase change, as current environmental and economic conditions are tending toward global collapse (he might say reordering).[25] Accordingly, students in the Fall 2011 EmTech program had to beg the tutors to teach Ecotec, the then-current software used to analyze sun angle at different latitudes throughout the year in order to design for energy efficiency. Possibly owing to Weinstock’s direction, the tutors did not think graduate architectural students needed to learn it. At the Architectural Association and other architectural schools around the world, graduate programs teaching “green” or “sustainable design” have been historically separate from those teaching generative design. A simple perusal of every issue of AD (Architectural Design) on these topics from the 1960s to the present reveals this split as well until only recently. Students entering programs teaching generative architecture should know where their professors and program generally stand on the issue of architectural design in relation to environmental concerns, as should clients hiring the professors’ architectural firms for design work.

The second major reason motivating this project besides a critical analysis of sustainability discourse is the importance of historically contextualizing generative architecture in relation to its precedent, eugenic design of the interwar period. I do so to caution against the eugenic thought that is embedded into today’s genetic algorithms and some aspects of genetic engineering. The similarity of the language of generative architecture to that of the 1930s designers that I critiqued in Eugenic Design: Streamlining America in the 1930s is remarkable. Simply seeing the words “architecture,” “design,” “genetic,” and “morphogenetic” together on the covers of early major publications in the field—such as Genetic Architectures (2003) and the AD issue “Emergence: Morphogenetic Design Strategies” (2004)—stunned me. Inside this AD issue, the Foreign Office Architects (FOA) diagram of their firm’s work in the form of a “phylogenetic tree” (Plate 3) reads as a new version of Raymond Loewy’s evolution charts of the 1930s (Figure I.2).[26] The name of FOA’s 2003 exhibition at the Institute for Contemporary Art in London, Foreign Office Architects: Breeding Architecture, simply reinforces this. The language of evolution, phylogenesis, species, breeding, genotype, phenotype, DNA, and fitness optimization pervades generative architecture, much as it did eugenic design. Despite so much being familiar, however, new words are in the mix—computation, algorithm, emergence, self-organization, complexity. These point to significant differences between contemporary science and architecture and that of the 1930s, although Martin Bressani and Robert Jan van Pelt’s essay “Crystals, Cells, and Networks: Unconfining Territories,” written for The Gen(H)ome Project at the MAK Center (2006), argues that lebensraum German design under the Third Reich proceeded hand in hand with the state’s eugenic program in the annexed territories based on natural design principles, both organic and inorganic.[27] I therefore highlight when today’s discourses and approaches resemble those of eugenics to bring a critical awareness to its re-occurrence in generative architecture, for it is also re-occurring in the form of contemporary eugenic sociopolitical policies and medical practices.

Figure I.2. Evolution charts, by Raymond Loewy, 1934. Provided by Loewy Design LLC, http://www.raymondloewy.com. By depicting historical changes in product design in the form of linear evolution charts, industrial designer Raymond Loewy suggests that clean modernist design was produced by natural progression of biological evolution rather than by designers working in particular contexts.

Although the word “eugenics” became taboo after the Holocaust and has been largely forgotten by the general public, its ideals and even some of its practices persist. Instead of “race betterment” and “positive eugenics” that tried to increase the number and quality of “fit” citizens, people now speak of “designer babies” and “enhanced” or “disease-free” humans. As regards “negative eugenics” that aimed to decrease the number of “defective” or “unfit” humans, the U.S. state sterilization laws that inspired Germany’s Law for the Prevention of Hereditarily Diseased Offspring (1933) remained on the books and in practice until as late as the 1980s. In 2013, North Carolina became the only state to pass a law offering reparations to living victims who were sterilized involuntarily, although of the nearly eight hundred people who applied only about a quarter have been approved.[28] As North Carolinians were debating the law, CNN investigated California’s sterilizations, asking why California was not also considering reparations when it was the state that involuntarily sterilized the highest number of citizens, nearly a third of the national total of around seventy thousand people.[29]

Yet just one year after CNN ran its piece, California news media broke a story investigated by Justice Now about the forced sterilization between 2006 and 2010 of 148 female prisoners in the state.[30] In 2014, Governor Jerry Brown signed into law SB 1135, banning prisoner sterilizations.[31] Other modes of limiting the reproduction of people with qualities not desired by a state (or its politicians) have been proposed recently as well. In 2008, Louisiana state senator John LaBruzzo proposed that poor women be paid $1,000 to voluntarily be sterilized. The Times-Picayune reported, “LaBruzzo said he worries that people receiving government aid such as food stamps and publicly subsidized housing are reproducing at a faster rate than more affluent, better-educated people who presumably pay more tax revenue to the government.”[32] And in 2008, the United Kingdom passed the Human Fertilisation and Embryology Act, which forbids medical personnel from implanting embryos with “hereditary disease” into women using in vitro fertilization. The law classifies deafness as a “defect” and “disease” and forbids a parent from selecting a “deaf” embryo, ascertained through pre-implantation genetic diagnosis, even if the parent is deaf.[33]

These current instances of eugenics in the political and medical realms might feel very distanced from the practices of generative architecture and design. But when architectural forms are generated using genetic algorithms, the logic of design and production is almost identical to that of eugenics. Consider this description offered by Keith Besserud and Joshua Ingram, of BlackBox Studio at Skidmore, Owings & Merrill, in their paper “Architectural Genomics” presented at the 2008 ACADIA conference Silicon + Skin: Biological Processes and Computation:

  1. Define the fitness function(s) . . . what performative metric(s) is(are) being optimized?
  2. Define a genome logic (the number of characters in the genome string and the relationship between the individual characters and the expression of the design geometry)
  3. Randomly generate an initial population of genomes
  4. Test the fitness of the designs that are generated from each genome
  5. Identify the top performers; these will become the selection pool for the next generation
  6. From the selection pool build the population of the next generation, using methods of cloning, cross-breeding, mutation, and migration
  7. Test all the new genomes in the new population
  8. If the performance appears to be converging to an optimal condition then stop; otherwise repeat starting from step #5[34]

From defining “fitness” and embedding it in a “genome,” to evaluating individuals in populations against the fitness criteria to “identify the top performers” as the “selection pool,” to breeding only the top performers with each other, thereby eliminating all “unfit” designs from future populations, to aiming overall for the “optimal condition,” this logic is virtually identical to eugenics of the interwar period. LaBruzzo, too, expressed the same aim that during the Great Depression had exerted a powerful appeal: more of the “fit” and less of the “unfit” to supposedly save state funds. The differences between Besserud and Ingram's description and the ideals of eugenics in the 1930s are in the medium and location of design and production, in silico rather than in vivo; in the kinds of traits or parameters being optimized, architectural ones rather than human or cultural ones; and in the extent to which eugenic opinions, sociopolitical policies, and medical practices are enacted publicly in our time.[35] Genetic algorithms should therefore be referred to as “eugenic algorithms” (EuAs) so as not to evade conscious recognition of this increasingly internalized and common mode of thought—consider, for example, the ways that listeners of the Pandora streaming service, which is driven by the “Music Genome Project,” optimize their playlists with votes up or down.[36]
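
For readers who want to see how directly the eight steps above translate into executable form, here is a compact sketch of such an algorithm in Python. The genome encoding, the target “performative metric,” and all numerical settings are invented for illustration; this is not Besserud and Ingram's implementation.

```python
import random
import string

GENOME_LENGTH = 12                  # step 2: genome logic (characters -> geometry)
ALPHABET = string.digits

def express(genome):
    """Map the character string to a made-up design parameter (a normalized
    'curvature' derived from the digits)."""
    return sum(int(ch) for ch in genome) / (9 * GENOME_LENGTH)

def fitness(genome):
    """Step 1: the performative metric being optimized -- an invented target
    curvature of 0.6 stands in for a daylight or structural score."""
    return -abs(express(genome) - 0.6)

def random_genome():
    return "".join(random.choice(ALPHABET) for _ in range(GENOME_LENGTH))

def next_generation(selection_pool, size):
    """Step 6: cloning, cross-breeding, mutation, and migration."""
    children = [selection_pool[0], random_genome()]        # clone the best; migrate a newcomer
    while len(children) < size:
        a, b = random.sample(selection_pool, 2)
        cut = random.randrange(1, GENOME_LENGTH)
        child = a[:cut] + b[cut:]                          # cross-breeding
        if random.random() < 0.1:                          # mutation
            i = random.randrange(GENOME_LENGTH)
            child = child[:i] + random.choice(ALPHABET) + child[i + 1:]
        children.append(child)
    return children

population = [random_genome() for _ in range(50)]          # step 3: initial population
previous_best = None
for generation in range(40):
    population.sort(key=fitness, reverse=True)             # steps 4 and 7: test fitness
    top_performers = population[:10]                       # step 5: the selection pool
    best = top_performers[0]
    if previous_best is not None and abs(fitness(best) - fitness(previous_best)) < 1e-9:
        break                                              # step 8: converged, so stop
    previous_best = best
    population = next_generation(top_performers, 50)       # step 6: build next generation

print("selected design parameter:", round(express(best), 3))
```

Every genome culled in step 5 contributes nothing to later generations; that selective elimination is the logic the passage above compares to interwar eugenics.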

Chapter Overview

As previously stated, the book’s overall narrative moves from the computational toward the biological and from current practices to visionary futures. The first half focuses more on ideals of complexism in generative architecture and the second half addresses more of the biological aspects, from actual scientific experimentation to architects’ dreams of generating buildings from living cells and pre-protocells. The book explains the scientific reasons why these dreams from the realm of science fiction are unlikely to become reality based on today’s knowledge.

At one end of this spectrum, some generative architects might not ever think about using biological materials for building; they use biology only as an analogy or inspiration for computational techniques. They do not work in scientific laboratories, and may even be offended that I include those who want to grow living buildings under the label “generative architecture.” At the opposite end of the spectrum—those who dream of mixing up pre-protocell concoctions and pouring them on the earth to have buildings slowly materialize—are those whose practice need not involve computers at all; theirs is more akin to cooking and chemistry, some of which they do mix up in laboratories. This is not to say computers and generative processes are not involved in the envisioning process. As the image of Philip Beesley’s Hylozoic Ground shows (Plates 2 and 14), everything surrounding the beaker is digitally designed and manufactured, albeit hand-assembled, for every one of Beesley’s installations, and I think it is unquestionably characterized as generative architecture. And then there are those in the middle, whose work transitions between the purely computational and biological: Jenny Sabin and Peter Lloyd Jones, as well as others involved with LabStudio, David Benjamin and Fernan Federici, and Oron Catts and Ionat Zurr. These truly interdisciplinary teams work both in scientific laboratories and in the practices of architecture, design, and art, and perhaps owing to their serious exploration of these two domains, their work is the most astute as well as the most scientifically informed and up-to-date. As will become clear, Estévez rhetorically positions himself in this middle arena, yet in fact, his publications do not address the results of laboratory experimentation and reveal little current knowledge about the science he promotes.

My goal in including all these concepts, practices, and visions in the same book—one titled Toward a Living Architecture?—is to explore, by comparison, the meanings of their shared rhetoric as well as the interrelations and disjunctions of their practices and the larger questions raised by doing so. I expect that the crucial differences in media that these practitioners use, or envision using, for architecture and the contexts that inform their function will help distinguish different practices and disciplines that are being theoretically conflated or merged under contemporary materialist philosophical concepts. I hope this facilitates a deeper and more critical discussion about the ways that humans and their architecture affect the environment. I do not think that merely talking about and claiming “sustainability” as has thus far been manifested through generative architecture is anywhere near substantial enough. Shifting from steel, concrete, and glass in the shape of organic forms, to turning buildings into living organisms or living organisms into buildings, is not a good answer either for many reasons. My hope is that this book brings some clarity to a murky terrain to allow for more informed discussions and well-considered practices.

Chapter 1 opens with an exploration of the most fundamental concepts of complexism, those of self-organization and emergence, particularly with regard to how they interest certain architects. By definition, emergence is closely tied to self-organization. These terms are therefore explored in this chapter based primarily on Weinstock’s overall argument in The Architecture of Emergence: The Evolution of Form in Nature and Civilisation (2010). The chapter begins with the Boot Camp project that EmTech students, including myself, were assigned in 2011 to introduce ideas of self-organization and emergence in architecture. The brief called for creating a design using multiple components that connect locally following rules to “emerge” into a “global” structure. This fundamental design approach contributes a particular aesthetic to “emergent architecture” that extends far beyond EmTech. The chapter then analyzes Weinstock’s writings on emergent architecture in relation to his dismissal of “sustainability,” in order to contrast his approach with emergent architecture as proposed by Menges and Hensel under the term “morpho-ecologies.” Because Weinstock, Menges, and Hensel publish together often, many in the field see them as part of a single school of thought, but in fact their views differ significantly. This chapter therefore parses some of their major differences, including Menges and Karola Dierichs’s research into aggregate architecture in relation to the ideas of self-organization and self-assembly, and Menges and Hensel’s support for heterogeneity in contrast to Patrik Schumacher’s call for a monolithic parametric homogeneity.

Chapter 2 compares generative architects' ideas of “material computation” with a number of very similar-sounding concepts—natural computation, biocomputation, biomolecular computation, protoputing, and programming matter. It does so to clarify the potential confusion surrounding these terms in relation to one another and to point out where they overlap. “Material Computation” is the title of a special issue of AD in 2012 edited by Menges, who currently directs the Institute for Computational Design at the University of Stuttgart. He is an expert in wood, particularly in terms of current scanning and digital technologies that offer detailed and precise information about wood and its material performance under many different conditions. Many of the pieces he and his students create are wooden designs that intentionally feature the material's dynamic, performance-based, “emergent” properties. I therefore interpret his use of “material computation” in multidirectional comparison: first, with a few of the self-assembling and aggregate structures he includes in the journal issue, those of Dierichs and also Skylar Tibbits, of the Massachusetts Institute of Technology; and second, with the term “biocomputing” used by architect David Benjamin, founder of the firm The Living and an instructor at Columbia University's Graduate School of Architecture, Planning, and Preservation. Benjamin and his collaborator, synthetic biologist Fernan Federici, recently published a chapter called “Bio Logic,” where they put forward the term “biocomputing” in contrast to “biomimicry.” I explain what their version of biocomputing means, since there is also a scientific process called biomolecular computing that is developing at the intersection of computer science, engineering, and molecular biology.[37] This field is “known variously as biocomputing, molecular computation, and DNA computation,” molecular bioengineer Pengcheng Fu writes, so it is important to understand the different meanings given to these very similar-sounding, even identical, terms.

Additionally, Menges's phrase “material computation” sounds very much like a related concept in complex systems theory, what physicist James Crutchfield and others refer to as “natural computation.” Menges likely chose the term he did since so much of his, Hensel's, and Weinstock's thinking is shaped by complexity theory. I took Crutchfield's graduate physics course Natural Computation and Self-Organization at UC Davis in 2012, and in this chapter, I use my research into the self-organization of a particular biological system, as measured through Crutchfield's technique of “computational mechanics,” as an example to highlight the differences between “material computation” and “natural computation” and to point out the different ways that “computing” and computers are used in these different disciplinary contexts. Studying the same topic—tendril coiling—from different disciplinary perspectives (biomimetic architecture, mathematics, biology, physics, computation) offered deep insights into the differences in terminology and tools of analysis used across different fields. I end the chapter by discussing the actual life cycle materiality of computers as a fitting contrast to “material computation”: the increasingly rare earth substances from which they are made, the high embodied energy in transistors and chips that are integral to every digital technology today, and the toxic effects of their production on workers and the environment.

The third chapter covers the history of ideas of morphogenesis and evolutionary computation as related to each other and to generative architecture. The branch of computer science referred to as evolutionary computation draws its analogies from biological theories of morphogenesis and evolution, and its techniques are the source for the “generative” computational portion of generative architecture and design. As a whole, however, evolutionary computation contributes far more to studies of artificial life and general complex problem-solving in many different fields than it does to biology. On the one hand, architectural students and those interested in generative techniques should understand how and where evolutionary computation fails to mimic biological processes, in order not to presume that by using generative techniques they are somehow creating biologically relevant design. On the other hand, for those interested in biology, developments in knowledge of processes of biological morphogenesis (that is, biological development from embryo to adult) and their relationship to evolution have made the last couple of decades the most exciting in over a century of pursuing answers to some of biology's biggest questions. Although embryology and evolutionary studies began side by side in the late nineteenth century, they functioned as separate fields during most of the twentieth century as genetics developed. The recent theory known as evo-devo (evolutionary developmental biology) has brought them together again, and just as it is prompting new research in biology, it is posing a new approach in evolutionary computation known as “computational development.” Additionally, new knowledge of epigenetics is transforming understanding of how organisms interact with their environment to regulate gene action appropriately and responsively, sparking new conversations about the ongoing relevance of Lamarckism and encouraging the development of epigenetic algorithms. While I learned about evo-devo at both Penn and EmTech, only at Penn was epigenetics discussed. Later, independent study with geneticist and philosopher of science Eva Jablonka convinced me of the importance of this new field of scientific knowledge that is virtually ignored by generative architects.

While some generative architects are creating genetic algorithms that integrate principles from evo-devo, to my knowledge none besides John Frazer has recently tried epigenetic algorithms as a tool for architectural morphogenesis. Because buildings, like organisms, interact with their environments in multiple ways, perhaps these approaches could push architects to think of morphogenesis not just as something that happens during the design phase in silico but also as something that continues during the life of the building, engaging not just the “ecology” of humans but ecology writ large. After all, other theories of morphogenesis—beginning with D’Arcy Thompson’s pathbreaking foray On Growth and Form (1917) and Alan Turing’s prescient essay “The Chemical Basis of Morphogenesis” (1952)—are influencing generative architects.[38] Mid- to late twentieth-century neo-Darwinian genetics supplied the theoretical framework on which genetic algorithms were structured, and now generative architects are adapting evo-devo, as explained in Sean Carroll’s Endless Forms Most Beautiful (2005), for computational design strategy. It is crucial to realize, however, that these approaches in architecture mostly contribute to “the evolution of things,” as noted in a 2015 article in Nature, through digital design and manufacture.[39] But developments in biological theory remain relevant to those architects serious about the biological sciences or envisioning growing living buildings, as the second half of the book discusses.

This leads directly to the fourth chapter, “Context Matters: LabStudio and Biosynthesis,” the first to focus more on biological experimentation than on computation. Jenny Sabin and Peter Lloyd Jones have been two of the most prominent figures in generative architecture to understand and integrate the most recent “postgenomic” knowledge of epigenetics and systems biology into their work and teaching.[40] Jones’s first lectures in their co-taught 2008 seminar at Penn, Nonlinear Biology and Design, introduced epigenetics and the ways it is transforming knowledge of gene regulation and systems biology. He shared ideas from Eva Jablonka and Marion Lamb’s important book Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life (2005), as well as the work of his postdoctoral mentor, Mina Bissell. Bissell’s groundbreaking work on cancer morphogenesis focuses on epigenetic triggers and controls of the disease, particularly with regard to the extracellular matrix that surrounds cells in tissues. The experiments that Sabin and Jones prepared for interdisciplinary teams of graduate students in their seminar focused on cell behaviors in different matrix conditions, in order to develop tools for seeing and identifying healthy and unhealthy behaviors as they appear through cell patterning in surface design, grouping, and motility. This is the only chapter in the book that focuses on the work of a single collaborative entity, since for many years LabStudio was the only team working in generative architecture that included a molecular biologist / biomedical researcher; the only team that taught the most current scientific theories pertaining to biological systems (their close collaboration ended in 2011, when Sabin relocated to the Department of Architecture at Cornell University); and the only group that paired graduate students in architecture with those in molecular biology, requiring teams to conduct research in both a scientific laboratory and a computational studio. The chapter evaluates some of the contributions of LabStudio’s efforts both for biomedical research and for architecture, as manifest in Sabin’s recent accomplishments.

Generative architects who want to grow living buildings using either tissue or genetic engineering most certainly need to understand postgenomic theory, for it seriously complicates their endeavors. Chapter 5 examines the goals of those architects envisioning or claiming to want to work with genetics and living cells or flesh as their media for architecture. These prominently include Alberto Estévez (ESARQ, Barcelona), Marcos Cruz (Bartlett School of Architecture, UCL), Matthias Hollwich (Penn Design and Hollwich Kushner), and, for a short while, SPAN Architects (Matias del Campo and Sandra Manninger), who collaborated with mechanical engineer Wei Sun of Drexel University’s Lab for Computer-Aided Tissue Engineering. Their visions are examined in relation to the technologies they propose to use—usually tissue or genetic engineering. The chapter opens with a discussion of Catts and Zurr’s Victimless Leather, shown at the Museum of Modern Art in the exhibition Design and the Elastic Mind in 2008. Both before and after this exhibition, architects upheld Catts and Zurr’s work as the prototype of a future living architecture. For example, Cruz, with Steve Pike, published “Neoplasmatic Architecture” in AD (2008); they invited Catts and Zurr to contribute an article and later invited Catts to the Bartlett to guest lecture on how to apply tissue technologies to architecture. Overall, though, architects completely miss Catts and Zurr’s critical views of these technologies. The chapter summarizes the limitations of living buildings in general, as well as the limitations of scale in tissue engineering and in the new field of 3-D “bio-printing,” less often referred to as “organ printing.” It introduces the differences between the methods of tissue and genetic engineering, as distinct from synthetic biology. With regard to genetic engineering, I critique the 2008 video Econic Design: A New Paradigm for Architecture by Matthias Hollwich, Marc Kushner, and their students, as well as the descriptions of genetic architecture put forward by Estévez.

The last chapter addresses generative architecture and design in relation to the two branches of synthetic biology: research aiming to create protocells from the “bottom up,” as distinct from the “top-down” engineering of synthetic life forms and products, referred to here as “engineering synbio.” “Protocell” architecture, as Armstrong calls it, returns us full circle to the initial ideas of self-organization and self-assembly, as it proposes to allow molecules in different solutions to form cell-like entities that secrete calcium carbonate to make buildings and cities. The most prominent promoters of “protocell” architecture are Rachel Armstrong, Neil Spiller, and Nic Clear, all with the Advanced Virtual and Technological Architecture Research Group at the University of Greenwich, with their visions materialized through the work of artist Christian Kerrigan and architect Philip Beesley. Interestingly, Kerrigan’s renderings for Armstrong’s Future Venice project contradict some of her primary assertions, raising questions about their collaboration and about how the piece was intended to be received. Kerrigan’s renderings suggest that theirs may be a work of “critical design” of the sort promoted by Anthony Dunne and Fiona Raby at the Design Interactions Department, Royal College of Art, but no one thus far has interpreted Armstrong’s work in this way. Rather, she is usually given credit for being a medical doctor—implying she must understand biology—and is revered by many young architecture students for a “sustainable” vision that is not architecturally or scientifically credible.[41] For his part in “protocell” architecture, Beesley contributes digitally designed and manufactured sculptures: stunningly beautiful and thoughtful interactive environmental installations that happen to house “protocell” flasks. His work raises more interesting questions about “hard” artificial life than about protocells, which are characterized as a future form of “wet artificial life.”[42]

Figure I.3. Hy-Fi, by David Benjamin / The Living with Ecovative, 2014. Winner of MoMA PS-1 Young Architects Program, 2014. The bricks in this structure are created out of mushroom mycelium and corn stalks.

The book concludes by considering new work in biodesign and in design using engineering synbio, which I refer to as “synbiodesign,” first explaining the major differences between genetic engineering and engineering synbio. David Benjamin, who collaborated with synthetic biologist Fernan Federici in 2011, also worked with the new company Ecovative to grow mushroom-mycelium bricks for his installation Hy-Fi, which won the 2014 MoMA Young Architects Program at PS-1 (Figure I.3). Ecovative built on the preceding work of San Francisco artist and designer Phil Ross (Figure I.4). Ross is developing his company Mycoworks through the biotechnology-focused IndieBio start-up program in the Bay Area. For the last three years, he has been working in the lab of Stanford bioengineer Drew Endy, one of the founders of the field of synthetic biology. Ross knows that it is very difficult to genetically engineer fungi; they resist genetic manipulation, reverting to their age-old form and function. This has prompted him and his collaborators to work instead on engineering the media in which the mycelium grows, shaping the resulting composite into a new, low-energy, low-cost material that can serve widely in design.[43] The chapter explains both the appeal and the critiques of engineering synbio, which has a significant distance to go to live up to its most basic claims. If it does, then along with genetic engineering it comes closest of all the methods discussed in this book to opening new modes of practicing eugenics. The book thus ends on a discordant note, weighing these difficulties against the stark contrast between Benjamin’s Hy-Fi and Ross’s relatively low-tech mycelium-based designs, on the one hand, and the high-tech methods of computational generative architects such as Menges, Hensel, or Schumacher, on the other.

Figure I.4. Walnut Legged Yamanaka McQueen, by Phil Ross, 2012. Fungus and wood. Created at the Workshop Residence in San Francisco with the help of Michael Sgambellone, Caitlin Moorleghen, and Peter Doolittle. Courtesy of Phil Ross.
