Chapter 2
Material Computation
These terms in current usage in architecture, engineering, and the sciences—material computation, natural computation, biocomputation, and biomolecular computation—are ambiguous about their subject or object, about what is being computed or doing the computing, and about whether components are interacting through their own agency or being designed to act according to scripted rules.[1] This same ambiguity pertains to self-organization, not only because the above terms describe processes often categorized as self-organizing but also because a similar vagueness surrounds self-organizing components’ agency, the identity of the “self,” and the origin of the rules supposedly being followed. These ambiguities may bother few scholars and in fact may even serve as a stimulus for research. For example, the interdisciplinary journal Natural Computing defines the term as “computational processes observed in nature, and human-designed computing inspired by nature,” which might also be called biomimicry.[2] The cross-fertilization of ideas between the natural and computer sciences undoubtedly sparks interesting and productive research questions and new methodological approaches to understanding. Furthermore, an increasing number of scholars subscribe to what is called nano-bio-info-cogno (or NBIC) technological convergence, which is based on the idea that at root, all things are computational and that technologies using computational tools (information technologies) across the physical, biological, and cognitive realms will bring these disciplines much closer together. From this perspective, “natural computing” makes computing seem, well, natural—meaning commonplace and everyday—in addition to pervasive across the material world.
Yet, one major goal of this book is to demystify the rhetoric of complexity in generative architecture in order to ascertain when in fact architects are talking about biology and when they are talking about computation or architecture. This is because significant disciplinary as well as energetic, material, and environmental differences exist between these domains. This is not done to promote a return to disciplinary isolation, for that would limit the kinds of questions scholars tackle that produce knowledge and change. The goal is to make visible the differences that the rhetoric masks so that architecture students and anyone who cares about the environment can make educated, conscious choices about the approaches and technologies they support. Additionally, in our interdisciplinary world, words that are spelled the same or sound similar often carry different meanings that in any specific instance depend on the discourses and meanings of the field in which they appear. “Natural computation” is not necessarily the same thing as “material computation,” nor as “biocomputation,” “biomolecular computation,” and “programming matter.” In general, “natural computation” is used by physicists and complexity theorists, “material computation” by generative architect Achim Menges and his circle, “biocomputation” by generative architect David Benjamin and his synthetic biologist collaborator Fernan Federici, “biomolecular computation” by interdisciplinary computer scientists with biologists and engineers, and “programming matter” by designer Neri Oxman along with materials scientists and synthetic biologists.
The difference between the two interpretations of “natural computing” offered by the journal of the same name—as technologies mimicking nature and as nature computing itself—is significant, and it points to a key issue with all these related phrases. It is also the reason for this chapter. In the former, computer-based technologies do the computing, using processes analogous to those of natural systems; note that digital technology is the subject doing the work, as directed by humans. In the latter, natural systems are just being themselves, without hardware or software, and we simply describe their normal processes as “computing.” Nature, living and nonliving, is the subject of the doing. Humans are differentiated from nature here not because humans are not natural or are not animals, but because humans are the audience reading this book who can choose to alter their actions based on what they learn. Furthermore, nonhuman nature does not make digital technological hardware and software without human direction and provision of parts. A materialist philosopher might take issue with my differentiation of these, saying it is all matter, so why distinguish whether it is living, nonliving, hardly processed, or many-times-processed by humans? I distinguish because human processing of matter requires energy and materials and produces environmental waste and pollution, much of it seriously harmful. Since architecture is responsible for a significant amount of environmental damage, to hide the differences between what is heavily processed by humans (technology) and what is minimally processed (nature) does a disservice to humanity and other species. I therefore strive to distinguish biology from architecture from computation to clarify which materials and production processes are being promoted. The chapter concludes with an elucidation of the materiality of computation—a short description of the life cycle of digital technologies, as generalized by a transistor, microprocessor, and personal computer, along with their embedded energy, wastes, and pollution.
Material Computation
Since the early 2000s, Menges’s research has focused on the material performance of building materials. Although architects have a lengthy history of using computers in their practice and generative architects have been increasingly pursuing parametric design for the last fifteen years, few have considered the materials from which buildings are made as anything but subsidiary to processes of digital design. Computers have been used to aid in the production of geometric form and structure, to calculate engineering loads for particular known materials under different forces, and to integrate such features as architectural program and cost in multi-objective optimizations. Menges and others, like Open Source Architecture’s Aaron Sprecher at McGill University’s LIPHE (Laboratory for Integrated Prototyping and Hybrid Environments), are forging paths to add to these computational approaches that integrate microscale material properties and related building-scale performance capacities as “generative drivers in design computation.”[3] For example, LIPHE is developing new optimization models for the robotic production of large-scale prototypes that integrate “material and physical behaviours within simultaneous design and operative decision-making.”[4] In Menges’s approach, information about materials is factored into the digital design process as a parameter at the outset, so that their potential can inform possible outcomes along the way, including component geometry, the manufacturing process—laser-cutting for 2-D materials, or computer numerically controlled (CNC) milling or additive manufacturing for 3-D ones—and the structure’s assembly logic. All of these are interrelated, as Menges aptly demonstrates through his numerous experimentations with wood (Figures 1.9 and 1.10).
Wood has inherent material properties that many industrially produced architectural materials do not have. The latter are quite homogeneous in their composition, either at the level of molecular structure (steel and glass) or in terms of the uniformity of aggregate distribution in a composite such as brick, concrete, or plaster. Wood, however, is a biological composite composed primarily of cellulose and lignin, built up through yearly accretions of different kinds of cells with different chemical makeups in layered arrangements. The tissue that forms early in the annual growth cycle is referred to as earlywood, and that which comes later in the season, latewood.[5] This tissue differentiation leads to the property of wood known as anisotropy, which means it has different capacities when measured and cut in different directions. Wood is also irregular, growing with different branching developments based on the specific environmental context of the tree relative to the local availability of sunlight. Furthermore, because wood is elastic, cutting it in consideration of the grain (either parallel to the grain or across it) makes different amounts of bending curvature possible owing to its anisotropic nature. Thickness of the cut of wood also affects its bendability, with thin sheets bending more readily than thicker pieces. These differences lend themselves to different manufacturing tools: laser-cutters for thin flat sheets and robotic CNC milling for thicker beams. Bending of wood can also be triggered post-construction by the presence of humidity, as its cells retain their hygroscopic capacity to absorb water from the atmosphere long after the tree has been cut down.[6] Finally, hardwoods and softwoods have distinct properties and structural capacities, with hardwoods being preferred in contexts that require the greatest structural strength.[7]
Wood is increasingly a material of choice for many architects and builders owing to what Menges calls its “environmental virtues.” He notes its biological source and the fact that the growth of trees is powered by the sun through photosynthesis, producing oxygen after absorbing carbon dioxide from the atmosphere. Overall, wood possesses a very low embodied energy and imparts a “positive carbon footprint,” even in consideration of “today’s heavily industrial wood processing.” “The production of a panel of a given compressive strength in wood requires 500 times less energy than in steel. Thus wood is one of the very few highly energy-efficient, naturally renewable and fully recyclable building materials we currently have at our disposal,” Menges writes.[8] The climate impact of its use is therefore far superior to that of steel, despite the extra effort it takes to understand the properties and maximize the performance of each piece of wood.[9] Additionally, using wood’s hygroscopic and anisotropic properties to open and close panels in response to humidity offers what Menges calls “climate responsiveness in architecture.” This avoids the need for technological equipment (e.g., sensors, motors) to move building parts, relying instead on the “no-tech capacity already fully embedded in the material itself.”[10] Unfortunately, the latter capacity depends on the wood remaining unsealed.
For all the above reasons, each unique piece of wood to be used in a structure requires digital analysis of its species, individual cellular properties, irregularities, grain, cut, and thickness, so that this information and its effects on performance, geometry, manufacturing, and assembly can be input into the design process. One of Menges’s initial forays into digitizing each piece of wood for integration into generative design was at the Microstructural Manipulations studio he led at the Harvard Graduate School of Design in 2009. Menges and his students experimented with removing part of the earlywood to lessen the mass and lighten the load. To do this, they scanned each piece of wood to be used, conducted finite element structural analysis on it to ascertain the load on each piece, ran an algorithm on the scan to identify the earlywood regions as distinct from the latewood, and then laser-cut away much of the earlywood. The final structural outcomes confirmed their hypothesis that earlywood plays an insignificant structural role in wood’s performance capacity and therefore can be technically eliminated. Menges’s research has led him to conclude that “conceiving the microscale of the material make-up, and the macroscale of the material system as continuums of reciprocal relations opens up a vast search space for design, as most materials are complex and display non-linear behaviour when exposed to dynamic environmental influences and forces,” such as gravity, wind, and humidity. “Computation allows navigating and discovering unknown points within this search space, and thus enables an exploratory design process of unfolding material-specific gestalt and related performative capacity.”[11]
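To make the studio’s identification step concrete, the kind of operation involved can be sketched in a few lines of Python; the threshold value, the synthetic data standing in for a scan, and the function name are my own illustrative assumptions rather than Menges’s actual toolchain.

```python
import numpy as np

def segment_growth_rings(scan, threshold=0.6):
    """Label each pixel of a grayscale wood scan as earlywood (True) or
    latewood (False), treating the lighter, less dense tissue as earlywood.
    A real workflow would follow this with finite element analysis to confirm
    that the flagged regions carry negligible load before laser-cutting them away."""
    scan = (scan - scan.min()) / (np.ptp(scan) + 1e-9)  # normalize to [0, 1]
    return scan > threshold

# Toy usage: alternating synthetic bands stand in for growth rings.
rows = np.linspace(0, 4 * np.pi, 200)
fake_scan = np.tile(0.5 + 0.5 * np.sin(rows), (100, 1))
print(f"earlywood fraction: {segment_growth_rings(fake_scan).mean():.2f}")
```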
Thus, the most basic meaning that Menges imparts to his oft-used term “material computation” refers directly to this mode of integrating digital information about materiality at the outset of a parametric design undertaking, in order to fully incorporate material identity, capacity, and performance into all aspects of the design process. This is the title he gave to his guest-edited issue of AD in 2012, “Material Computation,” which integrates quotes from scholars familiar with complexity theory that imply interpretations of “material computation” different from what Menges usually means. For example, Menges’s introductory essay to the issue opens with this quote from architectural theorist Sanford Kwinter: “No computer on earth can match the processing power of even the most simple natural system, be it of water molecules on a warm rock, a rudimentary enzyme system, or the movement of leaves in the wind.” Switching notions of “computation,” he continues, “the most powerful and challenging use of the computer . . . is in learning how to make a simple organization model that is intrinsic about a more complex, infinitely entailed organization.”[12]
Menges, too, repeats this alternate meaning, which is much closer to “natural computation” of the physical and complexity science variety. “Computation, in its basic meaning, refers to the processing of information. Material has the capacity to compute. Long before the much discussed appearance of truly biotic architecture will actually be realised, the conjoining of machine and material computation potentially has significant and unprecedented consequences for design and the future of our built environment,” he writes. Note that here he clearly differentiates machine computation from material computation, while still conjoining them as if domain differences are not a barrier. More often than not, though, he simply uses the term “material computation” to refer to either, which leads to ambiguity, making it seem as if he is talking about nature when he is not. “In architecture,” he continues, “computation provides a powerful agency for both informing the design process through specific material behaviour and characteristics,” using his usual meaning, “and in turn informing the organisation of matter and material across multiple scales based on feedback with the environment,” using the alternate meaning.[13] He links the latter to both inorganic and organic processes in nature. “Physical computation is at the very core of the emergence of natural systems and forms,” he writes, referencing self-organization. He describes how evolutionary biologists are now beginning to integrate physical forces into their theories: “It seems that the more we know about the genetic code the better we understand the importance of physical processes of material self-organisation and structuring in morphogenesis.” Menges also invited physicist and complexity writer Philip Ball to contribute to the issue, and he summarizes Ball’s contribution: “Ball introduces a range of pattern formations in both living and non-living nature, and explains how they can be surprisingly similar because they are driven by analogical processes of simple, local material interactions, which he describes as a form of physical computation that gives rise to material self-organisation and emergent structures and behaviours.”[14] Throughout the issue, other contributors also use this terminology of natural or physical computing. Michael Weinstock and Toni Kotnik state that “materials have the inherent ability to ‘compute’ efficient forms,” and Karola Dierichs, with Menges as collaborator on her study of aggregate architecture, repeats this, writing that “aggregates can physically and continuously re-compute structural and spatial characteristics.”[15]
One architectural means of modeling material computation apart from digital methods is through what Frei Otto, Menges, and others after Otto refer to as physical “form-finding.” Form-finding entails the use of physical models—physical in the sense of tangible and not digital, and physical in the sense of physics, pertaining to load under the force of gravity. Classic examples of physical form-finding are Spanish architect Antoni Gaudí’s upside-down hanging models that he used to “compute” the curvatures of his highly original, tree-inspired arches at the Sagrada Familia church in Barcelona. Gaudí tied strings or chains of appropriate length to one another in the same pattern as the cathedral’s arches and weighted each string with a load proportional to what the corresponding arch would carry, so that the lines of force found by the model corresponded to those that would run through the stone arches of the cathedral. The precise catenary curves that the model “found,” despite being upside down relative to the cathedral’s upward orientation, indicated the form that would be structurally sound when built right-side up.[16] Otto, who directed the Institute for Lightweight Structures at the University of Stuttgart, where Menges is now based, specialized in lightweight membrane and cable tension structures and developed techniques for form-finding in tension systems. “In order for a membrane to be in tension and thus structurally active,” Michael Hensel and Menges write, “there needs to be equilibrium of tensile forces throughout the system. . . . Membrane systems must be form-found, utilising the self-organisational behaviour of membranes under extrinsic influences.”[17]
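For a chain of uniform weight, the curve such a hanging model settles into is the catenary, y(x) = a·cosh(x/a), where the constant a equals the horizontal component of the tension divided by the weight per unit length of the chain; flipping the hanging form upside down converts pure tension into pure compression, which is precisely what Gaudí’s inverted model exploited. With discrete, unequal weights such as Gaudí’s, the chain instead settles into a funicular polygon that approximates this curve.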
The processes of modeling used by many architectural students today demonstrate both physical and digital form-finding, which for membrane systems occurs digitally through “means of dynamic relaxation.” “Dynamic relaxation is a finite element method involving a digital mesh that settles into an equilibrium state through iterative calculations based on the specific elasticity and material properties of the membrane, combined with the designation of boundary points and related forces,” Hensel and Menges explain.[18] Software packages used today for finite element analysis of both tension and compression structures made of different materials include ANSYS and Strand 7, both of which were taught during the introductory term at EmTech in 2011. The back-and-forth iteration between physical modeling and digital form-finding is fundamental to techniques of generative design, and both are referred to by the term “material computation.”
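A minimal sketch of dynamic relaxation, assuming a pin-jointed net of point masses joined by linear-elastic links rather than the full finite element formulations in ANSYS or Strand 7 (the stiffness, damping, and convergence values here are arbitrary illustrative choices), might read as follows:

```python
import numpy as np

def dynamic_relaxation(nodes, edges, fixed, k=50.0, mass=1.0,
                       damping=0.95, gravity=-9.81, dt=0.01, steps=20000):
    """Relax a pin-jointed net toward static equilibrium.

    nodes: (n, 3) array of coordinates; edges: list of (i, j) index pairs;
    fixed: set of node indices held in place (the designated boundary points).
    Each iteration sums elastic and gravity forces, updates damped velocities,
    and moves the free nodes; once the residual force is negligible, the mesh
    has settled into, or 'found', its form.
    """
    x = np.asarray(nodes, dtype=float).copy()
    v = np.zeros_like(x)
    rest = np.array([np.linalg.norm(x[j] - x[i]) for i, j in edges])
    free = np.array([i not in fixed for i in range(len(x))])
    for _ in range(steps):
        f = np.zeros_like(x)
        f[:, 2] += mass * gravity                    # self-weight acting in -z
        for (i, j), L0 in zip(edges, rest):
            d = x[j] - x[i]
            L = np.linalg.norm(d) + 1e-12
            fij = k * (L - L0) * d / L               # linear-elastic member force
            f[i] += fij
            f[j] -= fij
        f[~free] = 0.0                               # supports absorb their reactions
        v = damping * (v + dt * f / mass)
        x[free] += dt * v[free]
        if np.abs(f[free]).max() < 1e-6:
            break
    return x
```

Fixing the corner nodes of a small grid and letting the rest hang reproduces the hanging-model logic in digital form; prestressing the links instead of loading them with gravity turns the same loop into a crude membrane form-finder.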
“Material computation” is also used by Menges and Dierichs regarding aggregate architecture, whose granular components are not connected to one another at all except through friction and gravity (Figures 1.11 and 1.12). “Whereas assembly seeks control on the level of connections between fixed elements, aggregation focuses on the overall system behaviour resulting from the interaction of loose elements,” Dierichs and Menges write. “In contrast to assembly systems, aggregates materially compute their overall constructional configuration and shape as spatiotemporal behavioural patterns, with an equal ability for both: the stable character of a solid material and the rapid reconfigurability of a fluid.”[19] They refer to using both “material and machine computation,” the latter taken from the field of geo-engineering that has developed software to simulate granulate behavior.[20] “Computation denotes . . . the processing and gathering of information. . . . Material and machine computation are based on a common computational model, of information input, information processing and information output,” they write. “Material computation thus denotes methods where a physical substance is set to produce data on the system in question. The computation is based on the innate capacities of the material itself.” In contrast, “machine computation describes methods using a specifically developed algorithm that can be executed by a machine, such as a personal computer.”[21]
Dierichs and Menges note that architecture throughout time typically has been “one of the most permanent and stable forms of human production. As a consequence it is commonly conceived as precisely planned, fully defined and ordered in stable assemblies of material elements.” Surely this is due to its function of sheltering and protecting rather than threatening human life. However, perceived stability, they claim, is an illusion, for over time, buildings succumb to entropy and decay, sometimes even quite rapidly if humans are the agents demolishing them before rebuilding. They believe this cycle is accelerating, although they do not say exactly why; perhaps it is due to the ongoing pursuit of economic growth through land development and redevelopment, perhaps to shoddier construction techniques in recent times that hasten building decay, or perhaps to accelerating cycles of collapse and reorganization toward higher complexity.
That Dierichs and Menges may be thinking about complexity theory as a primary motivation for researching aggregate architectures is evidenced by a number of factors. First, aggregates do offer a very tangible realization of shifts between stability and instability, equilibrium and nonequilibrium, as forces shift their state from solid-like to liquid-like, a quality undesirable for human habitation. They provide a clear visual analogy for phase or state change, far more easily than does stable habitable architecture. Second, they reference the idea of “self-organised criticality,” which in complexity theory refers to systems that are mathematically attracted to a critical point at which a phase transition is triggered. The concept originated in 1987 with a paper published in Physical Review Letters that used as a key example a model of the changing form of sandpiles, which slowly accrete and then, on reaching the critical point, give way to an avalanche.[22] Dierichs and Menges intentionally integrate this potential for self-organized criticality into their aggregate designs through either “strategically program[ing it] into the system during the initial pouring process,” or inducing it at a later stage. They can design these points of self-organized criticality into the system because they are designing the components and can vary their geometries and how they “grab” one another, and because they are pouring these out with a six-axis industrial robot that can precisely deposit particular components in particular paths. They do this to “trigger the transformations from one spatial condition, structural state, and environmental performance to another.” “A certain area in the aggregate might be modulated to a certain effect in quite a controlled manner, yet this interaction can trigger more emergent phenomena in the wider aggregate field.”[23] Thus, stability is only temporary in aggregate architectures, implying that if one were to use this technique for habitable structures, the occupants would just have to go with the flow. This is obviously a dangerous proposition that runs counter to the primary goal of habitation, so a designation as sculpture or pure research seems more appropriate than does architecture at this point. In this case, the allure of complexity theory is clearly so strong that it has become the attractor, pulling architects away from consideration of the primary function of architecture.
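The sandpile model from that 1987 paper, now known as the Bak-Tang-Wiesenfeld model, can be sketched in a few dozen lines; the grid size, grain count, and printed summary below are arbitrary choices, but the toppling rule is the canonical one.

```python
import numpy as np

def sandpile(n=50, grains=20000, threshold=4, rng=None):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time onto a grid;
    any cell reaching the threshold topples, sending one grain to each
    neighbour, which can trigger further topplings (an avalanche).
    Returns the avalanche size recorded for each dropped grain."""
    rng = rng or np.random.default_rng(0)
    grid = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, n, size=2)
        grid[i, j] += 1
        size = 0
        unstable = [(i, j)] if grid[i, j] >= threshold else []
        while unstable:
            a, b = unstable.pop()
            if grid[a, b] < threshold:
                continue
            grid[a, b] -= threshold
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < n and 0 <= nb < n:   # grains falling off the edge are lost
                    grid[na, nb] += 1
                    if grid[na, nb] >= threshold:
                        unstable.append((na, nb))
        sizes.append(size)
    return sizes

# Most drops do nothing; occasionally an avalanche sweeps a large region.
# The heavy-tailed size distribution, arising without any tuning, is the
# signature of self-organized criticality.
sizes = sandpile()
print(max(sizes), sum(s == 0 for s in sizes))
```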
Dierichs and Menges also utilize another term related to material computation to refer to their design process for aggregate architecture, one that evokes the work of generative designers Skylar Tibbits and Neri Oxman, both at the Massachusetts Institute of Technology (MIT). All four at times talk of “programming matter,” which might be thought of as the next step after material computation and machine computation. If we know how material physically computes itself (e.g., how components grab and pack and stack), and if our computational tools can precisely manufacture and pour these components in exact spatial locations, then we have the resultant capacity to design and develop “specific material behaviour through the calibration” of components at the macroscale or of particles and molecules at the microscales. Dierichs and Menges refer to aggregate architecture therefore as “programmed macro-matter,” although they hope that “in the future, particles could, however, also be produced through a process of self-organisation based on physical behaviour similar to that of snow crystals.”[24] Both Tibbits and Oxman contributed articles to Menges’s issue “Material Computation,” with Oxman’s explicitly titled “Programming Matter.”
In it, Oxman argues for a new method of material science and design that is very similar to the aggregate architectures approach except that she seems to hope for bonds to connect the different substances laid down robotically. She describes how nature does not often produce homogeneous materials, but rather produces “functionally graded materials” that together exhibit different properties at different scales. She also states that in nature, “it is often quite challenging to distinguish between structural and functional materials, as most biological materials such as wood, sponge, and bone can be both structural (supporting the branches of a tree or the body) and functional (pumping water up to the leaves or storing energy), with different scales for these different roles.”[25] She calls attention to the anisotropy of wood, which is a functionally graded material. “In the fields of material science and engineering, the concept of anisotropy is tightly linked to a material’s microstructure defined by its grain growth patterns and fibre orientation,” she writes. “Functionally graded digital fabrication . . . enables dynamically mixing and varying the ratios of component materials in complex 3-D distributions in order to produce continuous gradients in 3-D fabricated objects.” This approach “expands the potential of prototyping, since the varying of properties allows for optimisation of material properties relative to their structural and functional performance, and for formal expressions directly and materially informed by environmental stimuli.”[26] She intends to use this organizational technique at the macro-level as a “design strategy leading away from digital form-finding to trait-finding and the potential programming of physical matter.”[27]
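As a toy illustration of what “varying the ratios of component materials in complex 3-D distributions” might mean computationally (and nothing more than that: the linear gradient, the axis choice, and the stiff-versus-compliant reading below are my own assumptions, not Oxman’s fabrication pipeline), consider:

```python
import numpy as np

def mixing_ratio(points, axis=2, structural=1.0, functional=0.2):
    """Assign each 3-D sample point a continuous ratio between two component
    materials, graded linearly along one axis: closer to 'structural' (stiff)
    near the base, closer to 'functional' (compliant) toward the top."""
    coord = points[:, axis]
    t = (coord - coord.min()) / (np.ptp(coord) + 1e-9)
    return (1 - t) * structural + t * functional

# A printer driver could read this field voxel by voxel to set deposition ratios.
voxels = np.random.default_rng(0).random((1000, 3))
print(mixing_ratio(voxels).round(2)[:5])
```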
In contrast to Oxman, Tibbits’s design work in this area focuses not on the design of materials but rather on the design of components that “self-assemble” under extrinsically imposed forces. His aim is the transformation of the outdated construction industry using a method that he thinks will work across many scales, from the biological to the “largest of infrastructures.”[28] In contrast to the old established method of “taking raw materials, sending them through a machine or process that is inherently fighting tolerances, errors, and energy consumption to arrive at a desired product, we should be directly embedding assembly information into raw materials, then watching as the materials assemble themselves. This process is self-assembly and it is the future of construction,” he asserts.[29] He identifies self-assembly as the construction method of biology “from our body’s proteins and DNA to cell replication and regeneration,” adding that it contributes to the capacities for “self-repair for longevity, self-replication for reproduction, and growing or mutating new structures.” Relying on digital design and fabrication and “smarter systems of assembly” will permit us to “build structures more adaptable to the current demands of our society and environment.” “These new possibilities of assembly must rely on smarter parts, not more complex machines,” he writes. “This is self-assembly where our parts build themselves and we design with forces, material properties and states, where construction looks more like computer science or biology rather than sledgehammers and welders.”[30]
Tibbits’s recipe for a design of self-assembly includes “four simple ingredients: 1) simple assembly sequences; 2) programmable parts; 3) force or energy of activation; and 4) error correction and redundancy.” He uses DNA as an example of the first, which requires components that respond to simple instructional algorithms like “on/off, left/right/up/down, etc.” He wants these algorithms to be able to “construct any desired 3-D structure. Luckily, through algorithms like Hamiltonian paths and Euler tours (various ways to draw a single line through an arbitrary set of points), it has been demonstrated that any given 1-D, 2-D or 3-D geometry can be described by a single sequence or folded line.”[31] The second ingredient builds on the first. “Just as DNA has base pairs, or proteins have discrete amino acids with unique shapes, attractions, and rotation angles, we need to design systems with simple yet smartly discrete (and finite) elements. These parts should be able to have at least two states and should correspond to the instruction sequences; for example, on/off or left/right/up/down, etc.” As these parts aggregate and interconnect, “every joint should be able to freely switch between states depending on each step in the instructions. This means we are looking to build structures from simple switches; each switch can be activated to change from one state to another depending on its placement or relationship to an external condition.”[32] His goal of programming parts, therefore, is accomplished by embedding into the parts their own instructions for assembly. He quotes Neil Gershenfeld of MIT’s Center for Bits and Atoms: “The medium is quite literally its message, internally carrying instructions on its own assembly. Such programmable materials are remote from modern manufacturing practices, but they are all around us.”[33]
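The underlying claim that a single folded line over a small instruction alphabet can describe a 3-D geometry is easy to make concrete; in the sketch below the turn alphabet, the unit-lattice chain, and the sample sequence are invented for illustration and do not correspond to Tibbits’s actual components.

```python
import numpy as np

# Discrete turn states for each joint in a chain of unit-length links.
TURNS = {
    "S": np.eye(3, dtype=int),                                   # straight
    "L": np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]),           # yaw left
    "R": np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]),           # yaw right
    "U": np.array([[0, 0, -1], [0, 1, 0], [1, 0, 0]]),           # pitch up
    "D": np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]]),           # pitch down
}

def fold(sequence):
    """Fold a 1-D instruction string into a 3-D path of lattice points:
    each symbol rotates the current heading, then the chain advances one unit."""
    heading = np.array([1, 0, 0])
    position = np.array([0, 0, 0])
    path = [tuple(position)]
    for symbol in sequence:
        heading = TURNS[symbol] @ heading
        position = position + heading
        path.append(tuple(position))
    return path

# Different instruction strings fold the same chain into different rigid shapes.
print(fold("SSLUSSD"))
```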
Consider, for example, Tibbits’s piece Logic Matter, whose component design allows the addition of more components to different faces in order to build the form in different ways; this process is accomplished by human hands, which are included in some of the published pictures of the system.[34] The units work “hand-in-hand with the user to store assembly information, build and compute on next moves, check previous moves, and assemble digital discrete structures in 3-D.” What is described as a collaborative process here between humans and components subsequently is described as a component-directed process: the components “inform the user to act upon them, or actually generate [their] own sequence of instructions for the next build.”[35] A process with reversed agency is at work in Tibbits’s Biased Chains, which like the Euler tour can fold from a one-dimensional chain into a three-dimensional structure “simply through the act of shaking.”[36] “Once the sequence of units is assembled, the user simply shakes the chain, adding stochastic movement and energy that automatically switches each of the units into the correct orientation to successfully build rigid structures.” Although he states that this system utilizes “passive energy . . . effectively letting the materials build themselves,” this can only be taken as true if it is from the perspective of the units or the chain. Things “self-assemble” only through the addition of external force, in this case, a human being—or, he proposes in his conclusion, an earthquake—actively shaking the designed components.[37] Similarly, in Tibbits’s more recent project Fluid Assembly: Chairs (2014), done at MIT’s Self-Assembly Lab, unique components dropped into a tank of turbulent water self-assembled into a chair over a seven-hour period, utilizing the energy from the water’s propulsion to move and jostle until they found their correct places to attach (Figure 2.1). Without the energy injected into the tank, however, such self-assembly is highly unlikely.
Tibbits calls these forces the “muscles of the system,” and while he hopes that “our industries should ultimately be moving towards sustainable and energy-producing, rather than energy-consuming, systems,” he notes that robots rely on electricity to power their motors and gears. His list of “passive” energy sources for “self-assembly” includes “heat and expansion/contraction of materials, fluids and capillary action or hydraulics, pneumatics, gravity, wind resistance, shaking, pre- and post-tension or compression members, springs, and a plethora of other opportunities.”[38] His idea of passive energy sources for component deployment therefore excludes the energy involved in getting the components into the context where these “passive” forces can then do their work, for example, transporting component assemblies by helicopter so that they can be dropped and arrive on the ground as three-dimensional “disaster relief” structures. This evasion of the broader systemic forces at work is a strategy common to many industries and designers who want their systems to seem more sustainable than they are, if considered from a broader perspective of their life cycle. In a related manner, Tibbits’s fourth ingredient calling for building with “redundancy and interconnectedness” as a means of “error correction” demands more materials and more components, which would matter less if his components were biological rather than synthetic ones produced using advanced digital technologies. Perhaps, ultimately, biological components are his goal, for he sees our future as “one where our structures build themselves, can compute and adapt on demand, and where assembly looks more like biological processes than construction sites.”[39] If so, it is a goal shared by others working in the area of generative architecture but at the more biological and “genetic” end addressed in the second half of this book, such as those who, like Benjamin, collaborate with synthetic biologists. It is to Benjamin and Federici’s concept of “biocomputing” that we now turn in our exploration of material computation.
Figure 2.1. Fluid Assembly: Chair, by MIT’s Self-Assembly Lab with Baily Zuniga, Carrie McKnelly, Athina Papadopoulou, and Skylar Tibbits in collaboration with Arthur Olson and Autodesk Inc., funded in part by MIT’s International Design Center, 2014. This project entailed the design of uniquely differentiated parts that fit only one place in the final chair. Over seven hours, turbulent water in an aquarium jostled the parts until they joined together into the structure. Tibbits refers to this as “autonomous assembly” and states that it “points towards an opportunity to self-assemble arbitrarily complex differentiated structures from furniture to components, electronics/devices, or other unique structures.”
Architectural Biocomputing and Scientific Biomolecular Computing
In 2011, funded by a collaborative grant from the National Science Foundation of the United States and the Engineering and Physical Sciences Research Council of the United Kingdom, Benjamin teamed up with synthetic biologist Federici—formerly at Cambridge University for his doctoral work, now director of the Synthetic Biology Lab at Pontificia Universidad Católica in Chile—as part of the Synthetic Aesthetics research program. This program paired artists, designers, and architects with scientists and social scientists to investigate how cross-disciplinary alliances and shared methodologies might help reconceive the potentials of the new science of synthetic biology before its methods and questions become entrenched in the older habits of other disciplines. Federici’s research at Cambridge focused on patterning in complex systems, particularly Turing patterns, which produce spots and stripes on animals. He is an expert image-maker of biological patterning using confocal microscopy, which he and Benjamin used for their study. Benjamin, on the other hand, is an expert scripter, adept in the use of a variety of generative approaches, as well as the 2014 winner of the Museum of Modern Art PS1 Young Architects Program for his sustainable structure built from mushroom mycelium and corn stalks (Figure I.3).[40]
Benjamin and Federici titled their project “Bio Logic” when they published it in Synthetic Aesthetics: Investigating Synthetic Biology’s Designs on Nature (2014).[41] This title resonates with their term “biocomputing,” which reads both ways like “material computation” and “natural computation.” It refers to ways that biological materials and organisms compute structure and form, as well as to ways that computers can be used to model, simulate, and explore biological processes. The field of engineering synthetic biology (referred to here as engineering synbio) bridges both of these meanings. It uses computers to design DNA strings that are produced synthetically, or, alternatively, one can order “biobricks” that already have a particular known DNA sequence that computes a particular function.[42] After scientists insert this DNA into bacterial cells (the most frequent choice), which then incorporate and replicate the sequence, the resulting cells ideally demonstrate the desired function.[43] So, computers analyze and design the sequences that the cells then compute into particular outcomes. Furthermore, the rhetoric of engineering synbio metaphorically conceives of cells as computers and DNA sequences as strings of information, to the extent that synthetic biologists model their disciplinary approach on the circuitry of electrical, mechanical, and computer engineering. A cell is called a “chassis” that carries “devices” made up of genetic “circuits.” The meanings of “biocomputing” therefore are multilayered, conceptual, and procedural, with either biology or computers or both performing the act of computing.
As Benjamin and Federici’s project title and the term “Biocomputing” imply, the duo chose to use the tools of both disciplines to attempt to discern the “bio logic” of plant xylem structure and pattern. “The process of pattern formation in xylem cells can be seen as a ‘morphogenetic’ program—it renders form (structural support) in response to the physical conditions of the environment. . . . This process lacks any external guidance for construction and depends on local molecular interactions,” they state, referencing self-organization. They therefore view the “morphogenetic program of xylem pattern generation” as a “biological design program” that they aim to uncover in order to make it useful to designers.[44] In essence, they were in search of the presumed biological algorithm for the structure of xylem formation.
Benjamin and Federici began with actual biological samples, slicing the vascular tissue of an artichoke stalk into numerous thin, relatively 2-D slices that Federici photographed. These images were then loaded into architectural software by Benjamin and layered in order to digitally re-create the 3-D form of the original plant tissue. They then conducted experiments using differently shaped nonvascular cells of a transgenic strain of Arabidopsis thaliana, adding a chemical that induced the formation of the xylem pattern in these differently shaped cells. The slicing, photographing, and virtual reconstruction processes were repeated. Benjamin then compared the two virtual models and used the software application Eureqa to derive the mathematical equation common to both sets of data from the virtual reconstructions. They then used that equation to generate new structural forms in different boundary conditions, in essence using the biological algorithm—assumed to be the same as the derived mathematical equation—as a tool for novel designs.[45] Finally, they attempted to scale this equation-based pattern to actual architectural scale, although in this case that amounted to the scale of 3-D printed models and virtual “full size” renderings. Benjamin found that what may be “optimal” in nature for the particular context of the growing plant may be “suboptimal” for architecture at a much larger scale. In such a suboptimal situation, generative approaches using optimization can evolve the biological forms into ones suited to the needs and scale of architecture.[46]
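Compressed into a Python sketch, the workflow ran roughly as follows; here NumPy stacking stands in for the architectural software and an ordinary least-squares polynomial fit stands in for Eureqa’s symbolic regression, so every function name and the fitted form are illustrative assumptions rather than the project’s actual code.

```python
import numpy as np

def reconstruct_volume(slices):
    """Stack registered 2-D section images (each an (h, w) array) into a
    3-D voxel model of the scanned tissue."""
    return np.stack(slices, axis=0)

def fit_pattern(volume, degree=3):
    """Fit a polynomial surface intensity = f(x, y, z) to the voxel data,
    a crude stand-in for deriving a governing equation with symbolic regression."""
    z, y, x = np.indices(volume.shape)
    coords = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    # Simple polynomial features up to the given degree (no cross terms).
    features = np.column_stack([coords ** d for d in range(1, degree + 1)])
    features = np.column_stack([np.ones(len(features)), features])
    coeffs, *_ = np.linalg.lstsq(features, volume.ravel(), rcond=None)
    return coeffs  # the 'equation' that can then generate new forms under new boundaries

# Two such fits, one per specimen, could then be compared term by term.
```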
Benjamin and Federici’s experiments demonstrate both meanings of “biocomputing” and presumably point to a deep process at work in biology and computation that is responsible for the generation of form and structure. Like Jenny Sabin and Peter Lloyd Jones, they dismiss the superficiality of the type of biomimicry that merely represents biological forms in architecture—say, in the structure and patterns of a facade or floor plan. Rather, they aim to discover the presumed algorithms at the root of form generation and integrate their process and principles into resultant designs. Sabin and Jones refer to this process as “biosynthesis,” Federici and Benjamin as “biocomputing.” For both, it is process that matters, encoded as algorithm, rather than the creation of particular organic-looking shapes. In this sense, their approaches have moved beyond the formalist focus of early twenty-first-century generative architecture.
For clarification, it is important to briefly compare Benjamin and Federici’s biocomputing to “biomolecular computing.” While this may serve to allay confusion for those who come across the latter term in other contexts, it may also just add to the interdisciplinary mash-up that most of these terms reflect. It may even spark new approaches for architects and designers as aspects of “biomolecular computing” resemble projects already discussed in the material computation section. Furthermore, biomolecular computing–based design projects have recently been included in highly visible design exhibitions, pointing to the likelihood of further cross-disciplinary developments.
In 2007, Pengcheng Fu of the Department of Molecular Biosciences and Bioengineering at the University of Hawaii, Manoa, published a review of the field he calls “biomolecular computing.” He describes it as an interdisciplinary venture at the intersection of engineering, biological science, and computer science, also known as “biocomputing,” “molecular computation,” and “DNA computation.”[47] As early as 1959, theoretical physicist Richard Feynman proposed the idea that “single molecules or atoms could be used to construct computer components.” This idea has been developed further since the 1990s into techniques using DNA to store information and perform computational tasks and even to solve difficult and classic mathematical problems like a “seven-node instance of Directed Hamiltonian Path (DHP) problem,” a close relative of the better-known “Traveling Salesman problem.”[48] “Many properties that biological organisms often possess are highly desirable for computer systems and computational tasks, such as a high degree of autonomy, parallelism, self-assembly, and even self-repair functions,” Fu writes. His comment indicates the hope of scientists working in this area to improve the performance of and create new systems for computational tasks using biological organisms. As the last section in this chapter shows, the current approach to constructing computers has serious environmental consequences. So, the future possibility of biologically based or biological computers could perhaps remedy the industry’s current damaging environmental effects, depending on the rest of its infrastructure. It may also raise ongoing difficult ethical questions about manipulating living organisms for human tasks, but this of course is not new.
Fu describes some of the different accomplishments using DNA to solve both complicated mathematical search problems and arithmetic problems. DNA here is not performing a genetic role inside a cell, but rather it is simply a string of four molecules that bind to one another selectively. Scientists design these strings of A, C, T, and G molecules so that they function combinatorially to create multifaceted structures with numerous vertices and edges, with embedded path directionality.[49] When Leonard Adleman solved the seven-node problem with DNA in 1994, he discovered significant advantages of using DNA over traditional silicon-based computing.[50] “The advantages of Adleman’s method were that massive parallelism and super information contents were achieved. The reactions contained approximately 3 × 10¹³ copies of each oligo,” referring to the DNA strings, “resulting in about 10¹⁴ strand hybridization encounters in the first step alone. In these terms, the DNA computation was a thousand-fold faster than the instruction cycle of a supercomputer,” Fu writes. Adleman also found that the information storage density of DNA was “billions of times denser than that of the media such as videotapes that require 10¹² nm³ to store one bit. In other terms, one micromole of nucleotides as DNA polymer can store about two gigabytes of information,” leading to DNA’s use as a database. “Lastly, Adleman noted that the energy requirement for enzyme-based DNA computing is low: one ATP pyrophosphate cleavage per ligation that gives an efficiency of approximately 2 × 10¹⁹ operations per joule. By comparison, supercomputers of that time performed roughly 10⁹ operations per joule.”[51]
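Taking the quoted figures at face value, the energy comparison alone is stark:

```python
dna_ops_per_joule = 2e19           # enzyme-driven ligations, as quoted from Adleman
supercomputer_ops_per_joule = 1e9  # mid-1990s supercomputer, as quoted
print(dna_ops_per_joule / supercomputer_ops_per_joule)  # 2e10: a twenty-billion-fold advantage
```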
Describing a design strategy that sounds very similar to that of Tibbits, Fu elaborates on the use of biomolecular computation in self-assembling systems. “Parallel computation can be enhanced by [a] self-assembling process where information is encoded in DNA tiles. Using sticky-end associations, a large number of DNA tiles can be self-assembled,” an arrangement referred to as “Wang tiles” or “Wang dominoes,” after Hao Wang’s work from 1961. “Wang tiles are a set of rectangles with each edge so coded (for example, by color) that they can assemble into a larger unit, but only with homologously coded edges together. It was shown mathematically that by barring rotation and reflection, any set of such tiles could only assemble to cover a plane in a finite pattern that was aperiodic, i.e., the pattern was not repeated,” such as occurs with Penrose tiling. Aperiodic tiling differs from periodically repeating patterns such as those that form crystal structures. “It was further shown mathematically that the assembly of a set of Wang tiles into a unique lattice was analogous to the solving of a particular problem by the archetypal computer, known as a Turing machine,” Fu recounts. “In other words, self assembly of DNA materials with the architecture of Wang tiles may be used for computation, based on the logical equivalence between DNA sticky ends and Wang tile edges.”[52]
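The local matching rule that makes Wang tiles (and, by analogy, sticky-ended DNA tiles) compute can be stated in a few lines; the tile set below is arbitrary, chosen only to show how the edge constraint prunes the space of possible assemblies.

```python
from itertools import product

# Each tile: (north, east, south, west) edge labels; rotation and reflection barred.
TILES = [("a", "b", "a", "b"), ("b", "a", "b", "a"), ("a", "a", "b", "b")]

def legal_rows(width):
    """Enumerate rows of tiles whose adjacent east/west edges match,
    the local rule from which any global pattern must emerge."""
    rows = []
    for row in product(TILES, repeat=width):
        if all(row[i][1] == row[i + 1][3] for i in range(width - 1)):
            rows.append(row)
    return rows

print(len(legal_rows(3)))  # how many locally consistent 3-tile rows this set allows
```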
Fu cites the work of Paul Rothemund, senior research professor in neural systems and computer science at the California Institute of Technology, whose work curator Paola Antonelli included in Design and the Elastic Mind at the Museum of Modern Art in 2008. Rothemund designed DNA sequences to fold into decorative triangular and snowflake patterns and smiley faces, in essence using DNA as a material for artistic representation. The wall text at the exhibition contextualized Rothemund’s work as an example of self-organization that could lead to a new approach for architecture being built from the nanoscale using “bottom-up” techniques.[53] Clearly, DNA can be used to create two-dimensional patterns (drawing smiley faces) as well as three-dimensional structures (Adleman’s work). As the wall text vaguely implied, will architects then want to design self-assembling buildings using DNA as the structural material? This question brings us back to the scaling problem that Benjamin and Federici touch on, but in even murkier terrain since they were working with plant xylem structures that actually do hold up plants owing to combinations of cellulose and lignin rather than just the nanoscale molecular bonds of DNA. Furthermore, the amount of time it would take to assemble a DNA- or molecule-based building would be enormous if biomolecular computing experiments stand as a relevant example. Such a ridiculous proposition would surely stem from the ongoing deep and widespread fanaticism about DNA as a semimystical “code of life” rather than from any known structural properties of this molecule for architectural purposes.[54]
Fu describes the drawbacks of biomolecular computation—namely, that its calculations are one-time runs that require lengthy setup, are very slow to process, and are prone to error. He writes, “Typically, implementation of an algorithm to solve a computational problem itself may take several days or even weeks. When a new initial condition needs to be tested, the same period of time is required for another run of calculation. Therefore, it is inconvenient and expensive to implement the biocomputing experiments which require repeated improvement processes.”[55] Fu hopes that research in synthetic biology can remedy some of these problems, although an article from 2015, “Computing with Synthetic Protocells,” states that “protoputing” (add that term to the growing list at the start of this chapter) can produce only one machine and solve one problem at a time.[56] Yet, architects’ and designers’ interests may still be piqued; some certainly are already fascinated with “protocell” architecture. After all, one of Benjamin’s graduate students in his Architecture Bio-Synthesis studio at Columbia University’s Graduate School of Architecture, Planning, and Preservation, Mike Robitz, proposed “a future where microorganisms take over the role of data storage in place of computers.” Robitz’s project, called Googol Puddles, was featured by curator William Myers in the recent exhibition and catalog BioDesign: Nature, Science, Creativity (2012). Myers also included the Bioencryption project, which likewise uses DNA as a data storage and encryption device; it was designed by the student team from the School of Life Sciences, Chinese University of Hong Kong, which won the gold medal in 2010 at the International Genetically Engineered Machine competition.[57]
Natural Computation and Computational Mechanics
The foregoing examples of material computation, programming matter, biocomputing, biomolecular computing, and protoputing demonstrate different interpretations and techniques of computing at play in the broader arena of generative architecture and the scientific disciplines on which it draws. We began with how materials compute their own structures and forms at both the micro- and macroscales, for Menges is interested in fully integrating digital information about material structure and behavior into parametric design. This means of course that he is in turn using computers to materialize architectural structures. He and Dierichs move that process up one notch, so to speak, exploring not how materials like wood or metal compute at the cellular or molecular level, but rather how components and aggregate architectures compute in the face of changing environmental dynamics in relation to points of self-organized criticality. Oxman shifts the discourse to the computationally designed and manufactured production of new, functionally graded composite materials, whose anisotropic composition and layering should permit new structural performances that are designed into them at the outset. Like Menges and Dierichs, then, Tibbits moves Oxman’s process up a step to the design of components (and not the materials making up the components) whose “self-assembly” method and resultant 3-D structure are incorporated into their morphology at the very start. In turn, Benjamin and Federici focus more on methods to discern biological structure and form generation in order to make this process useful to architects. By digitally comparing data taken from two living samples—xylem formation in artichokes and induced xylem formation in Arabidopsis—they derived an equation that they believe mathematically expresses the common biological growth pattern or structure. Biomolecular computing, on the other hand, is not biological at all in terms of involving living cells. Rather, it uses DNA, a common biological molecule with particular binding properties, arranged into 2-D or 3-D forms, to solve mathematical problems. Molecules in pre-protocells, which are also not living, function similarly in “protoputing.”
Of the above approaches, the closest to what physicists and complexity scientists call “natural computation” is the growth of the plants in Benjamin and Federici’s study and the first approach of Menges, who focuses on the material functioning of cut wood.[58] Wood after all is a biologically produced material and carries in its structure the expression of the nonlinear dynamics through which it was formed. This can be seen as a form of memory, which is a characteristic of nonlinear dynamic systems to which information theory can be applied, as argued in 2001 by physicists and mathematicians James Crutchfield and David Feldman.[59] Crutchfield defines natural computation as “how nature stores and processes information”; the memory that he and Feldman refer to is the storage of information. To this Crutchfield adds, “How nature is structured is how nature computes.” He then links these two theorems by defining “Structure = Information + Computation.”[60] The difficult process of “detecting randomness and pattern” or structure that “many domains face” translates into a need to measure “intrinsic computation in processes” and ascertain new “insights into how nature computes,” he writes.[61]
What follows, therefore, is a brief overview of how Crutchfield explains the core concepts and history of theories of complex dynamical systems, both through accessible publications and in his graduate course on Natural Computation and Self-Organization (NCASO) taught at UC Davis. These stand in contrast to the general terminological critique offered in the introductory chapter to this book with reference to ideological complexism, although those general comments still pertain. From this overview, it becomes readily apparent that tools used by a UC Davis and Santa Fe Institute physicist and mathematician to characterize complexity, structure, and natural computation are not the same ones as those referenced by generative architects. The contrast illuminates the differences in approach taken by those in different disciplines and adds layers of depth to even the superficial differences suggested by terms like “natural computation” and “material computation.” These differences matter because they point out the mostly rhetorical role that complexity theory currently plays in generative architecture. If architects are truly serious about understanding the dynamics of complex systems in order to make use of the ways in which order is produced in natural systems for design purposes, this approach offers intriguing possibilities despite its difficult and time-consuming nature for those unfamiliar with it. While the benefits of having architectural structures that move or grow—or in other words, actually behave as complex biological systems—are debatable, processes that occur within and around the contexts of buildings certainly do exhibit complex behaviors. A short summary of Crutchfield’s course then suggests novel means for understanding complex dynamical systems for those working in generative architecture and design. On the other hand, if architects primarily want to mimic natural geometries, this level of understanding of a system’s dynamics is likely extraneous.
Crutchfield relies on dynamical systems theory, information theory, and a technique he has developed known as computational mechanics. Together, these three offer means to ascertain and measure both dynamical structure and chaos. In his 2012 article in Nature Physics, “Between Order and Chaos,” Crutchfield connects the dots between these approaches while summarizing some of the key tenets of complexity theory viewed through the lens of their historical development. “We know that complexity arises in a middle ground—often at the order–disorder border,” he states, which is the systemic zone in which pattern or structure often becomes most interesting. “Natural systems that evolve with and learn from interaction with their immediate environment exhibit both structural order and dynamical chaos.” Crutchfield posits that “order is the foundation of communication between elements at any level of organization, whether that refers to a population of neurons, bees, or humans. For an organism order is the distillation of regularities abstracted from observations.” But, a “completely ordered universe . . . would be dead. Chaos is necessary for life.” Natural systems, therefore, “balance order and chaos” and “move to the interface between predictability and uncertainty. The result is increased structural complexity” that “often appears as a change in a system’s intrinsic computational capability.” “How can lifeless and disorganized matter exhibit such a drive [toward increased structural and computational capacity]? . . . The dynamics of chaos, the appearance of pattern and organization, and the complexity quantified by computation will be inseparable components in [this question’s] resolution,” he writes.[62]
Crutchfield's NCASO course, in which I participated during the winter and spring quarters of 2012, offers tools for understanding the structure of complex systems with which, to my knowledge, architects are not familiar. Whereas Benjamin and Federici attempted to ascertain a correlational mathematical expression of the xylem formation process by comparing data from two slightly different systems, Crutchfield's approach depends on careful observation of a single system's process over a period of time. Depending on the care taken in deciding how to extract data from the system appropriately (which takes learning and experience), the data can reveal the system's memory (stored information), its pattern and structure (statistical complexity), and its amount of randomness (entropy). Using the tools of computational mechanics, this information can then be modeled into what Crutchfield calls an epsilon-machine, which shows the state space of the dynamical system and the probabilities of transitions between states.
The course begins with dynamical systems theory. “A dynamical system consists of two parts: the notions of a state (the essential information about a system) and a dynamic (a rule that describes how the state evolves with time). The evolution can be visualized in a state space, an abstract construct whose coordinates are the components of the state.”[63] Using mathematician Steven Strogatz’s book Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (2001), the course covers mathematical models and maps of different dimensional types of nonlinear systems—their attractors, basins of attraction, and bifurcation sequences.[64] Some one-dimensional systems like radioactive decay are attracted to a fixed point. Two-dimensional systems, like a pendulum or a heartbeat, exhibit periodicity and move around a limit cycle—a two-dimensional loop on a graph—or a fixed point. Some three-dimensional systems are drawn to the shape of tori or limit cycles or fixed points, while other 3-D systems like weather exhibit very complex behaviors that graph to what is called a chaotic or strange attractor, the Lorenz attractor being the first one discovered. In these latter kinds of systems, “microscopic perturbations are amplified to affect macroscopic behavior.”[65] Chaotic systems’ graphs and maps reveal a folding and bending within the system’s state space. “The process of stretching and folding happens repeatedly, creating folds within folds ad infinitum. A chaotic attractor is, in other words, a fractal: an object that reveals more detail as it is increasingly magnified,” Crutchfield describes. He compares it to placing a drop of food color onto a mound of bread dough, and kneading it twenty times. The dough visualizes what happens to trajectories within the state space as the food color is “stretched to more than a million times its original length.” How does one tell just how chaotic a system is? “A measure of chaos is the ‘entropy’ of the motion, which roughly speaking is the average rate of stretching and folding, or the average rate at which information is produced.”[66]
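A minimal sketch in Python, assuming the standard Lorenz parameter values (sigma = 10, rho = 28, beta = 8/3) and not drawn from the course materials, illustrates this sensitivity: two trajectories that begin almost identically separate at a roughly exponential rate until the difference saturates at the size of the attractor.

```python
# A minimal sketch (illustrative, not from the book or course): integrating
# the Lorenz system to show how nearby trajectories on a strange attractor
# diverge -- the numerical face of "stretching and folding."
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with a simple 4th-order Runge-Kutta."""
    def deriv(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 3000
a = np.array([1.0, 1.0, 1.0])          # one trajectory
b = a + np.array([1e-8, 0.0, 0.0])     # a second, perturbed by one part in 10^8

for i in range(steps):
    a, b = lorenz_step(a, dt), lorenz_step(b, dt)
    if i % 500 == 0:
        # The separation grows roughly exponentially until it saturates
        # at the overall size of the attractor.
        print(f"t = {i * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```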
As these references to entropy and information suggest, Claude Shannon's information and communication theory has proved integral to how scientists measure complex systems. Complex systems exhibit both ordered and disordered behavior; ordered behavior produces pattern and structure, whereas disordered behavior is random. Pattern and structure carry a certain amount of predictability that can be measured with probability (how likely is something to happen?); randomness and disorder do not. "The outcome of an observation of a random system is unexpected," Crutchfield writes. "We are surprised at the next measurement. That surprise gives us information about the system. We must keep observing the system to see how it is evolving. This insight about the connection between randomness and surprise was made operational, and formed the basis of the modern theory of communication, by Shannon in the 1940s."[67]
Shannon quantifies information through its amount of surprise. "Given a source of random events and their probabilities," writes Crutchfield, "Shannon defined a particular event's degree of surprise as the negative logarithm of its probability." An event that is certain to happen carries no surprise and therefore provides no information (zero bits). An event that may or may not happen, or that happens with a particular probability or frequency, does carry information, and using Shannon's definition one can quantify just how much; for a binary event, the average information peaks at one bit when the two outcomes are equally likely.[68] Shannon also demonstrated that "the averaged uncertainty," what he referred to as the "source entropy rate," "is a fundamental property . . . that determines how compressible an information source's outcomes are."[69] Shannon then extended this to define communication, the transmission of information from one source to another, which often entails noise: a transmission of information may or may not become corrupted. After developing the concept of mutual information, he observed that if the mutual information is zero, then the channel has completely failed to communicate the information from its source to its destination. But if "what goes in, comes out," then "the mutual information is the largest possible." Furthermore, "The maximum input–output mutual information, over all possible input sources, characterizes the channel itself and is called the channel capacity." The most important takeaway Crutchfield identifies, however, is Shannon's realization that "as long as a (potentially noisy) channel's capacity . . . is larger than the information's source entropy rate . . . there is a way to encode the incoming messages such that they can be transmitted error free. Thus, information and how it is communicated were given firm foundation," Crutchfield explains.[70]
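A short Python sketch (illustrative only, not Shannon's or Crutchfield's own notation) computes surprise as the negative base-2 logarithm of probability, estimates the entropy of a symbol sequence, and estimates the mutual information between a channel's input and output, showing the two limiting cases described above.

```python
# A minimal sketch: Shannon's "surprise" of an event is -log2 of its
# probability; source entropy is the average surprise; mutual information
# measures how much an output symbol tells us about the input symbol.
from collections import Counter
from math import log2
import random

def entropy(symbols):
    """Shannon entropy in bits per symbol, estimated from a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    joint = list(zip(xs, ys))
    return entropy(xs) + entropy(ys) - entropy(joint)

# A fair coin flip has probability 1/2, so its surprise is -log2(1/2) = 1 bit;
# a certain event would carry -log2(1) = 0 bits.
print(-log2(0.5))   # 1.0

# A noiseless channel that copies its input has maximal mutual information;
# a channel whose output ignores its input has (close to) zero.
inputs = [random.choice("01") for _ in range(10000)]
print(mutual_information(inputs, inputs))                                  # ~1 bit
print(mutual_information(inputs, [random.choice("01") for _ in inputs]))   # ~0 bits
```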
Two processes of communication exist in complex dynamical systems and their study. The first is the system's own process of communication: it uses information stored from its past to compute its future, moving through its state space with certain amounts of randomness and predictability. The second is the process of the observer of the system, who uses instruments to extract data, ascertains the amount of randomness and structure in the system's history, and communicates a model of the system to someone else (Figure 2.2).[71] Information and communication theories thus offer means by which to measure a system's communication. "Shannon entropy . . . gives the source's intrinsic randomness" in bits per symbol extracted from the system. A measure known as "statistical complexity," on the other hand, represented as Cμ, "measures degrees of structural organization."[72] Yet it is the technique of computational mechanics, which adds to the tools and concepts of statistical mechanics, that "lets us directly address the issues of pattern, structure, and organization. . . . In essence, from either empirical data or from a probabilistic description of behavior, it shows how to infer a model of the hidden process that generated the observed behavior. This representation—the ε-machine," Cosma Shalizi and Crutchfield write, "captures the patterns and regularities in the observations in a way that reflects the causal structure of the process. With this model in hand, one can extrapolate beyond the original observations to predict future behavior," provided one can synchronize oneself to the system. They summarize, "ε-machines themselves reveal, in a very direct way, how information is stored in the process, and how that stored information is transformed by new inputs and by the passage of time. This, and not using computers for simulations and numerical calculations, is what makes computational mechanics 'computational,' in the sense of 'computation theoretic.'"[73] It is also why computational mechanics becomes a primary tool for elucidating "natural computation."
Figure 2.2. The Learning Channel, by James Crutchfield. A view of the scientific approach to modeling complex, dynamic phenomena that are observed only indirectly through inaccurate instruments. Inspired by Claude Shannon’s communication channel: (Left) The states and dynamic of the system of interest are accessed indirectly through an instrument that translates measurements of the system state to a time series of discrete symbols (process). (Right) From the process, the modeler builds a representation of the system’s hidden states and dynamics. © 1994 James P. Crutchfield.
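The following rough Python sketch is not Shalizi and Crutchfield's reconstruction algorithm; it only illustrates the idea behind causal states: histories that predict the same distribution over the next symbol belong to the same state. Applied to data generated from the simple "golden mean" process (a 1 is always followed by a 0), it recovers that process's two predictive states.

```python
# A rough, illustrative sketch of the idea behind an epsilon-machine (NOT the
# actual reconstruction algorithm): group length-L histories whose estimated
# next-symbol distributions approximately agree.
from collections import defaultdict
import random

def golden_mean(n, seed=0):
    """Generate the 'golden mean' process: a 1 is always followed by a 0;
    after a 0, the next symbol is 0 or 1 with equal probability."""
    random.seed(seed)
    s, out = "0", []
    for _ in range(n):
        s = "0" if s == "1" else random.choice("01")
        out.append(s)
    return "".join(out)

def causal_state_sketch(sequence, history_len=2, tol=0.05):
    # Count next-symbol occurrences following each length-L history.
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(history_len, len(sequence)):
        history = sequence[i - history_len:i]
        counts[history][sequence[i]] += 1
    # Convert counts to conditional next-symbol distributions.
    dists = {}
    for history, nxt in counts.items():
        total = sum(nxt.values())
        dists[history] = {s: c / total for s, c in nxt.items()}
    # Greedily merge histories whose predictive distributions agree within tol.
    states = []  # each entry: (representative distribution, member histories)
    for history, dist in dists.items():
        for rep, members in states:
            keys = set(dist) | set(rep)
            if all(abs(dist.get(k, 0.0) - rep.get(k, 0.0)) < tol for k in keys):
                members.append(history)
                break
        else:
            states.append((dist, [history]))
    return states

data = golden_mean(20000)
for dist, members in causal_state_sketch(data, history_len=2):
    print(sorted(members), {k: round(v, 2) for k, v in dist.items()})
# Expected: one group of histories ending in 0 (next symbol 0 or 1 with
# probability ~0.5 each) and one group ending in 1 (next symbol always 0).
```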
Crutchfield requires students to conduct an original research project for his course. This entails picking a system (either temporal, spatiotemporal, network dynamical, or statistical mechanical), analyzing its informational and computational properties, and relating these to the system’s organization and behavior. In other words, students are asked to use the tools of the two-quarter course to attempt to measure and model a chosen system’s dynamic behavior as it moves probabilistically between states. I chose to study a biological system to which I was introduced just months before, in the Fall 2011 semester, when I was studying emergent architecture at EmTech. I did so to see what could be gained from studying a single system from different disciplinary approaches: those of architecture, physics, plant biology, and complexity using dynamical systems theory, information theory, and computational mechanics.
In the Biomimicry Studio at EmTech, my team had been assigned the topic of "tendrils" to research as inspiration for architectural design. The studio work was rapid-fire, with the expectation that physical and digital modeling would begin by the third day; the whole studio lasted only a couple of weeks. Because the AA's library did not have access to scientific journal databases, I drew on my UC Davis library access from London and found and printed about thirty scientific articles that explained the current state of knowledge about tendril coiling. My group members—Marina Konstantatou of Greece, Giancarlo Torpiano of Malta, and Chun-Feng Liu of China—and I divided them up to read, since we presumed we actually needed to know scientifically how and why tendrils coil. Very quickly, however, we learned that the tutors did not expect this. In fact, they discouraged it, for there was no time to truly understand the biological system. Rather, we were supposed to extract geometric, formal knowledge about how tendrils likely coiled, model it, and use it to generate a new architectural outcome. Although EmTech's Weinstock professes deep interest in modeling architecture on the mathematical processes of emergence, in this case we were pushed to reduce a very complex scientific process to a very simple formula. This reductionism is symptomatic across much of the practice of generative architecture and stems from its historical development out of the formalism of postmodernism, beginning with Greg Lynn's emphasis on developing new means to overwhelmingly formalist ends. Those who truly engage with biological scientific processes in generative architecture—such as Jenny Sabin, Peter Lloyd Jones, Benjamin, and Federici—are rare. Yet had I not been present at EmTech as a student–observer–critic, and had I relied solely on Weinstock's writings, I would have missed the extent of EmTech's biological reductionism.
We therefore put the scientific articles aside and came up with a principle that described tendril coiling. Because tendrils grow and coil along the length of their growth, our principle was Extend + Twist + Bend. Together, these three actions produce a coil. We modeled this physically and digitally in different ways, using Python scripting in Grasshopper and Kangaroo with Rhino—notice the menagerie of "evolutionary" architectural software. Although our group did not fully succeed in developing a solid and innovative application of this process for architecture, in hindsight we joked among ourselves that we had designed the tendril structure that sculptor Anish Kapoor designed and built with engineer Cecil Balmond of Arup just a few months later for the London Olympic Observation Tower, known as Orbit (Figure 2.3). We actually had considered creating a large structure in the shape of a tendril but had dismissed it as the shallowest sort of aesthetic biomimicry.
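A minimal Python sketch, not our actual Grasshopper/Kangaroo definition, shows how Extend + Twist + Bend can be treated as a parametric rule: each growth step extends along an axis, twists around it, and bends the growing tip off-axis, and the accumulated points trace a coil.

```python
# A minimal sketch (illustrative parameters only): the Extend + Twist + Bend
# principle as a parametric curve. "Extend" advances along an axis, "twist"
# rotates around it, and "bend" pulls the tip off-axis; together they trace a coil.
import numpy as np

def coil_points(n_steps=400, extend=0.5, twist=0.35, bend=2.0, handedness=1):
    """Return an (n_steps, 3) array of points along a simple coil.

    extend     -- growth increment along the coil axis per step
    twist      -- rotation (radians) around the axis per step
    bend       -- radial offset of the growing tip from the axis
    handedness -- +1 for right-handed coiling, -1 for left-handed
    """
    steps = np.arange(n_steps)
    angle = handedness * twist * steps
    x = bend * np.cos(angle)
    y = bend * np.sin(angle)
    z = extend * steps
    return np.column_stack([x, y, z])

pts = coil_points()
print(pts[:3])  # first few points of the generated coil

# A "perversion" (the shift in handedness discussed later in this chapter)
# corresponds to the sign of `handedness` flipping partway along the coil.
```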
Selecting tendril coiling as the research project for Crutchfield's course in April 2012 was fortuitous, since the passionflower vine in my garden was growing rapidly. In looking at the vine, I realized that many tendrils coil without wrapping around anything at all. Coiling evolved to allow vines to "parasitically" climb on other plants and structures, using the foreign object as the support for the vine's own growth. The patterns of the "free coils"—those unattached to anything—demonstrated both order/structure and randomness, showing common traits but also differences, with almost every one being unique (Figures 2.4 and 2.5). This suggested that tools for measuring randomness and structure could offer insight into the dynamics of coiling. My tutor and project partner Paul Riechers and I chose which instruments to use and how to extract the tendrils' data and history in order to decode and model their dynamics. This time I delved thoroughly into the literature on tendril coiling, because we were determined to integrate biological knowledge into our interdisciplinary model.
Figure 2.3. Orbit, by Anish Kapoor, 2012. Steel, height 115 meters, Queen Elizabeth Park, London. Photograph Dave Morgan; copyright Anish Kapoor. All rights reserved, DACS, London/ARS N.Y. 2017.
Figure 2.4. Nonlinear dynamics of passionflower tendril free coiling, by Christina Cogdell with Paul Riechers, 2012. Photograph by the author.
To Riechers's and my great surprise, not a single article that we could find addressed the topic of free-coiling tendrils. Additionally, the biochemical and biophysical explanations for how tendrils coil seem very incomplete, leaving many questions unanswered. Finally, of the articles that do exist about contact coiling, in which both ends of the tendril are affixed, the few that explore nonlinear dynamics all rely on the same mechanical model: Kirchhoff's equations for finite rods with intrinsic curvature at equilibrium in a minimal energy state. These use Kirchhoff's equations to explain "perversions"—places where the coil shifts its handedness—in uniform helical coils. This model bears no obvious relevance to the process of untethered coiling—coiling without contact, in open air—which produces an astonishing variety of morphologies, contrary to what one would expect if there were a single minimal energy state toward which all coiling tends. We posited that coiling likely occurs by the same process in both free and fixed coils. Better understanding free coiling could therefore also transform knowledge of the contact coiling process.
Our study therefore attempted to integrate biochemical, biophysical, mathematical, and computational models in order to elucidate the complex stochastic dynamics of tendril free coiling. First, from reading the scientific literature but without doing any experimentation, I derived a hypothetical biochemical and biophysical model for how tendrils coil. Riechers and I wanted to express this biological model in mathematical equations so that we could simulate coiling in silico and see whether it seemed plausible. We chose to apply Turing's reaction-diffusion model, which has been used to study morphogenetic patterning in plants, particularly to model the plant hormone auxin's role as a regulatory gradient, although in root hairs rather than tendrils.[74] Auxin is often characterized as a self-regulating, self-organizing morphogen, one whose differential gradients across tissues trigger different gene responses, including those whose products can in turn inhibit auxin: hence the term often used in tandem with "reaction-diffusion," namely "activation-inhibition." We posited auxin-gradient-induced cell elongation on the convex side of the tendril, combined with lignin-gradient-inhibited gelatinous-fiber (g-fiber) cell contraction on the concave side. We also assumed multidirectional coiling, since passionflower tendrils can coil both to the left and to the right. Multidirectional coiling tendrils are contact-sensitive on all sides and can reverse the handedness (left or right, counterclockwise or clockwise) of the coiling at any time. We hypothesized that this reversal depends on which cells are active at any given moment relative to those that were active just prior. (G-fiber cells run in roughly cylindrical form up and down the length of the tendril.) Our model presumed an active g-fiber contact zone of roughly one-third of the circumference, opposite which active cell elongation occurs owing to a high auxin gradient. Riechers then worked on expressing this model mathematically for computational simulation.
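To give a sense of what a reaction-diffusion simulation involves, the following Python sketch integrates the classic Gray-Scott substrate/activator system in one dimension with commonly used demonstration parameter values. It is emphatically not the auxin/g-fiber model described above; it only illustrates how Turing-style activation-inhibition can turn a near-uniform field with a small local disturbance into spatial pattern.

```python
# A generic one-dimensional reaction-diffusion sketch: the Gray-Scott
# substrate (u) / activator (v) system. Parameters are commonly used
# demonstration values, not values fitted to any biological system.
import numpy as np

def gray_scott_1d(n=256, steps=10000, Du=0.16, Dv=0.08, F=0.04, k=0.06):
    u = np.ones(n)                     # substrate concentration
    v = np.zeros(n)                    # activator concentration
    mid = slice(n // 2 - 5, n // 2 + 5)
    u[mid], v[mid] = 0.5, 0.25         # a small localized disturbance

    for _ in range(steps):
        # discrete Laplacians with periodic boundaries
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v
        u += Du * lap_u - uvv + F * (1 - u)
        v += Dv * lap_v + uvv - (F + k) * v
    return u, v

u, v = gray_scott_1d()
# Count local maxima of the activator field. In the pattern-forming regime
# the single initial bump typically ends up as several separate peaks; the
# exact number depends on the parameters and run length.
peaks = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]) & (v[1:-1] > 0.1)
print("activator peaks:", int(np.sum(peaks)))
```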
Figure 2.5. Nonlinear dynamics of passionflower tendril free coiling, typical free-coiling patterns showing both structure and randomness, by Christina Cogdell with Paul Riechers, 2012. Photographs by the author.
Next, using the statistical methods of information theory and computational mechanics, we analyzed the morphologies of over five hundred free coils in order to measure their randomness (Shannon entropy) and structure (complexity) and to determine correlated traits through mutual information analysis, with the hope of achieving an intelligible epsilon-machine minimal model of coiling dynamics.[75] Specifically, in discrete five-millimeter increments along the length of each tendril, we measured: diameter (of the loops in the increment); periodicity (number of loops per increment); handedness (whether loops turned clockwise or counterclockwise, viewed from the cut end); "pervertedness" (whether the coil shifts handedness within the increment); angular axis rotation (how far the line through the center points of the coils turns away from an imagined center-line axis extending from its linear start at the tip); and self-contact status (whether the coil touches itself). These six characteristics are sufficient to re-create the basic structure of any coil from our measurements. Then, from our 3,389 measurements—representing over seventeen meters of total coil length—we compiled a data string addressing all six of these dimensions. Finally, we analyzed this string using the tools of computational mechanics, generating and interpreting Markov chains and Shannon entropies that modeled states and the probabilities of transition between them for each variable, as well as the mutual information of the variables in combination with one another.
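A short Python sketch, with invented values rather than our actual measurements, shows the kind of bookkeeping this entails for a single variable: discretized handedness readings become a symbol string, from which a first-order Markov transition matrix and the Shannon entropy of each state's outgoing transitions can be estimated.

```python
# A sketch of the bookkeeping involved (illustrative values, not our tendril
# data): estimate a first-order Markov transition matrix from a symbol
# sequence and the entropy of each state's outgoing transitions.
from collections import Counter, defaultdict
from math import log2

def transition_matrix(symbols):
    """Estimate P(next | current) from a symbol sequence."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(symbols, symbols[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

def row_entropy(row):
    """Shannon entropy (bits) of one state's outgoing transition distribution."""
    return -sum(p * log2(p) for p in row.values() if p > 0)

# Hypothetical handedness readings along one tendril, one per 5 mm increment:
# L = left-handed, R = right-handed. A change of symbol marks a perversion.
handedness = list("LLLLLRRRRRRLLLLRRRRRRRRLLLLL")
T = transition_matrix(handedness)
for state, row in T.items():
    print(state, {k: round(p, 2) for k, p in row.items()},
          "entropy:", round(row_entropy(row), 2), "bits")
```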
While we strove to achieve enough clarity from our data to create an epsilon-machine modeling the dynamics of tendril free coiling, because our analysis involved six variables a successful epsilon-machine was out of our reach. To get closer to this goal, we would need to use what is known as "optimal causal inference" to minimize the noise in our data and see the structure beneath it.[76] We did, however, learn how difficult multidimensional analysis can be. Usually, computational dynamics and chaos modeling work with three dimensions, not six; each additional dimension adds complicating factors.[77] Despite not arriving at a successful epsilon-machine, we did discover interesting facts about patterns of tendril free coiling, particularly at the tips and bases of the tendrils. For the tips, we found high degrees of correlation between diameter and periodicity; we also found that 28 percent of tendril tips have a perversion. This contradicts what many tendril experts believe: that free coils never have perversions unless they were once in contact with something, acquired a perversion there, and then somehow broke free again to become a "free coil." In fact, a majority of the free coils in our sample—57.4 percent—have a perversion somewhere along their length. The pattern of how a coil ends, where it opens up toward the base of the tendril that attaches to the vine, is much more predictable than the pattern at the tip: coils end with large diameters and small periodicities and more often than not turn in a left-handed direction.
Riechers's and my interdisciplinary tendril study remains unfinished and unpublished, owing not to infeasibility or a lack of rigor or contribution but simply to time and other obligations. Those obligations interrupted the computational modeling process, both in moving toward an epsilon-machine using computational mechanics and in simulating tendril coiling in silico from the hypothetical biological model and its corresponding mathematical equations. Our ultimate goal was to compare the data from our computational model's virtual tendrils—analyzed for the same six variables—with real tendril dynamics. It is worthwhile to summarize our approach here in order to communicate some of the methodologies complexity scientists use to model nonlinear system dynamics. The amount of time and specialization demanded by truly interdisciplinary research often causes practitioners in one discipline to rely simply on the tools and conceptual apparatus with which they are already familiar. Architects who want to mimic only the mechanical process of tendril coiling can therefore resort to Extend + Twist + Bend if they so choose. But if architects truly want to understand and integrate the dynamics of biological systems into architecture—especially if they imagine buildings will be living organisms in the future—then they need to team up with scientists to garner the depth and breadth of knowledge available for the task.
The Materiality of Silicon-Based Digital Computation
This chapter on material and natural computation ends with a short summary of the materiality of computers: the diminishing reserves of the material substances from which they are made, the high embodied energy in the transistors and integrated circuit chips integral to digital technologies today, and the loss of these materials and the pollution produced by their end-of-life disposal. Conducting a full life cycle assessment (LCA) with numerical analysis is a difficult and unwieldy process, one that involves many subjective decisions about where to draw the boundary around what one considers to be the system associated with producing the product. Furthermore, in a product as complicated as a computer, with so many different parts, each part must be analyzed and included in the overall assessment. For this reason, the most recent LCA of a personal computer dates to 2004, indicating a need for an update, since many facets of the process have changed since then.[78] More recent general LCAs have certainly focused on computer parts, for example display screens or the transistors that go into integrated circuit chips.[79] This summary therefore combines this information into a short overview. Because so much of generative architecture and, increasingly, nearly every major facet of our global economy relies on these technologies, it is crucial to understand their material and energetic sources and environmental impacts.
LCAs consider three related types of inputs and outputs involved in the life cycle of any product. The first is raw materials at every step of the way, beginning with where they are taken from the earth and how they are processed. The second is cumulative embodied energy, which includes the mass and type of materials used to provide the power; note that the energy consumed in operational use of a product is often a very small portion of its overall embodied energy. Finally, LCAs examine all the outputs—not just the useful products but also all the wastes released and the environmental pollution associated with the full life cycle. This last category often includes the health risks facing the workers who produce the product. For each of the three main categories—materials, energy, and waste and pollution—every one of the six major facets of the life cycle is considered; generally, these include the acquisition of raw materials, manufacturing and production, transportation and distribution to stores and users, operational use and maintenance, recycling where possible, and management as waste.
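A toy Python sketch, with placeholder figures rather than values from any study cited here, shows the basic bookkeeping structure of an LCA: each part of a product carries energy and waste flows for each life-cycle stage, and the assessment totals them.

```python
# A toy sketch of LCA bookkeeping (all figures are placeholders, not values
# from any cited study): per-part, per-stage inputs and outputs are summed.
STAGES = ["raw materials", "manufacturing", "distribution",
          "use", "recycling", "end-of-life"]

# energy in MJ, waste in kg, per part per stage (illustrative numbers only)
inventory = {
    "integrated circuits": {"manufacturing": {"energy": 1500, "waste": 30}},
    "display":             {"manufacturing": {"energy": 800,  "waste": 15},
                            "end-of-life":   {"energy": 5,    "waste": 10}},
    "housing and board":   {"raw materials": {"energy": 300,  "waste": 50},
                            "distribution":  {"energy": 120,  "waste": 2}},
}

totals = {stage: {"energy": 0, "waste": 0} for stage in STAGES}
for part, stages in inventory.items():
    for stage, flows in stages.items():
        for flow, amount in flows.items():
            totals[stage][flow] += amount

for stage in STAGES:
    print(f"{stage:15s} energy {totals[stage]['energy']:6.0f} MJ   "
          f"waste {totals[stage]['waste']:5.0f} kg")
```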
The most basic building block of any digital technology is the silicon wafer transistor. Although the marketing text and imagery of companies like Intel imply that common beach sand is the source of silicon for transistor and integrated circuit manufacture, it is not.[80] Only very pure quartzite can serve as the starting point for polysilicon. Also referred to simply as "poly," polysilicon is the material produced by purifying small particles of quartzite, which is composed primarily of silicon dioxide. Poly is then processed into silicon wafers. Creating poly involves a multistep process. First, add twice as much coal as quartzite—the carbon combines with the oxygen and is released as carbon dioxide, leaving pure silicon.[81] Second, add large amounts of hydrochloric acid to produce the gas trichlorosilane, which effectively removes the "impurities of iron, aluminum, and boron." Finally, add hydrogen gas to convert the silicon into poly, which is then melted at high temperatures, doped with either boron or phosphorus, and formed into a crystal ingot that is cut into wafers.[82]
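The chemistry behind the first step can be summarized, in idealized textbook form rather than as a description of any particular plant's process, by the carbothermic reduction of silica; the carbon monoxide formed in the furnace is subsequently oxidized, so the net atmospheric release is carbon dioxide:

$$\mathrm{SiO_2} + 2\,\mathrm{C} \;\longrightarrow\; \mathrm{Si} + 2\,\mathrm{CO}, \qquad 2\,\mathrm{CO} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CO_2}.$$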
Quartzite is a metamorphic rock formed when sandstone is subjected to intense heat deep in the earth's crust. While it is mined around the world—Africa's Great Rift Valley, Australia, Wisconsin, and the Appalachian region of North Carolina, as well as various locations in Europe—it is a much more limited natural resource than beach sand; in 2009, for example, one ton of quartzite from Spruce Pine, North Carolina, was selling for $50,000.[83] Its mining also devastates local landscapes, removing soil and plants and leaving behind large piles of rock debris and dust. Although quartzite can sometimes begin as nearly 99 percent pure silica, a "nine-nines" (99.9999999 percent) level of purity is necessary for transistors today, and the number of nines has been steadily increasing. Geologist Michael Welland, citing a Dow Corning scientist, describes this incredible level of purity like this: "Imagine stringing tennis balls from the Earth to the moon, and wanting them all to be yellow. . . . This would take about 5.91 billion tennis balls. For the color coding to be of semiconductor quality, 'you could only tolerate two, maybe three that were orange.' . . . For solar cells, which are slightly less demanding . . . 'you could tolerate five or six orange balls.'"[84]
This level of purity is also the primary reason for the high amount of energy that goes into creating transistors, for it is not only the poly that must be extremely pure but also all the other chemicals used in the process.[85] According to the research of UC Davis engineering students Riyaz Merchant, Madison Crain, and Felix Le, the energy expended on manufacturing and production accounts for 92 percent of the total embodied energy of a transistor's life cycle.[86] This high level of cleanliness extends to the facility in which transistors and circuits are produced, which uses special ventilation systems to create as close to a particulate-free space as possible. This is made difficult by the presence of workers and by the fact that as much as 50 percent of the poly ingot is lost as dust when it is cut.[87] The wafers themselves are therefore further protected, moving through the facility in "front-opening unified pods" as they undergo the rest of the production process. This pod-enclosure system also protects workers from the deadly chemicals used throughout the process, including nitric and hydrofluoric acid.[88] Once the poly ingot is sliced, the wafers are physically and chemically buffed and then etched using photolithography. This involves masking parts of the wafer while doping it with chemicals—bases and acids that remove or build up layers—and repeating the process many times, which creates the channels that give each component its function. A transistor is finished with aluminum and gold wiring at the terminals. Yet as many as 40 percent of all transistors made are found to be defective before they leave the factory.[89]
Incredibly, it takes over a billion transistors to create just one of today's microprocessors or integrated circuit chips, which is approximately the size of a fingernail.[90] That is not a typo. According to Moore's law, the number of transistors on a chip doubles roughly every two years, although owing to the physical properties of materials and the laws of physics this cannot continue indefinitely. This shrinking process, which is accompanied by ever greater purity requirements, exponentially increases the amount of energy embedded in this most basic part of today's digital technologies. As a general rule, extremely low-entropy, highly organized forms of matter require very large amounts of energy to produce because they are "fabricated using relatively high entropy starting materials."[91] Physicist Eric Williams demonstrates this by stating, "Secondary inputs of fossil fuels to manufacture a chip total 600 times its weight, high compared to a factor of 1–2 for an automobile or refrigerator."[92] Together with Robert Ayres and Miriam Heller, Williams surveyed in 2002 the energy and materials used in producing a 32MB DRAM microchip, much of which goes into making its many transistors—the process explained above with more recent information. They point out that the purification of materials is "routinely overlooked in most life cycle assessments." In other words, system boundaries are drawn narrowly around already purified materials in order to omit the large amount of energy entailed in that part of their production.[93]
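To make the ratio concrete: assuming a chip mass of about two grams (an illustrative assumption, not a figure taken from Williams's study), the fossil fuel inputs needed to manufacture it would be on the order of

$$600 \times 2\ \mathrm{g} = 1200\ \mathrm{g} \approx 1.2\ \mathrm{kg},$$

whereas a product requiring only one to two times its weight, like the automobile or refrigerator in Williams's comparison, needs roughly its own mass in fuel.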
Although chips allow computers to process information using binary code, many other materials and a significant amount of energy go into the life cycle of a personal computer. Whereas chip production consumes fossil fuels totaling six hundred times the chip's final weight, producing a whole computer requires only about eleven times its weight. In his 2004 study, Williams found that "the total energy and fossil fuels used in producing a desktop computer with 17-in. CRT monitor are estimated at 6400 megajoules (MJ) and 260 kg, respectively."[94] His calculations, however, do not take into account the energy, materials, and waste that accompany parts and wholes of computers as they travel around the globe during assembly, retail, use, and then disassembly. This is a significant oversight. UC Berkeley environmental policy researcher Alastair Iles writes that "computers are designed in the US, Europe, and Japan" but manufactured in China, Mexico, Taiwan, and Singapore by companies that purchase components like chips and other materials from a number of different regions. These components are "produced with materials extracted from Africa and Australia": "New computers are then shipped to markets worldwide. Similarly, recycling chains have become transnational, stretching from industrial nations to developing countries." A computer purchased in California, say, at the end of its useful life—usually after only three to five years—goes first to a local electronics disposal zone and from there to a port city like Seattle or Los Angeles. It is then sold to "foreign traders" who arrange its disassembly journey, which often ends in China, India, or Pakistan. Via ship, it travels first to a "regional hub such as Dubai"; traders then "sell machines to dealers in India or China, sometimes routing them through the Philippines or Indonesia to evade customs scrutiny. These dealers then disassemble machines and distribute parts to specialized recycling workshops in urban centers and rural villages."[95]
Iles is concerned about environmental justice and focuses on the economic, health, and environmental inequalities facing workers involved in computer disassembly and recycling. Remember that those involved in manufacturing chips are protected by working in super clean environments; the toxic chip materials, as they are being added, are isolated in front-opening unified pods. These precautions do not exist for those involved in disassembly. Many parts used in computers combine toxic and harmless materials together in such a way that they cannot be taken apart; they are not even labeled to warn of the hazard. This means that as workers remove parts of computers, they inevitably contact toxic materials. “Hands, aided by basic tools such as chisels, saws, hammers, pliers, and screwdrivers, are the primary technologies in use. In India, circuit boards are sometimes smashed against rocks or with hammers to dislodge their lead solder and metals, and to free semiconductor chips,” Iles writes. “In China, monitors are crushed to extract the copper yokes, breaking the heavily lead-contaminated CRT glass into fragments that are thrown away into water or land.”[96] He estimates that millions of workers in China, and over a million in India, are exposed to these conditions and contend with high respiratory disease rates, groundwater pollution, and pay of approximately $1.50 a day (2004 rates).[97] He summarizes the environmental effect of global outsourcing as relocating pollution caused by both manufacturing and recycling phases to Asia and other parts of the world that participate in the toxic aspects of the computer life cycle.[98]
Many people within the industry and beyond argue that the greenhouse gases released into the atmosphere through the production of chips and computers will be offset by savings from changes in lifestyle, as people drive and fly less because they use digital technologies.[99] While this claim is debatable (in fact, many experts dispute it), it has no direct bearing on another problem associated with digital technologies: the waning supplies of materials that go into digital device production. A few studies from 2012, presented at the optimistically titled Electronics Goes Green conference, address these shortages directly, since companies and national consortia are beginning to be concerned about the situation. Although a computer can comprise up to two thousand parts, the main parts produced using "critical metals" are the printed circuit board (PCB), liquid crystal display (LCD) monitor, battery pack, hard disk drive, and optical drive.[100] These parts contain significant amounts of cobalt, germanium, gallium, gold, indium, platinum group metals, silver, tantalum, and rare earth metals such as neodymium. Critical metals were identified in a European Union study, the Raw Materials Initiative, which surveyed "the economic importance and supply risk of 41 materials" and labeled those in the direst situation "critical."[101] Given that approximately 275 million personal computers were shipped during 2015, not to mention other digital devices, a number of questions arise.[102]
For these critical metals, how large are the reserves and how long are they predicted to last? One study has estimated the number of years left at recent consumption rates, finding between ten and fifty years for antimony (approximately ten), indium (approximately twelve), silver (approximately eighteen), and tantalum (approximately forty-five).[103] These reserves are divided roughly equally between China and a number of other countries, including the United States, the Commonwealth of Independent States (the former Soviet republics), India, and Australia. Because China controls the global export of rare earth metals, it has been carefully monitoring exports to keep prices high. High prices stimulate the exploration and discovery of new or deeper reserves, which ironically can cause the apparent size of reserves to "increase" even as we rapidly draw down what are finite supplies.[104] Yet these further explorations and extractions are powered by fossil-fuel-driven equipment that releases carbon dioxide into the atmosphere; the rarer a metal is, the more carbon dioxide is released in its acquisition.[105]
What about substituting other metals for the ones that are running out? German industrial ecologist Mario Schmidt notes a number of factors that limit this possibility. He explains that "there are no separate mines for many metals"; rather, many metals are "mined as by-products of 'Major Metals.'" Thus, the fact that some critical metals are in short supply does not mean that more of them can easily be extracted, since their extraction is subsidiary. Furthermore, "there are very few deposits of enriched ores worth mining, for the physical frequency of elements in the earth's crust says nothing about whether they can be mined cost-efficiently." Lastly, "if one metal is to be replaced by another, it will need to have similar properties, and generally it is obtained from the same source," he writes. He gives the example of lead, banned for use in solders, which can possibly be "replaced by tin, silver, indium or bismuth, but the latter three are by-products of lead mining. As less lead was mined as a result of the ban, the pressure of price on the other metals increased."[106]
Since the foregoing options are limited, we then ask whether we are recycling the critical metals we have already extracted in order to maintain their useful supply. Unfortunately, the answer for the most part is no. A study from 2012 based on the recycling of computers in Germany—where more stringent requirements and precautions exist in the recycling industry than in India or China—found that "the only metals under study that are partly recovered from the devices are cobalt, gold, silver, and palladium. All other critical metals show losses of 100% distributed over collection, pre-treatment, and final treatment" (Table 2.1).[107] In fact, many metals are hardly recycled at all, as Smit and colleagues show. This returns us to the problems that the parts in a computer are not designed for disassembly and that many of the parts using critical metals are so tiny they cannot easily be salvaged by hand with rudimentary tools. Until these materials become so expensive and scarce that industries and nations decide to regulate their design and reuse, or until fossil fuels become so expensive that it is no longer economically viable to extract deeper sources of rare earth materials and to pour so much energy into chip manufacturing and shipping parts around the world, the situation is unlikely to change. As Kris de Decker aptly summarizes, "Digital technology is a product of cheap energy."[108] So is generative architecture, despite the fact that its landmark constructions are incredibly expensive.
Clearly, the example of the digital industry suggests an unsustainable trend, and yet this industrial infrastructure is endemic to all major sectors today, not just generative architecture. Riechers hopes, perhaps optimistically, that products designed to "self-organize" may allow a more energy-efficient road to production. He argues that because scientists have seen materials compute, scientists and designers can, through guided design, harness this innate capacity for useful computation in design and manufacturing. Similarly, he envisions systems that can self-disassemble after a product's useful life in order to return precious resources to further utility. For designers to approach these admirable goals, they will have to collaborate seriously with complexity scientists. Riechers sees this as a necessary future design paradigm, and at least in rhetoric and principle, if not yet in actual method, generative architects agree. This chapter has demonstrated that choices of method matter significantly, beginning first and foremost with generative architecture's reliance on environmentally devastating and finite digital technologies. Generative architects' choice to use terms that mimic a core complexity concept—natural computation—as listed at the beginning of this chapter reveals the deep ideological influence of complexism on the conceptual framing of their pursuits. Yet the environmental effects stand as corollaries to this choice, as Stefan Helmreich's "athwart theory" poignantly reveals.[109] The same applies to Riechers's hope, for the road to learning how to design things so that they "self-organize" and "self-disassemble" is paved with two parts coal to one part quartzite, plus the earth's dwindling supply of critical metals.
Table 2.1. Critical raw material potentials in laptops and losses from the collection and treatment systems currently used in Germany, by Matthias Buchert, Andreas Manhart, Daniel Bleher, and Detlef Pingel. From “Recycling Critical Raw Materials from Waste Electronic Equipment,” Institute for Applied Ecology, Darmstadt, Germany, February 24, 2012. Only cobalt, silver, gold, and palladium are partially recovered to feed back into the industrial cycle.