Morphology is not only a study of material things and of the forms of material things, but has its dynamical aspect, under which we deal with the interpretation, in terms of force, of the operations of energy. . . . We want to see how . . . the forms of living things, and of the parts of living things, can be explained by physical considerations, and to realize that in general no organic forms exist save such as are in conformity with physical and mathematical laws.
D’Arcy Thompson, On Growth and Form (1917)
On the centennial of the publication of his classic On Growth and Form, D’Arcy Thompson’s words appropriately open this chapter on morphogenesis and evolution in biology, computation, and generative architecture. Whereas morphology is the study of forms, morphogenesis is the process of creating three-dimensional forms in organic or inorganic materials; some also use the word to refer to the generation of two-dimensional patterns. Many theorists consider both pattern formation and morphogenesis to be emergent from processes of self-organization, and in this light both have garnered significant recent attention from complexity scientists: see, for example, physicist Philip Ball’s three-part series Nature’s Patterns: A Tapestry in Three Parts (Shapes, Flow, and Branches) (2009). In biology specifically, morphogenesis refers to the process by which an embryo or seed grows and transforms its shape as the organism matures into adulthood. Thompson’s lengthy tome expounds the mathematical basis of many natural forms and posits the evolutionary relatedness of species whose morphologies can be transformed one into the other through mathematical deformation. Discontented with natural selection as the sole means of explaining evolutionary diversity, Thompson focused on the roles that physical forces play in shaping morphological change over time, both on the small scale of an individual organism’s development and growth, referred to as ontogeny, and over the long temporal scale of evolution, referred to as phylogeny.
Although Thompson’s groundbreaking work was not well received by fellow scientists in his own time, it has gained immense appeal in the last twenty years among generative architects interested in designing organically inspired and parametrically based architectural forms that function under physical load. For example, Achim Menges and Sean Ahlquist include excerpts of Thompson’s book in their coedited Computational Design Thinking (2011). Their introduction to the selected portions explains that Thompson’s work establishes “two fundamental branches of a conceptual framework for computational geometry: parametrics and homologies. A parametric equation is defined as a constant equation in which relational parameters vary,” they write. “It results in producing families of products where each instance will always carry a particular commonness with others. Thompson defines these embedded relationships as homologies.” Similarly, Michael Weinstock, Jenny Sabin, and Peter Lloyd Jones have recently required students to read chapters from On Growth and Form. In 1917, Thompson completed his mathematical computations by hand; given the mathematical nature of architectural form generation and the strong emphases on biological analogies and evolutionary computation in generative architecture, the current importance of his writings is not surprising.
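The notion of a parametric family whose instances “carry a particular commonness” can be sketched in a few lines. The superellipse curve below is a hypothetical illustration, not an example from Computational Design Thinking: one relational definition yields a whole family of forms as a single exponent parameter varies, much as Thompson related species’ forms to one another through continuous deformation.

```python
import math

def superellipse(n, points=360, a=1.0, b=1.0):
    """Return points on a superellipse |x/a|^n + |y/b|^n = 1.

    Varying the exponent n sweeps one relational definition through a
    family of closed curves: a diamond at n=1, an ellipse at n=2, and
    increasingly square forms as n grows. Every instance shares the
    same underlying parametric structure -- its "commonness."
    """
    pts = []
    for k in range(points):
        t = 2 * math.pi * k / points
        # copysign restores the sign lost by abs(...) ** exponent
        x = a * math.copysign(abs(math.cos(t)) ** (2 / n), math.cos(t))
        y = b * math.copysign(abs(math.sin(t)) ** (2 / n), math.sin(t))
        pts.append((x, y))
    return pts

# Four related instances of the one parametric definition.
family = {n: superellipse(n) for n in (1, 2, 4, 8)}
```

Each member of `family` is a distinct form, yet all derive from the same equation; in Menges and Ahlquist’s terms, the shared relational structure is the homology.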
The historical contexts differ significantly, however: scientific knowledge of biological morphogenesis in the first decades of the twentieth century, in the twenty-first century, and across the intervening decades reveals major theoretical changes in morphogenesis and evolutionary biology over this period. These changes pertain directly to some current computational approaches in generative architecture. Yet it does not follow that computational approaches in generative architecture accurately mirror those of biology. Two current experts in evolutionary computation, Sanjeev Kumar and Peter Bentley, state this clearly: “Talk to any evolutionary biologist and they’ll tell you that the standard genetic algorithm (GA) does not resemble natural evolution very closely. . . . Should you have the courage to talk to a developmental biologist, you’ll have an even worse ear-bashing.” Evolutionary computation—the theoretical and technical intermediary between biology and architecture—bears far more relevance to the computer science pursuits of artificial intelligence and artificial life than it does to biology.
This chapter therefore introduces major developments in theories of biological morphogenesis and evolution from D’Arcy Thompson’s lifetime to the present, in order to critically analyze their transformations into the fields of evolutionary computation and generative architecture. After a brief overview of late nineteenth- and early twentieth-century ideas about evolution, eugenics, and genetics, it addresses the historical and conceptual overlaps between theories in biology and computer science, explaining when computation began to draw on theories of biological morphogenesis and evolution. It then explores the obverse: the conceptual shift that occurred when we began to conceive of biological development and evolution themselves as computational processes. The chapter goes on to focus on the three major developments in evolutionary theory from the mid-twentieth century onward that have inspired techniques of evolutionary computation. Two of these have been adopted and innovated on by generative architects, while the architectural potential of the third has yet to be fully developed.
The first is the neo-Darwinian modern evolutionary synthesis of the mid- to late twentieth century, arising concurrently with the rise of population genetics, the determination of DNA’s structure in the early 1950s, and the central dogma of molecular biology. Second is the recent theory known as “evo-devo” (short for evolutionary developmental biology) from the late twentieth to early twenty-first centuries. Evo-devo arose from the results of DNA sequencing considered in tandem with experimentation that revealed a common set of “homeotic genes” integral to morphogenetic development that are shared by very distantly related organisms. The third entails recent theories of epigenetics and the roles of epigenetic processes in developmental systems biology and evolution. Epigenetic processes are environmentally responsive, affect gene regulation, and constitute a second, short-term line of heredity. For these reasons, some scientists consider epigenetics to offer a new Lamarckian addition to accepted Darwinian evolutionary ideas of environmental adaptation and natural selection. Generative architects rely primarily on neo-Darwinian computational design techniques. Only a few have adapted ideas from evo-devo, notably Michael Weinstock, Achim Menges, and possibly Aaron Sprecher and his team on the Evo DeVO Project at the Laboratory for Integrated Prototyping and Hybrid Environments at McGill University. Only one—John Frazer—has developed and used epigenetic algorithms, remarkably as early as the 1990s, although in their essay published in The Gen(H)ome Project (2006) Jones and Helene Furjàn call for greater interest in epigenetic approaches. The chapter concludes with an analysis of the implications of architects’ choices among these theoretical models, with thoughts about why recent theories of evo-devo and epigenetics may matter for generative architecture going forward.
Historical Overview to the Mid-Twentieth Century
In the conclusion to his book, Thompson notes that despite the numerous decades since Charles Darwin’s publication of On the Origin of Species (1859), scientists still had not figured out how to explain the apparent gaps in the evolutionary phylogenetic tree. These could be small—such as the changes causing species differentiation—or large—such as breaks in the observable chain of evolutionary relatedness. Thompson does not accomplish this either, and in fact all but ignores chemical aspects of morphogenesis; he simply accepts discontinuity as a mathematical and evolutionary fact. Many others, though, in Thompson’s time and since, have worked very hard to discover what Darwin failed to explain despite the title of his book: the actual origin of new species, and therefore the sources of species’ change over time. New traits and new species might be naturally selected, but how variation arises in the first place was not known. Darwin’s own flawed theory of pangenesis was based on aspects of Lamarckism, the ideas of French biologist Jean-Baptiste Lamarck that posited evolutionary change through the inheritance of acquired characters. Habitual behaviors enacted throughout an organism’s life were absorbed into its adult form and essence and subsequently passed on to its offspring. In this way, Lamarckism proposed the processes of morphogenetic development over a lifetime, coupled to environment, as the guiding force of evolution writ large. An alternate late nineteenth-century theory put forward by German biologist Ernst Haeckel also linked organismal ontogeny to phylogenetic development overall. Haeckel observed that during embryonic development, different species’ embryos resembled one another at different phases of development. Accordingly, he proposed the now-refuted theory of recapitulation, which posited that the development of an individual organism recapitulates the entire phylogenetic evolution that came before and led up to that particular species.
Other biological theories in the late nineteenth century, however, seemingly contradicted those of Lamarck and Haeckel. The work of German evolutionary biologist August Weismann in the 1880s and 1890s, for example, established the differentiation of sex or “germ” cells from somatic cells (i.e., those cells in the body that bear no obvious relation to reproduction but carry out other bodily functions). Although Weismann had formerly accepted facets of Lamarckism, his theory of the separation of the “germ” and the “soma” put an end to Lamarckism in many people’s minds, for it proposed that influence flowed only one way—from the germ cells to the somatic cells—rather than also, as Lamarckism suggested, from changes in somatic cells into the germ cells for ongoing inheritance. Lamarckism continued to be debated in some scientific circles throughout the twentieth century and is experiencing a revival and transformation today owing to increasing knowledge of epigenetics. But owing largely to Weismannism, along with the rediscovery of Gregor Mendel’s theory of inheritance based on invisible “factors” carrying dominant or recessive traits that appeared with mathematical predictability, early twentieth-century scientists pursued alternate routes of inquiry into inheritance. These led first to eugenics (from the Greek for “well born,” a term coined by Englishman Francis Galton in the 1880s) and then to the development of modern genetics. One result of this shift was that theories of evolution slowly decoupled from those of biological development and morphogenesis, and the two became subfields that largely pursued independent research until the end of the twentieth century, when evo-devo began to rejoin them.
It is in this roughly sketched context, then, that Thompson published On Growth and Form, which entered the academic biological scene as an outlier both for its emphasis on physical forces and mathematics and for its approach conjoining morphogenesis to evolution. In 1917, eugenicists on both sides of the Atlantic were busy applying principles derived from Mendelism and long-established agricultural breeding practices to attempt to control human evolution toward “betterment,” using “rational selection” enacted through social and political policies on reproduction rather than natural selection. They presumed a biological basis for physical, mental, moral, and social human traits, without knowing what the seat and source of inheritance was. They conducted broad-scale research on patterns of inherited traits using family studies and fitter family contests at state and world’s fairs, creating a large U.S. databank in the fireproof vaults of the Eugenics Record Office in Cold Spring Harbor, New York.
At the same time, other scientists were attempting to uncover the source of heredity in germ cells. British biologist William Bateson first described this pursuit using the word “genetics” in 1905, a term that accorded well with “eugenics” and with “pangenes,” the word proposed by botanist Hugo de Vries based on Darwin’s “pangenesis” and shortened to “genes” by Wilhelm Johannsen in 1909. In the early twentieth century, de Vries published The Mutation Theory, which, in contrast to Darwin’s ideas of gradual change, posited the possibility of abrupt changes in inheritance. This theory inspired American embryologist Thomas Hunt Morgan to investigate physical mutations in fruit flies and patterns of mutation inheritance through breeding experiments. The results led him and his colleagues to propose, in The Mechanism of Mendelian Heredity (1915), that mutations are heritable and that inheritance and sex factors reside on chromosomes in germ cells. It was not until the discovery of the molecular composition of deoxyribonucleic acid—DNA—and its double-helical structure, published in 1953, that geneticists felt confident that they had located the presumed root source of inheritance for which they had long been looking. That Francis Crick and James Watson described DNA as an information “code,” however, reveals a mid-century shift in the framework for considering biological processes, one influenced by developments in the 1940s in information theory, digital computation, and code-cracking from World War II.
In fact, Alan Turing, the eminent British code-cracker and one of the founders of digital computation, had proposed at the outset of the computer’s invention something like the obverse: that “evolution could be used as a metaphor for problem solving.” At the beginning of the formation of theories and techniques of digital computation, Turing envisioned the potential of evolutionary computation; thus began the conceptual overlaps between theories of biological morphogenesis and evolution and theories of computation. Philip Ball recounts that as a schoolboy, Turing had read Thompson’s On Growth and Form and had been fascinated by the problem of biological morphogenesis for many years prior to his prescient essay “The Chemical Basis of Morphogenesis,” published in 1952. In his essay—and similar to Thompson’s brief discussion of the same topic in On Growth and Form—Turing mentions the puzzle of symmetry breaking for the development of biological forms: “An embryo in its spherical blastula has spherical symmetry, or if there are any deviations from perfect symmetry, they cannot be regarded as of any particular importance. . . . But a system which has spherical symmetry, and whose state is changing because of chemical reactions and diffusion, will remain spherically symmetrical for ever. . . . It certainly cannot result in an organism such as a horse, which is not spherically symmetrical.” Ball describes the fundamental quandary: “How, without any outside disturbance, can the spherically symmetrical ball of cells turn into one that is less than spherically symmetric, with its cells differentiated to follow distinct developmental paths?”
Turing posited a process of reaction and diffusion of autocatalytic chemicals that in developing tissues could trigger gene action based on threshold levels of chemical gradients. He chose to call these chemicals “morphogens” owing to his conviction that they played a central role in morphogenesis, even though he had few specifics in mind about what types of chemicals might perform this function. Because in autocatalysis the amount of chemical or morphogen generated depends on the amount that is already present, “this kind of feedback can lead to instabilities that amplify small, random fluctuations and, under certain constraints, turn them into persistent oscillations.” Turing worked out the equations for this process for a hypothetical two-dimensional sheet of cells by hand, all the while wishing he had a digital computer, and concluded that such a reaction-diffusion process might be responsible for the stationary “dappled” patterns that appear on animal skins, as well as for gastrulation and plant phyllotaxis. Twenty years later, German scientists Hans Meinhardt and Alfred Gierer revisited Turing’s proposition, adding an inhibitory element alongside the activating one and calling theirs an “activator-inhibitor scheme” (these are now recognized as one class of reaction-diffusion processes). The result of this addition is the production of stationary patterns such as spots and stripes, which are now often referred to as “Turing patterns.”
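In code, an activator-inhibitor scheme reduces to a pair of coupled update rules. The following one-dimensional sketch is a Gierer-Meinhardt-style system; the function name and parameter values are illustrative assumptions chosen for numerical stability, not figures taken from Turing’s or Meinhardt and Gierer’s papers.

```python
import numpy as np

def activator_inhibitor_1d(n=128, steps=30000, dt=0.002, seed=0):
    """Minimal 1-D activator-inhibitor simulation (Gierer-Meinhardt type).

    The activator a catalyzes its own production and that of a
    fast-diffusing inhibitor h:

        da/dt = Da * lap(a) + a**2 / h - a + rho
        dh/dt = Dh * lap(h) + a**2 - nu * h

    Because Dh >> Da (local activation, lateral inhibition), tiny random
    fluctuations around the uniform steady state are amplified into a
    stationary spatial pattern -- the hallmark of a Turing instability.
    """
    Da, Dh, rho, nu = 4.0, 160.0, 0.01, 1.1      # illustrative parameters
    rng = np.random.default_rng(seed)
    a0 = nu + rho                                 # uniform steady state
    h0 = a0 * a0 / nu
    a = a0 + 0.01 * rng.standard_normal(n)        # seed with small noise
    h = h0 + 0.01 * rng.standard_normal(n)
    lap = lambda u: np.roll(u, 1) + np.roll(u, -1) - 2.0 * u  # periodic, dx=1
    for _ in range(steps):
        hs = np.maximum(h, 1e-6)                  # guard the division
        a, h = (a + dt * (Da * lap(a) + a * a / hs - a + rho),
                h + dt * (Dh * lap(h) + a * a - nu * h))
    return a, h

a, h = activator_inhibitor_1d()
```

Starting from a nearly uniform state, the fast-diffusing inhibitor suppresses activation at a distance while local autocatalysis amplifies it, so the initial noise resolves into regularly spaced stationary peaks: a one-dimensional analogue of spots and stripes.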
Owing both to the precociousness of Turing’s ideas about evolutionary computation and morphogenesis and to his untimely death, which prevented him from developing them further, other thinkers are credited with founding techniques of evolutionary computation in the late 1950s. It was not until John Holland’s work in the 1960s and his publication of the technique of genetic algorithms in Adaptation in Natural and Artificial Systems (1975) that the field became more widely known and began to be firmly established. In their 2015 Nature publication, computer scientists Agoston Eiben and Jim Smith recount that “although initially considerable skepticism surrounded evolutionary algorithms, over the past 20 years evolutionary computation has grown to become a major field in computational intelligence.” They include in the field “historical members: genetic algorithms, evolution strategies, evolutionary programming, and genetic programming; and younger siblings, such as differential evolution and particle swarm optimization.” They document the widespread success of evolutionary computation for multi-objective problem solving in general, typically for up to ten objectives. Computer scientist Melanie Mitchell in 1996 described the value of genetic algorithms and evolutionary computation for biologists as allowing “scientists to perform experiments that would not be possible in the real world,” “accelerating processes of biological evolution in silico,” and simulating “phenomena that are difficult or impossible to capture and analyze in a set of equations.” But Eiben and Smith make it clear that biological theories of evolution and methods of evolutionary computation are only superficially related (Figure 3.1). The successes they recount using techniques of evolutionary computation (e.g., NASA spacecraft antenna design, pharmacology, and robotics) lie far afield from biology.
If the primary impact of evolutionary computation is on complex multi-objective problem-solving in general, and if it bears more relevance to artificial intelligence and artificial life than to biology, why did Turing and others consider biological evolution to be a potentially useful metaphor for computational problem solving? Mitchell describes evolution as “a method of searching among an enormous number of possibilities for ‘solutions.’ In biology the enormous set of possibilities is the set of possible genetic sequences,” she writes (an oversimplified statement, to say the least), “and the desired ‘solutions’ are highly fit organisms. . . . The fitness criteria continually change as creatures evolve, so evolution is searching a constantly changing set of possibilities. Searching for solutions in the face of changing conditions is precisely what is required for adaptive computer programs,” she explains. She further describes evolution as a “massively parallel search method: rather than work on one species at a time, evolution tests and changes millions of species in parallel.” Note that her verbiage positions evolution as an active agent or force and not as simply a passive result. Lastly, she finds the rules of evolution to be remarkably simple—“species evolve by means of random variation (via mutation, recombination, and other operators), followed by natural selection in which the fittest tend to survive and reproduce, thus propagating their genetic material to future generations.” As will be shown below, the rules of evolution, as she calls them, are now considered to be far more complex than she describes. Mitchell’s version of the rules reflects neo-Darwinian evolutionary principles of the mid-twentieth century and does not include recent revisions and additions.
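Mitchell’s rules (random variation followed by selection, applied to a whole population in parallel) translate almost directly into code. The sketch below uses the classic OneMax toy problem, where fitness is simply the number of 1-bits in a genome; the function name, parameter values, and truncation-selection scheme are illustrative choices, not drawn from Mitchell’s book.

```python
import random

def one_max_ga(length=32, pop_size=40, generations=60, seed=1):
    """Minimal genetic algorithm in the generic textbook sense.

    A population of bit-string "genomes" is searched in parallel by
    random variation (one-point crossover plus point mutation) followed
    by selection of the fittest. Fitness is the OneMax toy objective:
    the count of 1-bits in the genome.
    """
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p, q = rng.sample(parents, 2)
            cut = rng.randrange(1, length)      # one-point crossover
            child = p[:cut] + q[cut:]
            i = rng.randrange(length)           # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children                # survivors + offspring
    return max(pop, key=fitness)

best = one_max_ga()
```

Because the fittest half of each generation survives unchanged, the best fitness never decreases; variation supplies candidates and selection filters them, which is essentially all the biological content the technique retains.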
Mitchell goes so far as to suggest that evolutionary computation has a “largely unexplored but potentially interesting side,” which is that “by explicitly modeling evolution as a computer program, we explicitly cast evolution as a computational process.” Just a few years after she suggested this, historians of science Lily Kay, in Who Wrote the Book of Life? A History of the Genetic Code (2000), and David Depew, in his essay recounting how organisms came to be seen as “digital printouts,” traced the interesting historical development of how biology came to be viewed as computational. After physicist Erwin Schrödinger’s assertion in 1944 of a “code-script” at the root of life, and James Watson and Francis Crick’s casting of DNA as an information code in 1953, work in genetics throughout the ensuing two decades focused on “decoding ‘the code of codes.’” Scientists worked hard to ascertain which nucleotide sequences make which proteins, as viewed under the rubric of Crick’s “Central Dogma of Molecular Biology,” first pronounced in 1958. The central dogma emphasized that “information” from the “code” only flows one way: from DNA to RNA to proteins. Similar to information theory, communication down a line, and computer programming, the “code” was conceived of as a linear sequence of letters: A, C, T, U, and G. “They are, of course, not letters at all,” Depew writes, “any more than amino acids are words or proteins sentences. They are merely chemicals with a certain specificities [sic] for bonding with other chemicals.”
Even at this point, however, Depew notes that the use of information theory as a framework for “unraveling the DNA–RNA–protein relationship” was not the same as the “notion that an organism is a readout from something like a computer program,” for during the 1960s and 1970s, “computers and computer programs were not yet widely known.” He credits the shift to fully considering organisms through a “digital tropology”—seeing them as “digital printouts” from a “genetic program”—to the influence of a few specific publications in the 1980s and 1990s. These are Richard Dawkins’s The Blind Watchmaker (1986) and Climbing Mount Improbable (1998), and Daniel Dennett’s Darwin’s Dangerous Idea (1995), which posits an “algorithmic” view of natural selection. “In these works the assimilation of genetic programs to computer programs—and in particular to so-called genetic algorithms that mimic the sheep-and-goats process of natural selection, in which only adapted combinations of genes are allowed to ‘reproduce,’” he writes, “is presented as a way of adumbrating, protecting, and even empirically confirming [Dawkins’s] selfish gene hypothesis, which was first put forward without any analogy to computational software or hardware.”
Thus, Dawkins’s and Dennett’s interpretations of biological evolution and genes around the turn of the twenty-first century came to be tinged by evolutionary computational approaches, specifically that of genetic algorithms, in which “inefficient combinations are programmatically weeded out by a recursive decision procedure.” Depew notes the resonance of these interpretations of genes, evolution, and natural selection with biologically deterministic “quasi-eugenic” approaches prior to the war. And while genetic algorithms are structured on mid-twentieth-century neo-Darwinian principles, they do not reflect the statistical considerations of variability proffered by population genetics from that time. Depew, whose specialty is the history of changing views of Darwinism, emphatically concludes, “As widespread as digital imagery of the gene now is among both expert and popular audiences, it is nonetheless a markedly imprecise representation of the relationship between genes and traits. Even if we insist on seeing the relationship between nucleic acids and protein as a coded and programmed one,” he asserts, “still there is no ‘machine language’—no binary system of zeros and ones—lurking beneath the correlation between the base pairs of nucleic acids and proteins.”
After Depew’s essay from 2003, another historian of science, Hallam Stevens, picked up the story of the transference of ideas from computation to biology where Depew left it. Over the next decade, biological organisms came to be viewed as comprising numerous networks: “gene regulatory networks, protein interaction networks, cell-signaling networks, metabolic networks, and ecological networks.” Stevens identifies a network as being made up of “objects” viewed as “nodes,” connected to one another by “edges,” combined into a “web.” He points to the widespread growth across domains of digital technologies, which are linked in networks, as the source of this new conceptual mode of seeing relationships, including relationships between “objects” in biological organisms. Because biological networks “do not consist of stable physical links between fixed entities,” because their links are always changing, and because the “objects that are supposed to be connected are not always the same objects,” he asserts that “the idea of a ‘biological network’ is therefore a kind of fiction.” Yet, this fictional mode of seeing and thinking about biological processes undoubtedly contributes to a general view of biological organisms as computational entities. As Depew writes, “This rhetoric celebrates the cyborgian notion that there is no distinction in principle between organisms and machines that can be programmed to perform various tasks.” Additionally, within the field of engineering synthetic biology (hereafter engineering synbio), the consideration of genes as “standard biological parts” arranged into “circuits” (conceived as being like both electronic and digital circuits), and from circuits into “devices,” further reinforces this mode of thought. That much biological research is now done sitting at computers rather than in experimental laboratories only heightens this perception.
When biology and evolution are viewed from this perspective, the current pursuit by some of nano-bio-info-cogno (aka NBIC) convergence seemingly gains plausibility simply through pervasive metaphorical overlaps between, in this case, biology and computation.
Undoubtedly, prominent discourses in generative architecture build on these overlapping metaphors between biology and evolutionary computation, in contrast to their actual disjunctions. One of the founding texts of generative architecture, John Frazer’s An Evolutionary Architecture (1995), asserts, “Our description of an architectural concept in coded form is analogous to the genetic code-script of nature.” Alternatively, Alberto Estévez, in his essay “Biomorphic Architecture,” states his aim to fuse “cybernetic–digital resources with genetics, to continuously join the zeros and ones from the architectural drawing with those from the robotized manipulation of DNA, in order to organize the necessary genetic information that governs a habitable living being’s natural growth.” Frazer clearly recognizes that his approach to architecture relies on a biological analogy, but Estévez believes in a more literal correlation. Even so, statements like theirs stem from a view of biological processes heavily influenced by methods of evolutionary computation, as Depew shows.
In the interdisciplinary cross-borrowing between biology, computer science, and generative architecture—despite the aforementioned losses in translation between biology and computation—generative architecture still lags furthest behind in adapting its theories and techniques to current ideas of biological development and evolution. As will be discussed below in the summaries of the three major modes of evolutionary theory and programming since the mid-twentieth century, computer scientists have modeled new, roughly analogical approaches to current understandings of evolutionary developmental biology and epigenetics, referred to as computational development (or evolutionary developmental systems) and epigenetic algorithms. Yet among generative architects, only Weinstock and Menges promote extending basic neo-Darwinian approaches in their scripting with the aim of integrating some features of individual “embryonic development” in tandem with population evolution. Even so, the scientific biological sources that Menges cites remain squarely in mid-twentieth-century neo-Darwinism, and his approach, like that of most other generative architects who publish on uses of evolutionary computation or genetic algorithms, is heavily neo-Darwinian. The next three sections, therefore, trace the theoretical shifts with regard to morphogenesis and evolution in biology, computer science, and generative architecture, beginning with adherence to the neo-Darwinian modern synthesis and the central dogma of molecular biology, on which most genetic algorithms are based.
Mid-Twentieth-Century Neo-Darwinian Evolutionary Theories
The term “neo-Darwinism” was coined by George Romanes in 1895 to join Darwin’s idea of natural selection to Weismann’s theory of the separation of “germ” cells from somatic cells (referred to as the Weismann barrier). Darwin himself believed that change occurred gradually through a blending process affected by an organism’s actions in its environment. Romanes’s term, therefore, distinguished neo-Darwinism of the late nineteenth century as a theory that upheld natural selection while also asserting that germ cells were the seat of heredity, unaffected by an organism’s actions or the environment in which it lived. Yet, in the 1940s, the term was revived and reinterpreted once again, this time to include knowledge from the previous four decades of research on Mendelian genetics, including ideas from the rise of population genetics in the 1930s. The latter asserted that genetic variation across populations played an important role in evolution overall; it established the population, rather than the individual, as the primary unit of evolutionary study, since evolutionary change registers in the shifting genetic composition of populations rather than in any single individual. Although phenotypes were considered the unit of natural selection, genes were seen as the source of variation and change over time, with changes due to mutations and sexual recombination.
Also referred to as the modern evolutionary synthesis or just the modern synthesis based on the title of Julian Huxley’s book Evolution: The Modern Synthesis (1942), neo-Darwinism became the reigning evolutionary theory for the duration of the twentieth century. Historians of science Depew and Bruce Weber, in their masterful book Darwinism Evolving (1995), describe the modern synthesis as a synthesis of a few different sorts. It not only brought together the ideas mentioned above (Darwin + Weismann + Mendel + population genetics), but also synthesized ideas from different fields of biology, including genetics, morphology, and paleontology, among others. It worked to reconcile microevolution—evolution on the scale of genetic change—with macroevolution—evolution on the scale visible in the paleontological record. Depew and Weber describe the modern synthesis as “more like a treaty than a theory,” for it was intended to define acceptable research areas in evolutionary biology by excluding contrary voices whose opinions challenged aspects of the synthesis.
Watson and Crick’s publication of the structure of DNA in 1953 kick-started the field of molecular biology in earnest. Crick’s assertion in the late 1950s of the central dogma of molecular biology played an influential role in establishing the kinds of questions to be asked, and it quickly became an integral feature in neo-Darwinian thought. The ensuing research focused almost exclusively on seemingly linear molecular processes, with many scientists turning a blind eye to the complexity of cellular interactions. The first task was deciphering which sequences make which proteins; the three-letter sequences referred to as codons had been matched to the twenty amino acids they specify by 1965. It had been presumed from the work of George Beadle and Edward Tatum published in 1941 that one gene made one enzyme. This idea shifted in the 1950s to one gene, one protein—since enzymes are only one type of protein—and then to one gene, one polypeptide—since some proteins consist of multiple polypeptides, each encoded by its own gene. These phrases followed an earlier reductionist assumption prominent in eugenics that one gene makes one trait. While this may be true for a simple trait such as eye color, of which there are few, it is certainly not true for most other complex traits. Ongoing research focused almost exclusively on the sequences known to code for polypeptides and proteins, those referred to as “genes,” with the rest of the DNA in the genome considered inconsequential and named “junk DNA” as early as 1960. During the 1970s and 1980s, challenges to the central dogma were voiced by respected scientists, but they were largely considered anomalies rather than revelations of fundamental theoretical problems. These eventually culminated in a reconfiguration of evolutionary theory under the influence of epigenetics in the late twentieth century.
Yet, for the most part, the gene-centered reductionism of the central dogma was reinforced further by the writings of evolutionary biologist Richard Dawkins. His “selfish gene theory” in the late 1970s and 1980s all but shifted the unit of natural selection to the gene, with the phenotype viewed primarily as just a carrier that exists for the duplication and propagation of genes.
Thus, when computer scientists were first inventing evolutionary computation including genetic algorithms in the 1960s, they did so within the context of neo-Darwinian evolutionary theory. Dawkins’s emphasis on genetic reductionism simply tightened the fit with the version of evolution implemented in genetic algorithms, which Mitchell considers to have been created “in the spirit of analogy with real biology.” In surveying the biological terminology borrowed by this branch of computation, she offers a quick summary of neo-Darwinian concepts, useful not only because computer scientists saw neo-Darwinism this way but also because it reflects broader popular understandings of evolutionary theory. “All living organisms consist of cells, and each cell contains the same set of one or more chromosomes—strings of DNA—that serve as a ‘blueprint’ for the organism,” she begins. “A chromosome can be conceptually divided into genes—functional blocks of DNA, each of which encodes a particular protein.” Note the focus only on what is considered “functional” DNA, in contrast to “junk DNA,” which is completely omitted, just as it largely was in twentieth-century molecular biology; note also the assertion that each gene codes for one particular protein. This oversimplification is then further oversimplified: “Very roughly, one can think of a gene as encoding a trait, such as eye color. The different possible ‘settings’ for a trait (e.g., blue, brown, hazel) are called alleles,” whose statistical variance in populations was being analyzed by population geneticists. “Each gene is located at a particular locus (position) on the chromosome,” she asserts, noting that “many organisms have multiple chromosomes in each cell.
The complete collection of genetic material (all chromosomes taken together) is called the organism’s genome.” Note again that, since she earlier defined a chromosome as conceptually divided into genes without “junk DNA,” this definition of the genome can be read as consisting only of the “genes” that make up chromosomes. In fact, this was the assumption of the Human Genome Project (HGP) at the end of the twentieth century, which was conceived of and implemented under the neo-Darwinian framework. The Human Genome Project only decoded 1.2 percent of the full genetic material on our twenty-three pairs of chromosomes, ignoring the other 98.8 percent, yet its name specifies this tiny portion as “the” human “genome.”
Many of these statements, and more to follow, now read more as faulty assumptions owing to their having been overturned or seriously revised by scientific research in the last couple of decades. It is important to state them here since this is the fundamental biological theory that genetic algorithms reflect, which in turn are broadcast by many generative architects. Mitchell describes the genotype as “the particular set of genes contained in a genome” (again, ignoring everything but genes), stating that “the genotype gives rise, under fetal and later development, to the organism’s phenotype—its physical and mental characteristics, such as eye color, height, brain size, and intelligence.” Because this list starts with eye color (which was previously used in her explanation to define a trait), all entities in it read by implication as traits determined by genes, since the genotype gives rise to them, without any environmental interaction or sociocultural factors being mentioned. She notes that organisms with paired chromosomes (usually owing to sexual reproduction) are referred to as “diploid,” while those “whose chromosomes are unpaired are called haploid.” Recombination (or crossover) occurs in sexual reproduction for diploid organisms, and this plus mutation offer the two sources of genetic variation between generations.
Her definition of mutation states that “single nucleotides (elementary bits of DNA) are changed from parent to offspring, the changes often resulting from copying errors.” Finally, “the fitness of an organism is typically defined as the probability that the organism will live to reproduce (viability) or as a function of the number of offspring the organism has (fertility).” Note that it is only within the arena of determining fitness that the environment has any role in this conception of evolution, although Mitchell does not specifically mention the environment as a factor in natural selection, so strong is her focus on gene centrism. It is also far more difficult to include environmental factors in computational processes, as this demands that “the environment” be reduced to a few qualities that are numerically quantifiable and from which data are continually gleaned.
After explaining the biological terminology, Mitchell summarizes how these terms (the ones she italicizes) are integrated into the structure of genetic algorithms. “The term chromosome typically refers to a candidate solution to a problem, often encoded as a bit string. The ‘genes’ are either single bits or short blocks of adjacent bits that encode a particular element of a candidate solution,” she writes, offering as an example “in the context of multi-parameter function optimization the bits encoding a particular parameter might be considered to be a gene.” Since most genetic algorithms “employ haploid individuals” and therefore computationally simplify the process by omitting sexual reproduction and just crossing over between single chromosomes, variation comes through crossover combined with mutation: “Mutation consists of flipping the bit at a randomly chosen locus (or, for larger alphabets, replacing the symbol at a randomly chosen locus with a randomly chosen new symbol).” These algorithms most often work only with genotypes; “often there is no notion of ‘phenotype’ in the context of GAs, although more recently,” she notes, “many workers have experimented with GAs in which there is both a genotypic level and a phenotypic level (e.g., the bit-string encoding of a neural network and the neural network itself).” The omission of the phenotype could only be seen as almost justifiable if the algorithm is based on Dawkins’s extreme genetic reductionism, where the phenotype really only matters to sustain the genotype. It is certainly an omission that Kumar and Bentley realize would qualify for an “ear bashing” from a developmental biologist, since it omits the developmental stage completely and reduces an organism to only a string of information. This is obviously the ultimate “digital tropology,” as Depew calls it.
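Mitchell’s mapping of biological terms onto GA machinery can be condensed into a short sketch. What follows is a minimal, hypothetical illustration, not Mitchell’s own code: the one-max fitness criterion (count the 1-bits) and all parameter values are my assumptions. It uses haploid bit-string “chromosomes,” fitness-proportionate parent selection, single-point crossover, bit-flip mutation, and wholesale replacement of the parent generation.

```python
import random

def evolve(pop_size=20, length=16, generations=50,
           crossover_rate=0.7, mutation_rate=0.01):
    """Minimal haploid GA in the form Mitchell describes: bit-string
    'chromosomes', single-point crossover, bit-flip mutation."""
    fitness = lambda chrom: sum(chrom)  # toy criterion: number of 1-bits
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # fitness-proportionate ("roulette wheel") choice of parents
        weights = [fitness(c) + 1 for c in pop]  # +1 avoids zero weights
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = random.choices(pop, weights=weights, k=2)
            if random.random() < crossover_rate:
                point = random.randrange(1, length)  # single-point crossover
                child = p1[:point] + p2[point:]
            else:
                child = p1[:]
            # mutation: flip the bit at each locus with small probability
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]
            new_pop.append(child)
        pop = new_pop  # all parents are discarded each generation
    return max(pop, key=fitness)

best = evolve()
```

Note that, as Mitchell observes of most GAs, there is no phenotype here at all: the genotype, a bare bit string, is evaluated directly.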
Architects’ descriptions of genetic algorithms vary little from what Mitchell describes, although some of their terminology offers a clearer description of how genetic algorithms often function eugenically. John Frazer was arguably the first to use adaptive learning processes to find digital design solutions, as well as the first in 1968 to digitally print a coded two-dimensional architectural rendering of a roof structural design; his three-dimensional sculptural model had to be made by hand (Figure 3.2). He “evolved” column designs beginning in 1973 using Donald Michie’s OXO method, but after discovering Holland’s genetic algorithms he began using these techniques in the late 1980s. His important publication An Evolutionary Architecture from 1995 describes the technique of GAs, as well as a technique for evolving “biomorphs” put forward by Dawkins in The Blind Watchmaker. The same year, Foreign Office Architects (FOA, consisting of Farshid Moussavi and Alejandro Zaera-Polo) designed the building that won the famous competition for the Yokohama Port Terminal in Japan. Completed over the next seven years, it likely became the first constructed building to integrate features designed using genetic algorithms (Figure 3.3). Possibly for this reason, the Museum of Modern Art in New York acquired documentation of the project for its permanent collection.
FOA’s use of genetic algorithms as part of their design process is strongly hinted at by the title of the 2003 exhibition featuring their work at the Institute of Contemporary Art (ICA) in London, Foreign Office Architects: Breeding Architecture. This exhibition was accompanied by a book, Phylogenesis: FOA’s Ark, that further reinforces the predominant neo-Darwinian evolutionary theme. In their opening essay to the book, Moussavi and Zaera-Polo describe their practice as a “phylogenetic process in which seeds proliferate in time across different environments, generating differentiated but consistent organisms.” They create a classification scheme for analyzing the “families” and “species” of buildings created by their practice’s “genetic potentials,” also described as “a DNA of our practice.” From this classification system, they construct a phylogenetic tree (Plate 3) that resembles Darwin’s famous evolutionary tree sketch from 1857, of which a more formal version appeared as the sole image in On the Origin of Species (1859) (Figure 3.4). They describe how stylistic and functional aspects “compete” against one another to result in “improved” designs: “This is not a simple bottom-up generation; it also requires a certain consistency that operates top-down from a practice’s genetic potentials. Just as with horses and wines, there is a process in which successful traits are selected through experimentation and evolved by registering the results.” “Top-down” intervention in breeding to select particular traits for design improvements is otherwise known as eugenics, despite the fact that some restrict usage of this term to refer only to humans, not to plants and animals.
Just a few months after the Breeding Architecture exhibition closed at the ICA, Weinstock published his first description of genetic algorithms for use in “morphogenetic” architectural design. In his 2004 article “Morphogenesis and the Mathematics of Emergence,” he described Holland’s technique of designing adaptive processes in artificial systems using genetic algorithms. “Genetic algorithms initiate and maintain a population of computational individuals, each of which has a genotype and a phenotype,” he explains, showing already a difference from Mitchell’s description in 1996 that mentioned only the beginnings of experimentation with having phenotypes. Through simulated sexual reproduction and crossover plus mutation, “varied offspring are generated until they fill the population. All parents are discarded, and the process is iterated for as many generations as are required to produce a population that has among it a range of suitable individuals to satisfy the fitness criteria.” In a 2010 publication he added to this description, retroactively imposing neo-Darwinian principles into Darwin’s own mind. Weinstock mistakenly asserts, “In Darwin’s view variations are random, small modifications or changes in the organism that occur naturally in reproduction through the generations. 
Random variation produces the raw material of variant forms, and natural selection acts as the force that chooses the forms that survive.” He elaborates, again referring to neo-Darwinism (instead of Darwin’s own Lamarckian-influenced views): “Changes arise in the genome by mutation, often as ‘copy errors’ during transcription, when the sequence may be shuffled or some modules repeated by mutation.” Finally, he mentions implementation of “the kill strategy,” which decides “how many if any of the parent individuals survive into the next generation, and how many individuals are bred from.” Weinstock’s explicit mention of discarding and killing parent individuals is unique; usually descriptions of GAs find ways around so clearly describing this integral yet metaphorically and historically unsettling part of the process.
Consider these two examples. The first is an online tutorial website about genetic algorithms, created in 1998 and maintained by computer scientist Marek Obitko, that has been referenced by graduate students in David Benjamin’s studios at Columbia University’s Graduate School of Architecture, Planning, and Preservation. While Obitko repeats Mitchell’s description of the biological background almost verbatim without citing her, nowhere does he mention killing. He simply emphasizes selection of the fittest using the principle of “elitism,” a term with clear eugenic resonance. Similarly, Keith Besserud and Josh Ingram (previously Joshua Cotten), of the BlackBox Studio at Skidmore, Owings & Merrill (SOM), presented a paper titled “Architectural Genomics” at the Association for Computer Aided Design in Architecture (ACADIA) conference in 2008. In their description of a genetic algorithm’s selection step, they simply state, “Test the fitness of the designs that are generated from each genome” against the established “fitness function” parameters. Then, “identify the top performers; these will become the selection pool for the next generation.”
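The selection step these sources describe, whether softened as “elitism” or stated baldly as Weinstock’s “kill strategy,” can be sketched in a few lines. The function below is a hypothetical illustration of truncation selection with elitism; the names and the survival fraction are my own assumptions, not drawn from Obitko or from Besserud and Ingram.

```python
def select_survivors(population, fitness, elite_count=2, pool_fraction=0.5):
    """Truncation selection with elitism: rank by fitness, keep the top
    fraction as the breeding pool; everyone else is discarded -- the
    step Weinstock names explicitly as the 'kill strategy'."""
    ranked = sorted(population, key=fitness, reverse=True)
    elites = ranked[:elite_count]       # carried over unchanged ("elitism")
    pool_size = max(elite_count, int(len(ranked) * pool_fraction))
    breeding_pool = ranked[:pool_size]  # the "top performers"
    return elites, breeding_pool

# toy bit-string population scored by its count of 1-bits
pop = [[1, 1, 1, 0], [0, 0, 0, 0], [1, 0, 1, 1],
       [0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
elites, pool = select_survivors(pop, fitness=sum)
```

Everything outside `breeding_pool` simply vanishes from the next generation, which is the point made above: unsuccessful “genes” are omitted rather than preserved.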
Besserud and Ingram’s talk focused on a hypothetical architectural example that used the algorithm to find “the optimal shape” for a “300-meter tower (subject to a handful of geometric constraints) in order to maximize incident solar radiation on the skin of the building” (Figure 3.5): “The working assumption was that this form would best suit the deployment of photovoltaic panels to collect solar radiation.” In general, features that architects want to “optimize” include “construction cost, structural efficiency, carbon footprint, daylighting quality, acoustic quality, programmatic compliance, view quality, etc. Basically any parameter that exerts an influence on the execution of the design intent is eligible to become the metric of the fitness function,” they write, so long as it is numerically quantifiable and automatable into the program. When two criteria for optimization conflict with one another in a multi-objective optimization problem, one way they resolve the dilemma is by using “penalty functions” whereby particular fitness scores are marked down by how poorly they meet the other criteria. They also mention the pragmatic aspect of computation time for consideration of how one establishes a fitness function, for “the speed of the fitness function is the single most influential factor in the efficiency of the GA,” they state. “It is not uncommon for a GA to have to run tens of thousands of iterations to reach convergence. Even if it takes just a few seconds to complete a simulation and analysis iteration, the total optimization process could take many days to reach convergence.”
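The penalty-function device Besserud and Ingram mention can also be sketched: score the primary objective, then mark the score down in proportion to how badly the candidate violates a competing criterion. Every name, weight, and objective below is hypothetical, chosen only to echo their solar-gain tower example.

```python
def penalized_fitness(design, solar_gain, cost, budget, penalty_weight=2.0):
    """Hypothetical single-number fitness: maximize solar gain, but
    penalize the score in proportion to any budget overrun."""
    overrun = max(0.0, cost(design) - budget)  # violation of second criterion
    return solar_gain(design) - penalty_weight * overrun

# toy designs described by (facade_area, unit_cost); purely illustrative
solar = lambda d: d[0] * 0.8   # assumed: gain proportional to facade area
cost = lambda d: d[0] * d[1]
score = penalized_fitness((100, 1.2), solar, cost, budget=100)
```

Because the GA may evaluate this function tens of thousands of times before convergence, its speed matters as much as its content, which is exactly the pragmatic constraint the authors flag.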
As the images of their hypothetical example reveal, for architectural GAs a morphological phenotype is required for visual evaluation by the designer. Additionally, as will become more apparent in the following section on evo-devo, environmental features are also important to integrate into the algorithmic process as a context within which building parameters can be optimized through fitness assessment. This is so not only for aesthetic reasons, which factor heavily into design, but also for functional reasons, as surrounding buildings for a skyscraper may have glass surfaces that reflect light and heat, thereby affecting calculations for the structure being designed. Despite the necessity of having a phenotype and including some features of the environment, architectural GAs are still heavily gene-centric. Visual phenotypes simply exist as digital visualizations of their underlying genetic code, which is designed at the outset to include various morphological and behavioral features. In other words, their structure reflects an underlying genetic determinism. This coding basically follows the “one gene, one trait” framework despite this rarely being the case biologically. Such genetic reductionism permits designers and scientists to minimize the true biological complexity of organisms in order to make “design” seem achievable.
That GAs are more eugenic than neo-Darwinian is revealed by a few other factors. First, in mid-century neo-Darwinism, population genetics valued genetic diversity as a variability that protects a population in the context of environmental change. But in GAs, fitness criteria and the discarding or killing of parent individuals move potential diversity ever closer to the fitness criteria. In other words, “genes” dubbed unsuccessful are omitted rather than preserved. This is akin to “dysgenics,” historically the negative counterpart of eugenics, which strove to remove “bad genes” from the population through policies of reproductive sterilization. Furthermore, perhaps through the influence of repeated use of methods of evolutionary computation, the meaning of natural selection as Darwin intended begins to get lost. GAs carry a teleology, goals toward which their evolution aims that are set by the designer; Darwinian evolution does not. This replacement of Darwinian natural selection with eugenic “rational selection” is apparent in Weinstock’s thinking. He confusingly writes, “Darwin argued that just as humans breed living organisms by unnatural selection, organizing systematic changes in them, so wild organisms themselves are changed by natural selection.” His use of “just as” and “so” equates human rational selection with natural selection. Furthermore, he believes that this process in nature tends toward ever-greater success, the “eu” part of eugenics: “Successful organisms will survive the fierce competition and have greater breeding success and, in turn, their offspring will have greater reproductive success,” he states. Certainly there is no guarantee of this in nature. Computer scientists Eiben and Smith at least separate the two types of selection, even while embracing the eugenic version. “From a historical perspective,” they write, “humans have had two roles in evolution. Just like any other species, humans are the product of, and are subject to, evolution.
But for millennia . . . people have also actively influenced the course of evolution in other species—by choosing which plants or animals should survive or mate.” Without being clear whether they are talking about biology or computers, they declare, “Together with the increased understanding of the genetic mechanisms behind evolution, this brought about the opportunity to become active masters of evolutionary processes that are fully designed and executed by human experimenters ‘from above.’” Note that they do not hide the “top-down” approach of GAs behind the “bottom-up” rhetoric of self-organization, unlike generative architects who imply that “bottom-up” self-organization is at work in methods of evolutionary computation that generate architectural designs.
These two examples clearly reveal the tendency of gene-centric neo-Darwinism toward what Depew calls “quasi-eugenics.” I would go further and simply state that GAs should be renamed eugenic algorithms (EuAs). At least in the computational realm, this would make their eugenic assumptions clear. Yet in the realm of biology, owing to the prevalence of digital tropology, its implications are murkier while tending in the same direction, as engineering synbio demonstrates. As Depew states, digital tropology of living organisms “gives the impression that the evolutionary process is more orderly, more programmatic, more oriented towards adaptive efficiency than the main line of Darwinism has hitherto assumed. This effect is rhetorically enforced by projection of the language of engineering design onto the statistical process of natural selection.” Accordingly, “Dennett even speaks of natural selection as a ‘designer.’” When evolution is designed, it becomes eugenics, since we tend to design for “what seems to us” improvement. I assert this in Eugenic Design, quoting Charles Davenport, the father of American eugenics, from 1930: “When we understand the processes directing contemporary evolution we will be in a position to work actively toward the acceleration of these processes and especially to direct them in what seems to us the best way.”
Within neo-Darwinian molecular biology and in GAs, owing to a number of factors, morphogenesis did not play a major role in the dominant research agenda or model of evolution. In part this was due to gene-centrism, the emphasis on random mutation and sexual recombination as the primary avenues of change, the influence of the central dogma and its one-way flow of information and action that outlawed Lamarckian environmental influences on heredity, and the minimization of the phenotype under followers of Dawkins. Within developmental biology, of course, the study of morphogenesis continued, but evolutionary theorists and computer scientists were not paying close attention to that arena as an influence on their considerations. Some experimental findings that, in hindsight, have been interpreted as potential challenges to the central dogma were at the time of their discovery interpreted and accommodated within the dogma’s framework. For example, in the late 1950s, Frenchmen François Jacob and Jacques Monod discovered that the bacterium E. coli switched its production of certain proteins on or off depending on whether a food source was present in the environment. This could be seen as environment-triggered gene action with information flow moving in the wrong direction, from the environment to DNA rather than the other way around. Jacob and Monod, however, theorized the existence of “regulatory genes”—genes that function as a switch to turn on or off another gene that produces a protein. The regulatory gene is “off” so long as a “repressor protein” is bound to it, but in the face of particular environments, that repressor protein may release itself from the regulatory gene, in effect turning the gene “on,” which then triggers the protein-producing gene.
Historian of science Evelyn Fox Keller writes that Jacob and Monod’s theory maintained the central dogma in the face of this challenge, by conceiving of the genome still as made up only of “genes” that matter—some of which happened to be “structural (i.e., responsible for the production of a protein that performs a structural role in the cell), while others did the work of regulating the structural genes. In this way, the Central Dogma held, with genetic information still located in gene sequences, and the study of genetics still the study of genes.”
Other evidence contrary to the dogma accrued as mounting challenges. For one, Barbara McClintock had discovered “jumping genes,” referred to now as transposable DNA—DNA sequences that move “from one area of the chromosome to another and could even reposition themselves on another chromosome.” Although she discovered this in the late 1940s, she was so criticized that she stopped work on the topic in the early 1950s and was validated only twenty years later, when other scientists verified her research. She received a Nobel Prize for it in 1983. According to Depew and Weber, “The transposition of genes from one site on a chromosome to another is possible because specific enzymes can recognize transposons, can cut or cleave the DNA at an appropriate spot, and then can reinsert the gene(s) that are attached to the transposon at another site on the same or a different chromosome.” In other words, genes do not have only one locus, as Mitchell stated: “When this occurs, changes are observed in the phenotype, even though there is no change in the gene itself, and no substitution of an alternative allele.” Around the time that McClintock’s work was beginning to be recognized, Howard Temin discovered that viruses can penetrate organisms’ genomes through the process of reverse transcription. By using an enzyme encoded by their own RNA, they can reverse-transcribe their RNA into “complementary DNA” (cDNA) that the host organism then integrates into its own genome. This discovery forced Francis Crick, in a rebuttal published in Nature (1970), to “clarify” that the central dogma “has been misunderstood” in order to integrate reverse transcription into the dogma.
Results of these experiments and others, along with the development of technologies that could sequence DNA in the 1980s and 1990s when the Human Genome Project began, led to new knowledge of gene sequences and gene actions in many different organisms. Some of these proved to be instrumental in developmental morphogenesis. Furthermore, when the results of the HGP were announced in 2003 revealing that humans only had around thirty thousand genes (remember, only 1.2 percent was decoded) instead of the eighty thousand predicted by Francis Collins, director of the HGP, scientists had to quickly rethink many of their long-standing assumptions about the relationship between biological complexity and genetic complexity. (The estimated number is now around nineteen thousand.) Together, these developments transformed evolutionary theory into a new synthesis that came to be known as evo-devo in 2000, although it took until 2005 and the publication of Sean Carroll’s Endless Forms Most Beautiful: The New Science of Evo Devo to become popular knowledge. Thus, when John Frazer wrote in 1995 that “there is so far no general developed science of morphology, although the generation of form is fundamental to the creation of natural and all designed artefacts. Science is still searching for a theory of explanation, architecture for a theory of generation,” he was correct. However, he would soon have access to a very solid theory for biological form generation that both scientists and architects could begin to explore.
Because the major discoveries that led to the theory of evo-devo were genetic rather than epigenetic and came from the study of higher organisms (fruit flies, then mice, frogs, worms, insects, cows, and humans), evo-devo is considered by many scientists to fit within the neo-Darwinian framework as a revised synthesis. In the 1980s, when the first full decoding of the genomes (“genes” only) of many organisms allowed their comparison with one another, scientists were shocked to find major similarities in genetic sequences across very evolutionarily distant organisms. They named this shared sequence the “homeobox”—the genes containing it are known as Hox genes—and named the protein region it encodes the “homeodomain.” Incredibly, these similarities occur in the genes that scientists knew contributed to body plan organization and organismal development. Virtually overnight, considering how many decades had passed, morphogenesis reentered the stage of evolutionary drama, and earlier embryological evidence, such as that which had led Haeckel to his misguided theory of recapitulation, could be seen in a new light.
Centuries of observation of living organisms had revealed a number of physical similarities across phyla. Many species have homologous parts, basically the “same structure . . . modified in different ways”—for example, the segmented bone structure of human arms, mice forelimbs, bat wings, and dolphin flippers. Other similarities are more general; for example, both insects and vertebrates have modular parts repeated in sequence, whether the segmented sections of an insect like an ant or the repeating vertebrae down the spine of a mammal. Different types of organisms also share symmetry and broken symmetry, with left and right sides or radial segments mirroring each other, say, but front and back sides showing differences. Some of these broken symmetries occur along axes of polarity (much like the axes of the three dimensions); for animals that walk on four legs, these run from head to tail, top to bottom, and from center out to appendages (imagine left to right, like a splayed-out animal). The discovery of genetic similarities suggested that the visual correlations of animal morphology might be traceable to a common root related to morphogenesis. This idea radically departed from previous notions of gradual genetic change over a very long time via random mutation and sexual recombination. Such notions had led scientists to assume that greater biological complexity required the existence of many more genes, as Collins had predicted for the human genome. As Carroll states, “No biologist had even the foggiest notion that such similarities could exist between genes of such different animals.” The discovery of the homeobox made sense of the otherwise shockingly small numbers of protein-coding genes that genome-decoding projects had revealed different species possess, along with their similarities.
Scientists therefore implemented visualization strategies in order to literally see the processes of gene activation across different cell lines during embryonic development, beginning with the fruit fly but also in other species for comparison. To determine which early cells in embryo formation became later cell lines associated with different parts of the body, they inserted colored dye into early cells and then observed the location of daughter cells carrying that dye in later development. To follow the activation of the Hox genes and the homeodomain—also referred to as the “tool-kit proteins,” since the homeobox was viewed as a tool kit shared across species—they used techniques of fluorescent tagging and fluorescent microscopy to create images of developing embryos that reveal which tool-kit proteins are active, when, and where (Plate 4). These methods produced amazing discoveries, for they revealed that the order in which Hox genes appear in the homeobox sequence is the order in which they are expressed in biological development across most species: “This meant that the depth of similarity between different animals extended not just to the sequence of the genes, but to their organization in clusters and how they were used in embryos. . . . The implications were stunning. Disparate animals were built using not just the same kinds of tools, but indeed, the very same genes!”
Furthermore, like the lac operon model whereby a protein binds to or releases from a regulatory gene that then switches on or off another gene, tool-kit proteins appeared in a sequence that “prefigured” subsequent organization and production of different parts of an organism in development. In other words, Hox genes function as regulatory genes for other gene action. Philip Ball describes them as “mere routers, like train-line signals, the effects of which depend on precisely when and where in the developmental process they get switched on.” All of this enabled scientists to create both “fate maps” of development for particular organisms and an overall general theory of morphogenetic development (Figure 3.6). And similar to how Turing had hypothesized decades earlier that secreted morphogens might be responsible for embryonic symmetry breaking, scientists found that “organizer” cells and “zones of polarizing activity” were responsible for developments at particular locations and stages. For example, the tool-kit protein sonic hedgehog (Shh, based on the humorous naming of a gene) can function as a secreted morphogen that diffuses and triggers gene action at a distance from its source. Further experimentation over the last decade aimed at identifying morphogens has revealed a number of proteins that may function in morphogenesis as Turing hypothesized.
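The logic of a secreted morphogen acting at a distance can be sketched numerically. The toy below is illustrative only (the cell count, rates, and threshold are invented, not taken from any cited study): a morphogen is secreted at one end of a row of cells, diffuses to neighbors, and decays everywhere, producing a falling gradient; a hypothetical target gene switches on wherever the concentration clears a threshold.

```python
import numpy as np

def morphogen_gradient(n_cells=50, d=0.2, decay=0.05, source=1.0, steps=5000):
    """Concentration of a morphogen secreted at cell 0, diffusing to
    nearest neighbors and decaying everywhere, after `steps` updates."""
    c = np.zeros(n_cells)
    for _ in range(steps):
        c[0] += source                  # constant secretion at the source cell
        flux = np.zeros(n_cells)
        flux[1:] += c[:-1] - c[1:]      # exchange with left neighbor
        flux[:-1] += c[1:] - c[:-1]     # exchange with right neighbor
        c += d * flux - decay * c
    return c

def gene_active(c, threshold=0.5):
    """A target gene is expressed wherever the morphogen exceeds threshold."""
    return c > threshold
```

The gradient falls off roughly exponentially with distance from the source, so one continuously varying signal still yields a sharp on/off spatial boundary of gene expression, which is one way a single diffusing protein can position a structure.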
The resulting major change in evolutionary theory shifted the presumed origin of biological complexity away from assumptions of gene mutation and number and onto the role of the homeobox and gene regulation. “The development of form depends upon the turning on and off of genes at different times and places in the course of development,” and major species’ differences in complexity arise from changes in the location and timing of regulatory gene activation and changes in which genes are subsequently switched on or off. This is especially so for “those genes that affect the number, shape, or size of a structure.” “There are many ways to change how genes are used,” Carroll writes, and “this has created tremendous variety in body designs and the patterning of individual structures.” This revelation has been elating. “The ability to see stripes, spots, bands, lines, and other patterns of tool kit gene expression that precisely prefigured the organization of embryos into segments, organs, and other body parts provided many ‘Eureka!’ moments when the role of a gene in a long studied process became exquisitely clear,” Carroll recounts. “Stripes that foreshadowed segments, patches that revealed powerful zones of organizing activity, and other patterns that marked positions of bones, joints, muscles, organs, limbs, etc.—all of these connected invisible genes to the making of visible forms.”
Two years before Carroll’s book was published, two computer scientists turned to evo-devo for a new model of evolutionary computation. In 2003, Kumar and Bentley published “Biologically Inspired Evolutionary Development,” a longer version of which begins their coedited anthology of the same year, On Growth, Form, and Computers, named in honor of D’Arcy Thompson’s classic. Evo-devo made Thompson’s work relevant in a new way and to a new audience, for while he did not anticipate homeobox genes or morphogens, he had argued that the same physical and mathematical forces at work in individual development affected evolution overall. Kumar and Bentley proposed a new field named “computational development” and created an approach called “evolutionary developmental systems” (EDS). “Development is controlled by our DNA,” they write, reflecting knowledge of homeobox genes. “In response to proteins, genes will be expressed or repressed, resulting in the production of more (or fewer) proteins.” They acknowledge that some of these proteins may be present as “maternal factors” in the cytoplasm of the egg that is fertilized, but that others are produced by DNA in the developing embryo: “The chain-reaction of activation and suppression both within the cell and within other nearby cells through signaling proteins and cell receptors, causes the complex processes of cellular differentiation, pattern formation, morphogenesis, and growth.” In this way, gene regulation triggered by protein signals affects other gene action.
Kumar and Bentley acknowledge some deficiencies in their model in comparison with biological knowledge. For example, they write, “Currently, the EDS’s underlying genetic model assumes a ‘one gene, one protein’ simplification rule (despite biology’s ability to construct multiple proteins); this aids in the analysis of resulting genetic regulatory networks. To this end, the activation of a single gene in the EDS results in the transcription of a single protein.” Other oversimplifications are not acknowledged, however, such as their statement that “the only function of genes is to specify proteins”; in fact, some genes do not code for proteins but function as regulatory elements. For example, Hox genes are referred to as regulatory or transcription factor genes, although some drop the word “gene” and refer to “cis-regulatory elements,” leaving the word “gene” for functional protein-producing regions. This problem of terminology and understanding has compounded since 2003; scientists even more recently are acknowledging that the definition of a gene is very much in question. In 2008 a New York Times story by Carl Zimmer, “Now: The Rest of the Genome,” opens by narrating that bioinformatician Sonja Prohaska, of the University of Leipzig, tried not to say the word “gene” for a day owing to the need for scientists working in this area to completely rethink its meaning, based on “too many exceptions to the conventional rules for genes.”
With reference to the biological accuracy of the EDS model, however, whether variations from or oversimplifications of biology matter depends on what the model is being used for and by whom. If it is being used by biologists in conjunction with computer scientists to complete experiments in silico, then biologists can ascertain whether the model suits their needs and uses. If, like GAs, EDS is another general problem-solving tool that opens up new modes of solution—say, ways to evaluate and integrate functional “phenotypic” features in the short or long term in the initial evolution of a solution—then it will likely be adopted as a preferable design strategy. For the purposes of this chapter, however, it matters only that generative architects understand that, like GAs, EDS does not actually mirror biological process, no matter how complicated it is. And compared to GAs, EDS is complicated, though nowhere near as complicated as actual cellular, much less organismal, processes.
The basic design of EDS utilizes three main components: proteins, genes, and cells. Cells offer an isospatial element, allowing proteins to diffuse between neighboring cells in virtual 3-D space (one cell is surrounded by twelve others). Different proteins are generated at certain rates, and decay and diffuse at certain rates, and they are also tagged with levels of strength of interaction and inhibition. This design is roughly analogous to diffusing morphogens or to processes of gene regulation associated with transcription factors, such as in the lac operon. Each cell has two genomes, the first of which holds all the information about each protein, and the second of which “describes the architecture of the genome to be used for development.” “It describes which proteins are to play a part in the regulation of different genes” and is the primary genome “employed by each cell for development.” Each gene in the genome has two parts, a cis-regulatory element that precedes and regulates the gene that follows, and proteins that bind to and release from the cis-regulatory element to trigger gene action. Each cell has a cell membrane that functions as a sensor “in the form of surface receptors able to detect the presence of certain molecules within the environment.” “Cells resemble multitasking agents, able to carry out a range of behaviours. For example, cells are able to multiply, differentiate, and die.” Cells are tracked as “current” or “new,” with a heavy infusion of proteins provided to new cells. 
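A loose sketch can make this architecture concrete. The following is not Kumar and Bentley’s code: the class names are invented, the “one gene, one protein” rule is kept, and a 1-D row of cells stands in for the EDS’s isospatial 3-D neighborhood of twelve. Each gene pairs a cis-regulatory site with the protein it transcribes, and proteins decay and diffuse between neighboring cells.

```python
from dataclasses import dataclass, field

@dataclass
class Gene:
    """One gene: a cis-regulatory site plus the protein it transcribes."""
    activator: int          # id of the protein that binds the cis-site
    threshold: float        # concentration needed to switch the gene on
    product: int            # id of the protein transcribed when the gene is on

@dataclass
class Cell:
    genome: list
    proteins: dict = field(default_factory=dict)  # protein id -> concentration

    def step(self, production=0.2, decay=0.1):
        levels = dict(self.proteins)    # read concentrations before updating
        for g in self.genome:
            if levels.get(g.activator, 0.0) >= g.threshold:
                self.proteins[g.product] = self.proteins.get(g.product, 0.0) + production
        for p in list(self.proteins):   # all proteins decay each step
            self.proteins[p] *= (1.0 - decay)

def diffuse(cells, rate=0.05):
    """Nearest-neighbor protein exchange along a 1-D row of cells."""
    for a, b in zip(cells, cells[1:]):
        for p in set(a.proteins) | set(b.proteins):
            ca, cb = a.proteins.get(p, 0.0), b.proteins.get(p, 0.0)
            flux = rate * (ca - cb)
            a.proteins[p] = ca - flux
            b.proteins[p] = cb + flux
```

Seeding one cell with a “maternal factor” protein and running alternating `step` and `diffuse` calls produces the chain reaction the authors describe: the factor activates a gene whose product activates a second gene, and the second product appears only near the seeded cell.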
Around all these “developmental” aspects of their program is wrapped an “evolutionary” GA that “represents the driving force of the system.” It provides “genotypes for development,” tasks or functions against which genotypes are measured for success or failure, and a way to measure the fitness also of “individuals.” EDS is only one approach to computational development, as is shown by the essays in Kumar and Bentley’s anthology On Growth, Form, and Computers, which combines writings by development and evolutionary biologists alongside those by computer scientists.
The usefulness of EDS to generative architecture is not readily apparent. In 2010, when Weinstock described his own approach to integrating a developmental factor into GAs as a means to add an aspect of evo-devo to his basic neo-Darwinian method in computational architectural morphogenesis, he was possibly not aware of Kumar and Bentley’s work or else could not see its relevance to architecture. “The use of evolutionary algorithms has been quite limited in architectural design, and algorithms that combine both growth (embryological developments) and evolution (operating on the genome) over multiple generations have not yet been successfully produced,” he writes. Weinstock’s method focuses on incorporating a homeobox into a GA, where the homeobox genes act “on the axes and subdivisions of the ‘bodyplan’” in a mode very similar to the general theory of morphogenesis mapped out by Sean Carroll in Endless Forms Most Beautiful (Figure 3.6). “The earlier that ‘mutation’ or changes to the regulatory set are applied in the growth sequence,” Weinstock explains, “the greater the effect is on the completed or adult form. Random mutation applied to the homeobox produces changes in the number, size, and shape of each of the subdivisions of the ‘bodyplan.’” By altering the amount of mutation “in different segments of individual form” and “by constraining the differentiation of axial growth across the population,” he writes, “very significant differences in populations and individuals are produced.” He induces “environmental pressures” onto populations through applying, for example, “constraints on the total amount of surface ‘material’ available for the whole generation. The interaction of environmental constraints and population strategies are also amplified or inhibited by the kill strategy.”
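Weinstock does not publish his algorithm, but the effect he cites—earlier regulatory changes ripple further through the adult form—can be illustrated with a deliberately crude stand-in. In this invented toy, each “homeobox gene” in sequence claims a fraction of the remaining body axis as a segment, so a mutation early in the sequence reshapes every downstream segment, while a late mutation touches only the tail of the bodyplan.

```python
def develop(homeobox, axis_length=100.0):
    """Each regulatory gene claims a fraction of the remaining body axis
    as one segment; what is left is passed to the next gene in sequence."""
    segments, remaining = [], axis_length
    for fraction in homeobox:
        segments.append(remaining * fraction)
        remaining -= segments[-1]
    segments.append(remaining)          # terminal segment
    return segments

def mutation_effect(homeobox, position, new_fraction):
    """Total change in the adult form when one regulatory gene is perturbed."""
    mutant = list(homeobox)
    mutant[position] = new_fraction
    a, b = develop(homeobox), develop(mutant)
    return sum(abs(x - y) for x, y in zip(a, b))
```

With a uniform five-gene sequence, perturbing the first gene displaces far more total segment length than the same perturbation applied to the last gene, which is the qualitative behavior Weinstock attributes to mutation timing.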
Apart from Weinstock, Menges and Sprecher are two of the very few architects who publish references to evo-devo when describing their evolutionary computational approaches. Menges finds that “the underlying principles of morphogenesis present relevant concepts for the development of generative design.” This includes both the “ontogenetic aspects” and the phylogenetic ones, which are related because of the long-term conservation of the homeobox across species. Together, development plus evolution “provide a conceptual framework for an understanding of computational design as variable processes of algorithmic development and formation, whereby adaptation is driven by the interaction with internal and external influences.” Menges explicitly mentions the role of the external environment as a factor in architectural natural selection, including such environmental qualities as gravity and load, “climatic factors like solar radiation, wind, and natural light,” thermal loading, cross-ventilation, amount of outdoor covered space, and connectivity between spaces, considered in relation to “overall build volume and floor area.”
This suits two student projects he discusses, which are case studies exploring the role of evolutionary computation in architecture at different scales. The first addresses the design of “spatial envelope morphologies” for a building in which form and energy performance due to climate and site are linked, and the second uses evolutionary computation to design “urban block morphologies” to lower overall energy use as considered for the interaction between a number of structural units on the block, not just one building. Processes involved the creation of different “evolutionary operators” in the algorithm, including: a “ranking operator” to determine “an individual’s overall fitness”; a “selection operator” to determine the “preferred individuals for reproduction and creation of offspring for the next generation”; an “intermediate recombination operator” that “allows offspring variables”; and an “embryology operator.” “The embryology operator initiates the growth of individuals through a series of subdivisions and the assignment of characteristics to the resulting volumes based on five sets of genes. The embryology operator developed for this project requires a special chromosome order of the gene sequence controlling the spatial subdivision”; it functions similarly to the homeobox.
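The student projects’ operators are documented only at the level quoted, but their roles compose into a standard generational loop, which can be sketched with an invented toy (none of this is the studio’s code, and the “fitness” here is a placeholder rather than an energy-performance measure): an embryology operator grows an individual by subdividing a unit volume according to its gene sequence, a ranking operator sorts the population, a selection operator keeps the top half, and an intermediate recombination operator breeds offspring whose gene values lie between their parents’.

```python
import random

def embryology(genes):
    """Grow an individual: successive subdivision of a unit volume,
    with each gene choosing a two- or three-way split."""
    volumes = [1.0]
    for g in genes:
        splits = 2 if g > 0.5 else 3
        volumes = [v / splits for v in volumes for _ in range(splits)]
    return volumes

def fitness(genes, target=12):
    """Toy objective: prefer grown forms with `target` volumes."""
    return -abs(len(embryology(genes)) - target)

def recombine(a, b, rng):
    """Intermediate recombination: offspring genes lie between the parents'."""
    return [g1 + rng.random() * (g2 - g1) for g1, g2 in zip(a, b)]

def evolve(pop_size=20, genome_len=3, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # ranking operator
        parents = pop[: pop_size // 2]               # selection operator
        pop = parents + [recombine(rng.choice(parents), rng.choice(parents), rng)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)
```

The point of the sketch is the division of labor: development (the embryology operator) is the only place genotype becomes form, and evolution only ever evaluates and recombines the grown results.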
As Weinstock’s and Menges’s methods both demonstrate, evo-devo is adapted for use in architectural evolutionary computation as a means of providing an order of development and greater structural variety only during the design phase of creating a solution. The temporal aspects of organismal development—which, as we shall see shortly, depend very much on environmental inputs—are constricted in generative architecture to an in silico pre-finalization realm. Actual buildings are dissimilar from biological organisms in that their “development” in material form is simply a process of construction of a finalized design—the “adult” form selected by the architect as the design. The building itself does not undergo material morphogenesis or development or growth, unless you count a later addition or renovation as “morphogenesis,” which would be a trite analogy. Some parts in some buildings can move to alter the building’s shape within pre-set ranges of possibilities, or sensors can affect the behavior of different functional systems. But there is no actual phase remotely analogous in the architectural realm to what occurs in the development of biological organisms. This drives home the point that even the use of a more current evolutionary model—that of evo-devo—on top of earlier neo-Darwinian views brings architecture no closer to actual biology. However, because some architects claim to want to grow living buildings—not just in silico but in actuality, using cells as building material or organisms as buildings—understanding the state of current biological knowledge of cellular processes, organismal development, and evolutionary theory is important for evaluating the likelihood of their visions being realized.
Epigenetics and Evolutionary Theory
This last section briefly introduces many exciting discoveries from microbiology and epigenetics. Together these have prompted serious rethinking of some of the fundamental assumptions of Darwinian and neo-Darwinian evolutionary theory regardless of the version of “synthesis.” Darwin’s theory was based on observation of higher eukaryotic organisms—plants and animals—and did not factor in microbiological processes of prokaryotes, including archaea and bacteria. A comprehensive theory of evolution, however, should account for both so that all living organisms are included. This means that evolutionary theory needs to accommodate the capacity of microbial organisms to swap genes with each other through processes of horizontal gene transfer (HGT, also called lateral gene transfer) (Plate 5). While this 2005 diagram from Cold Spring Harbor Laboratory shows the reconfiguration of the microbial phylogenetic tree into a net, scientists are debating the mounting evidence that bacteria also swap genes with plants and animals, and not just through viral vectors. The most obvious example stems from Lynn Margulis and Dorion Sagan’s now-accepted theory of endosymbiosis, which posits that eukaryotic cells arose from the engulfing of one prokaryotic cell by another. This resulted in eukaryotic cells that in animals have mitochondria, and in plants, chloroplasts. DNA sequencing has revealed that in fact the DNA inside mitochondria and chloroplasts is bacterial DNA, and in eukaryotic cell functioning, mitochondrial DNA interacts with the DNA in the nucleus. One recent study has found bacterial genomes integrated into human somatic cell genomes in certain tissues, which reveals not only that humans are “multispecies” or “polygenomic” beings—both genetically and in the fact that bacteria constitute 90 percent of the cells in our bodies—but also that, despite the widely accepted generalization, not every cell in the body carries exactly the same genome.
These types of findings from microbiology contributed to articles published in 2009 on the 150th anniversary of On the Origin of Species that proclaimed “Darwin Was Wrong” and argued that the idea of the “Tree of Life” demands serious reconfiguration.
While knowledge of microbial gene-swapping processes is causing reconfigurations of the phylogenetic tree, rapid increases in knowledge of environmentally responsive gene regulation via epigenetic processes are transforming our understanding of both development and heredity. The word “epigenetics” was coined by Conrad Waddington in 1942 to mean “the study of processes by which the genotype gives rise to the phenotype.” By this definition, morphogenesis and biological development, and what we now understand as significant aspects of the homeobox and evo-devo, clearly fit within Waddington’s epigenetics. Yet Michel Morange and others working in the history and research of epigenetics since Waddington recount that over the subsequent decades, interpretations of both genetics and epigenetics have shifted. One key example of this already cited pertains to the discovery by Jacob and Monod of the lac operon model and their choice to describe transcription factors that function as a genetic switch as involving “regulatory genes” and being part of a “genetic program.” Because the lac operon is triggered by the presence of particular food sources in the surrounding environment, it could also have been interpreted as information flowing from the environment through proteins to DNA—in other words, as a challenge to the central dogma and as a clear example of epigenetics. Morange shows that just before Jacob’s discovery of the operon in 1961, Jacob had worked on a topic related to “mobile genetic elements” (MGEs, or in French, episomes). MGEs are McClintock’s “jumping genes” (also referred to as gene transposition), which she felt to be “the main mechanism controlling gene expression through differentiation and development,” and therefore decidedly epigenetic in Waddington’s sense.
Morange argues that Jacob therefore could have interpreted the operon as epigenetic, but instead, chose to only characterize the process as a genetic one, hypothesizing the existence of “regulatory genes,” when before, only “genes” that made “functional” or “structural” proteins for use in other cellular functions were considered “genes.” Morange states that other scientists have interpreted the results of biological experimentation similarly, following Jacob and Monod’s precedent, to the extent that “epigenetics, as defined by Waddington,” has been made “an integral part of genetics.”
Other respected scientists, however, interpret both environmentally triggered gene regulation and gene transposition—which is aided by changes in genome architecture, its structural configuration, and compaction in chromosomes—to be fully epigenetic processes. “Epi-” means “over,” “above,” or “beyond”—hence, beyond the gene. These processes affect gene function and result in a changed cellular and even a changed organismal phenotype, despite the “fact” that the genes in all cells are generally presumed to be the same. The processes are therefore beyond the gene. Yet, since the definition of a “gene” has become very much in question, what is properly “genetic” versus “epigenetic” is perhaps even murkier than it was before. Scientists today often use this definition for epigenetics: “the study of changes in gene function that are mitotically and/or meiotically heritable and that do not entail a change in DNA sequence.”
This definition calls attention to a number of interesting features. First, through its reference to mitosis, which is the process of cell division, it shows that changes in gene function, despite cells having the same DNA sequence, occur in cell differentiation. As stem cells in morphogenesis assume particular identities as skin or liver or muscle tissue cells, these identities are heritable from cell to cell in subsequent cell lines, with the stability of cell identity in the line maintained through epigenetic processes. Second, through its reference to meiosis, which is the process in organisms that sexually reproduce by which “germ” or sex cells are created containing only half the chromosomal material as somatic cells, it shows that changes in gene function are heritable across generations of organisms through sexual reproduction despite there being no change in the DNA of the genome (Plate 6). For example, both flowers in this image are toadflax, Linaria vulgaris, but when Carl Linnaeus discovered the second one, he was sure it was a new species, since the second has five petals and radial symmetry but the first has a different petal formation and bilateral symmetry. Genetic sequencing, however, reveals that both flowers share the same genome. In 1999, scientists learned that the remarkable phenotypic difference is due to a change in methylation on one gene, which they called an epimutation. DNA methylation is an epigenetic process whereby a small methyl group (CH3) attaches to some nucleotide bases of DNA, often cytosine (C, of A, T, C, G). Methylation can be a global pattern across the genome, one that changes during the process of development, and it affects gene regulation, often preventing transcription, by serving as one method of chromatin marking. Chromatin is what makes up chromosomes; in addition to DNA, it includes RNA, proteins, and other molecules, and in eukaryotes, specific proteins known as histones.
Histones help compact chromosomes into tightly wound structures, thereby shaping the architecture of chromosomes, which can be different both in the same cell at different times of its life and in different cell types. These architectural differences affect gene transcription and therefore can and do alter the functioning of cells.
The toadflax example and the second part of the definition of epigenetics should make one pause, even miss a breath, for under classic neo-Darwinism based on Weismannism, the only form of heredity passed from organism to organism resides in the genome. Even with the most recent definition of genome after the close of the Human Genome Project, which can refer to all the DNA on the chromosomes, the existence of two visibly different phenotypes arising from the same genotype does not fit our usual understanding of heredity. Eva Jablonka and Marion Lamb write, “Over two hundred years after Linnaeus’s specimen was collected, the peloric [radially symmetrical regular variant] form of Linaria was still growing in the same region.” This shows that epigenetic processes form a second line of heredity that can be very stable. Clearly, at least this second line of heredity, and maybe even two or three others as Jablonka and Lamb argue, exists in addition to that of the genome and plays a role in evolutionary processes.
Some aspects of epigenetic heredity are clearly sensitive to and affected by the environment, even by a mother’s behavior in diet and habits during pregnancy and early development of her offspring. One pathbreaking study on this topic was conducted by geneticist Emma Whitelaw and her colleagues in 1999, in which they found that a genetically identical strain of mice of the “agouti” type produced differently colored offspring, with the color of an offspring’s coat depending on and following that of the mother despite all the mice having the same genome. The correlation with the mother’s coat color stemmed from the offspring sharing the mother’s methylation pattern, revealing continuity from generation to generation in methylation heredity. Methylation patterns, however, can be affected by stress, chemical exposures, the mother’s diet (which can change the color of the coat of her offspring), or her behavior, for example, a lack of maternal care (e.g., mother rats who do not lick their young offspring). Thus, a number of scientists are calling for recognition of a new form of Lamarckism in addition to changing neo-Darwinian ideals, for it is clear that behaviors and environmental effects can produce heritable epigenetic patterns that affect traits in offspring, with these patterns lasting for as few as four generations but sometimes for many more. For this reason, epigenetic mechanisms are viewed as a relatively short-term form of heredity that is responsive to environmental or behavioral changes.
Although modern epigenetics research began in the mid-1970s and picked up in the 1980s, since the Human Genome Project an explosion of research on epigenetic processes has occurred. Adrian Bird in Nature describes 2006–7 as a watershed year, with over 2,500 articles and even a new journal being devoted to epigenetics. Two major sequencing projects by international consortia followed the HGP to begin to fill in the huge gaps and numerous questions left by its results. The first was the ENCODE project (Encyclopedia of DNA Elements), which ran from 2003 to 2012 and decoded the other 98.8 percent of the human genome. Although scientists are still debating their interpretations, the results have undoubtedly transformed knowledge of the genome’s complexity and only added to the identity crisis of the “gene.” It is now certain that one gene can make many proteins, that genes frequently splice and recombine with other genes to make proteins, and that different cells can use the same gene to make different proteins. Therefore, a “gene” is no longer a stretch of DNA at one location that codes for one protein. Furthermore, what previously was known as “junk DNA” is now known to be pervasively transcribed, producing noncoding RNA (ncRNA) that is heavily involved in genetic regulation at many levels. Noncoding RNA plays a role in mediating between “chromatin marks and gene expression” and between other gene regulatory systems, including leading proteins to places they need to be in order to lay down methylation patterns. “The take-home message would seem to be clear,” Keller writes. “Genetics is not just about genes and what they code for. . . . All of this requires coordination of an order of complexity only now beginning to be appreciated. And it is now believed that the ncRNA transcripts of the remaining 98–99 percent of the genome are central to this process.”
Concurrently with the ENCODE Project, the National Institutes of Health (NIH) in the United States and the International Human Epigenome Consortium began creating databases from deciphering the sequences of “normal” and “abnormal” human epigenomes, focusing on “methylation, histone modifications, chromatin accessibility, and RNA transcripts.” Epigenome projects are unlike the HGP or ENCODE, both of which worked with the so-called human genome—meaning, the sequence of DNA as statistically averaged across those humans sampled, which is presumed to be the same in all cells. Because epigenetic processes are responsible for cellular differentiation into different cell lines and tissues and organs, in order to understand “normal” and “abnormal” epigenetic structures, each tissue type requires multiple samplings and study. The NIH project is therefore targeting “stem cells and primary ex vivo tissues . . . to represent the normal counterparts of tissues and organ systems frequently involved in human disease.”
Yet, all involved are certain that the investment of time and money to accomplish this huge task is worthwhile because of the number of diseases that we are learning are associated with epigenetic differences. As early as the 1980s, cancer researchers noticed that cell genomes in some types of tumors are abnormally methylated. Other epigenetic mechanisms such as the action of prions—proteins that have taken an alternate structural conformation that then convert other proteins to their structure—play a role in Creutzfeldt–Jakob disease and mad cow disease. Researchers are expecting to find numerous overlaps between epigenetic mechanisms and many other diseases. Apart from the pursuit of knowledge about human health, agricultural and biotechnological industries are also seriously delving into epigenetics owing to the connectivity of life and biological processes and the fact that epigenetics offers a second line of heredity, one actively involved with the first. Epigenetic marks also affect processes involved in cloning and genetic engineering, which scientists discovered when inserting a gene resulted in producing an opposite effect from their intent, or when cell lines that were engineered reverted to their former state because the inserted genes were silenced epigenetically.
Given the powerful role that epigenetics plays in development and evolution, as Jablonka and Lamb argue, and its recognition by scientists as “some of the most exciting contemporary biology” and “a revolutionary new science” as Bird stated in 2007, it is not surprising that computer scientists in the field of evolutionary computation are adapting it for new algorithmic approaches. Bird’s article “Perceptions of Epigenetics” inspired Sathish Periyasamy, William Alexander Gray, and Peter Kille to create the Epigenetic Algorithm (EGA) the following year. The authors base their interpretation of epigenetics on Robin Holliday’s definition as “the mechanism of spatial and temporal control of gene activity during the development of complex organisms.” Their system attempts to integrate biomolecular “intra-generational adaptive mechanisms . . . to self-organize various activities in biological cells.” They refer to epigenetic processes such as “gene silencing, bookmarking, paramutation, reprogramming, position effect, and a few other mechanisms.” They state that their EGA “aims to bridge the gap between phenotype-to-genotype representations, considers Lamarckian properties, and builds decentralized, highly parallel, self-adapting, coevolving, and agent-based models.” They hope that it will aid in cancer research. Yet, owing to the structuring of their model based on a combination of terminologies and approaches from evolutionary computation, swarm intelligence, epigenetics, and autopoiesis, it is quite difficult to follow. Nonetheless, a few points deserve mention.
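The EGA paper gives little implementation detail at this level, so the following is a generic illustration of the core idea rather than the authors’ algorithm: an epigenetic mask layered over a fixed genome can silence genes in response to the environment, change the phenotype, and be passed to offspring (with partial “reprogramming”), all without any change in the underlying “DNA sequence.” Every name and rate here is invented.

```python
import random

class Individual:
    """A fixed genome (bitstring) plus a heritable epigenetic mask that
    can silence genes; only unsilenced genes are expressed."""
    def __init__(self, genome, mask=None):
        self.genome = genome
        self.mask = mask if mask is not None else [False] * len(genome)

    def phenotype(self):
        return [g and not m for g, m in zip(self.genome, self.mask)]

    def respond(self, stressful_sites):
        """Environmentally triggered silencing: 'methylate' the listed genes.
        The genome itself is untouched (no change in DNA sequence)."""
        for i in stressful_sites:
            self.mask[i] = True

    def offspring(self, reset_rate=0.25, rng=random):
        """The mask is inherited through 'meiosis', with each silenced site
        having a reset_rate chance of being reprogrammed back to active."""
        child_mask = [m and rng.random() >= reset_rate for m in self.mask]
        return Individual(list(self.genome), child_mask)
```

Because the mask, not the genome, carries the environmentally induced change, selection acting on phenotypes in such a model responds to a second, shorter-term line of heredity—the property Jablonka and Lamb emphasize in the biological case.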
Autopoiesis was coined by biologists Humberto Maturana and Francisco Varela in 1973 to describe the self-maintaining and self-reproducing properties of the cell as the basic unit of life. Some today, including Periyasamy, use it as a synonym for self-organization, though this is quite an inexact translation. Interpreting biology through the lens of autopoiesis, Periyasamy and his colleagues describe biology as being organized into “strata”: “The lowest level entities of the organization are the atoms which form the next higher level entity biomolecules. Biomolecules self-organize via covalent and non-covalent interactions in space and time to form the cells as next higher level entities.” Here, they refer to cells coming into existence through protocells as if this is known, when more accurately, protocells are still hypothesized as an origin of life theory since no scientist has yet created a protocell. “The cells form the unicellular and multicellular organisms which are considered as the unit of selection. Finally the organisms and their interactions with nature form the biosphere.” Like Weinstock, they mistake natural selection in biological evolution as aiming for improvement, likely owing to their immersion in the field of evolutionary computation. They state, “Evolution is an optimization process, where the aim is to improve the ability of a biological system to survive in a dynamically changing and competitive environment.” Although they acknowledge that biological “structures are not optimized,” they assert that “they are on the verge of approaching it.” Along these lines, they state that their algorithm “is one of many . . . that could be developed to optimize the internal organization of autonomous systems.”
Nature without human interference is not an optimization system; their language seems to point to a eugenic ideal type rather than to Darwinian natural selection as we have understood it. However, because some facets of epigenetics have been interpreted as active strategies for environmental adaptation by cells and organisms, it is possible to read their use of “optimization” with regard to their epigenetic algorithm in this light. Microbiologist James Shapiro posits that “natural genetic engineering,” rather than random mutation, should be seen as the dominant twenty-first-century mode by which novelty arises and evolutionary change occurs (Figure 3.7). Shapiro is not just referencing horizontal gene transfer but predominantly bases his idea of “natural genetic engineering” on “epigenetic modifications and rearrangement of genomic subsystems” that result in gene silencing, activation, or alternative uses and functions, often in direct response to the environment. If cells and organisms are actively adapting themselves to their environment, then what is the point, the “goal,” of their adaptive processes? Biologist J. Scott Turner, in his 2012 article in AD, argues that their “goal” is homeostasis, which he interprets as seeking comfort; rephrased, comfort is produced through natural processes of homeostasis. Yet, Periyasamy and his colleagues do not cite Shapiro or Turner for this more nuanced interpretation of “optimization” as goal-directed adaptation by cells and organisms themselves.
An Evolutionary Architecture?
Whether, and how, epigenetic algorithms matter to biologists or generative architects is for them to determine. Only John Frazer references “epigenetic algorithms,” and he did so in 1995, remarkably early considering that epigenetics entered mainstream awareness only in the twenty-first century. In An Evolutionary Architecture, he describes some general features of his model for evolutionary architecture: “The environment has a significant effect on the epigenetic development of the seed. . . . It has been emphasized . . . that DNA does not describe the phenotype, but constitutes instructions that describe the process of building the phenotype.” The materials produced and assembled by this process “are all responsive to the environment as it proceeds, capable of modifying in response to conditions such as the availability of foodstuffs, and so on. . . . The rules are constant, but the outcome varies according to materials or environmental conditions.” Some of the projects that Frazer and his students pursued in this vein were realized as interactive installations in the mid-1990s that took cues from and responded to observers and to qualities of the environment, fitting his biological analogy of epigenetic environmental responsiveness. Although Jones and Furjàn do not discuss epigenetic algorithms per se, in 2006 they proposed the need for them: “An epigenetic approach to design, then, suggests that complex feedback relations with the environment must be front-ended and generative. Code is no longer everything, context matters.” They suggest integrating the “dynamic forces and flows” between the building and its environment in feedback loops, including “flows of matter, air, heat, light, moisture, sound, but also infrastructural flows of energy, information, capital, transportation, and so on.” Their list begins to get at the complexity of the environment, to which we could add social factors, chemical inputs, other species and their needs, and so forth.
Yet in order to do so, if one wants to treat epigenetics as a careful analogy, all these variable conditions must be converted to data and coupled to development through real-time feedback, not merely captured in a one-time statistical summary for the design phase of the process.
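Frazer’s distinction, that the genetic code is a constant set of building instructions whose outcome varies with materials and environment, can be made concrete with a toy developmental model. Everything below is a hypothetical sketch, far simpler than Frazer’s actual systems: the same fixed growth rule is run in two different environments and yields two different forms.

```python
# Toy illustration of Frazer's claim that "the rules are constant, but the
# outcome varies according to materials or environmental conditions": one
# fixed growth rule, run in two environments, yields two different forms.
# Entirely hypothetical; Frazer's actual systems were far richer.

def develop(steps, environment):
    """Grow a column of segments; each segment's width responds to the
    'resource' (light, nutrients) available at its height."""
    form = []
    for height in range(steps):           # the rule and step count are fixed
        resource = environment(height)    # the environment is not
        width = max(1, round(3 * resource))
        form.append(width)
    return form

sunny = lambda h: 1.0                     # uniform resource at every height
shaded = lambda h: 1.0 / (1 + h)          # resource thins out with height

print(develop(6, sunny))                  # same rule, environment A
print(develop(6, shaded))                 # same rule, environment B
```

The “genotype” here (the rule inside `develop`) never changes; only the environmental input does, and the resulting forms differ accordingly.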
Frazer mentions the possibility of moving from analogy to reality, in which case it is not epigenetic algorithms but epigenetic processes themselves that become exceedingly important. “Our longer-term goal lies in trying to incorporate the building process literally into the model, or perhaps the model into the very materials for building, so that the resultant structures are self-constructing. This may be achievable by either molecular engineering,” he imagines, or “by the application of nanotechnology, or perhaps by the genetic engineering of plant forms or living organisms to produce forms appropriate for human habitation as an extended phenotype.” Frazer is not alone in this vision; he follows a tradition of architects from the decades before his writing. “Frei Otto has suggested growing structures,” and “[Rudolph] Doernach and [William] Katavolos imagined organic structures erected from chemical reactions,” he writes. “Alvy Ray Smith envisaged buildings growing from single brick-eggs. Charles Jencks referred to scenes from ‘Barbarella’ showing the emergence of human and vegetable forms,” and “the final issue of the Archigram magazine contained a packet of seeds from David Greene.” Yet, in the short term, he writes, “the prospect of growing buildings seems unlikely, but self-assembly may be achievable.” Since 1995, his voice has been joined by others whose ideas are discussed in the last two chapters of this book.
The foundation laid here, though, as an entrée to this discussion, begins to hint at the scope and complexity of the challenges facing those hoping for this future. This chapter has examined how closely techniques in generative architecture mirror recent advances in biology. In doing so, it has shown that the most useful, or perhaps just the most used, approaches thus far in generative architecture are neo-Darwinian ones structured on outdated, faulty assumptions about biological processes and evolution, at least from today’s vantage. Of all the theoretical periods from D’Arcy Thompson’s lifetime to the present, the neo-Darwinian one was also the most divorced from actual morphogenesis and from knowledge of developmental biology. It is therefore an ironic choice of model for architectural morphogenesis, or perhaps merely a pragmatic one, since neo-Darwinism and neo-Darwinian evolutionary computation are the simplest and most reductive models of all. For those generative architects who do not aim to grow living buildings but are primarily interested in the “instrumentalisation” of architecture, Eiben and Smith recognize the usefulness of evolutionary computation as a tool for the “evolution of things.” “Recent developments in rapid fabrication technologies (3D printing) and ever smaller and more powerful robotic platforms mean that evolutionary computing is now starting to make the next major transition to the automated creation of physical artefacts and ‘smart’ objects,” they write. Clearly, generative architects are already aware of this; it has been a primary motivator for shifting to techniques of generative design in order to enhance compatibility with, and develop new methods of, digital design and fabrication.
Yet, given the lack of congruence between evolutionary algorithms and natural evolution as summarized by Eiben and Smith (Figure 3.1), generative architects at the very least should refrain from rhetorically positioning their approaches as biological and should be explicit about the computational thrust of their methods. Although Kumar and Bentley’s EDS or any version of an epigenetic algorithm may need adapting for architectural purposes, use of these approaches, or even just adoption of their terminology, would make architects appear far more knowledgeable about contemporary theories in science and computer science than they currently are. Of course, this is a poor reason to adopt such language, especially if such adoption is unaccompanied by curiosity about and acquisition of current biological knowledge. However, should architects want to actually collaborate with contemporary scientists, as in the next chapter on Jenny Sabin and Peter Lloyd Jones’s founding of LabStudio, then they must become fluent in current biological terminology and theory.
One major challenge facing contemporary biologists and computer scientists that is directly relevant to generative architecture is the difficulty of describing, quantifying, and integrating “the environment” into models of biological function at the cellular or organismal level. The framework in biology and medicine has moved away from gene-centrism toward gene–environment interactions. If one is to use current computational methods and draw on the big data of genetics for statistical correlations between epigenetic markers and the environmental phenomena affecting bodies, then one needs the capacity to acquire, codify, and search environmental data pertaining to the issue under investigation. Such is the focus of Sara Shostak and Margot Moinester’s article “The Missing Piece of the Puzzle? Measuring the Environment in the Postgenomic Moment” (2015), which compares the new field of exposomics with approaches in sociology and epidemiology examining “neighborhood effects on health.” Exposomics aims to track data on environmental exposures that are known triggers of epigenetic response by focusing on the internal environment of the body: say, molecular markers of toxic encounters correlated with one’s zip code, diet, smoking, stress, et cetera.
Architects who have read Michelle Murphy’s Sick Building Syndrome and the Problem of Uncertainty (2006) and who understand epigenetics might ask themselves about the possible epigenetic effects of their building materials and construction methods on the health of buildings’ occupants. But the point of developing more useful methods of identifying and quantifying “the environment” in architecture is broader than this. If we play along with the idea that a building is an organism, and we are trying either to generate an appropriate design solution in silico before construction or to model a building through its lifetime, then we need much better ways of integrating “the environment” into our models. Besserud and Ingram, in “Architectural Genomics,” describe the need for all parameters in evolutionary computation to be both quantifiable and automatable. Menges mentions the importance of designing a building or a block of buildings in relation to environmental factors. Gravity and load, solar angle, thermal gain, wind speed and direction, and cross-ventilation are just a small fraction of all the possible environmental features one might want to consider and include; more surface the closer one looks. Perhaps Menges and Hensel’s narrow definition of “morpho-ecologies” as simply humans and the buildings they occupy is a strategic oversimplification, for actually considering the broader ecological impact of buildings on the environment, both locally and through the life cycle of building materials, is an enormous big-data task. It is also a task not limited to physical qualities like gravity, force, and heat but extending to chemical, biological, and ecological impacts. Although D’Arcy Thompson avoided chemistry and focused only on physics and mathematics in his theory of biological growth and form, it would be overly reductive and irresponsible for generative architects to continue to do so now.
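Besserud and Ingram’s requirement can be restated concretely: every environmental factor entering an evolutionary design loop must be reduced to a number that a fitness function can compute from the design itself, without human judgment. The sketch below shows the shape of such a function; the design variables, proxy formulas, and weights are hypothetical placeholders, not a real building-performance model.

```python
from dataclasses import dataclass

# Schematic of the requirement that evolutionary-design parameters be
# quantifiable (a number) and automatable (computed from the design with
# no human judgment). All variables, proxies, and weights below are
# hypothetical placeholders, not a real performance model.

@dataclass
class Design:
    window_ratio: float   # glazed fraction of the facade, 0..1
    depth: float          # floor-plate depth in meters

def daylight_score(d: Design) -> float:
    # Toy proxy: more glazing and a shallower plate admit more daylight.
    return d.window_ratio * (1.0 / d.depth)

def heat_penalty(d: Design) -> float:
    # Toy proxy: more glazing also means more unwanted solar gain.
    return d.window_ratio ** 2

def fitness(d: Design) -> float:
    # A weighted sum is the simplest aggregation of competing factors.
    return 1.0 * daylight_score(d) - 0.5 * heat_penalty(d)

candidates = [Design(0.3, 12.0), Design(0.6, 12.0), Design(0.9, 12.0)]
best = max(candidates, key=fitness)
print(best.window_ratio)
```

The hard part, as the discussion above suggests, is not the weighted sum but deciding which environmental factors make the list at all, and at what scale.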