In Fall 2011, the curriculum at the Emergent Technologies and Design Program (EmTech) at the Architectural Association (AA) opened with a three-week-long Boot Camp, which offered a crash course in EmTech’s most basic version of the theory and practice of “emergent architecture.” The specifications were given: Using only sheet material or fabric, design a component-based system that aggregates “through local, regional, and global hierarchies.” Components should be formed using a “simple geometry”; their mode of connectivity to other components should be designed into them using “system-specific assembly logic.” Teams of students had to design and manufacture these components using digital tools—Grasshopper or other “computational associative modeling” software and laser cutters for the sheet material—alternating between physical form-finding and digital experimentation. The “global array” should “demonstrate a clear hierarchical component logic, resulting in a final form with global curvature” that functioned as a “complex spatial configuration.” It could be anchored in only three places to the floor, walls, or ceiling of the EmTech studio and should be self-supporting. It would be evaluated based on its “performative and structural qualities,” as well as on the “emergent spatial and aesthetic” “ambient effects” of the “intervention.”
To my team, which included Spaniard Mara Moral-Correa and Italian Vincenzo Reale, the limitation of flat sheet materials suggested bending and folding to achieve three-dimensional form. This was not the only possible approach, though, as other groups used flat components connected into larger systems employing the principle of tensegrity structures. Whereas three-dimensional form could be achieved by bending a single cut shape, we designed smaller flat parts to bend and fit together to form a three-dimensional base component (Figure 1.1). This organization contributed toward the goal of having a hierarchy of forms, starting with “local” (interpreted as single parts in relation to one another) and moving to “regional” (a component in relation to other components) and then “global” (the final installation, featuring many components in relation to one another). Since the aggregate had to be self-supporting and would be evaluated based on its structural performance and aesthetics, teams tried different materials and forms. Paper was easiest to experiment with but was weak; thin polypropylene sheets were thicker, stronger, flexible in any direction, and translucent (Figure 1.2). We finally selected 1.5-millimeter birch plywood, which owing to its cellular structure can bend only in certain directions; soaking it in water made it much more pliable. Depending on the orientation of the triangular form with rounded corner cutouts in relation to the grain of the wood, and depending also on how large the rounded corner cutouts were, we achieved a wide range of curvatures. The most difficult challenge was the “assembly logic.” The bone-shaped parts with moon-sliver slits in the ends did slot together to form round lantern-like shapes, but the other types of components we experimented with needed either string, tiny nuts and bolts, or both to become an aggregate assembly (Figures 1.3 and 1.4).
Our first critique midway through the studio was dismal. Our rather floppy “pile of plastic” made up of repeating “dog-bone” ball components (system 004 in Figure 1.1), connected to one another with nuts and bolts and string pulled taut, apparently had few redeeming, much less emergent, properties. I personally was not discouraged, though, since my primary goal was trying to understand what counts as an “emergent property” in the first place. Hoping not to cause my teammates chagrin, I politely asked fundamental questions such as “Why do we have to start with components?” and “What distinguishes the local from the regional from the global?” It seemed grandiose that “global” referred to something that would fit inside one corner of the room where we worked, although our class did consist of students from twenty-three countries. Our final critique went better, for we eliminated the structurally unsuccessful plastic, worked with different curvatures of wood based on its material performance capacities, and used our triangular components in different ways through different modes of connection. This made a few important differences, including not only a more stable foundation and growth pattern for the “global array” but also the production of “differentiation” across the system. Differentiation here means that parts are used differently to add variety or distinction, are altered slightly, or are supplemented by new parts introduced into the system. As the light studies and shadow patterns show, our various systems all produced organic-looking aesthetics.
Introduction to Complexism in Generative Architecture
EmTech is and has been the primary academic program focusing specifically on emergence in architectural design. Although faculty at other programs around the world have undoubtedly used generative techniques and expressed interest in complexity or biological systems, no equivalent program has existed as long as EmTech, which was founded in 2001. Furthermore, the publications by its three main founders—Michael Weinstock, Michael Hensel, and Achim Menges—constitute the theoretical core for the application of principles of self-organization and emergence to architecture. This chapter, therefore, explores their writings to elucidate how they envision transforming architecture through these concepts. Although in the early to mid-2000s they published together, after this period Hensel and Menges diverged from Weinstock in their focus and in their joint publication efforts. Weinstock discounts sustainability and has moved toward the macroscale, applying emergence to the study of urban “metabolism” and urban systems. Hensel and Menges, however, have developed the concept of “morpho-ecology,” which applies the rhetoric of sustainability, specifically interpreted, to performance optimization at the scale of a pavilion or building. They use associative computer modeling (parametric tools) to integrate data from the microscale of material structure with other design factors in order to design environmentally responsive structures that offer “heterogeneous” interior spaces, sometimes with different temperature gradients owing to gradated permeable membrane facades. Taken together, Weinstock’s, Hensel’s, and Menges’s modes of applying self-organization and emergence to architecture span different scales and tend toward different ends. This chapter explores which ideas and thinkers from scientific complexity theory they rely on to frame their divergent approaches, and how they propose to integrate these ideas into design.
First, though, scientific definitions of the three most basic terms—complex systems, self-organization, and emergence—raise some interesting issues, for these terms are closely interrelated. Computer scientist Melanie Mitchell, in Complexity: A Guided Tour, defines a nonlinear complex adaptive system as one “in which large networks of components with no central control and simple rules of operation give rise to complex, collective behavior, sophisticated information processing, and adaptation via learning or evolution.” More succinctly, she states that it is “a system that exhibits non-trivial emergent and self-organizing behaviors.” Biologist and physician Scott Camazine and others, in Self-Organization in Biological Systems (2001), define self-organization as “a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system’s components are executed using only local information, without reference to the global pattern.” Self-organization is closely related to the concept of emergence, which is generally understood to mean that “the whole is more than the sum of its parts,” a phrase credited to Aristotle but repeated so many times it has become common knowledge. According to Camazine and colleagues, “Emergent properties are features of a system that arise unexpectedly from interactions among the system’s components. 
An emergent property cannot be understood simply by examining in isolation the properties of a system’s components, but requires a consideration of the interactions among the system’s components.” The title of the 2001 New York Times best seller Emergence: The Connected Lives of Ants, Brains, Cities, and Software, by Steven Johnson, offers a few of the most popular examples of complex systems; add honeybee colonies, the internet, and urban transport systems to the list in his title and the catalog is virtually complete.
A few things about these definitions deserve special attention, beyond the fact that they are interrelated and by definition almost circularly self-constituting. The word “self” in self-constituting is used figuratively, since humans are the ones making these definitions and seeing the “components,” “levels,” and “systems” in the first place. This clarification pertains as well to the “self” in self-organization, discussed below. First, all three require parts or components, and quite a lot of them; just a couple is not enough. The generic terminology allows these things to be living or nonliving so long as they are multiple-to-many. Second, these units make up the “lower level” of a system, so by definition the system is presumed, or called into being, to contain multiple levels: some sort of hierarchy, layering, or nestedness. This is important because emergence or an emergent property arises at the next level up from that of the components, and its occurrence depends upon there being many components that interact with one another according to the same rules. Third, no “central control” tells the components “top-down” what to do; the components use only “local information, without reference to the global pattern” as their inputs. This is referred to as “bottom-up.” This means a component or group of components cannot see itself or themselves as if looking down from on high, cannot see the patterns the group is making, and then use the information from that seeing to shape its own or the group’s actions. (In social and cultural manifestations of complex systems, human beings are frequently considered to be “components.”) Fourth, there are “rules” that the components know and obey. The passive voice is used intentionally here since it is never even noted, much less explained, why there are rules or whence the rules came. Rules simply exist and precede or are called into being by the components, which somehow all know and follow them.
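The “bottom-up” logic just described can be made concrete in a few lines of code. The sketch below is purely illustrative and comes from no architectural curriculum; it implements Conway’s Game of Life, the canonical toy model in which every cell follows one rule using only the states of its eight immediate neighbors, yet stable global patterns arise that no individual cell encodes.

```python
from collections import Counter

def step(live):
    """One generation: each cell's fate depends only on its eight neighbors."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 live neighbors,
    # or with 2 if it was already alive -- purely local information.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker": three cells in a row oscillate between horizontal
# and vertical, a global rhythm no single cell "knows about."
blinker = {(0, 1), (1, 1), (2, 1)}
after_one = step(blinker)    # the bar turns vertical
after_two = step(after_one)  # and returns after two steps
print(sorted(after_one))
print(after_two == blinker)
```

No component here consults the global pattern; the oscillation is visible only to an observer looking at the whole grid, which is exactly the asymmetry the definitions above build in.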
The above points are troubling for three reasons. The first has to do with agency and the location or identity of the “self” in self-organization. It is unclear whether a component is considered to be a self, or whether the self is being organized or emerging from the organization. In other words, are lots of selves actively undertaking or passively undergoing the process of self-organization? Evelyn Fox Keller believes the self emerges from the organization at the next higher level, but still, she quotes cybernetician W. Ross Ashby stating, “Since no system can correctly be said to be self-organizing, and since use of the phrase ‘self-organizing’ tends to perpetuate a fundamentally confused and inconsistent way of looking at the subject, the phrase is probably better allowed to die out.” If components merely follow rules of interaction, and humans are the components of cultural or social systems, then humans are denied agency, intelligence, perspective, and free will; in other, nonhuman systems some or all of this holds as well. Perhaps, instead of the components then, it is the system that is self-organizing. But the system is pre-assumed and pre-existent in the above definitions. When I asked scientists, “What is a system?” they replied, “Whatever you say it is.” When I asked, “Where are a system’s boundaries?” they responded, “Wherever you draw them, but usually there is some clearly defined yet semipermeable border.” These are very flexible and convenient answers.
The second reason pertains to the idea of system hierarchy, which is defined into the idea of a complex system through the existence of levels. History shows that humans are very adept at conceiving of things as hierarchical, as well as at projecting beliefs like social hierarchy onto the behavior of “natural” things. Hierarchies imply power, especially if the complex system is a human-related one (economic, social, cultural), even though scientists might interpret hierarchy simply as organization or structure or architecture, not power. The existence of rules also implies or creates power, and the origin of the rules is unclear. They cannot just emerge, since the rules cause emergence to happen. Finally, the third reason the definitions and their interrelatedness (circular self-constitution) are troubling derives from their cohesiveness, which strongly resembles the cohesion of a highly successful intellectual, ideological, or religious system. If something cannot be explained by one part of the framework, it can be explained by another part or by redrawing the system’s boundaries.
The flexibility inherent in these terms permits their application in many different disciplines, not just generative art and architecture. While I use “complexity” to refer to the scientific theory and its scientific applications, I use “complexism” to call attention to its ideological instantiations that are widespread across the arts, humanities, and social sciences. Complexism is used to theorize political protests like the Occupy movement and revolutions like the Arab Spring, to explain the nonlinear dynamics of criminology, and to posit “cryptohierarchies and self-organization in the open-source movement.” It offers new modes of writing history, new ways of theorizing pedagogy, new ways of understanding borderline personality disorder or anti-sociality. Economists in the field of evolutionary economics have promoted the idea of The Self-Organizing Economy in support of deregulated global trade (the title of a book by Princeton professor Paul Krugman, who received the 2008 Nobel Prize in Economics). The mainstream financial institution HSBC picked up on this, using emergence and self-organization to advertise its services in international airports in 2015 (Figure 1.5). It uses the typical examples of the honeybee, urban transportation design, and neural networks as its chosen metaphors. Complexism has even been used to bolster the assertion that “Fairness Is an Emergent Self-Organized Property of the Free Market for Labor”—an article title from the journal Entropy in 2010—an assertion with which many would disagree, including not only Occupy protestors but also sociologist Saskia Sassen. Her recent book Expulsions: Brutality and Complexity in the Global Economy (2014) argues that the growth and complexity of the deregulated global economy have produced not fairness but the brutal expulsion of individuals and small businesses and the creation of wasted lands.
Another way of saying this is that the boundaries delineating those entities inside from those outside the complex system of the global economy have shrunk, while the economic growth of the last thirty years has been channeled to the fewer entities that remain inside the system’s boundaries. As these examples begin to show, both promoters and critics of economic neoliberalism are able to use complexism to argue their case. Scientific ideologies are most powerful when their terminologies are sufficiently vague yet still logically interconnected enough that people or groups with opposing perspectives can use them to justify their views. Eugenics functioned this way in the 1930s, and complexism does so today.
The general allure of complexism for generative architects is thus multilayered, arising from its cachet and authority as a scientific theory with broad explanatory power, its ambiguity of agency, and its flexibility of application. Additionally, self-organization offers a useful framework for designers because it posits where and how order—pattern and form—arises in nature, be it organic or inorganic. With regard to living systems, self-organization is often interpreted with reference to homeostasis, which occurs within the bodies of individual organisms and collective groups such as termites in a mound. Homeostasis refers to the capacity to self-regulate to an internal norm in order to sustain comfort and life, both of which occur within narrow parameter ranges. Such is the function of a thermostat or “governor” in machines, which senses external or internal changes and applies corrective feedback to restore and maintain balance. In living organisms, common examples of homeostasis are the maintenance of body temperature, fluid or gas concentrations, and bone density, as well as the construction (self-assembly) of structures (hives, mounds, nests) by organisms that help regulate their immediate environment. Architects, therefore, find homeostasis to be an attractive model for architecture since buildings moderate environmental conditions (temperature, humidity, etc.) for human existence, often relying on large amounts of energy and material to do so. Menges and Hensel focus on minimizing the latter, although within a very narrow frame of consideration.
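The thermostat analogy can itself be sketched in code. The minimal simulation below is hypothetical; its parameters and crude heat-loss model are invented for clarity, not drawn from building science. A bang-bang controller senses deviation from a set point and applies corrective feedback, holding the “building” within a narrow temperature band, which is the machine counterpart of homeostasis described above.

```python
def simulate(setpoint=21.0, outside=5.0, hours=48):
    """Bang-bang thermostat: heater on below the set point, off above it."""
    temp = outside            # the building starts at the outdoor temperature
    history = []
    for _ in range(hours):
        if temp < setpoint:   # local sensing: compare reading to the norm
            temp += 2.0       # corrective feedback: the heater adds heat
        temp -= 0.05 * (temp - outside)   # passive heat loss to the outside
        history.append(temp)
    return history

history = simulate()
# After warming up, the temperature oscillates in a narrow band around
# the set point rather than drifting toward the outside air.
steady = history[24:]
print(round(min(steady), 1), round(max(steady), 1))
```

The point of the sketch is the loop structure: sense, compare to a norm, correct. Whether the controller is a bimetallic strip or a termite mound’s ventilation shafts, the same negative-feedback pattern regulates the interior environment.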
Yet, self-organization is by no means the only explanation of pattern formation in nature and culture, and for a long time now, humans have designed machines to function homeostatically, including for the regulation of building environments. Think about the use of sensors plus feedback to a central control or thermostat to regulate how a building maintains its temperature or lighting levels. This shows that both homeostasis and patterns frequently occur through centralized processes such as “top-down” intention and design, as well as through sensor systems that are designed to be distributed and communicate with one another. Examples of other pattern-making processes are the human use of recipes in cooking, patterns for sewing, and blueprints for building; some nonhuman organisms use these processes, too. Alternately, cells function as “preformatted” templates for future cells, such that when a cell divides asexually, it creates a new copy of itself. This notion of copying, or repeating, of structures also inheres in the idea of a pattern, and is similar to sexual reproduction, which also includes mechanisms for slight differentiation from generation to generation. These reproduction processes do not exactly fit the definition of self-organization, however, which requires many nearly identical components, not just one, operating according to rules using only local information without a central control. Rather, reproduction performs the role of component multiplication. So, given that other modes of pattern-making exist in nature, why do generative architects focus almost exclusively on self-organization and emergence as their theoretical platform and justification for this particular mode of parametric design?
This chapter explores complexism as an ideological influence on the foundational architectural theories of complexity, self-organization, and emergence, and on the fundamental practices of generative architecture as taught at EmTech. It focuses on the published writings of its three main founders as interpreted through experiential insights gained during my time as a student there during the Fall 2011 semester. First, Weinstock’s theories of emergent architecture are explained in conjunction with his opinion that sustainability need not be a significant concern for architecture. These are then contrasted with Menges and Hensel’s advocacy for sustainability using complexity theory as developed in their concepts of morpho-ecology and heterogeneity in architecture. The final section reflects on historical and contemporary sociopolitical and economic developments that contribute to generative architects’ current interests in self-organization and emergence as an architectural paradigm for today. These broader developments are explicated in part by Patrik Schumacher in his insistence on the superiority of homogeneity and his own streamlined version of parametricism as the only suitable architectural expression of the neoliberal era. Because of the contrast this offers to Hensel’s and Menges’s approaches, the chapter thus demonstrates the ideological power of complexism as used by generative architects to argue for different ends—for and against sustainability, for and against homogeneity—thereby cautioning its supporters against blithe acceptance.
Michael Weinstock’s Architecture of Emergence
Weinstock, continuing director of EmTech since its beginning, certainly conceives of emergence as related to self-organization, even though the latter word was not used in the 2011 Boot Camp brief or in the name of the graduate program. His first major explication of emergent architecture in print from 2004 opens, “Emergence is a concept that appears in the literature of many disciplines, and is strongly correlated to evolutionary biology, artificial intelligence, complexity theory, cybernetics, and general systems theory.” He cites the simplest definition: “Emergence is said to be the properties of a system that cannot be deduced from its components, something more than the sum of its parts.” He then sets the task for architects to “delineate a working concept of emergence,” “to outline the mathematics and processes that can make it useful to us as designers” by searching “for the principles and dynamics of organization and interaction.” He describes these as “the mathematical laws that natural systems obey.” But first, he asks, “What is it that emerges, what does it emerge from, and how is emergence produced?” Images accompanying his article depict a satellite photograph of the patterns made by clouds in a turbulent weather system and photos of spiraling fractal helices of shells and the florets of broccoflower. These suggest that emergence occurs in nonliving systems and living organisms, both plants and animals. To these, he adds the dynamics of group behavior: “flocks of birds” and “schools of fish” that “produce what appears to be an overall coherent form or array, without any leader or central directing intelligence.” He includes “bees and termites” that “produce complex built artefacts . . . without central planning or instructions.” Thus, in answer to his first questions, it is order or pattern in form and behavior that emerges “from the processes of complex systems.”
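Weinstock’s flocking example has a well-known computational counterpart. The sketch below is a noise-free, alignment-only variant of the Vicsek model, a deliberate simplification (full boids simulations add separation and cohesion rules, and all parameters here are illustrative): each agent steers toward the average heading of its nearby neighbors, and a shared direction of travel emerges without any leader or central directing intelligence.

```python
import math

def flock(positions, headings, radius=2.0, steps=50, speed=0.1):
    """Alignment-only update: each agent adopts its neighbors' mean heading."""
    pos, hdg = list(positions), list(headings)
    for _ in range(steps):
        new_hdg = []
        for (x, y) in pos:
            sx = sy = 0.0
            for (xj, yj), h in zip(pos, hdg):
                if (x - xj) ** 2 + (y - yj) ** 2 <= radius ** 2:  # local neighborhood only
                    sx += math.cos(h)
                    sy += math.sin(h)
            new_hdg.append(math.atan2(sy, sx))   # mean direction of neighbors
        hdg = new_hdg
        pos = [(x + speed * math.cos(h), y + speed * math.sin(h))
               for (x, y), h in zip(pos, hdg)]
    return hdg

# Four agents start close together with scattered headings...
final = flock([(0, 0), (1, 0), (0, 1), (1, 1)], [0.0, 1.0, 2.0, -1.0])
# ...and end up traveling in a single shared direction, leaderless.
print(max(final) - min(final))
```

No agent ever computes the flock’s overall direction; each only averages what it can “see,” which is precisely the sense in which the coherent array is produced without central instruction.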
Weinstock’s mention that no leader or central instructions are involved points directly to the idea of self-organization as a founding principle of both emergence and complex systems. Two years later he made this explicit: “The evolution and development of biological self-organisation of systems proceeds from small, simple components that are assembled together to form larger structures that have emergent properties and behaviour, which, in turn, self-assemble into more complex structures.” This time, his images included an electron micrograph scan of spongy bone tissue and a close-up of the structure of soap bubbles (Figure 1.6), upon whose geometry the “Watercube” National Swimming Centre in Beijing was then being built for the 2008 Olympics (Figure I.1). His explanation of self-organization is very similar to how he defines complexity theory, which “focuses on the effects produced by the collective behaviour of many simple units that interact with each other, such as atoms, molecules or cells. The complex is heterogeneous with many varied parts that have multiple connections between them, and the different parts behave differently, although they are not independent.” As we learned through the Boot Camp final critique, “complexity increases when the variety (distinction) and dependency (connection) of parts increases. The process of increasing variety is called differentiation, and the process of increasing the number or the strength of connections is called integration.” Weinstock believes that “evolution produces differentiation and integration in many ‘scales’ that interact with each other, from the formation and structure of an individual organism to species and ecosystems.”
In retrospect, other parts of the initial Boot Camp assignment, besides the obvious requirement to create a hierarchy of connected components, explored fundamental concepts and processes associated with complex biological systems. By starting with flat sheet material, groups mimicked one way that three-dimensional tissues are created in some biological forms, as sheets of cells fold over on themselves to create cavities or layers. In Weinstock’s 2004 article titled “Morphogenesis and the Mathematics of Emergence” he cites mathematician Alan Turing’s paper “The Chemical Basis of Morphogenesis” (1952). Turing proposes that some two-dimensional surface patterns in nature develop through a chemical process called “reaction diffusion,” where gradients of chemicals in surface tissues of plants and animals trigger thresholds that produce patterns of branching or stripes and spots. “Turing’s model operates on a single plane, or a flat sheet of cells,” Weinstock writes. He continues, “Some current research in the computational modeling of morphogenesis extends the process that Turing outlined on flat sheets to processes in curved sheets. . . . Folding and buckling of flat sheets of cells are the basis of morphogenesis in asexual reproduction.”
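Turing’s core insight, that short-range activation paired with longer-range inhibition generates pattern, can be caricatured without solving his differential equations. The sketch below follows the spirit of David Young’s discrete activator-inhibitor cellular automaton, a standard classroom simplification rather than Turing’s own model, and its radii and weights are illustrative choices: each cell switches on where nearby “activators” outnumber more distant “inhibitors,” so that point seeds grow into stable bands of uniform width.

```python
N = 60  # cells arranged on a ring (a one-dimensional "sheet")

def step(cells):
    """One update: on where short-range activation beats long-range inhibition."""
    nxt = []
    for i in range(N):
        near = sum(cells[(i + d) % N] for d in range(-3, 4))          # activation, radius 3
        far = sum(cells[(i + d) % N] for d in (-6, -5, -4, 4, 5, 6))  # inhibition, radii 4-6
        nxt.append(1 if near - far > 0 else 0)
    return nxt

cells = [0] * N
cells[15] = cells[45] = 1          # two point "seeds"
for _ in range(5):
    cells = step(cells)

# Each seed grows into a stable band seven cells wide; the global
# two-band pattern emerges from purely local arithmetic.
print("".join("#" if c else "." for c in cells))
```

The threshold logic here is a discrete stand-in for the chemical gradients Turing described: pattern appears where one influence crosses a threshold relative to the other, with no cell referring to the overall stripe layout.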
Similarly, by designing components digitally, students were forced to think with the logic of software, describing form mathematically using software tools or, for those who were advanced programmers, by writing custom algorithms. This aligns the practice with a mode familiar to complexity theorists: “Mathematical models have been derived from natural phenomena, massively parallel arrays of individual ‘agents’ or ‘cell units’ that have very simple processes in each unit, with simple interactions between them. Complex patterns and effects emerge from distributed dynamical models.” To Weinstock, the use of digital technologies is mandatory, and not just because that is how contemporary architecture is mostly designed today. The rationale is loftier, and one with a lengthy history in architecture: to base architecture on the principles of evolution. “Strategies for design are not truly evolutionary,” he writes, “unless they include iterations of physical (phenotypic) modeling, incorporating the self-organising material effects of form finding and the industrial logic of production available in CNC and laser-cutting modeling machines.” Every studio at EmTech stressed this iterative cycle between physical form-finding and advanced digital modeling. Without having been a student or observer there, I would not have understood from just reading published articles how, at its most basic level, EmTech weaves together these layers of meaning and process. Considering Weinstock’s publications from the perspective of lived experience and personal observation opened up new modes of understanding the theoretical writings of generative architecture, thereby posing a new mode of research for design studies.
In his early elaborations of the theory of emergence for his architectural audience, Weinstock cites the work of twentieth-century scientists and thinkers whose contributions he considers influential. Because architects create three-dimensional forms, he explores theories that pertain to the emergence of form (“morphogenesis”) and behavior in nature. He begins with the early twentieth-century Scottish biologist D’Arcy Thompson, whose book On Growth and Form (1917) established an original theory of evolution and organismal development using mathematical relationships between the physical forms of related and divergent species. Weinstock associates Thompson’s ideas with those of mathematician and philosopher Alfred North Whitehead, whose writings emphasize the primacy of process and interaction in nature over substance alone. Weinstock then turns to Norbert Wiener’s cybernetic theory to establish that the only major systemic difference in how animals and machines—of the sort regulated by a governor guided by feedback from communication inputs—maintain themselves over time is in their “degree of complexity.”
This common pattern of behavior between animals and machines, Weinstock asserts, was further developed by the work of chemical physicist Ilya Prigogine, who argued that “all biological and many natural nonliving systems are maintained by the flow of energy through the system.” Prigogine is well known for his recognition and description of open systems, those whose sources of energy, in addition to material or informational inputs or both, are external to the system yet interact with and help maintain it. Open systems exhibit nonequilibrium thermodynamics, and the foregoing characteristics are integral to the formation of complex dynamic systems. Prigogine’s publications of the 1970s and 1980s thus greatly furthered the interdisciplinary study of complex systems. Weinstock describes the general pattern of nonequilibrium systems: “The pattern of energy flow is subject to many small variations, which,” as in cybernetics, “are adjusted by ‘feedback’ from the environment to maintain equilibrium,” he writes. “But occasionally there is such an amplification that the system must reorganise or collapse. A new order emerges from the chaos of the system at the point of collapse.” More on this below, but many patterns in nature occur when a system is “far from equilibrium” or “on the edge of chaos,” not in its closer-to-equilibrium states. Weinstock continues describing what complexity theorists typically claim happens when a system reorganizes at such an influx of energy: “The reorganisation creates a more complex structure, with a higher flow of energy through it, and is in turn more susceptible to fluctuations and subsequent collapse or reorganisation. The tendency of ‘self-organised’ systems to ever-increasing complexity,” he states, “and of each reorganisation to be produced at the moment of the collapse in the equilibrium of systems extends beyond the energy relations of an organism and its environment. Evolutionary development in general emerges from dynamic systems.”
Weinstock relies on two other basic principles that are widely accepted in complexity theory. The first was proposed by Francis Heylighen in the late 1980s and pertains to the idea of “assemblies.” This sounds very similar to what Gilles Deleuze and Félix Guattari referred to in A Thousand Plateaus (1980) as “assemblages,” a term also taken from dynamical systems theory but which they extend in different philosophical directions. For Heylighen, some component interactions (think of organisms in groups, or species in ecologies) evolve together as “assemblies” that “survive to go on to form naturally selected wholes, while others collapse to undergo further evolution. This process repeats at higher levels,” producing the effect that “an emergent whole at one level” becomes “a component of a system emerging at a higher level.” The other fundamental concept of complexity theory that undergirds Weinstock’s theory of emergent architecture is that “system theory argues that the concepts and principles of organisation in natural systems are independent of the domain of any one particular system. . . . What is common . . . is the study of organisation, its structure and function. Complexity theory formalizes the mathematical structure of the process of systems from which complexity emerges.” The concept of “independence” of domain presumes that any and all complex systems, regardless of which disciplinary area might study them, exhibit common processes and characteristics to the extent that their disciplinary domain becomes insignificant. Hence, Steven Johnson considers ants, brains, cities, and software in the same book, in order to point out common systemic processes of emergence across these different domains. Weinstock follows suit, discussing architecture, biology, computation, and other domains as systemically equivalent to the extent that at times it is unclear which domain he is discussing.
This lack of specificity enhances the already existing confusion elicited by terminology across these domains, such as use of the words “gene,” “genetic,” and “evolution” that read as both biological and computational.
Weinstock develops these tenets much more fully in his book The Architecture of Emergence: The Evolution of Form in Nature and Civilisation (2010). His narrative is historicist, reinterpreting basic scientific knowledge about the formation and function of the earth’s major natural systems through the lens of emergence. He writes, “Emergence requires the recognition of all the forms of the world not as singular and fixed bodies, but as complex energy and material systems that have a lifespan, exist as part of the environment of other active systems, and as one iteration of an endless series that proceeds by evolutionary development.” He begins with weather and the atmosphere, then moves to geology and landscape, then living organisms and their metabolisms. Humans are of course living organisms, and Weinstock makes it explicitly clear in his first chapter and throughout the book that he considers humans and their cultural forms (i.e., “civilization,” meaning mostly cities) to be part of, not separate from, nature. “Humans are the work of nature, and all the works of man, their material practices, constructions and artefacts, evolve and develop over time as part of nature.” At the same time, he rejects the idea that an untouched nature exists: “There is no singular ‘natural landscape’ to be found, no ideal state of nature that can be reconstructed or modeled. The difficulty of hypothesising a landscape with little or no human influence is evident.”
Together, these two claims effectively dissolve the conceptual dichotomy between nature and culture, though he does acknowledge differences between biological and cultural processes. He effectively constitutes human actions as “natural” and, at the same time, positions the materiality and processes of the earth as having been seriously transformed by humans. He acknowledges that this has not always been to the benefit and often has been to the detriment of other living forms. Yet, because he sees all the systemic processes discussed throughout the book as interconnected and “self-organizing” toward an inevitable, ever-greater complexity, the ethical consequences of human actions are mitigated, evaded, or dismissed since human actions simply become one more part of the current system leading toward the next near collapse and “higher” reorganization.
This attitude is likely a significant factor in Weinstock’s discounting of current human efforts to use architecture and other cultural arenas to enhance “sustainability.” The idea of sustainability is both a part of and at odds with his framework of the advance of complexity. On the one hand, homeostasis is a means whereby organisms sustain themselves within a variable environment, and this is seen as one example of “self-organization.” Weinstock views this as a metabolic process and considers architecture to be an extension of the human metabolism. On the other hand, though, according to open systems and complexity theory, all systems are in flux, maintaining balance for a while but then reorganizing into a new form of complexity, one usually considered “higher” or “greater” than the previous one. He writes, “The tendency of living systems and of cultural systems to ever increasing complexity, and of each reorganization to be produced subsequent to the collapse, suggests that the evolutionary development of all forms is regulated by the dynamics of energy flow.” Because he thinks that “an increase in complexity is always coupled to an increase in the flow of energy through the system,” it follows that to try to “save” energy or reduce the flow of energy through cities or the global economy would suggest a “rever[sion] to a simpler organisation.” Such an action would be tantamount to what modernists decried as “degeneration.”
In fact, this is the argument Weinstock develops toward the end of his book. He summarizes human history in relation to growth in population, technologies, and urbanism, as tied to the increase in the burning of fossil fuels, the only source besides nuclear power that is dense enough in energy to have powered the exponential increase in the flows of energy and information since the Industrial Revolution. This growth has proceeded hand in hand with deforestation, species extinctions, soil exhaustion, increasing desertification, changes in weather and the evaporative cycle, and a huge increase of atmospheric pollution owing to soot, carbon dioxide, and greenhouse gases. These human-caused changes, however, while being cited, are cast as value-neutral in Weinstock’s text. “There are many indicators that suggest that the system is close to the threshold of stability. Systems that have evolved close to their maximum capacity are poised at the critical threshold of stability,” he writes, “and are consequently very sensitive to social, climatic and ecological changes. A local failure may trigger a cascade of failures that amplify each other right across the world and so trigger the collapse of the whole system.”
Weinstock goes so far as to predict the number of generations it will take to develop major new sources of energy and for population to decline. “Voluntary commitment to limiting the expansion of the population may begin to slow the rate of expansion within one generation,” he writes, “but will have to be reinforced by strong societal commitment with some coercion if the world population is to be stabilized within two generations.” Notice his prediction that humans will have to be coerced to not reproduce, although no mention is made as to whether “fitness” determinations will be part of this process. In general, Weinstock seems closed to the idea and fact of the ongoing prevalence of eugenics. He predicts that “world dissemination of free information . . . will then begin to have a significant impact on all energy and material transformations, and the transition to a truly ‘distributed intelligence’ world system will be accelerated.” This is but one example of the “higher complexity” that will emerge. “All forms change over time,” he states. “It is clear that the world is within the horizon of a systemic change, and that transitions through multiple critical thresholds will cascade through all the systems of nature and civilization.” He closes his book with this biblical-sounding prediction: “New forms will emerge down through all the generations to come, and they will develop with new connections between them as they proliferate across the surfaces of the earth.”
As his subsequent issue of AD, titled “System City” (2013), makes clear, these proliferating new forms that Weinstock imagines will emerge are not humans or even buildings but cities, considered as if they are organisms or “superorganisms.” Some of his verbiage suggests that he views humans as significantly subsidiary to cities. He describes humans as a “fluctuating discharge” that comes out of subway stations, and says that a city will be conscious of “its citizens.” In other words, he considers cities to be assemblies, per Heylighen’s description, that will become the self-organizing components of the next higher order of complexity after collapse and reorganization. The buildings in these cities will be “smart”—both within their own walls, through sophisticated sensor systems that feed back to homeostatic controls, and through linkage to their neighboring structures. “Linking the response of infrastructure systems to groups of environmentally intelligent buildings will allow higher-level behaviour to emerge,” he wrote in 2004. By 2013, he was classifying types of “intelligent cities” based on their scale of “cognitive complexity”: “These cognitive categories are, in ascending order of complexity: situated, reactive/responsive, adaptive/attentional and self-aware. . . . The ‘self-aware’ city does not yet exist.”
For a city to be intelligent, what is first required is sentience, “a primary attribute of intelligence,” which he defines as “the ability to sense the world external to the organism; no organism can respond to its environment or become better adapted to it over time without sentience.” Based on studies in the field of artificial intelligence on collective intelligence, such as is exhibited by insect societies that build “dynamically responsive” nests to regulate their proximal environment, he argues that “intelligence is not just the property of a singular brain, but is situated and socially constructed and emerges from the interaction of large numbers of relatively simpler individuals within fluctuating dynamical contexts. This suggests that collective intelligence is the appropriate model of intelligence for the integration of the systems of intelligent cities.” As the termite mound functions homeostatically to maintain a constant comfortable environment for the termite colony, owing to the self-organizing collective behavior of millions of termites, so, too, are cities imagined to function as homeostatic organisms with collective intelligence and infrastructure, made up from the interactions of populations of smart buildings that happen to be inhabited by humans.
Thus, he proposes that “situated cities” at the most basic taxonomic level of urban intelligence and complexity have evolved over time to be very well suited ecologically to their climate and place. Situated cities can become “reactive and responsive,” with “sentience, the ability to sense critical changes in the flows of the external environment and within itself, and to respond by modifying or changing some aspects of the behaviour of its own systems appropriately.” If a city has attained this level, it can then evolve to become “adaptive and attentional,” meaning it “has the capacity to selectively change some aspects of both the behaviour and configuration of any of its infrastructural systems. It requires the capacity for selective attention to moderate changes that are beneficial at a local scale but potentially conflict with global system parameters.” Note that this description of adaptive and attentional cities has moved beyond the definition of self-organization, whereby components are not seemingly able to observe themselves as if from above to make decisions about and control subsequent behavior, but rather only exist on the local level following local rules without any “top-down” control. Finally, once these mechanisms are in place, a city can become “self-aware . . . ‘conscious’ of its citizens and the interrelation between all of its infrastructural systems, and able to synchronize its city systems with climatic and ecological effects at the regional scale.” A self-aware city can “learn from experiences . . . run simulations to predict the effectiveness and long-term consequences of system modifications and reconfigurations . . . and is capable of planning its further expansions or contractions according to the fluctuations of its global and regional contexts.”
As becomes clear from Weinstock’s writings, his most sacrosanct belief is that everything emerges from self-organizing dynamical system processes, including evolution, and the direction that emergence takes teleologically is toward ever greater complexity. Higher levels of complexity supposedly have higher energy and informational flows; note that material flows are almost never mentioned, perhaps because matter is inconveniently finite. In other words, higher informational flows imply higher orders of intelligence, such as he predicts for the evolution of urban “organisms” (cities). The architecture of emergence that he seeks and is training students to design prioritizes digital information technologies—the use of associative modeling, the embedding of microprocessors, sensors, and digital feedback and control systems—throughout urban environments. Given his dismissal of current efforts toward future-oriented “sustainability,” along with his characterization of humans as creatures of lower-level complexity compared to smart buildings and intelligent self-aware cities that plan their own futures, it seems that Weinstock is investing his time and energy into preparing for what he imagines to be the future. If cities are to be the next organisms (components) in the march toward higher complexity, Weinstock is laying the theoretical groundwork for designing and installing their communication and control systems, on the questionable assumptions that materials will not run out and city infrastructures and buildings along with their many microprocessors will not collapse when the climate, economy, and energy infrastructures do.
Michael Hensel and Achim Menges’s Morpho-Ecologies
From the early 2000s, along with Weinstock, Hensel and Menges offered formative contributions to the development of EmTech at the AA. Hensel codirected the program with Weinstock until 2009, and Menges taught as a Studio Master. All three also collaborated in a design practice, the Emergence and Design Group. Their component-based approach to creating emergent architecture, based on principles of self-organization and complex systems, is something they shared in the 2000s, although recent projects have broadened beyond this design method. They have followed different trajectories in their careers, each developing a unique focus for his research. Whereas Weinstock has remained at the AA and been the most heavily committed to developing emergence as a theory for architecture (actually as a theory of everything), Hensel and Menges have architectural practices, doing design work in addition to teaching at other European institutions. Hensel became a founding member of the architectural firm OCEAN in 1994, which has morphed since 2008 into two Norwegian nonprofits focusing on the human and built environments, the OCEAN Design Research Association and the Sustainable Environment Association (SEA), now fused into OCEAN/SEA. These have focused more on research and publications than on built projects, with the SEA’s promotion of “sustainability” following in line with Hensel and Menges’s concept of morpho-ecologies, as explained below. Since 2011, Hensel has directed the Research Center for Architecture and Tectonics at the Oslo School of Architecture and Design. He has also taught in the Scarcity and Creativity Studio there, a design-and-build studio focusing on lower-tech, lower-embedded-energy materials and construction approaches for local communities with few resources.
The built practices of this studio, now directed by Christian Hermansen Cordua, tend more toward time-tested methods of vernacular architecture, although no doubt they are designed using advanced technologies. Menges taught at the AA until 2009, with ongoing visiting professorships and lectures since then; he has also held positions at HfG Offenbach University for Art and Design and the Harvard Graduate School of Design, and in 2008 he founded the Institute for Computational Design at the University of Stuttgart, which he still directs. He also has his own architectural practice in Frankfurt, Germany.
Although generally Hensel has focused his research on sustainability and Menges on understanding the material properties of wood and other materials through digital technologies and experimentation, they have coedited and copublished a number of articles and books owing to their common concept of morpho-ecologies. As early as 2004, Menges used this phrase in relation to “complex environments” (complex here references complex systems). The “morpho” part comes from morphogenesis, and “ecology” he defines as “all the relationships between human groups and their physical and social environments,” by which he and Hensel consistently just mean buildings. The term expresses their interest in creating parametric tools that can associate (link with feedback) many factors into the generation of a design: at the outset, these included ecology, topology, and structure, but soon this list came to include additional characteristics. In the introductory essay to their book Morpho-Ecologies (2007), he and Hensel write, “The underlying logic of parametric design can be instrumentalised here as an alternative design method, one in which the geometric rigour of parametric modeling can be deployed first to integrate manufacturing constraints, assembly logics and material characteristics in the definition of simple components, and then to proliferate the components into larger systems and assemblies.” Using these associative tools, “if we change a variable of the basic outward proliferation, we may see an accompanying change in the number of components populating the surface. Indeed, as we introduce changes, we can identify results ranging from the ‘local’ manipulation of individual components to the ‘regional’ manipulation of component collectives to the ‘global’ manipulation of the component system.”
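The “local,” “regional,” and “global” levels of parametric manipulation that Hensel and Menges describe can be caricatured in a few lines of associative code. This is only an illustrative sketch: the component attributes, function names, and factors are invented for the example and are not drawn from their actual Grasshopper definitions.

```python
from dataclasses import dataclass, replace

@dataclass
class Component:
    u: float          # position parameter along the surface
    v: float
    depth: float      # a component-specific ("local") attribute

def proliferate(nu: int, nv: int, base_depth: float) -> list[Component]:
    """'Global' parameters nu/nv set how many components populate the surface."""
    return [Component(u=i / nu, v=j / nv, depth=base_depth)
            for i in range(nu) for j in range(nv)]

def deepen_region(components, u_min, u_max, factor):
    """'Regional' manipulation: scale the depth of one collective of components."""
    return [replace(c, depth=c.depth * factor) if u_min <= c.u < u_max else c
            for c in components]

# Changing one global variable changes the whole population...
coarse = proliferate(4, 4, base_depth=1.0)   # 16 components
fine = proliferate(8, 8, base_depth=1.0)     # 64 components
# ...while a regional edit touches only a subset of them.
graded = deepen_region(fine, 0.0, 0.5, factor=2.0)
```

The point of the sketch is the dependency structure: a single upstream parameter change (the “global” grid resolution) propagates to every component, while “regional” and “local” manipulations act on progressively smaller subsets of the same associative model.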
In general, Hensel and Menges aim for these tools to aid them in generating “heterogeneous space,” in contrast to what they describe as the “homogeneous space” of modern architecture. In modernist homogeneous space, the interior of a building is regulated for uniformity—the building is closed off from the surrounding environment, generally with rectangular rooms, lighting, and air-conditioned temperature. With heterogeneous space, they aim to design structures that modulate the barrier between inside and outside, perhaps through the use of screens or layered walls with fractal, branching, or cell-shaped perforations that absorb heat and cast shadows for cooling. Inside, heterogeneous space is not uniform, but flows between different kinds of spaces—some cooler possibly, some warmer, but all flexible enough for multiple formal uses and types of human interactions. One example they offer of morpho-ecological design is a project Hensel designed with OCEAN and Scheffler + Partner, their unbuilt competition entry for the New Czech National Library (2006) (Figure 1.7). The design features “gradient spatial conditioning” as well as “intensive differentiation of material and energetic interventions that are evolved from their specific behavioural tendencies in a given environment and with regards to their mutual feedback relationship, passive modulation strategies that are sustainable, and speculation on the resultant relationship between spatial and social arrangements and habitational pattern and potentials.” The concept of morpho-ecologies thus describes their multi-objective optimization parametric approach that integrates material and structural performance with environmental conditions—light, temperature, gravity, wind, humidity—flexible spatial program, and assembly and manufacturing logic.
Buildings designed as morpho-ecologies are intended to functionally exhibit internal environmental balance, such as that of termite mounds, a key example of homeostatic architecture created through the process of self-organization. This much is made clear by Menges’s article “Manufacturing Performance” from 2008 (Figure 1.8), in which “form, material, structure, and performance are understood as inherently related and integral aspects of the manufacturing and construction process.” Menges describes the research of Freeform Construction, led by Rupert Soar at the Civil and Building Engineering Department at Loughborough University, to design new material structures for additive manufacturing. Soar and his students learned from the “high-level integration of system morphology and function” demonstrated by termite mound architecture. They traveled to Namibia to cast termite mounds in plaster in order to have a negative-space model (filling the tunnels with plaster and then washing away the soil) that they could incrementally slice and scan, in order to re-create a virtual 3-D model in the computer. They used this model to study how the material properties and structure functioned homeostatically to regulate temperature, water vapor, oxygen, and carbon dioxide in the face of environmental conditions. Termite mound architecture is not static, but changes with the seasons and even on a daily basis based on the continual action of the termites to remove and redeposit soil particles in new locations. 
This process is a “closed-loop, self-organised process driven by positive feedback phenomena, including pheromone dispersal known as stigmergy, acoustic signaling, response to perturbation and the related interactions between countless termites, and partly directed by differential concentration of respiratory gases in larger fields, or negative feedback, within the mound.” They found that a “colony-level performance such as ventilation appears to be the synergetic effect of integrating two energy sources: the external wind pressure and the internal metabolism-induced buoyancy. . . . The effect is a dynamic interaction of all variables leading to a complex permeability map over the mound skin.”
One expert on homeostasis in Namibian termite architecture is the biologist J. Scott Turner, professor at the State University of New York’s College of Environmental Science and Forestry. Since one of the photos on Turner’s website, showing him at a Namibian mound, was taken by Rupert Soar, it is clear that the two have worked together. Turner’s book The Tinkerer’s Accomplice: How Design Emerges from Life Itself (2007) opens with a chapter on termites but moves on to many other examples of homeostasis in the biological world. Menges invited Turner to contribute to the special issue of AD that he guest-edited in 2012. Turner’s article, titled “Evolutionary Architecture? Some Perspectives from Biological Design,” describes the homeostatic actions of osteocytes (bone cells), actions that are similar to those of termites. Osteocytes monitor the strains that bone receives, and continuously remodel bone structure based on these strains. Some cells (the osteoclasts) “bulldoze” bone calcium away from the areas where it is too thick for the smaller stresses received in that location, while others (osteoblasts) are bricklayers, cementing it down where the bone needs thickening. Through this process, the bone retains its own optimal structure, what Turner calls its “sweet spot,” given its previous environmental conditions. It is an environmentally responsive architecture, and not one dictated by genes, as Turner is careful to point out. He does so specifically to take issue with architects’ ongoing promotion of gene-centric discourses when, in many examples of biological functioning, gene–environment interactions with strong emphasis on the environment offer the best explanations for behavior.
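The osteoclast/osteoblast feedback loop described here lends itself to a toy negative-feedback simulation: material is deposited wherever local strain exceeds a comfort set point and resorbed wherever it falls below it. All numbers, names, and the one-dimensional layout are illustrative assumptions for the sketch, not taken from Turner’s work.

```python
def remodel(loads, thickness, set_point=1.0, rate=0.1, steps=200):
    """Iteratively adjust local thickness so that strain approaches a set point.

    strain = load / thickness; 'osteoblasts' deposit material where strain is
    too high, 'osteoclasts' resorb it where strain is too low.
    """
    t = list(thickness)
    for _ in range(steps):
        for i, load in enumerate(loads):
            strain = load / t[i]
            t[i] += rate * (strain - set_point)   # deposit (+) or resorb (-)
            t[i] = max(t[i], 1e-6)                # thickness cannot go negative
    return t

# Uniform starting thickness under uneven loads converges toward thickness
# proportional to load -- each site settles at its own "sweet spot".
final = remodel(loads=[0.5, 1.0, 2.0], thickness=[1.0, 1.0, 1.0])
```

No cell here “sees” the whole bone: each site follows only its local rule, yet the aggregate structure ends up matched to the load distribution, which is the sense of bottom-up, environmentally responsive design at issue in the passage.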
Turner writes, “Architects seek to create environments that are equable to the inhabitants of their creations. There are many ways to do this, but the way living nature does it is through the operational, bottom-up striving for comfort that is implicit in homeostasis. This means that living design is not specified or imposed,” such as through gene regulation, “but emerges from self-created structures that are dynamically responsive to the inhabitants’ comfort: bones are well designed because the osteocytes strive to provide themselves a comfortable environment.” As bone provides the “morpho-ecology” of the osteocytes, so buildings designed with these principles are the “morpho-ecologies” of humans, except . . . for the slight flaw in the analogy. For obvious reasons, humans do not continually bulldoze and bricklay the same building on a daily basis to match environmental conditions. For this reason, Soar imagines that “once the technology has matured” thousands of robotic devices will “collaborate in ongoing swarm construction processes driven by continual adjustments to individually sensed internal and external conditions.” Note that again we are faced with problems in the definition of “self-organization” with regard to what the “self” is in relation to the components. Is “self-organizing” heterogeneous “morpho-ecological” architecture meant to be inhabited by humans, or by robotic devices, which happen to have been programmed, a “top-down” action, by humans?
Hensel and Menges state repeatedly that architecture that is designed and built to the principles of morpho-ecologies will demonstrate what they call “advanced sustainability,” owing to how it “links the performance capacity of material systems with environmental modulation and the resulting provisions and opportunities for inhabitation.” In 2006, Hensel described how computational biologists can model a plant’s growth in relation to its particular environment, including “gravity, tropism, contact between various elements of a plant structure and contact with obstacles.” This technique of “modeling environmentally sensitive growth” offers architects “a method and toolset in which design preferences are embedded within a parametric setup . . . simultaneously informed by a specific environmental and material context,” leading to “an advanced take on sustainability.” Hensel and Menges clearly dislike the “currently prevailing approach to sustainability.” They claim that most efforts toward sustainability today “serve either mere public-relations and fund-raising purposes, or boil down to an ever greater division of exterior and interior space through ever thicker thermal insulation combined with reductions in energy use of electrical heating, cooling, ventilation, and air-conditioning devices.” This characterization harks back to an ongoing debate within the architectural discipline over the roles that aesthetics, innovation, and cultural expression should play in relation to “green” building practices, with those who criticize the “prevailing approach to sustainability” perceiving and branding themselves as striving for higher aims. Hensel does, however, make it clear that he understands that solar energy technologies (photovoltaics) that rely on silicon “require a highly energy-intensive production process” and are not very efficient. Recognition of this fact is rare among architects interested in sustainability.
Unfortunately, though, Hensel and Menges do not make clear how morpho-ecologies are in fact sustainable, much less how they demonstrate “advanced sustainability.” Is it that a morpho-ecological building should respond to its environment homeostatically, since all parts have been designed parametrically to inform the others? Having this type of associative modeling at work in the in silico design process is not the same as having it work in real time in the ongoing functioning of a building. Menges, with his collaborators and students, builds wooden pavilions that are responsive to humidity, owing to the hygroscopic properties of wood. Thin wooden components bend or straighten depending upon weather conditions, causing the surfaces to open and close (Figures 1.9 and 1.10), modulating the interior environment. While this is an interesting approach to design for educational purposes and for the design of pavilions—which are really large-scale sculptures that permit temporary occupation—it is not a sound approach to the design of buildings that need to be inhabited comfortably regardless of weather conditions. For wood to be responsive to humidity it cannot be sealed, which is what protects it from weathering and biodegrading. Few people or companies would invest their money into an unfinished wooden building whose walls open and close based upon humidity (and not, also, temperature). However, it is not at all surprising that the Centre Pompidou commissioned Menges and Steffen Reichert’s piece HygroScope: Meteorosensitive Morphology for its permanent collection.
To get environmentally responsive architecture in line with the model that Menges and Hensel propose, it seems that morpho-ecological architects would need to follow Soar’s perennial robotic deconstruction and construction process, which would make for an interesting, albeit distracting, work environment. Or, more practically, they would need to equip buildings with numerous sensors and motors—much as Weinstock envisions for future cities—to dynamically integrate, regulate, and possibly even move parts of a building. This latter approach is already being used by architects to turn off lights in empty spaces and to move louvers on the exteriors of buildings, screens that provide shade in response to the angle and intensity of the sun. These two general approaches, however, are becoming standard in sustainable architecture, and they are energy-intensive propositions, if not so much during the operational life of a building then in the life cycle of all the materials and products that go into the building in the first place. What Hensel notes for photovoltaic silicon-based technologies is just as true for any device built with silicon-based microprocessors (see the discussion of this at the end of chapter 2).
A further problem with their calling morpho-ecologies sustainable is that their definition of the “ecology” part is so incredibly narrow. Helen Castle, in her editorial preface to Hensel and Menges’s guest-edited special issue of AD titled “Versatility and Vicissitude” (2008), claims that the guest editors “make us think about the word ‘ecology’ from afresh, as ‘the relationship between an organism and its environment.’” Her definition of ecology is not new at all. What is “fresh” is how Hensel and Menges limit its range of applicability only to humans and their buildings. This move, I think, robs them of a credible claim to promoting environmental sustainability, since generally most people do not define the environment or ecology as a building but rather as the larger world around and including buildings that is increasingly losing species diversity. To even come close to living up to their claim of sustainability, they would have to select materials and modes of design and production that (1) rely on abundant rather than increasingly rare earth materials, (2) have low amounts of embedded energy in their life cycles, and (3) emit low levels of pollution in their production and use, such that overall their use does little harm to other species. These criteria are possible to achieve in tandem with strong considerations of architectural aesthetics and cultural expression. While the work of the Scarcity and Creativity Studio that Hensel worked with at the Oslo School of Architecture and Design is in line with these criteria, most of the projects proposed by OCEAN are not focused on these issues. Parametric design and CAM production fall short of these standards in many regards, for computers and computer-aided manufacturing tools are built using rare earth materials and millions of high-embedded-energy silicon-based transistors, combined to form microprocessors and integrated circuits.
One of the most common verbs that Hensel and Menges use is “instrumentalise,” by which they mean to make something useful to, and formatted for, computational instruments, rather than the word’s other senses of making important or simply employing. This is because digital technologies are fundamental to parametric design, which aligns with the general trend today toward “big data.” For example, one technique that Menges uses to optimize the performance of wooden designs is to laser-cut out the “structurally dispensable earlywood” cells in the wood being used, lightening the load while maintaining performance in the final structure. To do this, he conducts a finite element structural analysis and digitally scans every piece of wood to be used: “An algorithmic procedure then isolates the earlywood and latewood regions, comparing this data with the structural analysis data and determining, depending on stress intensity, the cut pattern for a laser that subsequently erases the dispensable earlywood.” He also mentions that some logging companies have begun using X-ray tomography to scan each tree they cut down to find its irregularities or “defects” with regard to “morphology, grain structure, and anatomical features,” in order to decide how best to use each tree. Menges wishes these data were saved and shared, staying with the wood from the tree as it moves through the consumption cycle, becoming part of the product when one purchases wood for a particular use. While the foregoing steps to lighten the wood are already data-heavy, the data on the post-laser-cut wood is then integrated with the other parameters of the pavilion’s design and manufacture, an even bigger-data process.
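The algorithmic procedure Menges describes—overlaying the scanned earlywood/latewood classification on the finite-element stress field and cutting only cells that are both earlywood and lightly stressed—reduces to a per-cell conjunction. A minimal sketch follows, with an assumed grid layout and an invented stress threshold; the real pipeline obviously operates on dense scan data, not toy lists.

```python
def cut_pattern(is_earlywood, stress, stress_threshold):
    """Mark cells for laser removal: earlywood AND structurally dispensable.

    is_earlywood: 2-D grid of booleans from the scan of the wood piece.
    stress: 2-D grid of stress magnitudes from the finite element analysis.
    Returns a same-shaped grid where True means "cut this cell away".
    """
    return [[early and s < stress_threshold
             for early, s in zip(early_row, stress_row)]
            for early_row, stress_row in zip(is_earlywood, stress)]

# A 2x3 piece: earlywood cells are removed only where stress is low.
scan = [[True, False, True],
        [True, True, False]]
fea  = [[0.2, 0.9, 0.8],
        [0.1, 0.3, 0.2]]
pattern = cut_pattern(scan, fea, stress_threshold=0.5)
# pattern -> [[True, False, False], [True, True, False]]
```

The sketch makes the chapter’s point about instrumentalisation concrete: the decision rule itself is trivial, but it presupposes a per-cell scan and a full structural analysis of every piece of wood, which is where the data burden lies.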
Without knowing the precise location of each earlywood cell or possessing the capacity to remove each one with a laser cutter, humans have been building successful and beautiful structures with wood for millennia. Similarly, the hygroscopic properties of unfinished wood have not changed during this time. The tiny amount of “performance optimization” regarding how wood responds within a particular design obtained through these methods pales in comparison to the giant amount of “instrumentalisation” that makes it possible. How much do we really gain from transitioning to this mode of design and construction, relative to the energy and materials used to get there? Hensel is aware of passive low-tech building strategies used throughout history in many different cultures to mitigate temperatures in order to create spaces comfortable for human habitation. In his article “Performance-Oriented Design Precursors and Potentials” (2008), he explores three themes with past precedent in “vernacular architecture” that he thinks bear new potential: “functional building elements with regards to the articulated surface; heterogeneous spatial arrangements facilitating varied microclimates and gradient thresholds that in turn are related to dynamic modes of habitation; and bodies in space with their own energy signature.” He references the modulating properties for light, heat, air, and visibility of different types of Islamic screen walls, made of wood or stone, which are semipermeable, perforated, even filigreed, as he imagines morpho-ecological architecture should be. He mentions vernacular designs that consider sun position in winter versus summer, including courtyards, porches, overhangs, loggias, and the practice in mountainous regions of Europe of sleeping above the barn to make use of the heat of the animals. These are relatively low-tech solutions, even if making them took considerable time and human labor.
He correctly states that to cover all the historical precedents for environmental modulation “would vastly exceed the scope of this article.” With his collaborators at SEA, he conducted digital airflow analyses and built rapid-prototype models to help visualize the environmental temperature and airflow properties of fifteen vernacular structures, exhibited in 2014 in Oslo as Architectural History from a Performance Perspective.
Yet, Hensel conducts his very brief historical and vernacular survey for one reason: “The question is how such strategies can be updated and instrumentalised with regard to the dynamic relationships between subject, object, and environment and towards a critical spatial paradigm.” As demonstrated in the exhibition models, he proposes using “thermal imaging, digital analysis of environmental conditions, analysis of material behaviour, and so on as critical design parameters.” Why do we need these data when we already know so much about the properties of different materials and spaces and have so many different building strategies that offer environmental modulation and heterogeneous microclimatic spaces? If we walked into those vernacular spaces, we would immediately feel the environmental modulation caused by the material and construction strategies. In other words, what do we gain from instrumentalizing these vernacular analog construction techniques, turning their properties into big data, apart from the ability to then use these data in other digital design operations? Parametric design does far more to push the economic growth of digital technologies and the use of energy and materials to create these technologies, as well as the use of machines to replace skilled human labor, than it does to produce “sustainable” architecture. The “advanced” take on sustainability surely refers not to attaining a new height of sustainable achievement, but rather to these architects’ dependence on “advanced” technologies. Beautiful and culturally expressive design is possible whether one chooses the low-tech or the high-tech approach.
Hensel has considered alternate approaches, though, including the creation of new materials and energy sources using the tools of synthetic biology. He calls this approach a “literal biological paradigm for architectural design” and claims that it moves consideration of both biology and architecture down to the molecular scale. “The composite material organisation of biological structures is typically morphologically and functionally defined across a minimum of eight scales of magnitude, ranging from the nano- to the macro-scale,” he writes. “While inherent functionality is scale dependent, it is nevertheless interrelated and interdependent across scales of magnitude. It is, in effect, nonlinear: the whole is more than the sum of its parts.” He credits this emergence to the “central role . . . played by processes of self-organisation.” He cites different efforts in “synthetic-life research” that are working at this molecular scale—both branches of synthetic biology, the first known as protocell or origin of life research, and the other pursuing the engineering of novel life forms or biologically produced materials. Hensel describes the criteria of “real life” established by biologist Tibor Gánti’s The Principles of Life (1971). These include, among other properties, the need for containment yet with a semipermeable membrane (somewhat like termite mound surfaces and morpho-ecology in architecture), metabolism (the processing of energy and materials through the semipermeable barrier), homeostasis, and an “information-carrying subsystem” that he credits as the source of heredity and evolution. Hensel and Menges envision that “bottom-up” biochemistry of the sort occurring in synthetic biology may become part of the material practice of architecture, as well as potentially offer a new source of energy through “artificial photosynthesis,” which ideally would function more efficiently and with far less embedded energy than photovoltaics.
More recently, Hensel has taken an interest in local constructions, not vernacular architecture per se but rather recent architect-designed structures that are place-based, situated to their environment, using local materials and cultural values. The projects that interest him are similar to those done in the Scarcity and Creativity Studio, which is not surprising given that he and its current director, Christian Hermansen Cordua, guest-edited the issue of AD on “Constructions: An Experimental Approach to Intensely Local Architectures” (2015) that explores these structures. As he and Menges dislike homogeneity in modern architectural design and environments and propose its replacement by heterogeneous architecture, so, too, does this new interest of Hensel’s in local architectures stress the values espoused by morpho-ecology. Rather than the homogeneity of modernism, local architectures built today challenge the homogeneity of globalization, and it is this that captures Hensel’s attention. Such interest can thus be fit loosely into his ongoing fascination with self-organization, interpreted here socially: individual architects, in distinct locations and cultures, building distinctly to meet local needs, which together add heterogeneity or plurality into modes of contemporary design in the face of increasingly homogeneous globalization.
Menges, too, has another separate research trajectory related loosely, in both a literal and conceptual sense, to self-organization. Through a number of different projects over the years, and particularly with his former student Karola Dierichs at the Institute for Computational Design, he has pursued the study of aggregate forms referred to also as “granular morphologies.” This is a component-based approach to the creation of structures; yet as the word “granular” implies, the components are unattached to one another. Rather, through careful computational design, their shapes allow them to loosely “grab” other components, being held in place through friction and gravity rather than through actual connectors. Furthermore, rather than “self-” organizing or assembling, they are poured out in a stream on top of one another, either by human hands or by a six-axis industrial robot (Figures 1.11 and 1.12). The resulting forms behave very much like sand or other granular materials in nature, which can function both in stable forms like a solid or flow as a liquid, depending on environmental conditions. These structures are therefore very environmentally responsive and can form different types of patterns.
In fact, the topic of granular pattern formation was explored by physicist and science writer Philip Ball, whom Menges invited to contribute in 2012 to the same issue of AD as J. Scott Turner; their articles ran back to back. Ball wrote on “Pattern Formation in Nature: Physical Constraints and Self-Organising Characteristics.” In addition to discussing the patterns of granular substances like sand and sand dunes, he described the geometries of different rock formations, oscillating patterns formed by chemical reactions (such as the famous Belousov–Zhabotinsky oscillation), and Turing patterns that make for stripes and spots on animals’ coats. These are oft-cited examples of self-organization in complexity theory, to which Ball returns at the end. “There is—despite aspirations to the contrary—no universal theory of pattern formation in nature,” he asserts. For example, consider the aforementioned dividing cells as templates, and other modes of reproduction (i.e., component iteration) that do not fit into the definition of self-organization but do make for pattern. Similarly, some scientists consider structures such as beehives and termite mounds to be self-assembled rather than self-organized—the behavior of the insects might be self-organized, but the structure itself remains behind if the colony moves and is therefore self-assembled. In other words, as Ball asserts, patterns arise in different ways.
“Nonetheless,” Ball writes, “it has proved possible to identify many common principles.” These include “the universality of certain basic forms (hexagons, stripes, hierarchical branches, fractal shapes, spirals), the importance of non-equilibrium growth processes, the balance or to-and-fro between conflicting driving forces, and the existence of sharp thresholds of driving force that produce global changes in the pattern.” His aside earlier in the article—“despite aspirations to the contrary”—challenges those like Weinstock who hold self-organization and emergence as a theory of everything. Yet, many who espouse complexity theory less vigilantly than Weinstock, including scientists, might still consider the principles that Ball subsequently lists to fall under complexity theory’s purview. This flexibility in interpretation about the origins of pattern formation, the different modes by which it arises, and the relation of pattern formation overall to self-organization and to complexity contributes to the ease by which architectural features and design approaches are interpreted as self-organizing or emergent. It also contributes to confusion for those trying to unravel just what self-organizing architecture is and to understand just why its rhetoric is so prevalent now.
Architects have perennially looked to nature as a source of inspiration for architectural forms. Since theories of evolution became prominent at the end of the nineteenth century, architects have used variants of evolutionary theory as rationales for different approaches to design. In Eugenic Design, I described some of the ways that modern architects from the 1890s to the 1940s applied aspects of the evolutionary theories of Jean-Baptiste Lamarck, Charles Darwin, Ernst Haeckel, and Herbert Spencer to creating the founding tenets of architectural modernism (e.g., “form follows function” and the prohibition on ornament). More broadly, evolutionary theories affected how broad swaths of the American public viewed race, class, gender, disability, “progress,” and “civilization.” In many ways, streamline design embodied the ideals of eugenics, which extended evolutionary theory using Mendel’s laws in order to argue not for natural selection but for rational selection—man’s ability to direct or design evolution. After the Second World War, with the discovery in the 1950s of the structure of DNA as the “code of life” and the rise of cybernetics and complexity theory, some architects immediately took note. Gordon Pask, John Frazer, Christopher Alexander, and others linked self-organization and systems thinking to evolutionary and genetic programming, an approach cemented by John Holland’s pioneering work in computer science. (For a brief history of the historical intersections of the rise of cybernetics, complexity theory, and generative architecture, see the Appendix.)
Complexity theory has grown in its breadth of application and popularity since its inception. Because it encompasses the dynamics of complex biological systems, including development and evolution, it is not surprising that architects have appropriated its theories to explain and justify the development and evolution of generative architecture. This continued appropriation of evolutionary theories in architecture serves the purpose of removing some of the responsibility for design choices from architects as their approaches become naturalized and venerated through associations with science. For generative architects, this removal of agency is compounded because it occurs not just through this naturalization process but also through the use of computers to generate design solutions that an architect may never have considered. Although the first generation of parametric designers often used the passive voice to describe how generative designs arose, current practitioners are owning up to their primary role and responsibility as designers, as the theme of “design agency” for the 2014 conference of the Association for Computer Aided Design in Architecture showed. The use of complexism in generative architecture has opened the door to other pretenses besides just the denial of full responsibility. These include the possibilities of integrating the idea of the “avant-garde” to recast sustainability as “advanced sustainability”; appearing antimodernist when in some ways applications of the ideas of self-organization closely mimic tactics in modernism; discounting the role of energy, materials, labor, and the full life cycle because emphasis is directed to self-organization and biomimicry; and, finally, appearing to be “bottom-up,” that is, democratic, while owing greater conceptual allegiance to the dominance of hierarchies. Each of these pretenses is addressed in turn, below.
Historically, evolutionary assumptions offered a scientific-seeming foundation for the art historical idea of the “avant-garde,” a concept still prevalent in architecture today that tends to apply to those architects using the most “advanced” technologies. Because of generative architecture’s reliance on computational design and manufacture, “starchitects” working in the generative vein have seized the opportunity for recognition. But because the broader architectural discipline is very concerned about its contributory role in environmental damage and climate change, the need to align parametricism with natural processes in light of the broader “sustainable design” movement becomes obvious. Perhaps in this context, then, Hensel and Hermansen Cordua recently celebrated Rural Studio and other non-vernacular contemporary architects working with local materials in local conditions by interpreting them as examples of heterogeneous morpho-ecological design that counteract the homogeneity of globalization, even if their approaches are more low-tech. Whereas the sustainable approaches of Rural Studio are obvious—reuse of local materials, low-cost structures, socially equitable function—the sustainability of parametricism is more dubious. Hensel and Menges’s strategy of renaming it “advanced sustainability” for its alignment with complexity’s march toward ever greater complexity, which, according to Weinstock, throughputs ever higher amounts of energy and information, is a savvy ploy. For the production of advanced technologies does in fact entail high amounts of energy, even if during the building’s use the amount of energy consumed seems acceptable. Thus, complexity theory seemingly justifies advanced technologies; it is also, of course, “natural.” Things that are natural must be sustainable, or . . . the reasoning must go something like that.
In modernist versions of evolutionary architecture, the teleology pointed toward “progress,” which included hygiene, efficiency, and “advances” in “civilization,” interpreted usually as white, technologically advanced cultures. In our current version, the teleology points toward higher energy use and information throughput (big data) in order to make things that we actually made quite well in earlier eras (even “morpho-ecologically”) without advanced technologies.
Although current promoters of self-organization, emergence, and complexity, including those in architecture, often align philosophically with materialism and the abolition of the nature/culture divide, in some ways their modes of aligning today’s “cultural” practices with “nature” hark back to modernist practices when the nature/culture divide ran strong. This is one intended reference of my title Toward a Living Architecture? which points to Le Corbusier’s foundational creed Towards a New Architecture (1923) but with less certitude about the future. White modernists often considered ethnic “others” and their arts to be “primitive,” which implied a closeness to and even alliance with nature, a prioritization of intuition, a lack of rationality, a heightened sexuality, and in general an unevolved simplicity. Since modernists conceived of themselves and their lifestyles in dichotomous relation to the “primitive”—as “civilized,” rational, inhibited, complex, and lacking vitality—they appropriated facets of “primitive” cultures seen as natural into modern artistic production as a means of rejuvenation or revitalization. In similar fashion, many scholars in different disciplines who rely on digital technologies for their research (which, as a humanist, I classify as cultural production) are appropriating self-organization and the naturalizing tendencies of complexism to make complex products and processes seem “natural” or “materialist.” In the strain of generative architecture seeking “bottom-up” design using either the techniques of protocell or engineering synthetic biology, the longing for the primitive hut arising out of the earth is undeniable—and undeniably modernist.
To make this point clearer and to offer a cautionary example, one architect at the Bartlett School of Architecture in London described to me a photograph of a Dogon settlement on the Bandiagara Escarpment in Mali as “self-organized architecture” (Figure 1.13). The photo appears in Bernard Rudofsky’s classic book Architecture without Architects (1964), along with many other images of vernacular architecture, some of which exhibit fractal forms. When pressed to explain why he characterized the Dogon settlement as “self-organized,” he offered a counterexample: James Gibbs’s architecture in London after the Great Fire, which had to conform to new building laws imposed “top-down” by officials hoping to prevent such calamity in the future. When asked how “top-down” laws in London functioned any differently from building principles passed down to each generation of builders in Dogon culture—meaning, both exemplify “top-down” human decisions made for certain reasons, passed on to others to affect design-and-build choices—he seemed not to understand the question. My background is in material culture studies, where architecture begins with a small a, a bit like how Rudofsky considers it in his book. The Bartlett professor’s background is in “Architecture” proper, so to speak, and perhaps that accounts for our different views.
My fear was that this professor assumed that an African culture was “naturally” self-organized—as in assuming that, as under primitivism, its people are nature, building the way termites build mounds, and not only because the profession of architect presumably did not exist when the settlement was built, as implied by Rudofsky in his book title. His interpretation of the Dogon settlement as “self-organized architecture” clearly differs from the approaches of parametric designers (of which he is one)—and, again, not just because most parametric designers go to school or pass an exam to become professionals in the field. Perhaps Rupert Soar’s robot termites building the architecture of the future might function as he imagined the Dogon did. But this farfetched example points to the main differences that seem to be invisible to parametricists: their self-externalizing positioning, their “top-down” role in programming, their use of advanced digital technologies—computers and robots—as opposed to local soil and thatch (the latter point made because of the material and energetic differences between Dogon and parametric buildings). I would argue that neither the Dogon settlement nor parametric architecture is “self-organized.”
Rather, parametric designers appropriate a theory of natural pattern formation and assumed progression of order toward greater complexity, and apply it to architecture. The architects are not the components—in the position of the termites, or as he may have imagined, the Dogon—that are self-organizing. Rather, they are designing components “top-down” to supposedly “self-organize.” Yet again, the process of construction is seemingly invisible to parametricists as well. The components have to be assembled by architects’ and builders’ hands, or put together or poured out by robots; they do not assemble on their own. Even designer Skylar Tibbits’s self-assembling designs (Figure 2.1), in which components connect through being shaken or buffeted in a turbulent fluid, or open through the force of gravity while falling from a helicopter, are not “self-assembling.” Humans have to build the structures and shake them or place them in a turbulent tank, or fly them up and drop them or, even worse, build robots and drones to do this, all requiring large amounts of energy and materials. I say even worse with regard to drones and robots because I do not subscribe to the elimination of human labor through energy- and material-intensive technologies. Human labor is powered by food, not jet fuel, and many humans are unemployed, having been harshly expelled by our complex economy, as Saskia Sassen clearly points out in Expulsions: Brutality and Complexity in the Global Economy.
Calling such designs “self-organizing” or “self-assembling” effectively obscures the necessary fact of energy, labor, laborers, and tools to create these structures. Could one say that a Gothic cathedral of the twelfth century, or a brick palace of the eighteenth century—to intentionally pick examples from Western architectural history—“self-organized”? After all, in these buildings, “local” components—brick and stone, some with differentiation—join together to form the “regional” structure of walls, which produce the emergent properties of protection from the elements, glorious echoes, and the capacity to carry a roof. Together, walls plus the roof form the “global” structure of the building, which has the emergent properties of being able to host large gatherings of people playing music and feasting and transmitting power and authority to the person who paid to build the structure in the first place. If the answer is yes to the question about cathedrals or palaces “self-organizing,” then I opine that, like Weinstock’s The Architecture of Emergence, such a view is historicist revisionism. Weinstock might say no, not revisionism nor historicist at all; everything has self-organized, so cathedrals did just as much as urban information technology systems will. Again, if the answer is yes, then clearly the “self” in “self-organization” does not mean anything at all. We could just call it “organization,” which in fact might then prompt us to ask about who organized it if whatever is organized is a work of cultural production. After all, buildings have often been designed using components of some sort. Including the “self,” though, contributes to the demolition of culture into nature. But if the answer is no, cathedrals and palaces did not “self-organize,” then what is the difference in agency, top-downness and bottom-upness, in how a cathedral or palace was built compared to a parametric building? Differences in tools of production do not a “self” make.
Similarly, the appearance of pattern does not “self-organization” make. Pattern formation can be generated “top-down” or “bottom-up” or through some combination of these approaches; it can also be generated by using different methods. “Top-down” designers can generate “bottom-up” patterns in a computer. Self-organizing termites may “self-assemble” a termite mound, but humans or robots using digital technologies intentionally assemble a parametric building, and not in the same way that termites do. When applying the definition of self-organization to generative architecture, always ask: What is the component and is the component itself the agent making the interactions, or is something outside the component forcing it to interact? I do not accept that generative architects are the lowest-level components without a central control, who happen to follow some preordained rules using only local information without reference to the global, to interact with computers to then make the next higher level of components, printing out repeating elements that happen to need assembling by forces external to themselves. In other words, I do not accept the inevitability or naturalism of parametric design. Rather, I think companies are choosing to develop, produce, and profit from digital technologies; architects are choosing to buy and use them as well as choosing to theorize how they are using them according to the most current scientific paradigm and ideology, as a means of branding their work. Architects also control the scripting. Because these choices are made, alternate choices are also possible. There is nothing inevitable about the “advance” of digital technologies, although I agree with promoters of complexity that digital technologies consume large amounts of energy in their production and consumption. Herein lies the rub.
This is, in fact, a problem with “biomimicry,” a word that aptly describes the approaches of Weinstock, Hensel, and Menges. In EmTech’s Biomimicry studio, the term means adapting, and integrating into technology, solutions to problems that biological organisms have already solved. In other words, see how “nature” solves a problem, and then adapt that solution to a new technology or technological approach that addresses a similar problem faced by humans. Consideration of the life cycle of the new technology was never mentioned at EmTech when I was there in 2011, nor is it in most publications. Biomimicry has no definitional requirement to be sustainable, although it is often presumed that if one mimics a natural solution, it will de facto be more sustainable than a solution that does not mimic nature. Weinstock, Hensel, and Menges are taking principles commonly assumed to be natural (complexity theory) and applying them to advanced technologies to arrive at new approaches with very little discussion of life cycle factors.
For this reason, it is important to apply Stefan Helmreich’s concept of “athwart theory” to complexism, for it encourages us to consider theory not just for its ideational roles but also for its material environmental effects. With regard to complexism, athwart theory helps us parse the differences between systems in different domains (cultural, social, economic, physical, biological, meteorological, etc.). Systems theory aims to encompass all systemic properties and processes regardless of domain; Weinstock reiterates this tenet that “the concepts and principles of organisation . . . are independent of the domain of any one particular system.” His writing reflects this principle in that he often does not clearly distinguish whether a word like “morphogenesis” refers to computational, architectural, or biological processes, leaving the reader to guess or assume it does not make a difference. Yet when we examine the material environmental effects of different systems and the life cycles of common materials and processes used in different domains, it becomes immediately obvious that all domains are not equivalent. Without interpreting it through the lens of athwart theory, complexism can direct our attention away from domain differences, say, the materials and energy consumed and the off-gassing produced by a plant compared to those of a building, since both are complex systems operating according to the same principles. To miss these differences is an oversight with serious environmental consequences.
The final pretense I see interwoven in some of the rhetoric of “self-organization” in generative architecture is that parametric design is somehow more democratic than previous modes of architecture. Many people interpret “bottom-up” to mean democratic; this democratic quality, if it exists, is blithely assumed to be beneficial and positive. It is easy for many people to forget that democracies pass laws that are discriminatory and damaging (consider the involuntary sterilization laws enacted while eugenics was in vogue), and that democracies promote economic policies that harm millions of people. Think of the effects of economic deregulation and free trade agreements in our ever-globalizing world, per Sassen’s arguments in Expulsions. By overlooking all the “top-down” decisions and actions that inhere in the processes of parametric design, one might imagine that they are only “bottom-up.” Yet, self-organization does not insist that all components are created equal; in fact, its espousal of hierarchy and assemblies implies very strongly that they will not be. In general, Weinstock, Hensel, and Menges do not fall for this common mistake of assuming that “bottom-up” de facto equals “democratic.” Hensel and Menges, though, do promote the idea of democratic architecture through heterogeneous space, in the sense that people have the freedom to inhabit different zones based on different moods and needs, to choose according to their own liking. Hensel also states that computer-automated design and manufacturing technologies are making design more affordable “for those less fortunate than the richest man in the known world, the Shah of Persia.” Hensel used the music auditorium built for the shah in the seventeenth century as one of his historical precursors to performance-oriented design.
Another proponent of self-organization and parametric design, however, Patrik Schumacher, fully supports the increasing privatization of architecture and urban spaces that has been proceeding apace under neoliberal economic globalization. It is thus informative to compare Schumacher’s vision with Hensel’s and Menges’s to see the ways that he uses complexism to argue in favor of a competing aesthetic and economic agenda.
In his recent guest-edited issue of AD called “Parametricism 2.0: Rethinking Architecture’s Agenda for the 21st Century” (2016), Schumacher’s article “Hegemonic Parametricism Delivers a Market-Based Order” opens with a very clear declaration: “Parametricism 2.0 makes urbanism and urban order compatible with the neo-liberal re-emergence of market processes.” His theory is based on the current mode of evolutionary economics that relies on self-organization to naturalize laissez-faire capitalism. “The market process is an evolutionary one that operates via mutation (trial and error), selection (via profit versus loss), and reproduction (via imitation),” he writes. “It is self-correcting and self-regulating, leading to a self-organised order.” He states that there has been a “vacuum left by state planning,” and proposes instead that “‘private planning’” fill the gap. He defines the latter as “a process whereby private development corporations or consortiums unify larger development areas within a coherent, market-controlled urban business strategy.” Yet, over the two decades or so that this deregulated economic model has been driving urban development around the world, the process has not, in Schumacher’s opinion, led to “spatio-morphological” “legibility,” which he intends to provide with parametricism. Rather, urban zones have grown willy-nilly—in good laissez-faire fashion—into what he labels “garbage spill urbanism.” He uses this strongly derogatory term to refer to the “disorienting visual chaos” of a cacophony of styles that, ironically, appear all over the world in urban zones as “‘ugly’ environments without identity,” or what he also describes as “white noise sameness.”
Schumacher’s use of the language of complex biological and physical systems is adept and multilayered. He describes his vision of the new parametric urbanism as a “multi-species ecology,” appropriating not only the language of complex systems and sustainability but also the most recent posthumanist feminist theory. Schumacher is not referring to nonhuman species at all but rather is using “multi-species” analogically. By this term he means that buildings, designed by different architects but all using parametric design, will each be like a new species: “Parametricism envisions the build-up of a densely layered urban environment via differentiated, rule-based architectural interventions that are designed via scripts that form new architectural subsystems, just like a new species settles into a natural environment.” No mention is made of the loss of actual species diversity in monolithic concrete urban environments (Figure 1.14). “Only Parametricism has the capacity to combine an increase in complexity with a simultaneous increase in order,” he asserts, owing to “principles of rule-based differentiation and multi-system correlation.” He coins what he calls “architecture’s entropy law: all gains in terms of design freedom and versatility have been achieved at the expense of urban and architectural order.” In response, parametricism “inaugurates a new phase of architectural negentropy.” He thus implies that design freedom is incontestable, but so too is “order” as defined by his own streamlined universalist approach, which he unabashedly desires to be “hegemonic.”
Schumacher’s use of the terminology of complexity—self-organization, chaos, white noise, rule-based, entropy and negentropy, et cetera—reveals his deep allegiance to complexism as his ideological bottom line, one he uses to bolster his self-proclaimed superiority. “Parametricism is manifestly superior to all other architectural styles still pandered and pursued,” he writes in an audaciously self-promoting statement. “This implies that it should sweep the market and put an end to the current pluralism that resulted from the crisis of Modernism, and that has been going on for far too long due to ideological inertia.” By “current pluralism” he is directly referring to postmodernism and deconstructivism in architecture, but when he discussed this topic at The Politics of Parametricism symposium in 2014, he pointed to images of downtowns with historic buildings accrued over a century, not just since the 1980s. He proposes to replace such areas with masterplans, such as those he and Zaha Hadid designed for cities like Istanbul, that aim to tear down and rebuild these zones monolithically, using swooping curved topologies to create new business districts and high-end residential development with cultural and tourist amenities. Schumacher concludes his AD article with a statement that veers toward architectural proto-fascism: “This plurality of styles must make way for a universal—hegemonic—Parametricism that allows architecture to once more have a vital, decisive, transformative impact on the built environment, just as Modernism had done in the twentieth century.”
This echo of the rhetoric of the 1930s is reified by Schumacher’s aesthetic preferences, as his version of parametric design is less component-based and more streamlined than most other generative architecture. Just as streamline designers re-formed what they considered to be “defective” ornamental designs, bringing all outstanding and protruding parts into line, Schumacher proposes the same ideal for urban makeovers. The similarities run even deeper than the surface, though. In streamlining, the new material of plastic was partially to blame for all the curves, since curved forms were much easier to remove from molds and more comfortable in the hands as well. Today, however, it is NURBS (nonuniform rational basis spline) software and 3-D printers that encourage the abundance of curvature. Again, like Raymond Loewy in his evolution charts (Figure I.2), Schumacher points to social and scientific evolution as the force transforming designs toward the streamline, when in fact the designers are the ones effecting this change. And, although Walter Dorwin Teague claimed that the scope of a designer’s reach was “everything from a match to a city,” no streamline designers were ever able to transform a whole city because streamlining came of age during the Great Depression. The “smooth flow” of streamline design thus resonated as much with restoring economic “flow” through the sale of consumer goods as it did with eugenic concerns about the “flow” of bodies, both internally in terms of digestion and externally in terms of the “flood” of immigrants into the nation. Now, Schumacher is potentially in a position to rebuild large zones of old cities funded by the “flow” of neoliberal capital. It is as if the aerodynamics of streamlining has been replaced with the fluid dynamics of cargo ships and capital, and the resulting aesthetic is remarkably the same.
To elaborate further on the potentially dangerous modernist terrain on which Schumacher’s version of parametricism treads under the aegis of self-organization and complexity, the economics and politics of today are both different from and similar to the 1920s and 1930s when eugenic design flourished. The global recession of 2008 is consistently referred to in the media as “the worst financial crisis since the Great Depression.” Historical precedent shows that economic hardship has a way of turning national politics inward, as is being demonstrated by the Brexit vote in the United Kingdom and Trumpism in the United States. This distrust of pluralism at large is not directed only at architectural diversity in what Schumacher calls “garbage spill urbanism”; in the public realm it targets ethnic diversity. His rhetoric of garbage echoes 1930s declarations of certain groups of people as “waste,” which implies both disposability and a need to begin cleaning. National political movements are again voicing strong restrictions against immigration after a period of heightened immigration, which was exactly the case with the eugenic nationalism of the 1920s that effectively closed U.S. doors for over forty years. Furthermore, Schumacher interprets complexity theory as a rationale for instilling hegemonic order in what he sees as cities in the midst of chaos. Complexity theorists actually often say that the most interesting patterns arise when systems are on the edge of disarray, but clearly Schumacher does not like the pattern he perceives and chooses to label as “white noise sameness.” In this light, Hensel’s and Hermansen Cordua’s celebration of designed-and-built localism as exemplary of a rich global heterogeneity (aka differentiation) in complexity seems a benign, even beneficial, interpretation of complexity in contrast to Schumacher’s.
This last comparison between Hensel’s and Schumacher’s use of complexity theory to argue for opposite ends—Hensel’s opposition to the homogeneity of modernism and support for stylistic heterogeneity/differentiation, Schumacher’s favoring of modernism and hegemonic homogeneity/order—demonstrates how flexible complexism is as an ideology. The same thing can be seen in the fact that Weinstock uses complexity theory to dismiss sustainability whereas Hensel and Menges argue in favor of an “advanced sustainability” based on complexity. When the same paradigm can be used to justify and naturalize positions at either end of a spectrum—be it aesthetic, environmental, economic, social, or political—that is the clearest indicator it is functioning ideologically. Such was the case with eugenics. That both eugenics and complexism happen to be scientific paradigms lends that much more power and authority to their application in other realms, especially when they become so widely accepted as popular science that many people readily believe arguments based on their rhetoric. Streamlining, after all, was based on natural principles from current science: the physics of fluid and air dynamics, the teardrop shape of a drop of liquid falling, the evolution and intentional breeding of streamlined animals to increase their speed in the competition for survival of the fittest. In light of this historical comparison, parametricism and more broadly generative architecture appear far less innovative for their attempts to instill natural vitality into design through mimicking: the physics of nonlinear dynamics, the fractal forms of branching, the genetics of morphogenetic development and evolution, the engineering of new and “improved” forms through synthetic biology. 
Emergence and self-organization are everywhere, as shown by the Boulder Beer Company’s recent release of “Emergent White IPA.” Take care not to drink emergence down too quickly, as occurred, figuratively speaking, with eugenics and streamline design in the 1920s and 1930s and possibly now with the idolization of complexism and generative architecture.