Appendix

Brief History of Complexity’s Intersections with Generative Architecture

The fundamental importance of computer hardware and software for understanding complexity and self-organization is not just definitional, in terms of serving as a major source for the “rules” at play in “self-organization,” but also historical. Evelyn Fox Keller’s history of self-organization begins with Immanuel Kant’s search for a definition of a living organism in 1790 under the new science of biology; he coined the term “self-organization” to distinguish an organism from nonliving things like machines. She describes Kant’s definition of an organism: “It is a bounded body capable not only of self-regulation, self-steering, but also, and perhaps most important, of self-formation and self-generation; it is both an organized and a self-organizing being.” Furthermore, “an organism is a body which, by virtue of its peculiar and particular organization, is constituted as a ‘self’—an entity that, even though not hermetically sealed (or perhaps because it is not hermetically sealed), achieves both autonomy and the capacity for self-generation.”[1] By contrast, “inanimate matter lacks both the organization and self-organization of organisms, and machines, though organized, and even designed, are organized and designed from without. The organization and design of organisms is, by contrast, internally generated.”[2]

The first part of Keller’s history traces how the idea that organisms and machines are different, even if sometimes seemingly analogous, was transformed into the cybernetic claim that animals and machines are actually homologous, as Norbert Wiener famously argued in Cybernetics; or, Control and Communication in the Animal and the Machine (1948).[3] The concept of homeostasis, which emerged from physiology in the nineteenth and twentieth centuries, played an important role. Just as homeostasis regulates living organisms, machines can be designed with control mechanisms that use feedback through communication to regulate their functions. Keller writes that if one simply includes the designer and the environmental inputs as part of the “self,” then both living organisms and machines can be called self-organizing.[4] Note that including the human designer and environmental inputs along with the machine considerably stretches the idea of a “self,” which is usually thought of as a singular living entity. Furthermore, humans have agency, so there is a significant difference between a human designing and making a machine, or anything else, and that thing self-organizing on its own.

Around the time of World War II, with the rise of cybernetics and the creation of the first digital computers, use of the concept and term “self-organization” increased significantly, as many ideas and events coalesced into what began to be known as complexity and general systems theory. Different histories touch on different aspects of these developments, which occurred across the disciplines of cybernetics, computation, biology, embryology, neurophysiology, psychology, psychiatry, meteorology, mathematics, engineering, physics, chemistry, cryptography, artificial intelligence, and computer science.[5] This overview touches only on some of the major developments that intersected with developments in the field of architecture.[6] These interwoven threads of complexity theory, evolutionary computation, and architecture generally came together in three places—London, Cambridge (Massachusetts), and New York—beginning in the late 1950s and early 1960s. This is undoubtedly an oversimplification, since scientific scholars and architects traveled and moved between institutions and read available publications. Furthermore, meetings such as the Macy conferences and the 1959 conference in Chicago on Self-Organizing Systems organized by the U.S. Office of Naval Research served as hubs where thinkers from around the world met to discuss their ideas.

We start in Cambridge after the end of the war, where Wiener was professor of mathematics at the Massachusetts Institute of Technology (MIT) when he published Cybernetics. Over the years, a number of his students and colleagues extended facets of cybernetic theory, mathematics, artificial intelligence, and evolutionary computation. Both Oliver Selfridge and John Holland studied with Wiener in the late 1940s, with Selfridge remaining at MIT, where he worked with Marvin Minsky to help found the field of artificial intelligence. Holland went on to graduate study in mathematics and computer science at the University of Michigan, where he was on the faculty during the decades when he developed genetic algorithms and programming for adaptive systems. His research culminated in his pathbreaking Adaptation in Natural and Artificial Systems (1975), which influenced architects interested in using computers to generate architectural forms.[7]

Architectural interest in cybernetics and computer programming had a history preceding Holland’s publication, though. György Kepes was at MIT starting in 1947, a few years after he published Language of Vision.[8] He became interested in Wiener’s work, citing him and discussing cybernetics in his book The New Landscape in Art and Science (1956).[9] Siegfried Giedion, an architectural historian interested in physics and the author of Space, Time and Architecture (1941), began teaching at MIT and the Harvard Graduate School of Design (GSD) in the 1950s. Their influence extended to the highly interdisciplinary graduate student Christopher Alexander, whose Notes on the Synthesis of Form (1964), which used set theory to explain methods of algorithmic design, was influential across fields. This is not surprising, given that he had a bachelor’s degree in architecture and a master of science in mathematics from Cambridge University, that he received the first doctoral degree in architecture awarded by the GSD, and that he also completed postgraduate study in computer science and transportation theory at MIT and cognitive studies at Harvard. His best seller A Pattern Language (1977) is one of the landmark publications applying computer programming to design. Historian Molly Steenson considers him one of the founders of generative architecture, along with Nicholas Negroponte—who studied architecture at MIT in the early 1960s, focusing on computer-aided design—and British architect Cedric Price.[10]

Alexander left Cambridge to teach at the University of California, Berkeley, the year before the Boston Architectural Center hosted the Architecture and the Computer conference in 1964. Three years later, Nicholas Negroponte and Leon Groisser founded the Architecture Machine Group in the Department of Architecture at MIT.[11] British cyberneticist Gordon Pask visited the Architecture Machine Group a number of times between 1968 and 1976; his work on conversation theory and the unusual machines he built contributed to Negroponte’s ideas in The Architecture Machine (1970). Around the same time, Negroponte and Groisser began teaching a “Computer-Aided Urban Design” studio (1968), and the following year, Pask published “The Architectural Relevance of Cybernetics.”[12] Pask also spent time collaborating with architects and teaching at the Architectural Association (AA) in London in the early 1960s; the AA possibly offered the first computer course for architects in 1963, five years before Negroponte and Groisser’s 1968 course at the group that later became the MIT Media Lab in 1985.[13]

South of Boston on the East Coast, between the 1940s and 1960s, other significant developments contributed to the formation of complexity theory and its influence on architecture. The branch of the Macy conferences that focused on cybernetics, held in New York City between 1945 and 1953, brought together an interdisciplinary and international group of scholars to consider such themes as Feedback Mechanisms and Circular Causal Systems in Biological and Social Systems, revisited in different guises over the years. Influential participants included the Americans Wiener, John von Neumann, Warren McCulloch, Margaret Mead, and Claude Shannon, among many others, as well as the British cyberneticists Gregory Bateson and W. Ross Ashby. After 1948, the title of Wiener’s book Cybernetics became the name of the conference. That same year, mathematician and electrical engineer Claude Shannon, who worked in New York for Bell Labs, published his influential paper “A Mathematical Theory of Communication,” which founded the field of information theory. In the mid-1960s, information theory was wedded to complexity theory by Russian mathematician Andrey Kolmogorov and American mathematician and computer scientist Gregory Chaitin; the resulting measure of algorithmic complexity remains one method by which scientists measure complexity today. Warren Weaver, director of the Division of Natural Sciences of the Rockefeller Foundation in New York City between 1932 and 1955, had met Shannon at Bell Labs and wrote the introduction to his essay. Weaver’s own report to the Rockefeller Foundation, written upon his retirement, summarized many of the developments over the previous quarter century that coalesced in the 1960s and 1970s into complexity theory, which is closely related to general systems theory as explored in Ludwig von Bertalanffy’s General System Theory (1968).

Author Steven Johnson dubs Warren Weaver’s summary report to the Rockefeller Foundation the “founding text of complexity theory.”[14] In it, Weaver identified three types of scientific problems being researched across different disciplines at the time: simple systems with only one or a few variables; systems with thousands or millions of variables, analyzable only through the methods of probability and statistics, which he called “disorganized complexity”; and systems somewhere in between, with many interrelated variables but simple rules that in turn created interesting patterns as part of their processes, which he called “organized complexity.” The example of organized complexity that Johnson gives is that of a mechanized billiards table, “where the balls follow specific rules and through their various interactions create a distinct macrobehavior, arranging themselves in a specific shape, or forming a specific pattern over time.” To solve problems of organized complexity, Johnson writes, “you needed a machine capable of churning through thousands, if not millions, of calculations per second. . . . Because of his connection to the Bell Labs group, Weaver had seen early on the promise of digital computing, and he knew the mysteries of organized complexity would be much easier to tackle once you could model the behavior in close-to-real time.”[15] Systems of organized complexity are apparent everywhere in nature once you learn to see them, and they are what Melanie Mitchell refers to with her definition of nonlinear complex adaptive systems.[16]
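
To make the idea of organized complexity concrete, the short Python sketch below implements one textbook example of the phenomenon, a Schelling-style neighborhood model: many agents follow a single simple local rule (move if too few of your neighbors are like you), and a large-scale pattern of clustering emerges that no agent plans. This is an illustrative sketch only, not an example drawn from Weaver or Johnson; the grid size, threshold, and step count are arbitrary choices.

```python
import random

# Minimal Schelling-style model of organized complexity: many interacting parts,
# one simple local rule, and an emergent macro-level pattern (clustering).
SIZE, EMPTY_FRAC, THRESHOLD, STEPS = 24, 0.1, 0.5, 40_000

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY_FRAC else ("A" if r < (1 + EMPTY_FRAC) / 2 else "B"))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, x, y):
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx or dy:
                yield grid[(y + dy) % SIZE][(x + dx) % SIZE]

def unhappy(grid, x, y):
    me = grid[y][x]
    occupied = [n for n in neighbors(grid, x, y) if n is not None]
    if me is None or not occupied:
        return False
    return sum(n == me for n in occupied) / len(occupied) < THRESHOLD

def step(grid):
    # The entire rule: a randomly chosen unhappy agent moves to a random empty cell.
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    if unhappy(grid, x, y):
        empties = [(i, j) for j in range(SIZE) for i in range(SIZE) if grid[j][i] is None]
        i, j = random.choice(empties)
        grid[j][i], grid[y][x] = grid[y][x], None

grid = make_grid()
for _ in range(STEPS):
    step(grid)
print("\n".join("".join(c or "." for c in row) for row in grid))  # clusters of A and B emerge
```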

A year or two after Weaver’s report, as journalist Jane Jacobs was preparing her 1961 attack on New York’s “master builder” Robert Moses for the drastic modernist urban planning changes he was implementing in the city, she read Weaver’s report and developed the idea of organized complexity in her book The Death and Life of Great American Cities.[17] In it, she argued that urban zones such as the West Village in New York and similar neighborhoods in other cities thrived because they demonstrated the “bottom-up” self-organization characteristic of organized complexity. Her argument was powerful and halted much of Moses’s planned demolition. It also received significant media and academic coverage and influenced the development of theories that later became identified with postmodernism in architecture. For example, in 1962, architect Robert Venturi began writing one of the founding texts of postmodern architecture, Complexity and Contradiction in Architecture, published four years later. Architectural historian Peter Laurence argues that Venturi could not help but be influenced by Jacobs, even though their common fascination with complexity originated in different interests: Jacobs’s in social complexity, and Venturi’s in an interdisciplinary concern with multiplicities addressed through “literary theories, New Criticism, pop art, and gestalt theory.”[18] Venturi’s text cites Christopher Alexander’s Notes on the Synthesis of Form and refers to emergence based on social scientist Herbert A. Simon’s definition of a complex system as “a large number of parts that interact in a non-simple way,” such that the whole is “the result of, and yet more than, the sum of its parts.”[19]

Complexity theory as an influence on theories of postmodern architecture is not directly related to the historical development of generative architecture, but it is important to recognize the interdisciplinary and multifarious impacts of complexity theory writ large on architecture at these different locations.[20] Charles Jencks, another founder and later historian of postmodern architecture as influenced by complexity theory, earned his master’s in architecture at the Harvard GSD in 1965 and then studied architectural history at University College London, earning his doctorate in 1970. His writings from the late 1960s already predicted that biology would become a key influence on late twentieth-century architecture, an idea more fully developed in his publications beginning in the mid-1990s.[21] For example, he penned The Architecture of the Jumping Universe in 1995, based on complexity theory’s tenet that self-organization works nonlinearly, prompting rapid jumps to new levels of organization. And his 2002 revision of his classic The Language of Post-Modern Architecture (1977) included new chapters on “Complexity Architecture” and “Fractal Architecture,” in which he argued that after the founding of the Santa Fe Institute in the mid-1980s and the spread of the complexity paradigm, postmodernism took a decisive turn toward complexity architecture. Given all these developments, it is far less surprising that generative architecture and its intersections with complexity theory and evolutionary programming originated in the 1960s, even though it has taken fifty years for this approach to become well known as a mode of contemporary architectural practice.

This turns our attention to the third location, London, and to the AA, the school where generative architecture arguably originated and where computers were introduced for architectural design. Of course, the AA remains a major center for teaching generative architecture through its graduate programs: the Design Research Laboratory, founded by Patrik Schumacher and directed by Theodore Spyropoulos, and Emergent Technologies and Design, now led by Michael Weinstock and others. The five-day Course on the Use of Computer Techniques in Building Design and Planning offered in July 1963 actually had to be held at University College Oxford, because that was where the computer was located.[22] Who initiated and who led the course is unclear, but architect Cedric Price was on the faculty at the AA at this time and was collaborating with cyberneticist Gordon Pask on the design of the Fun Palace project. This visionary endeavor was intended to be an architectural recreational space that could rearrange its modular internal configuration based on input from computer punch cards specifying users’ preferences; parts of the building would be moved around by cranes. Computation was central to its concept, planning, and intended operation, and Price and Pask therefore established the Cybernetics Committee, led by Pask, to meet, theorize, and plan the project; these meetings were held in 1964. It is therefore possible that Pask’s and Price’s presence at the AA influenced the creation of the computer course for architects.

Pask is well known as one of the major British cyberneticists, a group that of course included Alan Turing, W. Ross Ashby, Stafford Beer, and many others. As early as 1949, some of these men—not including Pask—formed the Ratio Club, which met in London to discuss cybernetics in the wake of Wiener’s and Shannon’s landmark publications.[23] Ashby had created a cybernetic machine known as the homeostat in the late 1940s and was working on his book Design for a Brain (1952). His work inspired Beer and Pask, who collaborated in the late 1950s on electrochemical experiments pertaining to feedback and adaptive systems.[24] Pask began working with Price in the early 1960s, and the introductory document of the Cybernetics Committee for the Fun Palace project, written by Pask, interestingly describes both the committee and the building as a “self-organising system.” It states that the meeting agenda “has been constructed to act as a genetic code. At our first meeting it will be possible for either Cedric Price or myself to indicate, in detail, the chief constraints. . . . The genetic code of the agenda is provided to initiate the evolutionary process and the constraints are not severe enough to inhibit it altogether.”[25] Although Pask’s description sounds as if he is referring to methods of evolutionary computation, something conceptually similar to what Holland in the mid-1970s called genetic algorithms, he is actually talking about the structure of the meetings. Still, the wording is remarkably prescient, as only a handful of publications had developed ideas of evolutionary computation before 1964.[26] Pask and Price carried these concepts forward to the modular, computer-controlled Generator project of 1976, designed for White Oak Plantation in Florida (but never realized), on which John and Julia Frazer collaborated as computer consultants. John Frazer is undoubtedly one of the chief founders of generative architecture, so his role in this brief history is important.

First a student and then an instructor for many years at the AA, John Frazer published An Evolutionary Architecture in 1995 as the culmination of almost thirty years of research. The book describes the “emerging field of architectural genetics” that Frazer pioneered, and it marks the beginning of a clearly defined, realizable, and useful computational approach to architectural design. The cover of his book features the “Universal Constructor,” built by him and his students in 1990 as a “self-organizing interactive environment,” one of a few computational machines assembled by hand at the AA under his direction. His first “self-replicating cellular automata” computer models date to 1979. His use of the terms “self-organizing” and “self-replicating cellular automata” clearly demonstrates his knowledge of and reliance on then-recent biological and computational theories, fitting his goal of investigating “fundamental form-generating processes in architecture, paralleling a wider scientific search for a theory of morphogenesis in the natural world.”[27]
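
As a generic illustration of what a cellular automaton is, the following Python sketch implements Conway’s Game of Life: a grid of cells updated in parallel by a simple local rule. This is a standard textbook automaton offered only to clarify the concept; it is not Frazer’s own “self-replicating” model, and the grid size and step count are arbitrary choices.

```python
import random

# A generic cellular automaton (Conway's Game of Life): each cell's next state
# depends only on its own state and its eight neighbors. Not Frazer's models.
SIZE, STEPS = 20, 30

def live_neighbors(grid, x, y):
    return sum(grid[(y + dy) % SIZE][(x + dx) % SIZE]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1) if dx or dy)

def step(grid):
    # Rule: a dead cell with exactly 3 live neighbors is born;
    # a live cell with 2 or 3 live neighbors survives; all others die.
    return [[1 if live_neighbors(grid, x, y) == 3
                  or (grid[y][x] and live_neighbors(grid, x, y) == 2) else 0
             for x in range(SIZE)] for y in range(SIZE)]

grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]
for _ in range(STEPS):
    grid = step(grid)
print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```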

Aware of Alan Turing’s and John von Neumann’s work, in the late 1960s Frazer began using computer resources at the University of Cambridge to develop his “repeating tile = reptile” “seed” system, for which he coded eighteen different spatial orientations. The “seeds” could be combined to form rectangular shapes and therefore were useful for investigating architectural form genesis for structures that in theory could actually be built.[28] He “evolved” his seed system into pattern formations and structures, plotting his first large-scale 2-D print at Autographics Ltd. in 1968 and sculpting corresponding 3-D models by hand (Figure 3.2). This work is likely the first architectural design ever prepared on a computer and then printed out on a plotter. Ironically, in his 1974 AD publication, Frazer predicted with regard to his “reptile” analogy that “the associations of a Stegosaurus . . . with an obsolete species is intended to emphasise that such a component approach to architecture, as implied by the system, is probably only of transient significance.” Almost fifty years later, the approach is still thriving.[29] To create the “reptile” seed-based designs in the late 1960s and various column designs in the early 1970s, Frazer devised a computational method to “evolve” solutions using a “heuristic algorithm derived from an idea of Donald Michie for MENACE (an educatable OXO machine).” He had reproduced Michie’s OXO machine in the early 1960s and then again later that decade with students; because its success–reward technique allowed the machines to learn how to play against each other, Frazer used the same technique to “educate a column-generating program.”[30] In the 1980s or 1990s, he began using genetic algorithms to breed architectural forms, based on the computational system Holland developed for simulating biological evolution and published in 1975 as Adaptation in Natural and Artificial Systems.[31] Images from 1993, made in collaboration with Peter Graham, demonstrate “the evolution of Tuscan columns by genetic algorithms,” in which a “gene” was substituted for James Gibbs’s “carefully specified proportions” and then bred to create a “population,” to which both “natural” and “artificial selection” were applied to determine the “fittest” “perfectly proportioned” designs.[32]
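
The genetic-algorithm procedure described above, in which proportions serve as a “gene,” a population is bred, and selection favors the “fittest” designs, can be roughed out as follows. This is a minimal, hypothetical Python sketch of a generic genetic algorithm, not Frazer and Graham’s implementation; the four proportion values, the fitness function, and all parameters are invented placeholders rather than Gibbs’s published ratios.

```python
import random

# Hypothetical stand-in for a column's proportions (e.g., base, shaft, capital,
# width-to-height), expressed as fractions; NOT Gibbs's actual published values.
TARGET = [0.10, 0.76, 0.12, 0.02]

def random_gene():
    return [random.random() for _ in TARGET]

def fitness(gene):
    # "Fitter" means closer to the target proportions, a crude form of the
    # "artificial selection" toward a canon described in the text.
    return -sum((g - t) ** 2 for g, t in zip(gene, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(gene, rate=0.1, scale=0.05):
    return [g + random.gauss(0, scale) if random.random() < rate else g for g in gene]

def evolve(pop_size=60, generations=200):
    population = [random_gene() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print("evolved proportions:", [round(p, 3) for p in evolve()])
```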

Frazer was by no means working alone at the AA on the development of generative architecture. Beginning in the 1970s, “units” (courses) at the AA were led by architects and others interested in cybernetics, computation, biology, and ecology. In short, so many areas overlapped with developments in complex systems theory and the use of computers in architecture that it would have been hard to be a student at the AA during these decades and not be aware of this growing trend in architectural design. Besides Frazer, AA graduates or tutors who have played leading roles in generative architecture and design include Zaha Hadid, Patrik Schumacher, Michael Hensel, Michael Weinstock, Achim Menges, Neri Oxman, and Andrew Kudless, as well as many others beginning their careers more recently. As is clear from the chapters of this book, most of the architectural theorists discussed here are faculty at leading architectural educational institutions: the University of Stuttgart Institute for Computational Design; the Oslo School of Architecture and Design; the AA; the International University of Catalunya School of Architecture; the University of Pennsylvania School of Design; Columbia University’s Graduate School of Architecture, Planning, and Preservation; Cornell University College of Architecture, Art, and Planning; the Bartlett School of Architecture at University College London; the University of Greenwich; and the University of Waterloo. Yet of all these institutions, the AA has played the most foundational role in the history of generative architecture thus far.
