
5

The Post-lenticular City

Light into Data

If one takes the images seriously the way Farocki does, the distinction between aesthetic and information value recedes. The question rather becomes what these images do and what can be done with them. If there is a metaphor that anchors these images, it is that of circulation and transport itself; a concept that acts like a gravitational centre and that applies to the city’s infrastructure as well as to the traffic of images circulating.

—Volker Pantenburg, Manual: Harun Farocki’s Instructional Work

Urban Light

A fitting opening scene for this chapter: the sky over Shanghai was illuminated by a coordinated swarm of drones drawing an image. Such marketing stunts had been staged several times before, but this image was a QR code, an executable “image” hovering over the urban landscape. Part of a marketing campaign by the Chinese media and video-streaming company Bilibili, the QR code linked to a new role-playing game, Princess Connect! Re:Dive.1 As an advertisement, the rewritten sky is a high-tech version of a point that Lawrence Grossberg made about interstate highway billboards in the 1980s: billboard ads relate specifically to the logistical system of highway traffic and, as such, to their primary task of advertising, which is why a billboard is “not a sign to be interpreted, but rather, a piece of a puzzle to be assembled.”2 It is also why, as the sky transforms into a puzzle and assembles into patterns, the interpretation of “what is that” turns quickly into “what does it do” as you point your mobile phone to read the screen of the sky.

For Grossberg, the point relates to how one reads popular culture without prioritizing what it “means” in the traditional sense: “I want to suggest that interpreting the effects of popular culture, and its politics, is less like reading a book than like driving by the billboards that mark the system of interstate highways, county roads and city streets that is the United States.”3 The primacy of circulation, navigation, orientation, and clicking through a variety of interfaces—readable or executable—persists even if in a different register. The executable marketing image assembled by a swarm of drones is not like reading a book. It is one form of an operational image that takes on different functions as per the main threads of this book: not just beside a highway but in other logistical, infrastructural, and mediated situations, it fills the sky just as QR codes now fill urban facades and other surfaces, marking them as potential spots of executable space disguised as an image.

QR codes are a prime example of how the operations of the city are amplified and channeled through machine-readable surfaces. Hence the Shanghai stunt is perhaps best read as a way to realize the radically increased read/write surfaces that define a city. As N. Katherine Hayles has argued, QR codes are part of the broader augmented reality of the city as image markers that “unlock gates to physical locales.”4 Christian Ulrik Andersen and Søren Bro Pold describe a similar pattern as the urban metainterface that “semiotizes” space anew through, for example, app-enabled property regimes such as Airbnb, Uber, and TripAdvisor.5 However, whereas they speak of the city as text, I refer to an operationalization of the city as a material process of imaging that triggers multiple levels of non-textual processes (often referred to in generic terms as “automation”). Either way, such arguments about reformatting the city underline many of the themes in this book, both about action and execution (as per the themes of software studies mentioned briefly earlier) and about digital forms of control, including property control, where images are central structuring elements in the sphere of environmental computation in and out of the smart city.6

Images are temporary roadside stops for attention in a circulation of information gathered as data for further modeling and execution. They are also sites of data aggregation (detection and capture) or of channeling traffic (opening gates, executing remote orders, etc.). Contemporary scholarship also has to observe how those networks are embedded in forms of architecture, building on the work on the networked image, where the snapshot is in circulation and the image appears on digital screens and, even more so, as invisible (or invisual) data on platforms.7 This condition, sometimes referred to as augmented reality8 and strongly resonating with N. Katherine Hayles’s analysis of the nonconscious cognition of automated technical systems,9 relates to the multiscalar processes where images play a part in control and mobility, action, and guidance on and off actual image devices. Here, the (photographic or other) image is distributed, and the whole operational apparatus is, in fact, one interlocking recursive system.

The two previous chapters have paved the way for this one. Chapter 3 focused on the measurement-image in practices of reading territorial planes, and image planes provided a way to move on to the question of the operational aesthetics of large-scale surfaces: operational images featured in photographs, technical drawings, and more recent remote sensing and machine learning techniques that both analyze and synthesize geographical areas as territorial planes and data. The planimetric management of the planet as an image ran through that discussion, built on the argument of operational images as navigation, as processual aesthetic, and as a technique of infrastructural management. The operational aesthetic proposed in the previous chapter addressed a method of training related to scales and infrastructures of images and what their institutional role tells us about their territorial situations, or the sites of operational images.

In this chapter, we build on other creative examples that can be read in relation to what the operational aesthetic implied: a process-focused, dynamic insight into images that have a fundamental impact on modeling the world and that help us to understand the links between image and infrastructure, and between spatial data modeling and its various devices. In other words, we move from the rural worlds and earlier colonial expeditions of operational images to contemporary networked images and urban events, what Mark Andrejevic has recently coined the “operational city” (a term he uses to discuss “smart cities”).10 We will build on earlier notions of platformed invisuality, navigation, and traffic, as well as images as data points, while using selected design and artistic practices to define contemporary images’ broad range of operationality.

Light is a crucial reference point insofar as it relates to the techniques of switching from visible light to invisual data, which is the operational element distributed across surfaces, vehicles, and other relays of action and executability. Cities are and have been full of light and images, and the urban sphere of light and technology has been a vital part of the emergence of the contemporary link between media and subjectivity. Photography brings things and events to light, and the city is itself full of light and movement, something captured in writing, theory, images, and films by Siegfried Kracauer, Dziga Vertov, and Walter Ruttmann, among many others. One such take is Farocki’s on the operational, logistical city of transport in Counter Music (2004) as well as the interior space design depicted in Creators of Shopping Worlds (2001)—in essence, a film about data, urban space, design, and affective capitalism.

Early twentieth-century writing on the nexus of media, space, and the city focused on the rescaling and shifting perspectives, a spatiotemporal reorientation in views deemed phenomenologically impossible at an earlier point. The city was one site for dynamic transformations of speed, perspective, scale, and experience. Technical media thus feature not only in the scalar operations that brought the invisible to visible manipulation but also across the city as an audiovisual (design) lab of its own. As per Kracauer: “For the first time, the inert world presents itself in its independence from human beings. Photography shows cities in aerial shots and brings crockets and figures down from the Gothic cathedrals. All spatial configurations are incorporated into the central archive in unusual combinations which distance them from human proximity.”11

It was cathedral facades, photographs, and photogrammetry that featured in the chapter 3 discussion about Meydenbauer, who was interested in a “central archive” of the cultural heritage of architecture—of things measured and recorded as data. Here, in the technological city, though, data and signals take precedence. From them, one also reconstructs the city as it has happened, as it is happening, as it will (perhaps) happen from capture to prediction. Similarly, as the whole book is concerned with the various sites and situations where images and data switch into each other, here too the city is a place where light becomes an image, an image becomes data(set), a signal becomes a visual model, and a model becomes a prescription of the operational elements of the city from the control rooms to the distributed computational events.

Two kinds of questions define what I am after in this chapter. The first sounds peculiar: What kind of light defines the city? Scholars of urban vernacular photography might have their response already in mind; theorists and architects investigating urban lighting since (at least) electric light and the modern media-technological spectacles of the neon city might have theirs; artists defining facades through urban projection practices might have many ideas to contribute. But here, the focus is on a particular light-pulse and laser-scanning technology, lidar, which triggers computation of the city as a massive distributed light event: it sends either near-infrared laser light pulses or “water-penetrating green light,” and the returning signals are captured with a GPS receiver.12
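To make the pulse principle concrete, the ranging relation behind lidar can be given as a short worked equation (a textbook time-of-flight form, not drawn from the sources cited here): the instrument times the pulse’s round trip and halves it.

```latex
d = \frac{c\,\Delta t}{2},
\qquad \text{e.g. } \Delta t = 666.7\ \text{ns}
\;\Rightarrow\;
d \approx \frac{(3\times10^{8}\ \text{m/s})(6.667\times10^{-7}\ \text{s})}{2}
\approx 100\ \text{m}.
```

Millions of such timings per second, each tagged with the scanner’s position and orientation, are what turn the pulse into a measured model of the city.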

The second, ensuing question concerns the computational images that follow once this particular kind of light becomes scannable, data-prepped, and formatted, which relates to the broader machine-readability of the city. How is light operationalized in laser scanning, as data and modeling, and how does it feed the platforms that displace visible light? Here, the question of the visible or invisible is again retuned to the invisuality of those platforms on which seeing is distributed even beyond singular machine vision technologies. Designer-artist groups such as ScanLAB Projects also feature in this chapter, presenting their own terminology through visual imaging practices and coining the “post-lenticular” landscape.

Visual theory and photographic studies have only begun to deal with the discoveries of the early nineteenth century, such as the infrared (by Frederick William Herschel) and the ultraviolet (by Johann Wilhelm Ritter), as visual realities that form an alternative genealogy to the technical media of the past two hundred years. This genealogy leads to machine vision and remote sensing, where different light spectra can be registered and used in different institutional contexts (see Figure 3 in chapter 2). The implication is this: tell me which spectra of light you are interested in, and I will tell you what your discipline is. What are the inroads for critical humanities and digital culture studies into these other realms of light, these other realms of seeing and detecting that have become far from marginal? Hence, mapping such a variantology of media is still a worthwhile task for understanding what else, other than just pictorial images, has been at play in scientific practices of images.13 Together with computer-generated images, data, and measurement, these spectra constitute forms of nonrepresentational imaging.14 But, of course, the two lineages are not separate, even if photographic theory and history have tended to focus more on capturing light through things we call “cameras” instead of, for example, sensors.15

In more specific terms, following Jennifer Gabrys, we can continue this line of thought about invisual culture premised on sensor data:

These shifts in the practices of processing image data as sensor data are productive of sensor environments that create distinctly different engagements with imaging, not necessarily as an a priori fixation of visuality, whether as an epistemological or disembodied register, but instead as a processual data stream that irrupts in moments of eventfulness and relevance across data sets comprised of multiple sensor inputs.16

The interplay of images and data is thus a problem specific to the history of science and new forms of imaging. It becomes relevant to how images are institutionalized in data culture across different realms of property and governance: to own and guide, navigate and control. Sensor instruments might not always create images, but even data can be quickly converted into images—and vice versa.17 Furthermore, this is not merely a question of traditional images that depict a thing or two but of platforms that constantly read and write space and produce data clouds and visualizations, models and mediations as operational “guidance”: this is how you should see and perceive, this is where you go, this is how you go there.18 Commercial augmented reality of indoor spaces is one example of the operationalization of space as an invisual model that takes the platform as its modus operandi. Shifting from actively taking pictures to constantly sensing the environment, smartphones are thus an active part of modulating space in ways that incorporate the key elements of platforms tackled earlier in this book. In this vein, Google’s earlier Project Tango would be a property operation of modeling that works as a Geographical Information System (GIS), expanding what we think of as images.19 Following Tango, Google’s ARCore platform makes it possible to read/write the world through motion tracking, environmental sensing (detecting “the size and location of all type of surfaces: horizontal, vertical, and angled surfaces like the ground, a coffee table or walls”), and light estimation.20

The Pulse of the City

This chapter also continues earlier themes of the book: images are understood as part of the larger subset of “measurement,” whether this concerns photogrammetry (see chapter 3) or pattern recognition (see chapter 2), which form a fundamental background for the contemporary context of platforms and AI. From the singularity of an image, we shift to the mass image as it concerns the database and the dataset as infrastructures that rewrite urban and nonurban space.21 A persistent and yet peculiar dilemma haunts so much of Farocki’s interest in the “standard image”22 of contemporary culture: Why are there still images in computational culture? Why do we see anything? Hence the two hundred years of machine vision23 is an assemblage that has prepared some responses to this question, asked so many times across the various operations and instruments of making images: What kinds of surfaces record visible light, what surfaces record invisible light, what forms of rays travel as straight lines and which ones are bent, and what is the form and format of aesthetic that fits this expanded sense of the sensing?24 What are shadows in images, and how do interplays of shadows and light turn into data that can be transported across distances as photometric tables or other forms of operational images?

Measurement of the intensity of light—photometry—forms a less-discussed part of the visual–invisual spectrum, but the point is clear: light is not merely there for us to see but to make things seen and measured. From photographic photometry as a technique central to astronomical and other scientific measuring to the use of experimental techniques for measuring the higher atmosphere, detection is a central part of this sensor-oriented way of understanding practices of light.25 For example, already in the 1920s and 1930s, certain kinds of “scanning” used searchlight beams and photoelectric measurement of returning radiation to establish scientific models of the sky.26 While searchlights have a central part in the military history of sensing,27 this work demonstrated an awareness of creating artificial “scenes” of light that were not merely a Nazi spectacle of the 1930s but a means of illuminating the sky so as to measure otherwise imperceptible wavelengths of light. Edward Hutchinson Synge’s pioneering work is often mentioned as an early form of lidar imaging and thus “laser” scanning; the link is mostly due to his theoretical ideas and design plans for telescopic devices tapping into wavelengths below visible light.28 This capacity had already been part of some practices in astronomical imaging, too, with chemical photographic images in some cases able to capture more than humanly visible light. Hence the task was not only one of making visible but also of detecting data from these wavelengths and other realms through extended practices of light. Insights into making visible are part of how we relate light to spatial contexts of illumination—real and metaphoric—but some of the operations of light are very much bound to their temporal underpinnings, evident in techniques such as the flash and the pulse. It is in temporal conditions that things are perceived.

Both distance and scale became crucial in techniques of working with light that extended geographic measurement and developed new forms of photogrammetry. We can track a particular period of invisual culture reliant on, first, visible light and its dynamic signal-based capacities (and detection capacities) that lead to a genealogy of signals and pulses of light and sound; and second, contemporary sensor and data technologies that map the city as one intensive, complex landscape of dynamics and navigation. This same genealogy links to Christian Doppler’s research into light waves and far-away planetary objects in the 1840s, and to the later (1920s, 1930s) realization that the physical world is an (often light-emitting) broadcasting station of sorts.29 Later in the 1930s, the invention of radar as a technique for the synesthetic transformation of sounds to images foreshadowed the recent discussions about lidar imaging and autonomous cars: a light echo as the pulse of the city.

Alternative contexts of sensing or seeing, with a specific focus on technologies of pulse and light, have emerged as one key type of time-critical and posthuman visual measurement and observation.30 Of the multiple technologies of pulse and light, lidar stands out as one of the most discussed examples in architecture and urbanism. It has become a widely used technique across a range of fields in scientific measuring, including architectural modeling. Since professional-level lidar scanners have been incorporated even into iPads since 2020, the technique allows geospatial data production for building information modeling (BIM), surveys, and contexts where accurately measured 3D images are needed. Thus from 1960s aerial surveys to more recent specialized uses like “land management and planning efforts, including hazard assessment . . . , forestry, agriculture, geologic mapping, and watershed and river surveys,”31 it has become part of the shared lineage discussed in chapter 3. As measurement of structure and shape, it is used for surfaces, atmospheric particles, and the built environment.

Lidar might be of a different scale than techniques of cartography such as GPS, but it is part of the shared, broader aim of making “space legible and governable”32 through accurate measurement. Here, knowledge of location and features is related to potentials of action—thus such operations of “actionable geographic knowledge” became detached from mere national projects of mapping and integrated a new transnational, multilayered sense of space as an affordance. In short, and as William Rankin puts it, the territory is “defined by practices of knowledge,”33 a theme not unrelated to the earlier discussion on geodesy and photogrammetry too. However, if in that discussion (chapter 3) the issue was how to model and understand the path of light rays that can then help to measure space in and from (also pictorial) images, here the procedure is automated: incoming signals beyond visible light are used to model space in algorithmic software environments.

Furthermore, the case of lidar presents an excellent way to look at operational images through techniques of scanning. Beyond semantics, this refers to techniques capable of capturing data and forming quantified, statistical, and reproducible images of territory (a page of a book, a 3D human body, or topographic features of a landscape).34 Lidar imagery is mobilized in technical uses from modeling to scanning, design, and surveys but also in several contemporary art projects investigating the actionable urban space and its matters of sensing. This is machine vision, but not only: it also concerns the visual processing of the world as data and as such concerns the broader computational ecology—and platforms—of those operations. Furthermore, 3D scanning is incorporated into many contemporary data operations already in place. It becomes defined and redefined in alternating institutional operations. As Mario Carpo narrates, the Microsoft Kinect device for gaming included “an ingenious depth sensor that worked by triangulations, using a laser projector and a camera.”35 The Kinect is, thus, a de facto scanner that then became repurposed from gaming to architecture, too, where the sensor data could be used in CAD software platforms. Here, scanning becomes a way to sense space as time, signals, and calculation with different technical solutions to the issue of measurement.

Depth-sensing technologies are evolving quickly: some use traditional laser or infrared beams, and calculate distances based on the time of rebound (also known as “time of flight”), or, increasingly, by reading the difference in phase between outgoing and returning beams; some use triangulations, like the earlier Kinect machines did, with a laser beam sending a marker to the target and a camera, in another vertex of the triangle, to read it (variants of this method are known as structured-light depth sensors); and so on.36
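To keep these measurement principles concrete, here is a minimal sketch of the three methods Carpo lists; the numbers and function names are illustrative assumptions rather than parameters of any system discussed here.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Pulsed time-of-flight: the beam travels out and back, so halve the round trip."""
    return C * round_trip_s / 2

def phase_distance(phase_diff_rad, modulation_hz):
    """Phase-shift variant: distance from the phase difference between outgoing
    and returning beams (unambiguous only within half a modulation wavelength)."""
    wavelength = C / modulation_hz
    return (phase_diff_rad / (2 * math.pi)) * (wavelength / 2)

def triangulation_distance(baseline_m, camera_angle_rad):
    """Structured-light triangulation, Kinect-style: projector and camera sit a
    known baseline apart; depth follows from the angle at which the camera sees
    the projected marker (beam assumed perpendicular to the baseline)."""
    return baseline_m * math.tan(camera_angle_rad)

print(tof_distance(666.7e-9))              # ~100 m: an echo in two-thirds of a microsecond
print(triangulation_distance(0.075, 1.4))  # ~0.43 m depth from a 7.5 cm baseline
```

Whatever the method, the output is the same kind of operational quantity: a distance per direction per instant, ready to be aggregated into a model of space.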

In many ways, it is also clear that this form of imaging is not about visuality per se, but about a particular temporalized relation to space, even navigation: these are not images to be seen, but terrains to orient oneself in, whether the navigation concerns vehicles, bodies, or other objects in dynamic interrelationships.37 In this context of images that echo across the city, to speak of the pulse of the city is not a metaphor but a technical description: lidar as light radar is a technology of millions of directed (ultraviolet or near-infrared) light pulses per second, where the returning signal is recorded and modeled accordingly. Synge’s scientific work on capturing ever smaller wavelengths becomes the backbone of this invisual city. An image of the city, an image of clouds, an image of complex formations; each is measured in front of our eyes and fed into a different atmosphere than the one high above our heads: the cloud-based, platformed computational capacity to capture light becomes a central element of the operational landscape.

Consider the city as an already complex formation; it is a pattern of dynamics upon which media technologies attempt to build their sensing, modeling, and imaging networks. Cities are full of signals and light; they are full of cameras and techniques of observation; they are full of sense and sensing, of surfaces that reflect and refract light. The city is continuously being seen, registered, and measured from shop windows to closed-circuit televisions to motion tracking and remote sensing. However, an increasing amount of the technological seeing and observation that takes place in the large-scale visual landscapes of the city works to question traditional photographic modes of understanding visual power, introducing different genealogies of what the city is, as an assemblage of materials and seeing, as movement and large-scale dynamics.

The city is thus a perfect test case—a laboratory even—for technologies and images that both relate to a genealogy of machine vision and reveal new aspects of it. Ecologies of sensing exhibit a vast multiscalar complexity that speaks to many changes in human perception; however, it is more likely to be “discorrelated”38—in Shane Denson’s words—as multiple levels of events that do not add up into a single take on what is being sensed but instead are an ecology of networked circulation of images and data that prescribe different affordances of action. Even action changes in the sense of what was considered central to photography: instead of an image formed of a singular shot, a constant background sensing models space and action in real time and in predictive time. The pulse operating in lidar images is also a good conceptual route to understand the diversity of temporal operations that take the place of the earlier conceptualizations (or build on them), such as networked images.

The city becomes a site of posthuman forms of pulsated sensing. This refers to the various autonomous systems that now process things we still call “images” for the sake of convenience and familiarity. For example, if the theoretically tuned media archaeology of the camera has until now focused on the detachment of the seeing eye from both the human body and the act of seeing,39 we can now articulate how this camera-eye got fixed on the (autonomous) car, where it sees in ways that are not just seeing but modeling, mapping, measuring, predicting, and a range of other cultural techniques that pave the way for a wider set of infrastructural implications.

In other words, the forms of observation now introduced—whether through WiFi signals that can model space into an image or through lidar, radar, or cameras as measurement instruments—are part of the operative ontologies. These do not focus on the act of seeing at the site of the device or the body of the perceiver, but on the connected networks where multiple feeds are part of the dynamic formation of an image in real time. Again, the double aspect of operations is at stake: images that operationalize their environment and images that primarily operate and only secondarily represent something, if anything.40

Here is precisely where we shift our question from What does the computer see? to What do the sensor and the scan do in the context of sensing and movement? Any discussion of the digital that obsesses over the isolated ontology of representation based on technical qualities of the image is somewhat misguided—at least when it comes to the attempt to understand the transformation of images.

Post-lenticular Imaging

“The modern lens is no longer tied to the narrow limits of our eye,”41 László Moholy-Nagy declared in an earlier period of experimental practices in the 1920s and 1930s. But now, it might not even be a lens where the capacities of the eye are narrowed or expanded. Even more so, it might not be an organic eye that is at stake. While the term “post-lenticular” could be seen as describing a wider shift in visual technologies and photography, it is also embedded in the title of a work, Post-lenticular Landscapes (2016–17), by ScanLAB Projects. In 2016, they traveled to Yosemite National Park to produce a 3D hologram model of Yosemite Valley using laser-scanning technology. From lenticular photography to laser-based pulses, from scanning to modeling, the work was contextualized as part of the history of landscape photography, continuing Eadweard Muybridge’s famous Yosemite series of the 1870s and the various professional and amateur images produced since.

Besides the history of the site itself, the work is situated in relation to the legacy of photogrammetric modeling of geological landscapes, where Edouard Deville’s large-scale experiments “with dry plate cameras within the Rocky Mountains beginning in 1887”42 are one significant reference point among many for the surveys and models. Such references help contextualize where post-lenticular landscapes fit in the lineage of operations of measurement and surveying. The format of the abstract landscape, produced from the trigonometric measurement of the earth’s surface and shape from the eighteenth century onward to later abstractions of ratios from the photographic image, is part of the said operationalization of landscapes as territories (see again chapter 3). While measuring produces possibilities of standardization and comparison—image relations as data and currency—such measures encapsulate different histories too. These histories are about not only technology but the codetermining ecologies where operations become meaningful as they hit the ground. It is, however, important to note that the location of Yosemite and the history of “wilderness” (produced through the photographic) is part of the troubling legacy that can be considered part of colonial “extractivism” and also the consumerism of landscapes that erases native inhabitants from the picture.43 And while I remain uncertain whether the notion of “extractivism” can be extended to apply smoothly to operations of data (and images), it is very clear that such practices were also part of the production of a hallucinated terra nullius of sorts, a violent erasure of indigenous lives and traditions in the midst of particular settler colonial ideals of conservation.44

For ScanLAB, the cameraless 3D scanning of the national park becomes a test case for the technology that emerges from aerial imaging and large-scale surveys. The abstract measurement of what is contained in an image frame (or photographic plate) is transposed onto the more advanced techniques that model space to register the returning (invisible) electromagnetic signals. This is specific to lidar and scanning, and it also links to transformations of imaging at large. As John May put it, “All imaging today is a process of detecting energy emitted by an environment and chopping it into discrete, measurable electrical charges called signals, which are stored, calculated, managed, and manipulated through various statistical methods (Bayesian, Gaussian, Poissonian).”45 It seems fair to put laser scanning in that same repertoire of techniques.

A ghostlike, distorted landscape in black and white, depicting the edge of a forest with possibly a river flowing through.

Figure 24. Equirectangular Landscapes 05: Nevada Falls (After Muybridge), based on 3D scan data captured in Yosemite National Park in 2016. ScanLAB Projects; reprinted with permission.

Laser scanning moves the focus from the apparatus of seeing to the sensors, processing, and infrastructure in which imaging is produced. While a longer discussion would be able to relate it to the legacy of cameraless photography, here we focus on the contemporary contexts of making images that hover between aesthetic sights and epistemic operations beyond lenticular apparatuses. In Peter Ainsworth’s technical summary:

Unlike traditional optics-based camera recording, there is no lens involved in the apparatus. What is recorded at the moment of capture has no fixed application beyond the retrieval of data from a given directional sensor, and the apparatus is not solely focused towards image-making—it measures distance between the sensor and the terrain in the creation of a cloud of multiple convergent points. These point clouds, however, are often also combined with a traditional photo-based process, whereby the lens-based camera is positioned in the same space as the laser scanner, with the images subsequently mapped onto the surface of the polygon mesh—the computationally joined-up point cloud—in post-production. However, in the Post-lenticular Landscapes, we are solely presented with the experience of the cloud—a hyper-detailed pointillist rendering of landscape in stark black and white.46
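A minimal sketch of the geometry behind such a point cloud, assuming an idealized scanner that reports azimuth, elevation, and range per pulse (the meshing and photo-texturing Ainsworth mentions would come afterward):

```python
import math

def to_point_cloud(returns):
    """returns: iterable of (azimuth_rad, elevation_rad, range_m) laser returns.
    Standard spherical-to-Cartesian conversion: each return becomes one point
    of the 'cloud of multiple convergent points' before any meshing."""
    cloud = []
    for azimuth, elevation, rng in returns:
        x = rng * math.cos(elevation) * math.cos(azimuth)
        y = rng * math.cos(elevation) * math.sin(azimuth)
        z = rng * math.sin(elevation)
        cloud.append((x, y, z))
    return cloud

# One return, straight ahead and slightly upward, 25 m out:
print(to_point_cloud([(0.0, 0.1, 25.0)]))  # [(24.87..., 0.0, 2.49...)]
```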

Large-scale landscapes and urban dynamics become sites and time-critical events of measuring and modeling integrated into contemporary contexts of the multiscalar ecology of sense, data, and light. As ScanLAB articulates the use of laser scanning, this shifts the focus from the camera to scanning, but it also returns to “Muybridge’s original endeavour to capture the scenes in three dimensions as stereograms.”47 It also relates to the operational images of photogrammetry articulated in relation to photography across the nineteenth century and early twentieth century. It harkens back to before technical images (primarily drawing but also camera obscura and lucida) and after photography: scanning.

Over two weeks, ScanLAB traversed the valley to scan the terrain with the 4x4 vehicle they had turned into a digital base camp to process images. The stereogrammatic view was technically updated, including a particular awareness of the logistics and infrastructure of imaging. In their words, even the transport of “such high tech equipment into a comparatively inaccessible environment formed a major part of the re-enactment, mirroring the epic nature of the early pioneer photographers.”48 This links to the recurring theme in this book about technologies of imaging as fundamentally about logistics and infrastructure. The focus on the image gives way to the primacy of operations that establish the possibility of any image. This point becomes even more clear in the context of lidar applications of contemporary urbanism and autonomous cars, which, as we know, are techniques of movement and navigation. The traffic of images becomes a literal part of the infrastructural arrangement.49 However, it also points out that traditional genres of imaging and photography, aesthetics, and cultural history are entangled through the many and speculative uses of new technologies. In such experiments by ScanLAB Projects, photographic and rhetorical tropes such as landscapes of Romanticism shift gear and site. As Geoff Manaugh writes: from “extreme landscapes—as an art of remote mountain peaks, abyssal river valleys and vast tracts of uninhabited land,” the new Romanticism is one not seen but sensed through “autonomous machines.”50

A clean indoors space with large windows. Inside the exhibition-type space is a car with several screens surrounding the car.

Figure 25. Post-lenticular Landscapes. Installation view at Hyundai ARTLAB, Seoul 2018, originally commissioned by LACMA. On view are Urban Diorama (holographic vehicle), 2016, Post-lenticular Landscapes (4k Animation), 2017, Equirectangular Landscapes 01–06 (Prints on aluminum), 2018. ScanLAB Projects; reprinted with permission.

A romanticism of wilderness gives way to a new romanticism of the platform city. A romanticism of nature gives way to a romanticism of the automated artificial city. ScanLAB’s Post-lenticular Landscapes is in direct relation to their project The Dream Life of Driverless Cars, where lidar takes a central role as the standard technology of imaging, processing, and transmitting movement in movement. Embedded vision systems are an integral part of different autonomous vehicles (from cars to drones) and describe the link between the reformatting of space through their operational images and the broader data-analytical infrastructure. Here, the invisual event of sensing and navigation becomes distributed across the different components that make up these computational systems.

Several technical solutions, such as the Myriad 2 Visual Processing Unit, have already been mentioned in earlier scholarship as examples of embedded vision systems. As McCosker and Wilken write, “The chip is designed specifically for processing machine vision and integrating data from multiple sensors to allow the device to make sense of surroundings, avoid obstacles or track and follow objects.”51 Myriad 2, the Intel Movidius VPU, and graphics processing unit (GPU) cases more broadly are where much of the transformation of imaging capacities is taking place. This argument is outlined in detail by Ranjodh Singh Dhaliwal in his take on computational infrastructures reliant on the parallel processing power of contemporary GPUs.52 Furthermore, invisual practices and mobility are integrated on platforms such as those developed by NVIDIA Drive, which consist of multiple hardware and software elements where operational images are not only technical aids but part of the reformatting of urban sensing. NVIDIA Drive Hyperion is a hardware testing platform that consists of “a complete sensor suite—including 12 cameras, nine radars, and two lidar sensors—and the Orin-based AI computing platform, along with the full software stack for autonomous driving, driver monitoring, and visualization.”53 At the software end, the multiple kinds of sensing, perception, and prediction processes are well highlighted: integrating map and perception data, predicting movement paths and their relation to the vehicle, and monitoring the surroundings as well as the in-cabin situation of the operator or passenger, who also becomes a data point concerning their levels of “attention, activity, emotion, behavior, posture, speech, gesture, and mood.”54 The multiscalar operations of past, present, and near-future time are integrated into the different input sequences (of which lidar is merely one) that model the dynamic environment in relation to the networked AI system of the car. This is where the “operational” stands out, not only as an image but as an environment of synthesis that produces not one accurate representation but the real of variations.55
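Since the platform description stays at the level of a components list, a schematic sketch may help fix the ideas. This is a generic fuse-then-predict loop under simplifying assumptions (pooled detections, constant-velocity extrapolation), not NVIDIA’s API or any actual driving stack:

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position in the vehicle frame, meters
    y: float
    vx: float  # estimated velocity, m/s
    vy: float

def fuse(camera_tracks, radar_tracks, lidar_tracks):
    """Merge detections from heterogeneous sensors into one world model.
    (Real stacks associate and filter tracks; here we simply pool them.)"""
    return [*camera_tracks, *radar_tracks, *lidar_tracks]

def predict(tracks, horizon_s=3.0):
    """Constant-velocity extrapolation: a crude version of the 'tiny futures'
    computed a fraction ahead of direct perception."""
    return [Track(t.x + t.vx * horizon_s, t.y + t.vy * horizon_s, t.vx, t.vy)
            for t in tracks]

world = fuse([Track(20, 0, -2, 0)], [Track(5, 3, 0, 1)], [])
print(predict(world))  # where each tracked object is expected in three seconds
```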

The intense demands on the sensor and neural network infrastructure of autonomous cars are due to the required capacity to process, in situ, a massive amount of incoming data. The environmental dynamics are one aspect of this multiscalar operational image, which is not found (only) on the screen as it is distributed and embedded in world objects. At the same time, the vehicle transmits data in real time to the wider platform of which it is part. In this sense, autonomous cars are an issue of data transmission while also reserving a place for images and movement navigation: not merely the here-now but the capacity to construct navigable futures via PredictionNet. What’s more, this concerns not only a reading capacity for an agent in traffic but a (re)writing capacity, as per the feedback loops of prediction systems and the vehicle’s decisions.

Back in ScanLAB’s city, the world becomes detected in different light pulses; as a scan in movement, it also becomes a recursive image for navigation:

As the scanner moves through the city, slowing for speed bumps and stopping in traffic, the city map created warps and extends depending on the speed at which we move. Stuck in traffic a Routemaster bus becomes an elongated, narrow corridor, broken only by the shadow of a passing cyclist. Turning the corner into Parliament Square duplicates Big Ben as we observe the tower for a second time.56
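The warped geometry described here follows directly from how a mobile scanner assembles its map. A toy model (my assumptions, not ScanLAB’s pipeline): each laser return is stamped with the scanner’s own position at emission time, so an object keeping pace beside it in traffic is re-sampled for seconds on end and smears into an elongated corridor.

```python
import math

def assemble_map(returns, scanner_x_at):
    """returns: (t_s, bearing_rad, range_m) tuples; scanner_x_at(t) gives the
    scanner's position along its direction of travel (+x) at time t."""
    return [(scanner_x_at(t) + rng * math.cos(bearing),
             rng * math.sin(bearing))
            for t, bearing, rng in returns]

# One point on a bus 3 m to our left, hit once a second while both crawl along:
returns = [(float(t), math.pi / 2, 3.0) for t in range(10)]
streak = assemble_map(returns, scanner_x_at=lambda t: 1.0 * t)
print(streak[0], streak[-1])  # (0.0, 3.0) ... (9.0, 3.0): one point, a 9 m smear
```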

The Dream Life of Driverless Cars offers a ghostly apparition of the scanned city, shifting through street scenes without humans or other organic life, passing by semitransparent buildings with their collective architectural layers. The scanner moves through the city as it observes, tracks, and facilitates the vehicle among the city’s multiple other mobile and immobile parts. As a second-order record of movement moving, it senses movement within an ecology in movement.57 Movement, sensing, and calculation intersect, pointing to the invisual operational image ecology that reformats the city. City surfaces and neural networks are in computational dialogue.

Light exists as traveling energy that can also be data; light is produced so that it can be registered as mass image datasets. Geoff Manaugh proposes that ScanLAB’s Dream Life of Driverless Cars is a form of the transformation of cityscapes and visuality, images and data, movement and seeing movement, echoing some of the concerns that Paul Virilio had tracked as a central part of the transformation of visual culture since the early days of modern technical images. Manaugh also outlines the case of autonomous cars as a way to understand the visual change of environments: the use of 3D scanning lidar sensors in autonomous cars relates to a particular navigational way of mapping the city not merely as one image but as an ecology of machine-flickering signals that captures “extremely detailed, millimeter-scale measurements of the surrounding environment, far more accurate than anything achievable by the human eye.”58

Such multiscalar light emissions are premised on a particular ecology of time when millions of light bursts per second echo back their pulsations at great speeds. In many ways, this is the contemporary version of the Doppler effect, named after Christian Doppler’s mid-nineteenth-century investigations, insofar as it includes some early ideas about the echo-pulse principle of measuring objects and their movement through light signals. Doppler’s influential research, which can be quoted in relation to multiple engineering and research innovations that characterize later technical media and remote sensing, was focused on planetary movements and measuring light. According to Doppler, color depends on the frequency of light, which led to the observation that objects in movement emit different frequencies relative to each other. The remote sensing analysis of spectra of light, already discussed in the preface through the Fraunhofer apparatus and then the Harvard astronomy photograph analysis, is continued in this case. Doppler’s point implied that a transmitted pulse signal returning from a measured source could communicate the location of that object based on its frequency, leading to all sorts of implications for navigation and observation. So, while the 1842 paper “Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels”59 was about the extraplanetary scale of the measurement of light and movement, it also began to speak to measurements at other scales too: the world is measurable as a function of signals and their echoes, of pulses and their reverberations. As you have by now noticed, throughout this book the astronomical is interfaced with the surface level through measurement, calculation, and comparison. Analysis of light is an analysis of remote objects and movement, which comes to specify ways of interacting with movement, too; an operational invisuality.
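In modern notation (the standard first-order form rather than Doppler’s own), the shift reads as follows; an echo-pulse instrument sees the shift twice, out and back, hence the factor of two in the second expression:

```latex
\frac{\Delta f}{f} \approx -\frac{v_r}{c}
\qquad\text{(target receding at radial velocity } v_r\text{)},
\qquad
\Delta f_{\text{echo}} \approx -\frac{2 v_r}{\lambda}.
```

The measured frequency shift thus carries the target’s radial velocity, which is exactly what turns a returning pulse into navigational information.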

Twentieth-century variations of Doppler-type modeling of space (outer space and down on the Earth’s surface) include the military technologies that observe and see without eyes, a synesthesia where sound turns into image (radar) and echoed pulse signals turn into visual perception. As Ryan Bishop and John Phillips write, “Radar technology allows soundwaves to see,”60 which crystallizes a larger trait that runs through modernist aesthetics and the military–technological infrastructure that reorganized forms of perception, visibility, and invisibility across the twentieth century. While the ping-pulse enabled the reading of large-scale environments, the screen was only a part of the wider network of sensing, with control and warning stations peppered across strategically important locations. As one gridded and gridding response (see chapter 1) to the problem of spatial control, the radar screen and its subsequent development into interactive computer screens was a way of turning incoming signal pulses into possibilities of intervention, a “repertoire of latent actions.”61 These actions, receiving pulses on the screen, are, however, related to an understanding of the broader territory as one of screened space, as Bernard Geoghegan argues:

Radar operates according to a nested series of screening operations. The first screening typically happens at the level of the environment itself: radar transformed hundreds of miles of open space into a surface for the reflection of radio waves traveling at a rate of 186,000 miles per second before reflecting back to a precisely tuned receiver. Airplanes and other objects in open space became screens that, like an inverted X-ray photograph, return waves to their point of emission.62
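A quick arithmetic check with the quote’s own figure shows the timescale these screening operations worked on:

```latex
t = \frac{2d}{c}
= \frac{2 \times 100\ \text{miles}}{186{,}000\ \text{miles/s}}
\approx 1.08\ \text{ms}
\quad\text{for an aircraft 100 miles out.}
```

The entire hundred-mile round trip of detection happens in about a millisecond, which is why the screened environment could be treated as effectively simultaneous with the events it monitored.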

From screens as technological entities to a screenable environment, from the interpretation of incoming signals to networks of actionable intervening, the territorial scale of operative images becomes clear when read through a genealogy of radar. Furthermore, it relates to how any signal can be modeled into an image or a model and how pulsating worlds form the backbone of their aesthetic–epistemic modeling.

Beyond a “natural” pulse that can be modeled, devices can send out pulses that shed light on where one is, where one is going, and how one should respond. Lidar, beaming its millions of pulsing light bursts per second across urban surfaces, is one form of such environmental screening. It also acts as the upgraded laser version of flash photography and the probe light, each of which had a revelatory impact on the history of photography: artificial light that enables seeing by a pulse quicker than the human eye. The invention of the flash introduced light that escapes even the blink of an eye, producing an extension of photography and what would now be called an active sensor: the production of light to record and measure light.63 While it allows seeing, the flash itself remains beyond the register of the eye’s reaction.64 However, the autonomous car’s lidar system is not the flash of photojournalism but one of laser-based scanning operating at a different intensity. These laser images are at the center of ScanLAB’s work and include both the modern version of Muybridge’s Yosemite images and new photoscapes of cities as large-scale systems, where seeing is not limited to the humanly visible and images capture much more than that limited spectrum.65

However, the scanned lidar visualization is not of the usual scale of an image made of light; instead, the millions of tiny light bursts form the technical (and operative) ontology of this sort of imaging practice. As already mentioned, it also relates to the complexity of the technological laser scan that records both as a movement across the city and as a sensitive way of dealing with light that itself is a proxy for the complexity of the multiple surfaces of the city. The city is an image; the light is a proxy; the capture of light is a capture of a city as it is alive on different levels of the electromagnetic spectra. This is also where part of the architectural appeal of this sort of imaging through energy comes from: it helps to map and model the complexity of dynamic surfaces of built environments (as well as so-called natural environments).

Besides the apparent technological accuracy, these are also systems of imaging that are particularly vulnerable to over-seeing, where all sorts of things like “complex architectural forms, reflective surfaces, unpredictable weather and temporary construction sites”66 can confuse sensors, misperceiving the cityscape in surprising, accidental ways. Sensors need to be constantly adjusted and calibrated, in a self-reflexive, slightly paranoid loop of am I seeing this correctly? This is a technical feature necessary for the various vision/invisual/sensor mechanisms of such vehicle systems as well as a fundamental feature of images as models (see chapter 2): the intervention into the vehicle’s movement, and thus the environment, is based not on a representational image but on a model, which is not necessarily a future that will happen but the one according to which the system operates. Such models are a codification of possibilities, the predicted future upon which action occurs.67 But this reflective reality, the over-seeing of light, the modeling of futures, also hints at what Manaugh aptly describes as “a parallel landscape seen only by machine-sensing technology in which objects and signs invisible to human beings nevertheless have real effects in the operation of the city.”68 Invisibility is not merely about the unseen signs and signals but also about statistical predictions: the invisual lurks not in the shadows of the optical spectrum but in the neural-network-calculated models of the thousands of tiny futures just a fraction ahead of direct perception.

“Now light, where it exists, can exert an action, and, in certain circumstances, does exert one sufficient to cause changes in material bodies,”69 wrote William Henry Fox Talbot in The Pencil of Nature in 1844. The changes referred to the registering surfaces of light—such as emulsion-covered glass or paper—but imagine this as a narrative that prioritizes exactly that: light exerting an action. Bodies react and are transformed, even guided accordingly. Sensors pick up light and respond. Light can also exert unusual responses. Accidents of over-seeing can cascade and be multiplied, and they can be taken as guidelines for investigations into contemporary visual, photographic, and scanning technologies. In other words, as glitches, they become ways to understand the functions of this form of imaging turned imagining. Dream lives can be imaginaries, but they can also be hallucinations with effective epistemic uses, such as in the machine learning calculation of what happens.70 Lidar here is only one part of the ecology of screening in actual territorial situations. Indeed, ScanLAB’s Dream Life of Driverless Cars was meant not as a technical demonstration of the accuracy of lidar as a stand-alone technical feature but rather as an experimental framework for a scanning device that also records its conditions of existence:

Their goal, Shaw said, is to explore “the peripheral vision of driverless vehicles,” or what he calls “the sideline stuff,” the overlooked edges of the city that autonomous cars and their unblinking scanners will “perpetually, accidentally see.” By deliberately disabling certain aspects of their scanner’s sensors, ScanLAB discovered that they could tweak the equipment into revealing its overlooked artistic potential. While a self-driving car would normally use corrective algorithms to account for things like long periods stuck in traffic, Trossell and Shaw instead let those flaws accumulate. Moments of inadvertent information density become part of the resulting aesthetic.71

A range of methodological implications come to the fore. On the one hand, contemporary uses of lidar rearticulate a relation even to lenticular histories of photography, thus investigating the relations and frictions with post-lenticular approaches. On the other hand, the alternative genealogies of imaging are one way to explore lidar as the echo and the pulse of the city, all the way from Doppler’s research into spectra of light to the forms of flash and photography that define the technological configurations of images in the age of photography. The pulsating screening of environments is also a way to understand that the projected plan of a smart, intelligent city—whatever you want to call it—starts already on the level of the existing city: its sentience, its materialities, its flickering lights.72 Or, as Bratton puts it, “The sensing and thinking systems are located not just in the valuable subjects and objects rolling around, they are built into the fabric of the city in various mosaics.”73 Thus what is at stake is not a machine vision of space as a representational entity of what has been seen. Instead, the ecology of models and operations starts in the fabric of the city, which itself is sensorial in the multiple meanings of the term: a multitude of computational sensing is matched up with materials and bodies of sense.

Sample Cities and Speculative Futures

Machine vision, technologies of scanning, and the broader non-human spectrum of the agency of images have already come into focus in the scholarship of photography and visual culture, questioning divisions between media-specific analyses such as cinema versus photography.74 This applies to the book you are reading as well. Operational images are not specific to only cinema or photography or, for instance, digital images. The term cuts across such distinctions in the attempt to understand the role of technical images in reformatting space as well as their role in transformations between images and data. That is already quite a broad, world-embracing topic. Still, the anchor here is the operational image, as both a situated concept (of the late twentieth and early twenty-first century) and a potential field of forces that also facilitates media archaeological excursions, as we briefly saw in earlier chapters. As such, it has methodological force.

In intersecting discussions, theoretical and historically sourced positions cut across individual technologies and genres of representation, which has started to broaden the scope of what is understood by machine vision.75 A similar move is presented here through the case study of lidar, which ushers us to the broader sensorial regimes of the city and the kinds of images being formed in that ecology of sensing. Machine vision is expanded from a set of algorithmic technologies of vision to material events of the city that are not necessarily computational in the traditional sense, and to the platformed invisuality that works with visual material (optical and nonoptical, lenticular and post-lenticular) but is “not of the order as visual,”76 as Mackenzie and Munster have demonstrated. Instead, traditional reference points of visuality and ordering such as “eye, lens, sensor, file, screen or database” are remixed in the “invisual image ensembles”77 and combined anew in the diagrammatic functioning of the platforms.

In this book, the notion of operation refers to this dynamic in new transversal connections and functions of images. In more specific terms, the pulsating light problematizes the idea of perception or visuality as a stable registering of forms situated in space, as if space were a container instead of a field of dynamic signals. The signal-based registering of the world means a move from representations of things to the modeling of events. Practices of modeling have a slightly different relation to the epistemic power of what is considered real; this is done less for representation than for prediction, monitoring, and intervention.

A scientific test situation with one to four people interacting being captured as images and turned into different data visualisations.

Figure 26. “WiFi antennas as sensors for person perception. Receiver antennas record WiFi signals as inputs to Person-in-WiFi. The rest [of the] rows are . . . images used to annotate WiFi signals, and two outputs: person segmentation masks and body poses.” Image text from original source, Person-in-WiFi: Fine-Grained Person Perception Using WiFi. Courtesy of Dr. Dong Huang, head of the DeLight lab at Carnegie Mellon University.

Besides lidar, other examples could be used. In passing, I already referred to “WiFi” as such a site of visuality/invisuality, which illuminates the argument I am making. In a recent paper, “Person-in-WiFi: Fine-Grained Person Perception Using WiFi,” a team of researchers suggested that WiFi antennas and signals are, in principle, sensors capable of detecting body postures and movement. This form of seeing beyond the visual spectrum is introduced as an alternative to camera-based, radar, and lidar (light detection and ranging) technologies that have already been used in the context of “people perception.”78 In a city—nowadays, almost any city—that is cut through by an extensive range of wireless signals between signal stations, this means that the city is continuously forming images. These images, however, are based on the sensing, registering, and algorithmic modeling of WiFi signals where what is usually considered transmission or communication turns into images, and where signal processing turns into spatial modeling in a reminder that everything that exists as a signal can also exist as an image.

Theirs is not the first paper to claim the usability of WiFi as a technique of sensing. The authors list earlier work in which 1D signal space has been reconstructed into “2D fine-grained spatial information of human bodies,”79 questioning the theoretical focus on the visual spectrum through the mapping of opening doors, keystrokes, dancing, and even static objects. Out of this broader ontology of urban “wirelessness”80 emerges the possibility of mapping, visualizing, and modeling space, humans, animals, objects—anything that reflects the signals of WiFi traffic and antennas. This quirky detail also reveals how vision, or even a metaphorical extension of “seeing” in this technological context, is not only a distinct sensorial capacity to be understood in relation to other human-based sensoria (hearing, touching) but also a technique of modeling according to sampling rates: “Lidars have sampling rate[s] in the range of 5–20 Hz, which is much lower than other sensors such as cameras (20–60 Hz) or WiFi adapters (100 Hz).”81
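
To make the arithmetic of that quotation tangible, here is a minimal, hedged sketch in Python: it simply counts how many samples of a brief bodily event each sensor’s rate affords, which is one way of reading “seeing” as a technique of sampling. The rates are those cited above; the half-second “gesture” is an illustrative assumption.

```python
# How many samples of a brief event does each sensor's rate afford?
# Rates are those quoted in the text; the event duration is an
# illustrative assumption, not a figure from the paper.
event_duration_s = 0.5  # an assumed brief gesture or step

sensor_rates_hz = {
    "lidar (low end)": 5,
    "lidar (high end)": 20,
    "camera (low end)": 20,
    "camera (high end)": 60,
    "WiFi adapter": 100,
}

for sensor, rate in sensor_rates_hz.items():
    samples = int(event_duration_s * rate)
    print(f"{sensor:>17}: {rate:>3} Hz -> {samples:>2} samples of the event")
```

At 5 Hz, a half-second gesture leaves only two samples; at 100 Hz, fifty. The same event exists in each sensing regime at a different temporal grain.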

The image, then, is a sampling of a spatiotemporal situation: a constantly produced entity that cuts across the dynamics of a city as part of an operational processing of what is being seen, at what time, in which relations, and to what ends. The image is the platformed compilation of such signals into useful units of reference, whether diagrams, models, or something more pictorial for an operational purpose. In short, an image might not resemble much (or at all) what we expect an image to be like, which is why so many of the questions of art theory and aesthetics revolve around this one: What are our objects of investigation when they have switched from the visible to the invisible, and from the visual to the invisual? If maps, QR codes, or, for that matter, diagrams are treated like images, the term both broadens its scope and demands a specific refocusing—one that, I claim, concerns the operational and infrastructural apparatus. Images are gateway drugs to data. Or, if you prefer, interfaces.82
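
The claim that everything that exists as a signal can also exist as an image can be given one minimal, hedged illustration. The Python sketch below is a toy example, not the Person-in-WiFi pipeline; the synthetic trace and the window parameters are assumptions. It re-describes a 1D amplitude trace as a 2D time-frequency array, that is, it compiles a signal into an image-like unit of reference.

```python
# A toy illustration of the signal-to-image move: a 1D trace becomes a
# 2D magnitude spectrogram, i.e., an array that can be read as an image.
import numpy as np

def signal_to_image(signal: np.ndarray, window: int = 64, hop: int = 16) -> np.ndarray:
    """Turn a 1D signal into a 2D magnitude spectrogram (an 'image')."""
    frames = [
        signal[start : start + window] * np.hanning(window)
        for start in range(0, len(signal) - window, hop)
    ]
    # Each windowed moment is mapped to its frequency content; stacking
    # the moments yields a frequency-by-time array.
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T

# A synthetic stand-in trace: a tone whose frequency drifts over time,
# loosely like a moving body modulating a reflected signal.
t = np.linspace(0.0, 1.0, 2000)
trace = np.sin(2 * np.pi * (50 + 30 * t) * t)

image = signal_to_image(trace)
print(image.shape)  # (frequency bins, time steps), e.g., (33, 121)
```

Nothing pictorial is added anywhere in this process; the “image” is simply the signal re-sampled and re-ordered into a spatial array.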

The echo light pulse systems of laser scanning become one aesthetic channel into the complex systems of contemporary mapping of the urban world. While the work of ScanLAB Projects turns toward technological and aesthetic questions, a similar short film operates in the imaginary near future of laser scanning and corporate surveillance. Liam Young’s audiovisual work Where the City Can’t See (2016), written by Tim Maughan, relates to the same bundle of issues that unfold when considering these images as interfaces. Young’s speculative design version of these visual/invisual interfaces is introduced as “the first fiction film shot entirely through laser scanning technology”; it is “set in the Chinese owned and controlled Detroit Economic Zone (DEZ), in a not-too-distant future where Google maps, urban management systems and CCTV surveillance are not only mapping our cities, but ruling them.”83 Speculative fiction about the automated governance of cityscapes continues the discourse about scanning, but with a particular focus on urban politics. Moving beyond QR code cityscapes, scanning is understood as a form of control. The survey of a landscape and the scan of a city are part of the reordering and rethinking of what images do, positioned somewhere between the traditional visual image and its role as a digital measure of relations, movements, events, and the management of dynamic entities across large-scale territories.

Young’s film also engages with what could be termed post-lenticular subcultures:

Exploring the subcultures that could emerge from these new technologies, the film follows a collection of young factory workers across a single night, as they drift through the smart city in a driverless taxi, searching for a place they know exists, but that the map doesn’t show. They are part of an underground community that work on the production lines by day, by night adorn themselves in machine vision camouflage and the tribal masks of anti-facial recognition, enacting their escapist fantasies in the hidden spaces of the city. They hack the city and journey through a network of stealth buildings, ruinous landscapes, ghost architectures, anomalies, glitches and sprites, searching for the wilds beyond the machines.84

Similar to ScanLAB’s investigation of laser scanning as a technology that records its own glitches in the perceptual field/model, this speculative fiction unfolds an unstable city of moving bodies and rhythms, of perception that hovers on the thin line of continuity and discontinuity.

A ghostlike lidar image with buildings in the background and a figure in the foreground, as if at the front of a DJ set.

Figure 27. Still from Where the City Can’t See. Directed by Liam Young and written by Tim Maughan. Reprinted with permission.

The city is pulsing. In the glitching of a cityscape, the WiFi transmits and registers; signals and sampling rates reveal the shifting shapes of a city. Specific techniques are used for specific operational ends. Lidar is effective in catching multiscalar movements and events, from the size of an insect to large-scale buildings. The city is itself a connected entity of sensing and sensors, but what stands out is the transversal interconnection of mobile and immobile bodies: the question of visible and invisible is not, as argued already in chapter 3, articulated on the axis of visible light but in the infrared spectrum and in the invisuality of data that relates to the multiple platforms forming the augmented reality of the city as rewriting (as property, as experience, as value). The images that are being formed are a modeling of the incoming pulses. They are entities that enable and hinder perception as they hide and seek in the midst of ecologies of machine sensing. Paradoxically, to understand the shift of visuality and the photographic into the register of the invisual, one needs to look away from images and toward their infrastructural coupling with large-scale systems of sensing and computation. Where the city can’t see is also the operational sphere that supports sensing, observation, and the production of images.

Conclusions

The operational is not merely of the scale of an image.85 One of the points I am making as I build on and with the work of others is that machine vision should be understood not merely as a special case of vision or seeing but (at least as much) as an operation of navigation and movement that is infrastructured across different layers of urban and nonurban environments. In the case of autonomous vehicles, for example, this relates to data pooling from camera, lidar, radar, ultrasonic sensor, and vehicle motion data feeds. Furthermore, this pool of combined sources—which may or may not be images in the pictorial sense—is not only for on-site processing but also for uses at various other scales, such as simulation and modeling.86 As a network platform, this system that gathers sensor data into meaningful models of the world-in-action is also part of the design and engineering problems of storage and transmission. It includes rather mundane-seeming engineering dilemmas, such as how to transmit in real time the massive amounts of data produced by several moving entities in an environment that is itself dynamic on multiple scales, including the reflective surfaces of the city registered by laser scanning. This sparks an argument about images as well: such operational images are to be discussed as part of a bundle of other forms of measure and infrastructure where imaging is actioned. The image becomes entangled, even conflated, with real-time processes of dynamic sensing, which in turn act as the necessary prism through which even the histories and media archaeologies of the image, and the photographic too, can be seen anew.
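
As one minimal, hedged sketch of what such data pooling can look like at the level of code, the following Python merges per-sensor streams running at different rates into a single time-ordered feed. The sensor names, rates, and the merge-by-timestamp design are illustrative assumptions, not any vendor’s actual stack.

```python
# A sketch of multi-rate sensor pooling: per-sensor streams, each
# already time-ordered, are merged into one time-ordered stream that
# downstream models or simulations could consume.
from dataclasses import dataclass
import heapq

@dataclass
class Reading:
    timestamp: float   # seconds since start of the drive
    sensor: str        # which feed produced the reading
    value: float = 0.0 # placeholder payload

def pooled(feeds):
    """Merge time-ordered per-sensor streams into one stream."""
    return heapq.merge(*feeds, key=lambda r: r.timestamp)

def feed(sensor, rate_hz, duration_s=0.2):
    """Simulate a sensor emitting readings at a fixed rate."""
    period = 1.0 / rate_hz
    return [Reading(i * period, sensor) for i in range(int(duration_s * rate_hz))]

# Hypothetical rates: lidar 10 Hz, camera 30 Hz, wheel odometry 100 Hz.
stream = pooled([feed("lidar", 10), feed("camera", 30), feed("odometry", 100)])
for reading in list(stream)[:6]:
    print(f"{reading.timestamp:0.3f}s {reading.sensor}")
```

Merging by timestamp rather than resampling keeps each feed’s native rate intact; real systems would add synchronization, calibration, and buffering on top of such a merge, which is precisely where the storage and transmission dilemmas named above begin.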

In other words, I argue that the photographic discourse on the operational and the instrumental image must be updated to include the infrastructural image, as was already discussed in the previous chapter in the context of operational aesthetics. Although these are not necessarily images that represent or depict infrastructure, their mode of existence as environmental media87 is premised on how they act and trigger actions in particular situations: this might be an event of execution in the moment, in the microseconds of decision time in traffic, or it might be a slowly unfolding intervention through a forecasting model. In this vein, the aesthetic of the infrastructural depicts, represents, and operates in that broad circulation of image-data-environment.

This chapter has offered some potential ways to respond to this question, especially in the context of lidar, but the response can and should be extended beyond this specific technology of scanning. From the artistic works of ScanLAB Projects to those of Liam Young and a host of others, a common line runs through these practices: they track images that track, observe images that observe, and try to gather a sense of how images make sense outside the scope of standardized human vision. This lineage speaks to posthuman photography and its multiple variations across the nonvisible spectrum of pulse and light.88 In addition to Young’s film, the discussion of Geocinema and the Digital Belt and Road system in the previous chapter is but one example of what I mean. Here, I have continued with different aspects of operational aesthetics. Such a form of operational aesthetic is also the educational arm of methods for investigating images of this sort: screening, measuring, intervening, operating. We do not read them like books, nor do we look at them like ordinary images; we relate to them like assembled parts of a logistical system of traffic.

And if you want one more film reference for this mix of epistemic and aesthetic images, turn to Farocki’s Counter-Music again. Its multiple juxtapositions and transitions of bodies, cities, and operational images speak to the centrality of circulation and transport, as Volker Pantenburg’s words in this chapter’s epigraph already told us: the city’s infrastructure and the traffic of images conflate.89

Two images that depict traffic with multiple vehicles, the first as a photo and the second as a more abstract, colorful data visualization.

Two film stills: hands holding a pen and a measurement aid over a piece of paper, and a wall of monitoring equipment from CCTV screens to a diagram.

Figure 28. Two stills from Counter-Music (2004). Courtesy of Harun Farocki and Antje Ehmann. Copyright Harun Farocki GbR.

From the above discussion, we can claim the following: while these contemporary images are part of a media archaeology of scientific photography and of different multiscalar technical measures, they also take on a particular role in contemporary computational cityscapes as scans, as infrastructure, and as interfacial skins90 that are not merely singular instances of images, even such peculiar ones as QR codes, but entire environments of sensing and action.91 Consequently, they are folded into a network of operations in large-scale systems. In this sense, these images continue the legacy not only of photogrammetry but also of plotting as computational solutions to problems of measurement, analytics, and the rewriting of surfaces at different scales.92 In the contemporary situation, the extent and scale of computation are of a different order, and the processing of images becomes automated as part of these operative chains. This is amplified by the automation of events from image sensors as part of the distributed network of computing (of which the mobile vehicle is only one part) that tries to keep up with the already existing complexity of the moving landscape. The city itself is a large-scale synthetic, even synthesizing, unit that presents a particular case for ecologies of images embedded in multiple sites, functions, vehicles, and passages for a multitude of programmable sequences that take the form of intervention. And this goes beyond the city, too, including the range of automated image and machine vision systems that function in large-scale, embedded, and dynamic systems: from cityscapes to meteorological systems, from seafloors to ground terrain, from the military scanning of landscapes to the development of experimental forms of environmental visualization.

As Florian Sprenger has aptly shown, lidar and related sensorial systems are part of the history of environmentally sensitive robotics, which is now also part of large-scale systems.93 Adaptive systems that continually produce information about their environment have provided one solution to the problems of maneuvering and movement in autonomous systems. From the experimental robotic systems of the 1950s to the new robotics turn in the 1980s, these forms of autonomous systems became conceptually and technologically dependent on sensing: instead of trying to upload a map of the environment into the machine, the goal is now to create multiple layers of perception and sensing that situate the system in its location.94 As Sprenger argues, autonomous systems and their forms of scan-based sensing, including lidar, can be seen as special cases of robotics designed to be context-aware and reliant on adaptation to environmental conditions. This also helps to address the argument about contemporary forms of sensing and sensors as mobile systems. Indeed, as mentioned earlier, the case for understanding these imaging systems as inherently about navigation becomes clear.95 Even the event of sensing is premised on the latent possibility of a movement, of coordination, orientation, and mobility.

In our case, the laser-scanned images and landscapes relate directly to the instrumental96 role these systems play in coordinating (in) a city, with its multiple levels of agents and events, and, in other contexts, to other forms of navigation in complex ecologies. Hence we have to look beyond the image, the QR code, the screen, and the vehicle as stand-alone units and instead understand that the image is, at best, an interface97 that allows a kind of access to other scales of infrastructural action, mobilizing multiple kinds of knowledge of large-scale, dynamic systems, including maps, information systems, AI, sensors, data transfer, and so on. Environmental perception and localization in relation to external data and maps become a form of synchronization that adds to the work of actual sensing and imaging. All of this can be seen as one crystallization of what images have become in the twenty-first century’s complex and distributed large-scale autonomous systems.

This infrastructure can be understood as an operational bundle of technologies that is also part of a political economy of innovation, one that aims to reformat the city according to its ideal of the “city as computer.” As Shannon Mattern argues, this trope assumes—and rhetorically (re)produces—a frictionless programmability of the city as its modus operandi.98 One can also track this infrastructure through its corporate financial attachments, which mobilize their own views of operative ontology through the employment of data feeds, maps, location systems, and other variations on the theme of “seeing” and “imaging” in the smart city: digital data platforms connected to the technologies of self-driving cars, to multiple scales of navigational infrastructure, to urban technologies (for example, Sidewalk Labs), and to the various forms of maps, robotics, engineering, and expertise that form the links between technological discourse and corporate valorization mechanisms.

Spatial data rendered as maps or other sorts of images becomes a component in platform invisuality: this is where questions of data, operational images, and property regimes meet up and hit the smart city ground. While it concerns the production of value in for-profit systems, often under corporate control, this is not necessarily “extraction,” as it has become customary to say in contexts of data colonialism, so much as the production of different worlds of sensing that are prone to capture, control, and commodification.99 Data is not a finite resource, but it is produced in conditions of finite resources. Being clear about extraction helps us to seek ways out of the current logic of platform capitalism and toward alternative forms of the capture and distribution of value.
