A price of a trade is not a noisy observation: We introduce noise only as a mathematical idealization.
IN MAHWAH, NEW JERSEY, next to a car dealership and across State Route 17 from a Home Depot, sits 1700 MacArthur Boulevard, what appears on Google Maps to be a relatively nondescript large, rectangular building, seemingly identical to scores of similar ones within this part of New Jersey. Zooming in closer, we can see, however, a rather imposing guardhouse and what appears to be a number of hefty pop-up barriers to prevent speeding vehicles, a self-contained electrical substation, and a massive number of air-conditioning units surrounded by an extremely tall fence—all details that mark this building as other, as something special. A little bit of online searching reveals that this address is the location of the NYSE’s newest data center, a four hundred thousand square foot facility. The decrease in importance of open-outcry trading is imbricated with the increase in the importance of electronic market exchanges and electronic access to market information. New trading “floors” become the norm, this time populated by racks of servers in rooms of the most carefully controlled climate. Instead of the chaos and noise of shouted trades on the human-populated trading floor, we have the hum and white noise of air conditioning and whirring fans. Yet an additional type of noise can be found, one that resonates with the noise traders discussed earlier.
Zaloom’s book detailed the conflicts over this transition from primarily human to primarily computational markets, and it is safe to say that today, the primary location of high finance is not 11 Wall Street in New York City or 141 West Jackson Boulevard in Chicago (the address of the CBOT) but rather 1700 MacArthur Boulevard in Mahwah, New Jersey, or 1400 Federal Boulevard in Carteret, New Jersey (the site of the NASDAQ’s leased data center). A variety of sociotechnical shifts enabled this move to take place, including the move to decimalization in market prices, wider availability of powerful commodity computational technologies, and regulatory changes that opened exchange trading to more potential firms. Facilities like 1700 MacArthur Boulevard reflect important changes in the financial landscape, changes that are intimately related to—yet perhaps more impactful over the long run than—the proliferation of the exotic accumulators of profit that were some of the primary causes of the most recent financial crisis. Key to this whole field is electronic trading. In the United States, electronic trading in some form has been around since at least 1971, the year of the NASDAQ’s founding. Throughout the intervening decades, exchanges—and the ways in which traders interact with the exchanges—have become increasingly electronic and digital. Though low-latency access to information through computer terminals such as those produced by Bloomberg and Reuters has in part changed the way traders interact with the market, arguably more fundamental shifts have occurred through the development of purely electronic communication networks (ECNs), with names that fly below the public’s radar, such as BATS in the United States and Chi-X Europe. 
The development of these exchanges was in part enabled through regulations such as the Regulation National Market System (regNMS) in the United States and the Markets in Financial Instruments Directive (MiFID) in Europe, both of a piece with the larger processes of liberalization under way since the 1970s. As anthropologist Marc Lenglet notes, MiFID restructured markets to both enhance the “competition between execution venues” and protect customers from the “‘natural’ dangers they may encounter in markets.” Such ECNs are accessible not only through human interaction but also through automated trading systems (ATSs), thereby enabling purely electronic trading and thus the development of algorithmic trading (AT).
Efficiency is again the standard answer given for AT. If an individual like myself wanted to sell, say, ten shares, I could once do so with the reasonable expectation that the price would not decrease while the transaction took place. But consider the institutional customer who wishes, for whatever reason, to sell one hundred thousand shares. Such a move would almost certainly cause the price of the security to decrease while the transaction was taking place. Perhaps there would not even be one hundred thousand corresponding buy orders. These kinds of situations, of course, invalidate some of the general assumptions of the EMH and CAPM (briefly discussed in the first section), namely, that buying or selling a security does not impact its price. AT developed in part to deal with this conundrum. Rather than selling all hundred thousand shares at once, a specially designed algorithm could split this order into smaller pieces—say, twenty-five thousand shares at a time—to cause a smaller impact on the market. This process of selling could then be programmed to take place over a given time frame, say, an hour. Such an algorithm is called time-weighted average price (TWAP). But perhaps the security is rather illiquid; there might only be fifteen thousand shares being traded on average per hour. An order to sell twenty-five thousand shares in an hour, then, would have an adverse impact. Trades could instead be cut into smaller pieces based on the historical pattern of volume for the given security, the goal being to avoid any appreciable impact of the trade on volume. This type of algorithm is termed volume-weighted average price (VWAP). 
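The slicing logic behind TWAP and VWAP can be sketched in a few lines of Python (a toy illustration with hypothetical figures, not any firm’s actual execution code):

```python
# Illustrative sketch of TWAP and VWAP order slicing (not a production
# execution algorithm). All figures are hypothetical.

def twap_slices(total_shares, n_intervals):
    """Split a parent order into equal child orders, one per time interval."""
    base = total_shares // n_intervals
    slices = [base] * n_intervals
    slices[-1] += total_shares - base * n_intervals  # absorb rounding
    return slices

def vwap_slices(total_shares, volume_profile):
    """Split a parent order in proportion to a historical volume profile."""
    total_volume = sum(volume_profile)
    slices = [round(total_shares * v / total_volume) for v in volume_profile]
    slices[-1] += total_shares - sum(slices)  # absorb rounding
    return slices

# Sell 100,000 shares over four intervals.
print(twap_slices(100_000, 4))                 # [25000, 25000, 25000, 25000]
print(vwap_slices(100_000, [10, 40, 30, 20]))  # [10000, 40000, 30000, 20000]
```

In the VWAP case, the hypothetical volume profile concentrates the selling in the intervals where the security has historically traded most heavily, so the child orders stay small relative to expected volume.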
Current algorithms used in AT have become progressively more complicated, taking into account more aspects of market dynamics, such as changes in the market during the execution of the algorithm, as well as attempting to position and time trades based on natural language processing (NLP) of recently released news articles. The financial economists Peter Gomber, Björn Arndt, Marco Lutat, and Tim Uhle thus describe the characteristics of AT as follows: it is trading on behalf of clients, its goal is to minimize market impact, positions are held for relatively long periods, the goal is to match a particular predefined benchmark, and a given order is executed over a particular time frame and across a number of markets.
It becomes easy to foresee how algorithms such as those just described could become ever more complex, absent regulatory pressure or a lack of engineering wherewithal. Given that the space of potential market algorithms is practically unlimited, it is unsurprising to find that there has developed, in the words of an official from the Bank of England, a “race to zero” that has pushed both time frames and complexity beyond normal human comprehension. These new types of algorithms are termed, in general, high-frequency trading (HFT) and have come under intense scrutiny for reasons that will become clear in a moment.
Before explaining some of the more prominent HFT algorithms, we have to step back a bit to examine the infrastructure that has in part enabled them to exist. Besides the establishment of new exchanges like BATS and Chi-X Europe, there has been the expansion of electronic trading activities at more established exchanges such as the NYSE and the CBOT, resulting in the building of data centers such as the one at 1700 MacArthur Boulevard. Overall, this has led to a pushing of computational limits as firms with immense levels of capital hire computer engineers and purchase specialized equipment to ensure that their algorithms are microseconds faster. For example, colocation is one of the latest trends in electronic trading. Although light traveling down fiber-optic cables is fast, it is not instantaneous. Therefore one of the reasons for the size of the NYSE data center is to provide rack spaces for interested firms (who possess both the expertise to manage the systems and capital to pay the fees) to be closer to the actual machines executing the trades. Evidently, much effort goes into ensuring that servers in racks nearest those running the trading system do not have an advantage over those located a bit farther away, even going so far as to ensure that all cable lengths are the same. For those unable or unwilling to co-locate their servers, straighter and faster fiber-optic lines have been laid between New Jersey and Chicago by a company called Spread Networks, enabling them both to cut three milliseconds off the previous time (or distance) and to describe their distance from a New Jersey data center and the NASDAQ as “8 microseconds away.” Even faster connections now exist between the NYSE and the NASDAQ through laser transmission that shaves nanoseconds off the previous trip over fiber-optic or microwave links. 
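The stakes of “straighter and faster” lines can be made concrete with a back-of-the-envelope calculation. Light in optical fiber travels at roughly c/n, where n is the refractive index of the glass; the route lengths below are rough illustrations, not the actual cable distances of any particular provider:

```python
# Back-of-the-envelope propagation delay for light in optical fiber.
# Route lengths are illustrative, not actual cable distances.

C = 299_792_458   # speed of light in vacuum, m/s
N_FIBER = 1.47    # typical refractive index of optical fiber

def one_way_latency_ms(route_km):
    """One-way propagation delay, in milliseconds, over a fiber route."""
    return route_km * 1000 / (C / N_FIBER) * 1000

# A longer, indirect New Jersey-Chicago route versus a straighter one:
print(round(one_way_latency_ms(1600), 2))  # ~7.85 ms
print(round(one_way_latency_ms(1330), 2))  # ~6.52 ms
```

A few hundred kilometers of routing thus translates into more than a millisecond each way, an eternity at the time scales on which these firms now compete; hence the appeal of both colocation and the straightest possible line.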
Injecting a bit of seeming science fiction into the mix are attempts to take into account relativistic conditions to choose the optimum placement of new data centers with respect to existing ones so as to create potential arbitrage opportunities, raising troubling regulatory issues in the process. And data centers, although relatively “self-sufficient” in the sense that they have extensive systems for electrical backup, are still reliant on extraordinarily precise timing provided by GPS signals. Such signals, though originating from satellites in medium Earth orbit, can be spoofed by a more powerful signal closer to the receiver. Some have suggested that such jamming could cause havoc for financial organizations dependent on HFT, potentially causing timing confusion that could have a ripple effect throughout a market.
This sort of infrastructural investment, couched in the language of financial return, must enable firms that engage in this work to capture some additional bit of profit that would not otherwise be possible. So why might these properties—of low latency, of close proximity to market computers—be so attractive? There must be a logic at work separate from that of AT, as the AT algorithms previously discussed do not necessarily depend on speed. It is precisely minuscule fluctuations in price—a form of noise to which I will return shortly—that enable HFT to command so much attention in today’s financial climate. In contrast to the qualities of AT described earlier, HFT involves an extremely high number of bids or asks, rapid cancelation of existing orders, proprietary trading, capture of spreads, no desire to hold a position for a long period of time (thus meaning that positions are held on the order of minutes or seconds or less rather than days, months, or years), and very low margins. We can now understand a bit why the question of location is key to HFT. Given the material limits of communication networks, and assuming all else is equal, it is a truism that it will take longer for data to travel between a machine in New Jersey and one in Chicago than it will to travel between two machines in the same data center in New Jersey. Therefore, if an algorithm can take advantage of this latency delay, then it might be possible to enact some sort of arbitrage opportunity. For example, HFTs might work as market makers (described earlier in the context of open-outcry trading) to capture the spread between the bid and the ask, an activity that offers very low returns for each trade but can add up to large profits over time. In the HFT domain, this capability is improved by being able to react to market data more quickly than other participants can. 
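The economics of spread capture can be illustrated with a stylized calculation (all figures hypothetical):

```python
# Stylized sketch of market-making spread capture (hypothetical figures):
# the market maker repeatedly buys at its bid and sells at its ask, earning
# the spread on each round trip, minuscule per trade but large in aggregate.

def spread_capture(bid, ask, shares_per_trade, round_trips):
    """Total profit from repeatedly buying at `bid` and selling at `ask`."""
    per_round_trip = (ask - bid) * shares_per_trade
    return per_round_trip * round_trips

# A one-cent spread, 100 shares per trade, 500,000 round trips in a day.
profit = spread_capture(10.00, 10.01, 100, 500_000)
print(f"${profit:,.2f}")  # $500,000.00
```

A single round trip here earns one dollar; only the ability to repeat it hundreds of thousands of times per day, faster than competitors, makes the strategy worthwhile, which is why reaction speed is the decisive variable.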
Other forms of so-called technical analysis, such as statistical arbitrage, enable HFTs to use predefined statistical models of securities to detect situations where the price seems to be out of line with its expected value, enabling an arbitrage opportunity. Again, being able to get in and out of a position quickly, made possible in part by fast computers and low latency to the market system, can produce small profits that add up over time. Latency itself can become equivalent to profit in other strategies whereby an algorithm discovers a price discrepancy between the same security available on multiple ECNs, that is, one traded on, say, both BATS and the NASDAQ. More esoteric and, from the point of view of some participants, problematic strategies come under what has been termed the “darker arts.” For example, stuffing is when an HFT algorithm submits more orders to the market than the market can handle, potentially causing problems for so-called slower traders. Smoking involves submitting orders that initially appear attractive to slower traders but are then quickly changed to less generous terms, whereas spoofing is when, for example, an HFT algorithm posts orders to sell when the actual intent is to buy.
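The cross-venue price-discrepancy strategy can likewise be sketched (venue names and quotes are hypothetical; a real system would also account for fees, order sizes, and the risk that quotes vanish before both legs execute):

```python
# Toy illustration of cross-venue price-discrepancy detection.
# Venue labels and quotes are hypothetical.

def find_arbitrage(quotes_a, quotes_b):
    """Return (buy_venue, sell_venue, gross_profit_per_share) if the bid on
    one venue exceeds the ask on the other, else None."""
    if quotes_b["bid"] > quotes_a["ask"]:
        return ("A", "B", quotes_b["bid"] - quotes_a["ask"])
    if quotes_a["bid"] > quotes_b["ask"]:
        return ("B", "A", quotes_a["bid"] - quotes_b["ask"])
    return None

# The same security quoted on two ECNs; venue B has not yet reacted
# to a price move already visible on venue A.
venue_a = {"bid": 10.02, "ask": 10.03}
venue_b = {"bid": 10.05, "ask": 10.06}
print(find_arbitrage(venue_a, venue_b))  # buy on A at 10.03, sell on B at 10.05
```

The opportunity exists only during the brief window before venue B’s quotes catch up, which is why the firm with the lowest latency to both venues captures it.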
It has become clear, however, that even if HFT algorithms are staying on the legal side of what is permissible, there are still major concerns regarding fairness, especially as it relates to quote availability. Nanex LLC, one of the firms most active in critiquing HFT, has shown that a large percentage of quotes for securities are canceled within milliseconds, faster than the quote information can even propagate to the West Coast of the United States. This is one of the most important pieces of evidence regarding the acceleration of HFT and contemporary finance, as it indexes a speed far beyond the capability of a day trader or even an institutional investor. The capital required to perform at these speeds ensures that the best quotes will only exist for those with the vast sums available to invest in this computational infrastructure, creating a vastly uneven playing field.
Although we now know much more about HFT than we used to, for many years it indeed was a “dark” practice, both in the sense of the shadow that hid open discussion of these techniques and in its obscurity to the general public. All that changed on May 6, 2010, the day of what became known as the Flash Crash and the main reason for my exposition of AT and HFT. The full details of this day are beyond the scope of this essay, so I will only outline them schematically, following the findings of the official U.S. report produced by the CFTC and the SEC, although recent developments have cast doubt on the official story. In short, between the hours of 2:00 and 3:00 P.M. Eastern time, the Dow Jones Industrial Average suffered its largest intraday point drop up to that day, a loss of nearly one thousand points, with a subsequent rebound that resulted in the second largest intraday swing up to then. The actual dynamics of the event were difficult to reconstruct after the fact, but it would appear that a single large sell-off (on the order of $4.1 billion) of a particular index security, the E-Mini S&P, caused a cascade of trading activity by HFT algorithms using many of the techniques just described. This initial trade was, in retrospect, due to a relatively poor choice of AT algorithm, one that did not take into account its own potential impact on the market. Activity by various HFT algorithms responding to this trade resulted in the buying and selling of more than twenty-seven thousand E-Mini S&P contracts with a net change of only about two hundred contracts. Although built-in trading pauses occurred because of the activity on the E-Mini S&P, liquidity still evaporated as time passed, causing share prices on some stocks to go to extremes, such as a penny or $100,000, which were the computational limits on prices on these exchanges. 
Even so, the pauses in trading enabled the same algorithms and participants to buy up seemingly erroneously priced securities, leading to the trade of more than 2 billion shares over a twenty-minute period and the recovery of the market.
Speculation around the cause(s) of the Flash Crash began immediately, with much of the blame directed at HFT. Although the report of the CFTC and the SEC did not lay blame on HFT in particular, it did indicate how HFT algorithms contributed to the large price swings, the immense number of shares traded, and the drying up of liquidity. Trying to work out the exact dynamics of the Flash Crash has become a popular exercise, but determinations are made difficult by a number of factors. First, because trading on these exchanges is done relatively anonymously, there is no way to reconstruct, after the fact, the distribution of trades across all of the participating firms. Second, and as a result, reconstructions of the event must rely on a number of assumptions of what exactly constitutes HFT and which sets of trades might be due to individual participants. Nevertheless, most of the postmortems—as well as studies published prior to the Flash Crash—affirm relatively positive contributions of HFT algorithms to the markets, specifically in their ability to provide liquidity—the very thing that evaporated during the Flash Crash. In short, the consensus among most financial economists is that the Flash Crash was a particularly extreme event and that HFT does not in general increase volatility in the market, and thus HFT ultimately improves the “efficiency” of the market. Some, however, are beginning to have reservations. A recent article in the New York Times written after another AT meltdown notes the following:
Terrence Hendershott, a professor at the University of California, Berkeley, said he had been an advocate for technological innovation in the past, but had begun to wonder if the continuing battle for technological superiority had become too much.
“You’ve got arguably too many people, in too small a space, and they just keep spending enormous amounts of money,” Professor Hendershott said. “Can I convince myself that we are really seeing a lot of benefits? No.”
What is clear is that HFT, along with other changes in the market as a result of different behaviors by humans and machines, as well as regulatory pressures pushing for more competition, has made the markets more interconnected, leading to definite challenges for the authorities in assigning clear blame. Recent studies commissioned by the U.K. government suggest that it therefore might be necessary to understand markets today within an “ultra large-scale system of systems”—similar to nuclear power plants or highly complex technical artifacts such as the Space Shuttle—that requires appropriate modeling or an “ecology of practices” that would recognize multiple market equilibria with multiple paths toward efficiency.
If HFT were only important in situations such as the Flash Crash, then it might be considered simply a contributing factor to so-called black swan events, events whose probability is extremely low yet not nonzero. However, it is estimated that HFT is responsible for anywhere between 40 and 70 percent of all trading volume in the United States, 35 and 40 percent of trading volume in Europe, and slightly less in Canada, although it is difficult to know how these percentages are determined. Given that HFT seems to contribute to volatility of the market, and that HFT strategies depend on taking advantage of minuscule, millisecond-level changes in price, it behooves us to ask how a concept of noise might contribute to our understanding of the phenomena. A recent paper by Frank J. Fabozzi, Sergio M. Focardi, and Caroline Jonas draws from a concept from econometrics known as microstructure noise to help explain the activity of HFT and what they term high-frequency data (HFD), HFT’s necessary counterpart. A definition of microstructure noise is difficult to pin down, but one quant suggests two important distinctions: first, to an economist, microstructure noise is whatever makes it difficult to estimate the value of some particular time series of data; second, to market participants, microstructure noise is whatever causes observed values to deviate from the “fundamentals.” I am interested in this second aspect of microstructure noise, as it is precisely this assumed deviation that enables HFT to work as well as what could connect HFT to the earlier discussion of noise traders.
Fabozzi, Focardi, and Jonas’s paper is one of the few to pay attention to the role of noise within HFT. Their study is both a meta-review of other papers investigating HFT and a series of interviews with market participants themselves. Importantly, their interviews show the role that infrastructure plays in constructing and propagating this noise. Information received from the exchanges must go through a process of “cleansing” to remove “erroneous data”; similarly, the amount of noise within a sample will depend on the exchange it came from and the types of securities being traded there. More fundamentally, noise would seem to be the corruption of what is assumed to be an ideally perfectly observable process. As indicated by this section’s epigraph, noise is for some researchers simply a “mathematical idealization” that, through its removal in their models, enables one to provide a better measure of the “true” nature of the process.
Yet for others, microstructure noise does have an independent existence from the mathematics that make it necessary. Consider, for example, the comments of Ravi Jagannathan, codirector of the Financial Institutions and Markets Research Center at Northwestern University:
If markets are frictionless, that is, if there are no microstructure effects, the higher the frequency, the better the measurement of values such as volatility. However, in rare or severe events, HFD are of no help; microstructure—the way people trade, the strategies used, lack of knowledge of what the others are doing—becomes more important.
Microstructure noise becomes a necessary deviation as a result of human activity, of interconnected systems, of processes that do not perfectly follow mathematical idealizations. Whereas it would seem from this quotation that Jagannathan understands microstructure noise as occurring only at the level of the rare event—such as the Flash Crash—others are beginning to understand this noise as a continual component of the market. Frederi Viens, professor of mathematics and statistics at Purdue University, says, “My guess is that microstructure noise is real, so that we simply have to deal with it, that is to say, account for the added uncertainty in our prices.” At a more stark level, Nikolaus Hautsch of Humboldt University observes,
HFD are affected by a lot of noise, lots of data with no information content. What matters is the ratio between the signal and the noise. The signal-to-noise ratio must be greater than 1. If not, we have more noise than signal, and no gain. In the very beginning, the role of noise was overlooked. Over the past four, five years, we have gained a better understanding of this.
Nevertheless, for Hautsch, noise becomes reinscribed within standard information theory as the dual of signal, of something to be removed, something lacking in “information content.” Microstructure noise, in the accounts of both Hautsch and Viens, is a continual component of the market yet remains an impediment to ever more precise estimates of the “actual” price.
Other expressions of noise, beyond the narrowly informatic, have also come about as a result of the Flash Crash; these noises interfere with the attempt to cleave noise from signal. Recall from the previous section my discussion of squawk boxes, audio feeds from the open-outcry pits, and, specifically, the services provided by companies such as Traders Audio. On the day of the Flash Crash, these feeds were of course live, resulting in recordings of the sounds from the pits during the event. Ben Lichenstein of Traders Audio has become something of a minor celebrity on financial blogs because of his reporting that day; a recording of his reporting is available for download from blogs such as Zero Hedge. Listening to this recording is a disorienting experience, not least because of the specialized terminology of the pits. More important is the affect, the tone, of Lichenstein’s voice as the recording goes on. He begins in what I would characterize as an incredulous voice, questioning the very trades that he can see and hear before him. But shortly this shifts into pure anxiety and fear, the gravel in his voice bouncing from lows to highs in pitch. Though the microphone is clearly directed at him, the shouts and cries from the pits can be heard in the background. Heavy breathing fills what would normally be heard as pauses. The intensity of the volume—perhaps a mirror of the volume of the securities—is distorted by what sounds like a poor-quality microphone. In short, the archive of Lichenstein’s reporting produces a bodily trace of the anxiety of that day, one that cannot be captured by the plethora of graphs, tables, and commentary produced in response to the event. It is a trace that, again, we can accept as affectual because of the ultimate effects of the financial crisis.
A different yet just as entrancing marker of that day comes from the French artist collective rybn. For a number of years, their work has explored the concept of “antidatamining,” that is, use of the data-mining techniques of computational capitalism to shed light on the intersection of data and society. One of their recent exhibited works, ADM8, is an automated “trading bot performance” that uses AT to predict price movements in stocks to capture profit. The performance is meant to end if the bot becomes bankrupt, but as of the time of writing, it had a net profit of €1,475 since August 2011. Yet it is an earlier work I want to listen to, namely, their direct response to the Flash Crash called FLASHCRASH SONIFICATION. Sonification—the translation into sound of data collected for nonsonic purposes—is a well-known practice within experimental music and is being taken up in the sciences. For FLASHCRASH SONIFICATION, rybn took trading data from nine different exchanges on the afternoon of the Flash Crash and created an austere, digitally sharp, yet undulating soundscape that recalls the work of Ryoji Ikeda or Carsten Nicolai, but without their rhythmic precision. The data rybn used came from the market data firm Nanex. It is important to listen to their online-available, two-channel mix on headphones to appreciate the details of the piece. Beginning with a loud uncorrelated noise, the piece quickly becomes quiet, punctuated in a seemingly random fashion with high-frequency bursts. About six minutes in, a ghostly wave of mid-frequency noise starts to wobble, joined by lower-frequency rumbling. Four minutes from the end, the high-frequency pulses become louder and more rhythmic, sounding as if the spaces between them were slowly decreasing. A few seconds before the sonification ends, the pulses rapidly start to smear together until they merge into a continuous sound, thereby ending the piece.
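Though rybn has not published its exact mapping, the general technique of parameter-mapping sonification can be sketched in a few lines (the ticks below are hypothetical, and this is emphatically not rybn’s method):

```python
# Minimal parameter-mapping sonification sketch: hypothetical (price, volume)
# ticks are mapped to the frequency and amplitude of short sine-tone bursts.
# This illustrates the general technique only, not rybn's actual process.
import math

SAMPLE_RATE = 8000   # audio samples per second
TICK_SECONDS = 0.05  # duration of sound generated per data tick

def sonify(ticks, f_lo=200.0, f_hi=2000.0):
    """Map each (price, volume) tick to a short sine burst.

    Price is scaled linearly into [f_lo, f_hi] Hz; volume, normalized to the
    series maximum, sets the amplitude. Returns raw floating-point samples."""
    prices = [p for p, _ in ticks]
    p_min, p_max = min(prices), max(prices)
    v_max = max(v for _, v in ticks)
    samples = []
    for price, volume in ticks:
        frac = (price - p_min) / (p_max - p_min) if p_max > p_min else 0.5
        freq = f_lo + frac * (f_hi - f_lo)
        amp = volume / v_max
        n = int(SAMPLE_RATE * TICK_SECONDS)
        samples.extend(amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

# Three hypothetical ticks: price falls sharply as volume spikes.
audio = sonify([(1160.0, 100), (1120.0, 900), (1070.0, 400)])
print(len(audio))  # 3 ticks * 0.05 s * 8000 Hz = 1200 samples
```

Even this crude mapping makes audible the shape of a crash, a falling pitch paired with a surge in loudness, though every such choice of mapping is, as rybn’s own construction of the piece makes clear, an interpretive act rather than a transparent translation.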
The collective rybn constructed the piece specifically for the installation environment, a planetarium, at Le Lieu Multiple in Poitiers, France, for an exhibition in spring 2011 titled Raison d’Agir. The two-channel version is a mixdown of the complete nine channels presented in this space, in which the sounds from eight different exchanges surrounded the sonification of the NYSE in the middle. While the recording is somewhat quiet, the live version was louder, not so much to recall noise musical acts, such as Merzbow, as to emphasize the intensity of the bass levels. The buildup toward the end of the piece was meant to “emphasize the moment of the crash, [by] adding an effect of resonance, which propagates slowly, making it more tense, as the krach goes on.” Thus, instead of merely transparently translating the data into sound, rybn constructed the sonification to bring out this resonance: “resonance is pointed [to] as one of the major risk[s] of HFT by many economists and the feedback phenomenon was in the center of our discussions when we were preparing the piece.” Isolating the Flash Crash was important for rybn, as it was perhaps the “moment when people started to understand financ[ial] orientations more clearly,” thereby highlighting the symptomatic nature of the “speculative short-term loop finance seems to be stuck in.”
Noise thus works here via multiple interfering fields. There is on the surface the resonance with various strands of noise music and contemporary sonic practice that take any form of data and transform them into sound. But there is informatic noise in the digital signal as well, a trace that we encountered earlier as microstructure noise. In rybn’s view, this is because “HFT brings in more confusion and chaos (in mathematical terms).” This is not something “natural,” however: “the whole signal remains fabricated, and is based on very complex phenomen[a] of feedback interactions. . . . Financial noise is created by the sum of all its internal feedbacks, anticipation process[es], and mimetic forces. The noise we can produce in the framework of antidatamining, is based on the matter we explore. HFT provides a wide range of frequencies, infinite structural composition sets, and a strong symbolic and metaphoric matter.” Noise is to be found in this materiality of data, the same material that is located in places such as the NYSE’s data center, the same material that can be translated into pressure waves in the air.
Paired with the sonification on rybn’s website is a “natal chart” of the stock market that suggests that the divination of prices can be done through the consultation of astrological charts. This is a clear comment on contemporary financial discourse, as rybn argues that “news and media try to interpret the obscure behavior of ‘the markets’ as in the ancient practising of Haruspicy,” or divination through the entrails of sacrificed animals, with the image meant as “an attempt to criticize the degree of mysticism that finance has reached.”
While not explicitly intended by rybn, FLASHCRASH SONIFICATION also recalls Black Shoals Stock Market Planetarium (2002–4) by Lise Autogena, Joshua Portway, Cefn Hoile, and Tom Riley, an installation consisting of an overhead projection of stock market data in the form of constellations that are constantly changing due to the calculations of artificial-life creatures who “feed” on the joint market activity of related companies. While the name of the piece references the Black–Scholes–Merton equation discussed earlier, the project itself is about stock market valuations and not derivatives per se. Black Shoals has garnered much critical attention in subsequent years, specifically in its attempt to understand the constructed nature of the financial system via the feedback dynamics of its alife creatures. However, I think it is important to delineate the ways in which FLASHCRASH SONIFICATION differs from Black Shoals, specifically with respect to funding and support. Lise Autogena notes that Black Shoals raised approximately £70,000 and required special agreements from Reuters for access to its data feed and to the “sensitive closed data handling systems” of the Copenhagen Stock Exchange. FLASHCRASH SONIFICATION, conversely, is of a piece with rybn’s practice of using publicly available financial data, either published by corporations such as Nanex, as in FLASHCRASH SONIFICATION, or scraped from sites such as Yahoo! Finance with repeated requests masked by IP proxies. Access to market data is big business, and thus it is important to ask in what ways a project like Black Shoals, with its necessary interrelationships with major market producers, can function as a critical intervention, no matter the sophistication of its “allegory of the trader’s condition,” in the words of Brian Holmes.
Additionally, I disagree with Rita Raley’s interpretation of Black Shoals, specifically her contention that it is a “socially engaged, participatory, and pedagogical intervention into the discourse on financial markets.” More so, the rhetoric of Black Shoals’s generative techniques sits comfortably close to that of Hayek discussed earlier; in the words of Black Shoals’s artificial-life programmer Cefn Hoile, the “organisms” are produced by a “decentralised evolutionary process—a result of limited resource availability combined with co-evolutionary interactions”—in other words, via processes that are similar to those of Hayek’s market. This is perhaps only a parallel tendency, in line with a particular zeitgeist associated with alife research at the turn of the most recent century; nevertheless, the historical antecedents of this within economic thought need to be noted. To do so might enable us to question whether the insight Raley draws from Black Shoals (“we are always caught within a paradigm that is too complex and that in effect manages us”) is all that we can hope for within art that engages with financial markets.
FLASHCRASH SONIFICATION is a layered comment on this state of affairs: a direct critique of the obscurantism of contemporary financial language; a foregrounding of the ways in which sacrificial, seemingly “wasteful” loss of value is translated into meaningful discursive signs; and a noisy environment that pulls human perception into the time frame of algorithms. In FLASHCRASH SONIFICATION, sonic noise becomes a translation of the data from the market—abstract yet eminently material—into a different abstract form that does not immediately signify. Like the recording of Lichenstein from the pits, FLASHCRASH SONIFICATION suggests rather than indicates. Listening to it cannot provide us with rational information regarding the dynamics of the Flash Crash; instead, it produces a dark foreboding of the mechanisms at work, the high-frequency pulses first recalling heartbeats that soon speed up beyond any ability for distinction. This darkness is heightened through what Steve Goodman might term “bass materialism,” or the tactility of the sub-bass levels that are coupled to the increasing speed of the higher pulses at the end of the piece. FLASHCRASH SONIFICATION is slightly off-kilter yet still analogous to genres of precisely ordered electronic music, and thus comments on the inability for computation—and, by extension, the market—to be the perfectly rational, ordered space it is ideally understood to be.