Noise makes financial markets possible, but also makes them imperfect.
CONTEMPORARY CAPITALISM IS IMPOSSIBLE without the contribution of information theory and the “cyborg sciences,” as the economic historian Philip Mirowski has termed them. The discourse of modern economics and finance is rife with references to questions of “information”: who has it, when they have it, how it is transferred from one location to another, and how it can be acted upon to realize a profit. Although Claude Shannon’s work on information theory—and his sharp distinction between the signal and the ever-present noise that corrupts it—is our best-known exposition of information as a concept, in economics the Austrian economist Friedrich Hayek had been arguing since the 1930s for a slightly different conception of information as the content of economic messages. Hayek’s interest in the problematics of economic information crystallized in his well-known 1945 paper “The Use of Knowledge in Society.” Considering a rational economic system, Hayek writes, “if we possess all the relevant information, if we can start out from a given system of preferences and if we command complete knowledge of available means, the problem which remains is purely one of logic.” For Hayek, such access is ultimately elusive, and thus prices become the prime medium of market information: “We must look at the price system as such a mechanism for communicating information if we want to understand its real function—a function which, of course, it fulfills less perfectly as prices grow more rigid.” As the end of that quotation indicates, Hayek argued in his paper against price controls, such as those one might find within a planned economy. His policy suggestion would not come as a surprise to anyone versed in contemporary neoliberal rhetoric: because of the difficulty of collating all of the information in a society, centralized planning by a single actor can never work:
This is not a dispute about whether planning is to be done or not. It is a dispute as to whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals. Planning in the specific sense in which the term is used in contemporary controversy necessarily means central planning–direction of the whole economic system according to one unified plan. Competition, on the other hand, means decentralized planning by many separate persons.
“Decentralized planning by many separate persons”—or algorithms. For indeed Hayek’s watchers of the price signal are like little engineers—or governors—who keep an eye on the dials that reflect minute changes in information: “It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.” Decentralized individuals acting independently on observed fluctuations in price: this is an approach that would later resonate with complexity theory, but it was written at a time when negative feedback—the governor—had just contributed to winning the war.
In the intervening decades, information technology expanded Hayek’s dream of decentralized “competition” to the realm of the computer. For finance, computation is vital for derivatives and the other exotic financial instruments that are key to understanding contemporary finance. Derivatives, as their name suggests, derive their value from another asset, such as a stock, bond, mortgage, or commodity. Derivatives enable the “hedging” of bets by constructing positions that, for example, limit potential losses. A farmer might enter into what is known as a “futures contract,” which fixes today a particular price for a given amount of wheat to be delivered at some date in the future. Although the farmer thus limits potential profits by fixing the price of the commodity now, he or she also limits potential losses should market prices fall. More recent types of derivatives include options, which provide the right (but not the obligation) to purchase (or sell) a given security at a given price at some future date, and swaps, which exchange future cash flows dependent on some underlying instrument, such as interest or exchange rates. The complexity of derivatives trading has additionally required the contribution of mathematically sophisticated financial analysts, termed “quants,” as well as high-powered computers to crunch through computational models.
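The farmer’s hedge described above can be sketched numerically. The following is a minimal illustration in Python; the prices and quantities are hypothetical, chosen only to show how a short futures position locks in revenue:

```python
def hedged_revenue(contract_price, bushels):
    """Return a function giving total revenue at any spot price for a
    farmer who has sold `bushels` forward at `contract_price`."""
    def at_spot(spot_price):
        # Revenue from selling the crop at the spot price, plus the
        # payoff of the short futures position (contract minus spot).
        return spot_price * bushels + (contract_price - spot_price) * bushels
    return at_spot

# Hypothetical contract: 5,000 bushels locked in at $6.00 per bushel.
locked = hedged_revenue(6.00, 5000)
print(locked(4.50))  # 30000.0: a price drop does not hurt the farmer...
print(locked(7.50))  # 30000.0: ...but a rally brings no extra gain either.
```

The two calls illustrate the trade-off the text describes: the hedge removes downside risk at the cost of forgoing upside profit.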
How to determine a fair price for options was a difficult question. For the past few decades, options pricing has relied on the Black–Scholes–Merton equation, developed by Fischer Black and Myron Scholes and independently by Robert C. Merton (son of the well-known American sociologist Robert K. Merton). Both models assume that stock prices follow what is known as a continuous-time random walk, or geometric Brownian motion. The details of such a process delve into complicated areas of mathematics and physics (some of which rely on Norbert Wiener’s precybernetics research), but in short, the idea is as follows. Consider a decision to take a step forward or a step backward, with your decision dependent on the flip of a “fair” coin: heads you move forward, tails you move backward. Even though the coin is fair, and your average position over many repetitions will be exactly where you started, in any single walk you are likely to “drift” some number of steps away from where you began, and that typical distance grows with the number of flips. What I just described is known as a discrete random walk; Black and Scholes and Merton considered a more complicated form that is both easier to work with mathematically and aims to capture more of the “dynamics” of actual stock prices. In their model, the random walk is in continuous time (therefore without the discrete steps of my simple example), the walk is geometric (meaning the random prices can never go below zero), and movements are based on sampling from the Gaussian, or normal, distribution. Not only does the Black–Scholes–Merton equation depend on stochastic assumptions that resonate with work in thermodynamics and statistical physics from the nineteenth century, but the resulting equation can itself be massaged into what is known as the heat or diffusion equation, also well known to physicists.
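The coin-flip walk and its geometric, continuous-time refinement can both be simulated in a few lines of Python. This is a toy illustration of the two processes just described, not the Black–Scholes–Merton derivation itself; the drift and volatility figures are hypothetical:

```python
import math
import random
import statistics

def discrete_walk(steps, rng):
    """Fair-coin walk: heads moves one step forward, tails one back."""
    position = 0
    for _ in range(steps):
        position += 1 if rng.random() < 0.5 else -1
    return position

def gbm_final_price(s0, mu, sigma, dt, steps, rng):
    """Geometric Brownian motion: each move multiplies the previous
    price by a positive lognormal factor, so the price never drops
    below zero."""
    price = s0
    for _ in range(steps):
        price *= math.exp((mu - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return price

rng = random.Random(0)
finals = [discrete_walk(1000, rng) for _ in range(2000)]
# The average final position is near zero, but the typical distance
# from the start grows roughly as the square root of the step count.
print(round(statistics.mean(finals), 1))
print(round(statistics.mean(abs(p) for p in finals), 1))
```

Running the simulation shows both facts from the text at once: the mean position hovers near the origin, while the mean absolute drift after 1,000 flips is on the order of twenty-five steps.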
This is analogous to Shannon’s derivation of information theory; just as Shannon constructed his theory on the basis of the stochastic relationships between English words, Black and Scholes and Merton used the assumption of a random walk to construct their options pricing formula. In both cases, assumed “regularities” of human–machinic systems are simplified and codified to produce a manageable representation of reality. In the financial case, this move was of a piece with the simplifying assumptions underlying then-cutting-edge financial economics.
These assumptions were vital to the two key frameworks then underlying mathematical finance: the capital asset pricing model (CAPM) and the efficient-market hypothesis (EMH). I will only briefly explain the CAPM, as the EMH is more important to my argument. In short, the CAPM gives the expected return on a risky asset (such as a stock or bond) in terms of the so-called risk-free rate (i.e., the rate of return on an asset, such as U.S. government bonds, that is assumed to be riskless) and the expected return on the market as a whole. The relationship is governed by the risky asset’s beta (β), a factor meant to capture how the asset’s returns move with the market’s: formally, the covariance of the asset’s returns with the market’s returns, divided by the variance of the market’s returns. If the asset tends to swing more than the market as a whole, its beta will be greater than 1. Because assets with a beta greater than 1 carry more market risk, investors will demand higher rates of return from them. The CAPM rests on a number of problematic assumptions, many of which also underlie the EMH, namely, that purchasing or selling assets does not affect their prices, new information is available immediately to everyone in the market, there are no trade and transaction costs, and investors can lend or borrow unlimited amounts at the risk-free interest rate.
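The CAPM relationship just described reduces to a one-line formula. The sketch below uses hypothetical rates to show how a higher beta translates into a higher demanded return:

```python
def capm_expected_return(risk_free, market_return, beta):
    """CAPM: expected return equals the risk-free rate plus beta times
    the market risk premium (expected market return minus risk-free rate)."""
    return risk_free + beta * (market_return - risk_free)

# Hypothetical numbers: a 3% risk-free rate, an 8% expected market return.
print(round(capm_expected_return(0.03, 0.08, 1.0), 4))  # beta of 1 simply tracks the market: 0.08
print(round(capm_expected_return(0.03, 0.08, 1.5), 4))  # a riskier asset demands more: 0.105
```

A beta of zero returns only the risk-free rate, which is the model’s way of saying that an asset uncorrelated with the market earns no risk premium.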
Cursory rumination on these assumptions will immediately invalidate them: not everyone has equal access to capital for investment, intermediaries charge transaction costs, information percolates at differential rates. Yet such problematics did not bother many economists of the time, partially as a result of a persuasive paper by a young scholar named Milton Friedman. In his essay “The Methodology of Positive Economics,” Friedman distinguishes between “normative” economics, the description of what ought to be, and “positive” economics, the construction of possible testable hypotheses and named as such to reference positivism in the philosophy of science. Friedman further distinguishes, in the positive program, between the assumptions of a hypothesis and the attendant predictions the hypothesis makes: “To be important, therefore, a hypothesis must be descriptively false in its assumptions; it takes account of, and accounts for, none of the many other attendant circumstances, since its very success shows them to be irrelevant for the phenomena to be explained.” Noting that the assumptions surrounding the equations of motion for a freely falling object in a gravitational field are most definitely unrealistic on Earth, Friedman suggests that critiquing an economic theory on the basis of its assumptions is a logical error:
The entirely valid use of “assumptions” in specifying the circumstances for which a theory holds is frequently, and erroneously, interpreted to mean that the assumptions can be used to determine the circumstances for which a theory holds, and has, in this way, been an important source of the belief that a theory can be tested by its assumptions.
Friedman’s riposte against those who would critique economic theory on the basis of its assumptions has become standard over the past fifty years. By making analogies with the practice of the physical sciences, Friedman’s arguments are of a piece with then-contemporary attempts to place the social sciences on more solid footing; indeed, Friedman notes that the inability of economics to construct controlled experiments is similar to the problem faced by astronomy. In the introduction to a collection of papers on the EMH, the financial economist Andrew Lo used references from engineering (engine efficiency) and statistical mechanics (thermal equilibrium) to argue that “the EMH is an idealization that is economically unrealizable, but which serves as a useful benchmark for measuring relative efficiency,” a statement that resonates both with Friedman’s scientism and with his understanding that economic hypotheses are never absolutely true.
Further expounding on this aspect of the philosophy of economics would unfortunately pull me too far afield, so I will instead return to a description of the EMH to show how its own inefficiencies (as defined by the developers of the hypothesis themselves) lead to a consideration of noise. The EMH depends on the random walk properties presented earlier; if stock prices did not follow a random walk, the reasoning goes, then it would be trivial to exploit the underlying trend to make a profit. Eugene Fama, in one of the best-known expositions of the EMH, titled “Efficient Capital Markets: A Review of Theory and Empirical Work,” argued that although knowing the distribution of past prices is important to understanding the distribution of future prices, “the sequence (or the order) of the past returns is of no consequence in assessing distributions of future returns.” Fama’s exposition of the EMH considers three different informational efficiency situations. In the first, weak form of the EMH, the market is said to be efficient if it immediately incorporates information about past prices of a stock. The second form of the EMH is known as semistrong; in this situation, the market is efficient if it incorporates not only past price information but all public information about the firm (such as company earnings announcements). The third and most stringent form of the EMH, known as strong, holds when the market immediately incorporates even information known only to insiders or groups with special access. In sum, according to Fama, the evidence up to that point suggested that capital markets, at least in the United States, supported the weak and semistrong forms of the EMH and, in many cases, the strong form as well. In fact, Fama could at that point find only two situations in which the strong form of the EMH did not hold.
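Fama’s weak-form claim, that the order of past returns carries no exploitable signal, is conventionally checked by measuring the autocorrelation of returns. The sketch below runs such a check on simulated (hence trend-free) returns; the function and figures are illustrative, not Fama’s own tests:

```python
import random
import statistics

def autocorrelation(xs, lag=1):
    """Lag-k sample autocorrelation: how strongly each value predicts
    the value `lag` steps later (1 = perfect trend, 0 = no signal)."""
    mean = statistics.mean(xs)
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(len(xs) - lag))
    return cov / var

rng = random.Random(42)
# Simulated daily returns with no built-in trend, as the weak form assumes.
returns = [rng.gauss(0.0, 0.01) for _ in range(5000)]
print(round(autocorrelation(returns), 3))  # near zero: yesterday does not predict today
```

A strongly trending or mean-reverting series would instead show an autocorrelation far from zero, which is exactly the exploitable structure the random walk assumption rules out.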
The first was corporate insiders in general, for whom securities regulations already provided hefty penalties for trading on inside information. The second was “specialists” on the floor of exchanges, those who had access to the limit order book. As a result of this informational asymmetry, specialists could advantageously order trades to eke out small profits based on minuscule price fluctuations. While Fama suggested that this type of activity was evidence of market inefficiency, he indicated that it could be eliminated through electronic market exchanges, which were only under development at the time.
The EMH, and, to a lesser extent, the CAPM, had become dogma by the late 1970s, with one economist stating, “I believe there is no other proposition in economics which has more solid empirical evidence supporting it than the Efficient Market Hypothesis.” Nevertheless, the EMH began to be attacked, not only for its inability to explain certain financial anomalies but also as a result of new forms of economic and financial research that paid attention to what were termed psychological biases; this form of research came to be known as behavioral finance and is linked to the early work of Daniel Kahneman and Amos Tversky in examining how people’s expectations of future events do not match the assumed underlying probabilistic models. As a result of this, a small number of financial economists began to ask how such inefficiencies—such as the inability to correctly estimate risk based on probabilistic models—might function within actual markets and whether they were a stabilizing or destabilizing force.
Perhaps surprisingly, one of the most cogent early discussions of these inefficiencies was by Fischer Black himself. In a 1986 presentation to the American Finance Association titled simply “Noise,” Black constructed a binary between noise and information, suggesting that there were traders in the market who could not distinguish between the two:
In my basic model of financial markets, noise is contrasted with information. People sometimes trade on information in the usual way. They are correct in expecting to make profits from these trades. On the other hand, people sometimes trade on noise as if it were information. If they expect to make profits from noise trading, they are incorrect. However, noise trading is essential to the existence of liquid markets.
As Black admits, his theory is not based on mathematical formalism and might appear to be “untestable, or unsupported by existing evidence,” an oblique reference to Friedman’s positive economics. But this does not matter: Black ultimately suggests that, in a prescient nod to later performative theories of finance, “someday, these conclusions will be widely accepted.”
For Black, the concept of “noise trading” is an attempt to rescue the EMH in the face of the “irrationality” of human actors. In a world governed exclusively by the EMH, there would be no possibility of making a profit on information: market prices would instantaneously reflect existing information, making arbitrage impossible. However, the assumptions underlying the EMH are not valid within existing markets, and thus those engaged in, for example, fundamental analysis can expect to make a profit trading on existing information. Yet these traders must trade with those who “think the noise they are trading on is information.” This implies, then, that the “price of a stock reflects both the information that information traders trade on and the noise that noise traders trade on.” Black ultimately suggests, however, that even if prices incorporate noisy information, they remain, most of the time, within a factor of two of underlying value. Black’s distinction between “correct” and “incorrect” information implies, then, that over time, the noise trader most likely will not earn a positive return because of his or her erroneous beliefs.
Shortly after the publication of Black’s speech, Andrei Shleifer (economist and early researcher in behavioral finance) and Lawrence Summers (economist, U.S. Treasury secretary under Bill Clinton, former president of Harvard University, and nephew of Nobel Memorial Prize in Economic Sciences winners Paul Samuelson and Kenneth Arrow) laid out the potential situations when noise traders might in fact do better than seemingly more informed investors. For example, unlike in the assumptions of EMH and CAPM, buying and selling securities is not “frictionless” (i.e., there are transaction costs and limits to the amount one can leverage in short selling), meaning that better-informed investors might not be able to take advantage of incorrectly priced securities. In these cases, what appear as arbitrage opportunities (as a result of noise traders pushing the price of a stock up or down) could be too costly for the better-informed investor. In fact, over time, noise traders and the informed arbitrageurs become indistinguishable:
When they bet against noise traders, arbitrageurs begin to look like noise traders themselves. They pick stocks instead of diversifying, because that is what betting against noise traders requires. They time the market to take advantage of noise trader mood swings. If these swings are temporary, arbitrageurs who cannot predict noise trader moves simply follow contrarian strategies. It becomes hard to tell the noise traders from the arbitrageurs.
Shleifer and Summers, along with their colleagues J. Bradford DeLong and Robert J. Waldmann, incorporated these suppositions into two econometric models. One model showed that “noise traders can earn higher expected returns solely by bearing more of the risk that they themselves create. Noise traders can earn higher expected returns from their own destabilizing influence, not because they perform the useful social function of bearing fundamental risk.” Another model suggested that, contra the suggestion of Milton Friedman that unsophisticated investors will quickly exhaust all of their available capital,
if a small number of noise traders are introduced into the population, their relative wealth tends to grow. Noise traders can successfully “invade” the population. In a world in which investors occasionally “mutated” and changed from noise trader to rational investor or vice versa, it would be surprising to find a population composed almost entirely of rational investors.
This idea of the noise trader is now entrenched within the world of financial economics. Recent papers have, for example, performed empirical studies showing that noise can become systematic in a market, correlated across distinct investors and subject to the same types of biases first shown by Kahneman and Tversky. As Black surmised, the noise trader has become accepted by financial economists.
Indeed, the notion of the noise trader is additionally understood by some traders themselves. To come to grips with the economic and financial details of the most recent financial crisis, the writer Keith Gessen, in conjunction with the magazine n + 1, began a series of interviews with a person he calls “Anonymous Hedge Fund Manager” (HFM). HFM, embedded within the world of derivatives, arbitrage, and speculation, provided Gessen with an easy-to-understand primer on fundamental concepts that were obscured by a lack of in-depth discussion in the general press. (HFM would eventually leave Wall Street altogether.) HFM, in a series of later interviews posted on the n + 1 website, noted how market inefficiencies, produced through Black’s noise traders, enabled the capturing of large profits during the Internet bubble:
Yes, you want to be in an inefficient market, with “noise-traders”—people who believe that they have some skill but they really don’t. A great time for stat-arb [statistical arbitrage] was during the inflation of the internet bubble, because so many people, so many average retail investors decided “I’m a stock market genius!” They were just crazy, they were just noise-traders that were creating a lot of distortion. They were sloppy in the way that they traded, and they were also doing things that were just foolish and that created a lot of anomalies that stat-arb guys were able to exploit. After the internet bubble collapsed, that next year was a much tougher year for stat-arb because those noise-traders were gone. It’s sort of effectively functioning like the house in the casino, the gamblers are all like that, when there’s more of them you do well.
Like Black, HFM understood noise traders as necessary for the market’s normal functioning. Noise becomes a vital component of the system, the unpredictable activity that paradoxically powers the equations underlying modern finance. Additionally, HFM saw noise traders as one side of a binary: those who have information and those who do not, the latter being the noise traders, who can thus be taken advantage of by those who trade on “real” information. Yet unlike Shleifer and Summers, HFM suggested that the better informed will ultimately be able to take advantage of the noise traders.
This distinction between noise and information is in the end, of course, untenable. All markets possess noise to some degree, as transactions do not occur without some friction, either in time or space. Noise is an undeniable aspect of trading as a result of the material, embodied world—embodied in the sense of humans making the trades or writing computer algorithms, material in the sense of the intersection of humans and machines within real systems rather than idealized equations. Noise, then, is not easily assignable to those who are potentially duped into believing so-called false information; rather, it is precisely a result of the factors just mentioned.
The interference between noise and information additionally arises because what “information” and “noise” mean to these financial economists or hedge fund managers remains elusive. Although the EMH makes clear predictions regarding how information is supposed to be absorbed within capital markets to become efficient, it has difficulty defining precisely what information is, leading to the tortured attempts to cleave noise from information. In a world where people do not act like rational agents every moment of the day, where behavior is not predictable to infinite accuracy and precision, some form of noise is inevitable. In the next section, I discuss some situations when this noise becomes sonic, when information ceases to be quantifiable but is rather affective, raising questions as to how we determine what sounds, what noises, might in fact be information.