The first encounters of criminal justice bureaucrats and computing technology occurred early in the twentieth century, a time when debates raged over what it was exactly that authorities should compute. Public officials and researchers argued about which types of crime should be counted; what attributes of offenders were worth recording; and whether criminality was a quantifiable trait of people, populations, and places. It would take decades of political and intellectual posturing before criminal justice agencies would produce uniform statistical data on a national scale. Producing such data also required technical advancements. While debates intensified over how crime and criminality should be calculated, a cutting-edge technology was gaining popularity throughout state bureaucracies. The device was the electromechanical tabulating machine, ancestor to the electronic digital computer.
From the late nineteenth century through the early twentieth century, each branch of the criminal justice apparatus lobbied to establish its data as foundational for measuring crime. For most of the 1800s, different states and municipalities compiled different types of data in their official crime indexes. In the 1830s, New York, Massachusetts, and Maine produced official data on offenders using court and county attorney records. The 1850 census was the first to incorporate data on the offender population, which drew from prisoner records. The first debates about standardizing criminal justice data production on a national scale occurred at a National Police Association conference two decades later. Prior to the conference, Congress stipulated that the newly formed Department of Justice supply the attorney general with statistical data on crime on an annual basis. The mandate sent law enforcement and correctional authorities into a frenzied dash to establish an official data structure to be used across the country. The 1880 census included reports from police departments in major cities that required officers to enter information on twenty-four different offenses and answer a dozen inquiries about arrests and subsequent dispositions. The 1880 census also included a report titled Defective, Dependent, and Delinquent Classes, which included data on 231 offenses classified as against the government, against society, against the person, against property, and on the high seas. After becoming a permanent agency in 1902, the Census Office commissioned a draft of a manual for preparing crime statistical data for all three branches of the criminal justice apparatus. The commission was, according to the office, issued in “response to general demand for more complete and satisfactory statistics relating to crime and criminals.”
This demand for statistical data was stimulated by various historical developments. It was spurred to some extent by the professionalization and bureaucratization of the state apparatus typical of the late nineteenth century. State reformers during this period placed heavy emphasis on nonpartisan scientific methods to select and evaluate civil servants and on establishing legal science according to positivist principles. But the fixation with statistical data was also spurred by technological advancements, the most influential of which was Herman Hollerith’s electromechanical tabulating machine. The machine was the outgrowth of Hollerith’s ambition to furnish the Census Office with a more efficient and accurate way of tabulating census data. To do so, he developed a machine that punched holes in cards, each hole representing an age group, conjugal status, English proficiency, nativity, occupation, race, or sex (Figure 2). The cards were then fed into a circuit-closing device where each perforation was recorded by an electromagnetized counter. This invention gave birth to “information in combination,” or the ability to ascribe multiple variables (e.g., age, race, sex, occupation) to a single “instance” at a stroke. It also made it possible to search for individuals using one or several of these traits as search terms. Hollerith conducted his first census-taking pilot with the tabulator in Baltimore in 1887, before receiving a contract from the Census Office for the 1890 U.S. census. Six years after the census was completed, Hollerith founded the Tabulating Machine Company in Washington, D.C. In 1911, the Bundy Manufacturing Company, Computing Scale Company of America, and International Time Recording Company amalgamated with his company to form the Computing-Tabulating-Recording Company. Thirteen years later, in 1924, the company was renamed the International Business Machines (IBM) Corporation.
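The mechanism is easier to see in modern terms. Below is a minimal sketch, in Python rather than relays and counters, of the two operations the tabulator mechanized: tallying “information in combination” and retrieving records by one or several traits. The field names and card values are illustrative inventions, not drawn from any census schedule.

```python
from collections import Counter

# Each punch card is, in effect, a record of categorical fields.
cards = [
    {"age_group": "20-29", "sex": "F", "occupation": "clerk", "nativity": "foreign"},
    {"age_group": "20-29", "sex": "M", "occupation": "laborer", "nativity": "native"},
    {"age_group": "30-39", "sex": "F", "occupation": "clerk", "nativity": "native"},
]

def tabulate(cards, fields):
    """Tally 'information in combination': one count per combination of traits."""
    return Counter(tuple(card[f] for f in fields) for card in cards)

def search(cards, **traits):
    """Retrieve every card matching one or several traits used as search terms."""
    return [c for c in cards if all(c[k] == v for k, v in traits.items())]

print(tabulate(cards, ["age_group", "sex"]))       # counts per (age_group, sex) pair
print(search(cards, occupation="clerk", sex="F"))  # cards punched 'clerk' and 'F'
```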
The electromechanical tabulator was classical epistemology made into a machine. Its function was to represent populations through tabulation, most of all through differentiating individuals according to predefined traits. The power of tabulation had already been put on full display in the mid-eighteenth century, when Linnaeus used tables to differentiate biological organisms and Quesnay used them to differentiate economic agents. With Hollerith’s tabulating apparatus, state administrators nudged the differentiation of ethnoracial populations into the coming age of computing. In fact, Hollerith’s application of electromagnetic theory to social differentiation proved so effective that the Nazi Party used his tabulators to register Jews and Romanies in censuses and concentration camps.
The eruption of human tabulation found its way into urban administrations. Amateur statistician Frederick L. Hoffman capitalized on the new wealth of 1890 census data in a study on black criminality called Race Traits. Hoffman postulated that only by “thorough analysis of all the data that make up the history of the colored race” could the true nature of the so-called Negro Problem be ascertained. The project was a response to the accelerating black exodus from the racialized totalitarianism of the post-Reconstruction South, a movement that would swell into the Great Migration of some 6 million people to northern industrial cities. The migrants sought work in the factories, packinghouses, and steel mills of cities, which generated great trepidation on the part of low-waged native and nonnative European workforces. Hoffman sought to prove black inferiority using statistical methods, which could in turn justify segregating black migrants. Hoffman drew heavily from the abundance of demographic data generated by the 1890 census, but he took considerable liberties with the way he tabulated and subsequently calculated the data. In one column, populations categorized as Bohemian, German, Irish, Italian, Norwegian, Polander, Russian, and Swedish in the census were transformed in Hoffman’s tables into “Whites.” In the other column, populations categorized as Black, Octoroon, Mulatto, and Quadroon in the census were transformed in his tables into “Coloreds.” He used these racially binarized tables to compare and contrast data on anthropometrics, conjugal status, education, imprisonment, literacy, occupations, populations (e.g., state, city, county, ward), religiosity, and vital statistics (birthrates/death rates) between the populations (Figure 3). Although Hoffman’s study omitted socioeconomic data, lacked controls, and was bereft of intragroup comparison, it nonetheless spread throughout the disciplines of anthropology, demography, sociology, and even history. It also shifted the center of scientific racism from anthropometry and biology into the domain of statistics-based social science.
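The form of Hoffman’s recoding can be illustrated with a short sketch. The category names below follow the text; the counts are invented, and the point is only the collapse of fine-grained census categories into a racial binary before any comparison is made.

```python
# Hypothetical counts keyed by census category; the values are invented.
census_counts = {
    "German": 120, "Irish": 95, "Italian": 80, "Swedish": 40,
    "Black": 60, "Mulatto": 25, "Quadroon": 10, "Octoroon": 5,
}

# The recoding step: every category is mapped onto one of two columns.
RECODE = {
    "German": "Whites", "Irish": "Whites", "Italian": "Whites", "Swedish": "Whites",
    "Black": "Coloreds", "Mulatto": "Coloreds", "Quadroon": "Coloreds", "Octoroon": "Coloreds",
}

binarized = {}
for category, count in census_counts.items():
    column = RECODE[category]
    binarized[column] = binarized.get(column, 0) + count

print(binarized)  # {'Whites': 335, 'Coloreds': 100}
```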
As the nineteenth century turned, statistical tabulation slowly made its way into the production of criminal justice data and criminological knowledge. Within a couple of decades, the FBI received annual statistical reports from almost all national and state adult reformatories, penitentiaries, and prisons. But the quality of the data was under scrutiny, particularly for its lack of controls for sentencing variations across municipalities. Many statisticians argued in favor of using court data from reports made by clerks and state’s attorneys working in criminal courts. Others lobbied to make arrests and complaints to police the bases for crime indexes. Advocates for police data argued that they provided a more direct, unadulterated view of conditions on the ground. Toward the end of the 1920s, police incident report data finally emerged as the new foundation of criminal justice datasets. The ascent of these data was partly due to the influence of the police professionalization movement, which trumpeted the application of sciences and technologies to reduce police graft and break up ethnic nepotism in urban police departments. Champions of police professionalization argued that applying technology to policing could also enhance criminal apprehension. Considerable energy was spent on standardizing criminal justice data production at the annual meeting of the International Association of Chiefs of Police (IACP) in 1927, which was called to devise a national crime reporting system. It was there, at the height of the police professionalization movement, that the idea that “objective and scientific counting of crime” could be used to enhance police legitimacy was first articulated. Influential reformers, such as Leonhard Fuld, Raymond Fosdick, and Bruce Smith, argued that scientifically cultivated incident report data could counter sensationalized crime reportage and the iconic status of criminals in the popular culture of the time. The meeting bore fruit in subsequent efforts by the Bureau of Social Hygiene, the FBI, the Laura Spelman Rockefeller Memorial, the Social Science Research Council, and urban police departments from across the country. After rigorous debate and dissension, the parties decided to base the system on offenses known to police, which were divided into crimes against persons and crimes against property. Architects of the system’s data structure, which was eventually called the Uniform Crime Reporting (UCR) system, decided against including property offenses like counterfeiting, embezzlement, fraud, and forgery.
Two years after the IACP’s fateful meeting, the association declared that “the need for compiling and analyzing persons charged by means of tabulating machines [was increasing] with the size of the city.” Electromechanical tabulation, reported the association, was the only practical solution for metropolitan police departments to record and analyze the social characteristics and dispositions of their arrestees. “If it were desired to know the sex and color of persons charged,” an IACP researcher explained, “the cards [could] be sorted in numerical sequence for this field.” “With aid of automatic tabulating and listing devices,” continued the association, “the facts punched on the cards can be tabulated in any manner desired. The machines are almost human in their accomplishments; they list grand totals, totals, and subtotals for any fact or combination of facts.”
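A minimal sketch of the routine the IACP describes, with hypothetical arrest cards: sort the cards on chosen fields, then roll up subtotals, totals, and a grand total for any combination of facts.

```python
from collections import Counter

# Hypothetical arrest cards; field names follow the IACP's example of sex and color.
arrest_cards = [
    {"sex": "M", "color": "W", "charge": "vagrancy"},
    {"sex": "M", "color": "B", "charge": "vagrancy"},
    {"sex": "F", "color": "W", "charge": "larceny"},
    {"sex": "M", "color": "W", "charge": "larceny"},
]

# "Sorting in numerical sequence for this field" amounts to ordering cards by key.
sorted_cards = sorted(arrest_cards, key=lambda c: (c["sex"], c["color"]))

subtotals = Counter((c["sex"], c["color"]) for c in sorted_cards)  # per sex and color
totals = Counter(c["sex"] for c in sorted_cards)                   # per sex alone
grand_total = len(sorted_cards)

print(subtotals)    # e.g., Counter({('M', 'W'): 2, ('M', 'B'): 1, ('F', 'W'): 1})
print(totals)       # Counter({'M': 3, 'F': 1})
print(grand_total)  # 4
```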
For the police apparatus, the curiosity with computing technology was to a great extent a response to the surge of Eastern and Southern European immigrants into industrial cities. These groups were blamed by city officials for the ills of industrial urbanization. The focus on European immigrants was precipitated by the fact that some 27 million immigrants, notably from Germany, Italy, Poland, and Russia, entered the United States between 1880 and 1930. The so-called Negro Problem was not once mentioned at the IACP conventions of 1919 and 1920 and scarcely touched upon by key figures of police professionalization. In the years surrounding the First World War, the new European immigrants were construed in political rhetoric as security threats. The flames of anti-immigrant sentiment were also fanned by ethnicized labor conflicts, the Red Scare, and the surge of anarchist violence punctuated by Leon Czolgosz’s assassination of President William McKinley. Police experts across the industrial North convened to devise ways to manage those “ignorant of our language, laws and customs [and possessing an] inborn suspicion of all police officers.” Police officials in districts where reports on crime were higher complained that they were up against immigrants for whom “liberty had no meaning other than gross license.” Some police officials ascribed 85 percent of all crimes to Jews. Others ascribed crime to the broader population of immigrants who “can’t talk the English language . . . don’t know our customs . . . and are in general the scum of Europe.” Influential police reformer August Vollmer, himself the son of German immigrants, reported that excessive urban growth and so “many millions of immigrants, ignorant of our language, laws and customs, and necessarily adhering in their racial segregations,” made criminal detection increasingly difficult for police. And so police reformers envisioned the electromechanical tabulator as a means of administering these stigmatized surplus workforces. Vollmer explored different uses for Hollerith tabulators as early as 1921, for he belonged to a larger movement convinced that more descriptive and reliable data were necessary to transform urban “social waste” into “desirable and useful social beings.” But tabulating machines were not yet widely available. Two vendors, the Computing-Tabulating-Recording Company and the Powers Accounting Machine Company, controlled the market, and their main clients were limited at the time to census bureaus, businesses, and railroad companies.
Spatializing Criminal Justice Data
While it would take another half century for automated computing technology to be integrated into the criminal justice apparatus, the first part of the twentieth century saw significant developments in the way that urban criminality was computed and analyzed. The most influential developments in this regard arose from University of Chicago sociologists who chronicled the supposedly criminogenic characteristics of poor and stigmatized parts of the city. In retrospect, one of the more striking characteristics of criminal justice datasets during the nineteenth century was their general lack of spatial data. This seems peculiar given that the UCR system was originally designed to highlight crimes associated with urbanization. What is more, several nineteenth-century theories about urban crime included observations of the social spaces in which offenses clustered. André-Michel Guerry’s statistical maps, Henry Mayhew’s rookeries, Archibald Alison’s low neighborhoods, John Glyde’s poor law unions, Charles Booth’s poverty maps, and Breckinridge and Abbott’s Hull House maps are some examples. Even criminologist Cesare Lombroso, a key architect of biological racism, considered village profiles when examining criminality in Italy. But although police scientists and state reformers began to explore the electromechanical production of crime data, they remained for the most part preoccupied with compiling data about social rather than geographic characteristics.
While police reformers and social statisticians contrived biological and cultural explanations for urban crime indexes at the turn of the twentieth century, the University of Chicago’s social ecologists offered sociospatial ones. Chicago’s sociologists formed their historic school in 1892, one year before Hollerith received a runner-up prize at the Chicago World’s Fair for his machine. The school took shape during a period of urban mutation. Between 1890 and 1930, the city’s population more than tripled. During this period, as much as 77 percent of the city was first- or second-generation immigrant. The city itself was a factory for ethnoracialized proletarianization, luring labor power from Czechoslovakia, Germany, Greece, Ireland, Italy, Norway, Poland, and Sweden. A hub for low-waged labor, the city funneled almost one-third of its workforce into manufacturing and construction. Workers from Mexico also arrived in large numbers, accounting for nearly one-quarter of the railroad labor force by 1920. And although the city’s population was under 3 percent black, Drake and Cayton calculated that black labor power made up 60 percent of garage work, 60 percent of railroad labor, 40 percent of coal yard labor, 50 percent of packing and slaughterhouse work, and 35 percent of stockyard work.
The Chicago School emphasized interactions between physical environments and social values to decipher criminality in the hope of moving beyond the biological racism that dominated research on urban crime. One-time police beat reporter and founding member of the Chicago School Robert E. Park boldly asserted that if you “reduce all social relations to relations of space [it] would be possible to apply human relations to the fundamental logic of the physical sciences.” Density, distance, proximity—these emerged as the central variables of a new, spatially inflected science of urban crime. To be sure, such a spatial perspective was not completely without precedent. The Chicago School’s conceptual framework traces at least back to Adriano Balbi and André-Michel Guerry’s 1829 tract Statistique comparée de l’état de l’instruction et du nombre des crimes. Published just two years after French mathematician Charles Dupin invented choropleth maps, or maps that display average values through shading, the tract represented crime data cartographically to provide new ways of interpreting criminality. Guerry, an amateur statistician who invented a mechanical device that generated statistical summaries and data correlations, went on to publish an internationally renowned manuscript using district-level data to study correlations between crimes, motivations, and social characteristics. Several aspects of his method—its data sources, scales of analysis, variables, perceived causal connections—resonated in the Chicago School. But where space was static in Guerry’s work, the Chicago School saw it as something dynamic. This was due in no small part to the fact that industrialization was warping Chicago’s landscape.
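Dupin’s choropleth principle reduces to binning each district’s average value and assigning it a shade. The sketch below illustrates the mapping with invented district names and rates; darker characters stand in for darker shading.

```python
# Invented district rates; in Balbi and Guerry's case, offenses per capita by department.
district_rates = {"Nord": 8.2, "Seine": 14.6, "Var": 3.1, "Cher": 5.9}

SHADES = " .:-=+*#"  # light to dark, one character per bin

lo, hi = min(district_rates.values()), max(district_rates.values())
for district, rate in sorted(district_rates.items()):
    # Scale the rate onto the available shades and pick the matching bin.
    bin_index = int((rate - lo) / (hi - lo) * (len(SHADES) - 1))
    print(f"{district:6s} {rate:5.1f}  shade={SHADES[bin_index]!r}")
```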
In devising a data structure that could account for the highly dynamic character of industrial urbanization, Park imported concepts of invasion, dominance, and succession from ecology. Population density, diversity, and turnover, all tabulated according to ethnic/racial categories, emerged as dominant data points. The ecologists’ data structure, which omitted how the behavior of the criminal justice apparatus influences crime indexes, established parameters for understanding cities, crime, and race that persist in today’s crime-mapping software (see chapter 2). Though ethnoracialized divisions of labor were acknowledged by the Chicago School as having influence on geographic distributions of urban crime, the ecologists lacked an adequate theory of why labor markets were partitioned as such in the first place. The sociologists thus identified self-interest and personal preferences as the causes of racially differentiated labor markets. Burgess viewed racial labor segmentation as the effect of racial temperaments. Park argued that clusters of unemployed people were naturally formed aggregates of people who specialized in begging as a vocation. These heaps of “human junk” were seen as organic products of individual choices enacted within the conditions of industrial urbanization. The Chicago School’s focus on the neighborhood scale reinforced the tendency to blame industrialized poverty on impoverished communities. While the notion that crime was an intrinsic property of poor urban neighborhoods had been expressed before by several British proto-urbanists, the Chicago School developed a standardized system for quantifying how criminogenic an area was. The system classified neighborhoods by ethnoracial composition, homeownership rate, median income, and turnover rate, among other variables. This assumption that one can understand crime solely by calculating neighborhood-level phenomena was present throughout the classic ecological works of William Thomas, Florian Znaniecki, Frederic M. Thrasher, and E. Franklin Frazier (a notable exception is Edwin Sutherland).
Social ecology achieved its most iconic expression in Burgess’s concentric zone model, which taxonomized urban space according to social composition and economic functionality (Figure 4). Located simultaneously in the urban core and at the periphery of the urban social order, Burgess’s infamous Zone II was where the most disposable elements of the industrial workforce resided. Clifford Shaw and Henry McKay looked for correlations between juvenile delinquency and the physical, economic, and demographic characteristics of Zone II. A central goal in their landmark work was to conduct a comparative analysis at unprecedentedly small geographic scales. Building on Park’s natural areas and Burgess’s concentric zones, the duo drew data from fifty-six thousand court records to make multilayered cartographies of delinquency. The project was an immense undertaking for the time, involving manually plotting the home addresses of some twenty-five thousand juvenile offenders who had passed through the Cook County Juvenile Court between 1900 and 1933. At the time, the Department of Political Science was the university’s only department using Hollerith tabulators, which it used to code and analyze voter surveys; the university did not own a tabulator, so faculty had to use research grants to rent time on a machine in the Comptroller’s Office in City Hall. Chicago’s sociologists did not use electromechanical tabulation until the late 1930s, in studies to predict marriage successes and failures. Shaw and McKay thus made spot maps of the home residences of alleged juvenile delinquents by hand, with overlays of Burgess’s concentric zones, demolished buildings, and vagrancy, alongside rate maps of alleged delinquents (per square mile) and zone maps based on the concentric model (Figure 4). Delinquency was interpreted as a function of proximity to these territories, which were typified by ethnic heterogeneity, high residential turnover, and low income. Shaw and McKay argued that these traits engendered antisocial values, incentivized deviance, and ultimately normalized criminal behavior.
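The arithmetic behind the rate maps was simple even though the plotting was laborious: aggregate the plotted home addresses by area and divide each count by the area’s size. The sketch below assumes invented area names, counts, and square mileages.

```python
# Invented areas: (alleged delinquents plotted in the area, area size in square miles).
areas = {
    "Area A": (412, 5.6),
    "Area B": (37, 4.1),
    "Area C": (188, 10.3),
}

for name, (count, sq_miles) in areas.items():
    rate = count / sq_miles  # the figure a rate map shades by area
    print(f"{name}: {rate:.1f} alleged delinquents per square mile")
```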
It is difficult to overstate the influence of social ecology on the discursive construction of negatively racialized, criminalized urban areas. From the Chicago School onward, these sectors were seen as intrinsically criminogenic, while the broader politics of crime and punishment was brushed aside. Through the ecological model, discriminatory laws, police profiling, prosecutorial and judicial discretion, and sentencing biases were all barred from consideration. As if by methodological injunction, the very mechanism of criminalization, that is, the criminal justice apparatus, was excluded from statistical measurement. Datasets on police–juvenile contact did not include information about geographic dispersions of police patrols; the data on juvenile court referrals were not scrutinized for evidence of juror bias; and the data on youth commitments were not examined for sentencing bias. Published not even a full year after passage of the Johnson–Reed Act (1924) and its national origins quota system, in the immediate aftermath of the Red Scare, Burgess’s genre-defining 1925 article had nothing to say about the systematic criminalization of immigrants and migrants. None of this context was factored into the ecological framework, yet it emerged as the dominant model for understanding social and spatial patterns in criminal justice datasets and was easily codified into geographic information systems in decades to come.
Criminalistics and Instrumentation
Physical scientists introduced industrial technologies into the production of criminal justice data while social ecology made its ascent. Long before the computer-managed environmental sensors in contemporary cities, criminal justice institutions measured aspects of physical environments through criminalistics. The criminalistics movement of the early twentieth century opened new doors for linking criminalization with cutting-edge science and technology and added scientific credence to law enforcement. In fact, by 1923, U.S. courts had ruled that evidence obtained through criminalistic techniques was admissible if the techniques were accepted in scientific communities. What made criminalistics unique was its emphasis on instrumentation, which generated so much information that relations between science, technology, and criminalization began to shift.
The merging of science, technology, and criminalization is attributed to the FBI’s first director, John Edgar Hoover, a technophile who staked his legacy on suppressing the Mafia and the Black Power, civil rights, and various labor movements. After assuming power in 1924, Hoover promptly announced that his main objective was to inject scientific and technical analysis into the heart of criminal detection and identification. He added scholastic modules to agent training, opened a criminology library in the bureau, and took over the administration of UCR data from the IACP. Hoover also funded academic programs, such as the Scientific Crime Detection Laboratory at Northwestern University, which served as a model for the FBI’s Technical Laboratory. Technical evidence was the raw material of criminalistics. Fancying themselves natural scientists interested in how science could be employed in the juridical sphere, pioneers in the field embraced techniques from analytical chemistry, ceramics, geochemistry, organic chemistry, metallurgy, and thermochemistry. Blood, fingerprints, hair, paint specimens, residues in fire remains, semen, subatomic particles, and vapors were its fundamental elements. This development was an effect of applying industrial-sector technologies to criminal justice identification and apprehension. Electron microprobes, emission spectrometers, gas chromatographs, nuclear scaler counters, ultraviolet and infrared spectrophotometers, and X-ray diffractometers emerged in forensics laboratories as the instruments of choice. Going a step beyond the Chicago School, which studied topological urban space, criminalists studied space in its physical dimension. Combined gas chromatographic–mass spectrographic analysis, differential thermal analysis, and neutron activation analysis provided methodological support for these new scientists of crime. Consonant with wider developments in the Second Industrial Revolution, the electromagnetic field was wielded by forensic scientists as a medium through which criminal investigation and apprehension could be extended. By the early 1930s, most major cities had some type of crime laboratory for criminalistics.
Biometric technologies were also central to criminalistics. The FBI designed polygraph machines that measured and recorded blood pressure, pulse rates, and, eventually, respiratory rates to detect criminal guilt. The bureau’s core accomplishment involved establishing the first national fingerprint database, which introduced the large-scale registry into the heart of the criminalization apparatus. Many of the basic principles of modern fingerprinting were established in the late nineteenth century by English statistician Francis Galton, who also invented the statistical concept of correlation while working on criminal identification techniques for a eugenics project. But it was French police administrator Alphonse Bertillon’s identification system that first arrived in the United States, in 1887, in a penitentiary in Illinois. His system, bertillonage, marked the birth of modern criminal identification by arranging anatomical measurements and photographs of offenders in statistically organized filing systems. The Henry fingerprint identification system, which originated in Calcutta’s Anthropometric Bureau for colonial administrative purposes, also became common in the United States at the close of the nineteenth century. By 1915, the International Association for Criminal Identification had appeared in Oakland, California. Nearly a decade later, the FBI unveiled its Identification Division, which was made up of thirteen hundred fingerprint technicians who matched more than one thousand prints with criminal files per day. The bureau’s position was that identifications were necessary given the anonymity of the urban condition, and by the time President Lyndon Johnson announced the first War on Crime in 1965, the division had generated more than 15 million fingerprint cards. At the time, this made for the largest biometric database in human history.
Racial Deproletarianization and the War on Crime
The great surge of interest in digital computers among criminal justice departments was a result of many contingent, contradictory developments that followed World War II. On one hand, it occurred during a period of incredible technological advancement. Ambitious criminal justice technocrats and technology companies demonstrated great zeal for applying computers to crime control alongside the arrival of microprocessors, integrated circuits, high-level programming languages, and eight-inch floppy magnetic storage disks, not to mention the internet’s forerunner, ARPANET. On the other hand, while computing technology was rapidly developing, black and latinx enclaves were systematically underdeveloped through policies of benign neglect and planned shrinkage. The “black ghetto” fully eclipsed Zone II in the eyes of urban criminologists, law enforcement, and urban researchers. The shift was a function of several contradictions. First, the pools of black labor power furnished by the Great Migration grew functionally obsolete as manufacturing industries relocated from major cities to suburbs and rural areas. The national Commission on Civil Rights reported that between 1940 and 1960, labor participation for black males between the ages of twenty-five and sixty-four years decreased at four times the rate their white counterparts experienced. By the 1960s, the former were twice as likely to be unemployed as the latter. By the middle of the decade, according to a Labor Department study, almost 15 percent of “nonwhite separated women” were unemployed, a rate roughly twice that recorded for “Negro males.” Nearly 15 percent of children classified as Negro received Aid to Families with Dependent Children assistance, compared to 2 percent of children classified as white. The numbers were indicative of a social crisis of such proportions that it catalyzed the technological restructuring of urban administrative power.
Various mechanisms of racialized dispossession and containment emerged amid the flight of industrial capital. Post–World War II urban renewal programs and the extension of the highway system supplemented the economic marginalization of urban blacks with physical marginalization. Redlining strategies by banking, insurance, and real estate agencies coded blackness as unfinanceable in redevelopment projects from the late 1940s to the 1960s. Moreover, racially distributed mortgages and home equity loans accelerated the depreciation of housing markets in black communities. Though billed as a low-income housing program, the Housing Act (1949) fueled the massive displacement of black and latinx populations via so-called slum clearance projects that destroyed more low-income homes than they created. These programs to revalorize urban economies of space displaced tens of thousands of families across deindustrialized cores to make room for new commercial centers, luxurious apartments, university facilities, and other risk-averse land uses. Such processes were at odds with the massive suburbanization projects that paralleled urban renewal following World War II. Through concerted efforts by the Federal Housing Administration, Veterans Affairs, and Federal Deposit Insurance Corporation, municipal governments established a distinct form of racial segregation in the industrial North via suburbanization. Between 1950 and 1970, the country’s suburban population increased 30 percent, which put considerable strain on city revenue generation. This was to a large extent due to the racial state accommodating the exodus of white professionals from deindustrializing cities. New suburban businesses, labor markets, and property markets gestated through this colossal welfare program, which redistributed rates of homeownership, state funding, and wealth to cultivate the white middle class that we witness disintegrating today.
But the story of urban blacks would not be one of passivity. Revolts erupted across industrial cities throughout the middle of the 1960s. Insurgents mobilized in Rochester, New York City, Philadelphia, Jersey City, Paterson, Chicago, Watts, Cleveland, San Francisco, Newark, Detroit, Houston, Milwaukee, Minneapolis, Baltimore, and dozens of other cities. These revolts signified socially stigmatized and economically discarded populations rejecting the material conditions foisted upon them once capital no longer needed their labor and proved unwilling to support their social reproduction. In spectacular fashion, tens of millions of dollars’ worth of urban capital was destroyed. The episode signified a not-so-subtle shift from the civil rights tactics exemplified by the Congress of Racial Equality, Southern Christian Leadership Conference, and Student Nonviolent Coordinating Committee to those of the Black Power movement. The upheaval involved nearly a half million people, 60,000 arrests, 10,000 serious injuries, and 250 deaths. These insurrections represented negative moments of urban accumulation in the very heart of capitalism’s precious cities. This episode was so fraught with implications for the political economic system in cities that it demanded a digitally enhanced mode of urban administrative power.
Some of the most spectacular transformations in urban administration to arise in reaction to black insurgency in the 1960s occurred at the interface of social and penal policy. In the economic field, the fusion of social policy and penal policy had already been under way under John F. Kennedy’s New Frontier programs. Historian Elizabeth Hinton demonstrates how the antidelinquency programs of the Juvenile Delinquency and Youth Offenses Control Act of 1961 acknowledged the role of racism in economic disparity, on one hand, and codified racialized theories of social pathology in policy, on the other. Despite progressive encouragement from the Civil Rights Act of 1964, the Office of Economic Opportunity, and the Voting Rights Act of 1965, Kennedy’s successor, Lyndon Johnson, nevertheless framed the urban rebellions in New York during summer 1964 as a matter of law and order. He formally announced the War on Crime the following year. Though some liberals warned of choosing between a social state and a police state, it was the latter that arose from the turmoil, and on such a grand scale that it required the powers of IT to build and administer it.
Urban administrations were fortunate that a model for enrolling criminal justice apparatuses to suppress political movements already existed. Southern segregationists had fashioned a suite of juridical instruments to criminalize civil rights activities a decade prior. The exportation of this “southern strategy” to northern industrial cities was inaugurated with Republican presidential candidate Barry Goldwater’s 1964 pledge to save northern cities from the “bullies and marauders” of the civil rights and New Left movements. Not to be outdone, President Johnson announced the War on Crime shortly thereafter. Politicized criminalization was further advanced in the North by Richard Nixon, who, without a hint of irony, proclaimed the need to militarize law enforcement lest the country become the “most violent in the history of the free peoples.” Nixon announced the War on Drugs two years into his administration. In the university apparatus, “black ghettoes” emerged as analytical objects for social scientists. Lawrence Mead, Daniel Patrick Moynihan, and Charles Murray were the most visible ghettophiles. It is not enough to point out the pseudoscientific nature of their work. What is important is the strategic function of their publications in the context of the crime and drug wars. Much like Hoffman in the post-Reconstruction era, theorists of black social disorganization provided rationalizations for racialized forms of governance. But unlike Hoffman, the social disorganization theorists of the 1960s had the bipartisan support of the racial state.
Digital Computing Enters the War on Crime
In 1968, a year in which assassinations, electoral realignment, imperial frictions, and social conflict sent shockwaves across the U.S. political field, computer scientist Melvin Conway published an influential article in Datamation magazine that laid the groundwork for what eventually became known as Conway’s law. The piece belonged to a wider though barely noticeable development taking shape in universities during the turbulent 1960s—the birth of computer science. Conway’s law held that organizations design systems that mirror their own communication structures. The thesis resonated deeply with state bureaucrats and provided a guiding principle for criminal justice restructuring following the violent contradictions of the decade.
The police apparatus was the first branch of the criminal justice system pegged for computerization in the 1960s. Before digital computers arrived, the communicative structure of the police was command and control, a system in which designated commanders exercised unidirectional authority over adjunct agents to execute assignments. Drawing from the military, police incorporated command and control with the assistance of the Federal Communications Commission and the American Telephone and Telegraph Company (AT&T), which teamed up to implement a centralized service call system (911) in the late 1960s. The calls were routed to a communications center, analyzed for relevant details, and then used by commanding officers to direct patrol units. This system was envisioned as a “nerve center controlling the minute-by-minute deployment of the police force.” Reformers of police administration, such as O. W. Wilson, aggressively promoted the expansion of automatic dialing equipment to lessen the workloads of human switchboard operators. Telecommunications systems were also regarded by Wilson and others as mediums to bolster coordination between administrative offices and motorized patrols. Such a communicative infrastructure, law enforcement bureaucrats and technologists maintained, would blend seamlessly with command-and-control logics found in computing. With computers, command functions emanated from logic boards instead of human commanders. Technology corporations and technocrats quickly took up the stance that automated command posts would make the police apparatus vastly more efficient. Companies saw a new opportunity on the horizon, so they joined forces to stimulate effective demand.
Lyndon Johnson’s Law Enforcement Assistance Administration (LEAA) was at the center of programs to insert digital computing technology into the police apparatus. Initiated by the Omnibus Crime Control and Safe Streets Act of 1968, the LEAA was formed in response to the rising crime reports associated with the civil rights movement, second-wave feminism, resistance to urban renewal, and protests against the Vietnam War. One of the conclusions drawn from Johnson’s Commission on Law Enforcement and Administration of Justice was that urban police would benefit greatly from a digitized command-and-control system. In such a system, the computer would maintain records of street address locations and the current whereabouts and availability of patrol units to determine which unit was best positioned to respond to a call (Figure 5). It would also automatically send a dispatching order to the selected patroller via computer-generated voice messages or teletype. If the patrol unit did not acknowledge the message within a specified period of time, the system would contact another unit.
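A minimal sketch of the dispatch logic the commission envisioned, with invented unit names and coordinates: the system records each unit’s position and availability, tries the nearest available unit first, and falls back to the next unit when no acknowledgment arrives within the allotted time.

```python
import math

# Invented patrol unit records: position (map coordinates) and availability.
units = {
    "car_12": {"pos": (3.0, 4.0), "available": True},
    "car_07": {"pos": (1.0, 1.0), "available": True},
    "car_31": {"pos": (6.0, 2.0), "available": False},  # already on a call
}

def dispatch(call_pos, acknowledged):
    """Try available units nearest-first until one acknowledges the order."""
    candidates = sorted(
        (u for u, s in units.items() if s["available"]),
        key=lambda u: math.dist(units[u]["pos"], call_pos),
    )
    for unit in candidates:
        if acknowledged(unit):          # stand-in for the acknowledgment timeout
            units[unit]["available"] = False
            return unit
    return None                         # no unit acknowledged the call

# car_07 is nearest but never acknowledges, so the system contacts car_12 instead.
print(dispatch((2.0, 2.0), acknowledged=lambda u: u != "car_07"))  # -> car_12
```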
Johnson’s commission also recommended a “computerized data bank of police information” and identification network for line officers and supervisors. At the time the commission’s report was released, the Justice Department was the sole cabinet department that did not receive research and development funds. Speaking at a hearing on the Law Enforcement Assistance Act of 1965, Edward Kennedy (D-MA) bemoaned the Justice Department’s technological deficiencies. Kennedy stressed that there had been a “revolution in technology and in the behavioral sciences over the last 30 years. [Thus] the corpus of knowledge necessary to explain and cope with antisocial behavior has grown tremendously.” Research communities made similar observations. For instance, IIT Research Institute’s Law Enforcement Science and Technology Center chronicled how the scientific community recognized that courts, corrections, and law enforcement were not using post–World War II technology, nor were criminal justice technocrats in conversation with the wider community of scientists and engineers in the country. To rectify the situation, the Office of Law Enforcement Assistance (OLEA) assembled a task force whose core objectives involved reforming criminal justice agencies to foster technological development, translating crime control problems into the languages of quantitative analysis, and creating new types of data to accommodate these changes. Moreover, the Organized Crime Control Bill (1970) increased the LEAA’s budget from $75 million to $500 million to expand police agencies’ equipment and technology. The LEAA recommended that large criminal justice organizations prepare themselves for computerization by creating operations research groups made up of engineers, mathematicians, scientists, and statisticians. One of the more influential ideas the administration proposed centered on developing digital command-and-control systems that linked command centers to field patrol units. Once a universal emergency phone line was established, the administration explained, information coming from cities could also be autonomously analyzed via computers and sent to the nearest available patrol units. The police apparatus was at last poised to join the third revolution in computer technology.
The commercialization of digital minicomputers beginning in the 1970s created opportune conditions for the IT industry to burrow its way into the crime and drug wars. The introduction of hard drives, silicon thirty-two-bit chips, and personal computers to commercial marketplaces was hailed by the LEAA, the National Bureau of Standards, the FBI, and law enforcement officials as a cost-effective means of modernizing the criminal justice apparatus. Authorities and fledgling IT companies, such as Advanced Data Systems, Bendix Corporation, and Northern Research and Engineering Corporation International, explored ways of instituting information systems in everything from background checks to sentencing. In merging criminal justice, science, and technology, the OLEA proselytized systems analysis, the modeling of large complex systems to regulate relations between their constituent parts, as the new administrative matrix. Systems analysis had its roots in industrial science. It was centered on the principle “carried out in the factory system, of analyzing the process of production into its constituent phases, and of solving the problems thus proposed by the application of mechanics, of chemistry, and of the whole range of the natural sciences.” This principle was formalized and further developed by the Ford Motor Company and the Midvale Steel Company, the birthplaces of Fordism and Taylorism, respectively. Following World War II, computer-operated systems analysis—pioneered largely by Jay W. Forrester, who designed computerized management systems for AT&T, Air Defense Command, Air Material Command, Air Research and Development Command, IBM, and Western Electric—was used in the Department of Defense to assess cost-effectiveness. Assistant secretary of defense, economist, and RAND Corporation researcher Charles J. Hitch also developed models for the Defense Department, which culminated in the highly influential Planning, Programming, and Budgeting System. Systems analysis was also introduced to welfare reform in the late 1960s to ensure public assistance satisfied the criterion of cost-effectiveness. The figure of the system became so central during this period that it was ascribed an almost mythical autonomy by thinkers as different as Niklas Luhmann, Talcott Parsons, and Michel Foucault.
The mid-century explosion of systems theory cannot be understood apart from the emergence of the digital computer, whose strange life began to take shape during military research to calculate atomic implosions and ballistics trajectories. The framework of the system, which fused political logics of warfare with economic logics of cost-effectiveness, eventually came to define the logistical dimension of the crime and drug wars. And logistical knowledge, political geographer Deborah Cowen has shown, is characterized by harnessing the tactical logics of the military to accommodate circulations of capital. Thus, as political economic mutations and social antagonisms intensified in the 1960s, LEAA reformers embraced computers as tools for applying systems analysis to criminal justice to make it more streamlined, and more martial. Technocrats in the administration encouraged law enforcement officials to reimagine criminal justice apparatuses as an enormous complex of operations working in harmony across space and time. Efficiency was understood in market terms. At the first National Symposium on Law Enforcement Science and Technology in 1967, sociologists asserted that the “passage of offenders from one agency to the next, from arrest to release, somewhat parallels the passage of raw materials from one firm to another as they are converted into finished products and are distributed to consumers.” In the final analysis, making the criminal justice system look like a digital computing system was part of broader projects to establish a logistically sound form of racial management.
The combination of computers and systems analysis left its imprint on the ways technocrats conceptualized criminal justice administration. Many insisted the War on Crime be firmly rooted in the principles of positivist science and conducted logistically with the aid of digital computers. One of the main benefits of this new approach, argued the LEAA, was that it could economize the many criminal justice activities required for Johnson’s impending War on Crime. For one, computers were meant to economize the amount of brainpower that criminal justice personnel expended on making decisions. A report from the Criminal Courts Assistance Project enthusiastically explained that one of the benefits of using computers in courts was that, for court personnel, “it is not necessary to learn how a computer is engineered or constructed, nor is it advisable for court management to learn how to program or operate computer equipment.” Another benefit was that computer-aided systems analysis was construed as a way to help authorities appraise the cost of internal operations throughout the criminal justice apparatus. Everything from low-level employees to large-scale initiatives was valuated with an eye toward streamlining War on Crime operations. Moreover, systems analysis was seen as a means of revealing the projected costs of new practices and policies. It was used to predict, for instance, how increasing clearance rates or providing treatment to convicts would affect the overall efficiency of the criminal justice system. But to achieve this thoroughly rationalized apparatus, high-level criminal justice activities first had to be translated into mathematical equations.
Applying the analytic of the system to police administration was the first order of business for the LEAA, because federal officials had determined that police stood to benefit most from computerization. An LEAA task force considered how methods of scientific analysis and experimentation could be used to evaluate data from case clearances, emergency calls, incident reports, and patrol field activity. It also called for the development of new mathematical models to assess individual patrol forces. The task force criticized distributing patrol forces on the basis of incident-to-officer ratios and emphasized the need to distribute them on the basis of data analysis instead. Patrol dispatch was thus one of the first areas of the police apparatus to be exposed to the ideology of systems. Up until the late 1960s, patrol dispatch was performed by human operators who manually filed, examined, prioritized, and transmitted information from emergency calls to patrol units. Computer-aided dispatch (CAD) systems changed this. CAD was first developed and most fully elaborated by the South Bay Regional Communication Center, California Crime Technology Research Foundation, and Planning Research Corporation in the late 1960s. The original CAD system automatically extracted and prioritized information from emergency calls while simultaneously tracking the movement and activities of patrol units. It also recommended specific patrol units to respond to specific calls in an efficient manner. CAD represented one of the earliest attempts to determine the relative policeability of urban areas through digital computing.
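The prioritization step can be pictured as a priority queue over extracted call details. The call types and priority ranks below are invented for illustration, not taken from any actual CAD scheme.

```python
import heapq

# Invented priority ranks: lower numbers are serviced first.
PRIORITY = {"officer_needs_help": 0, "crime_in_progress": 1,
            "alarm": 2, "report_only": 3}

queue = []
for seq, (kind, address) in enumerate([
    ("report_only", "410 Elm St"),
    ("crime_in_progress", "88 Dock Rd"),
    ("alarm", "12 Bank Pl"),
]):
    # seq breaks ties so equal-priority calls are handled first come, first served.
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, address))

while queue:  # calls come off the queue most urgent first
    _, _, kind, address = heapq.heappop(queue)
    print(f"dispatch to {address} ({kind})")
```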
Technology Firms and the LEAA
The LEAA’s commitment to computerization created new opportunities for technology firms. Giants of industrial production and emerging technology corporations eagerly offered their expertise and services. California was naturally a hub for private firms working on criminal justice information technologies. For instance, the California-based System Development Corporation offered its own vision of how criminal justice agencies could capitalize on advancements in digital computing. Such a union, corporate enthusiasts maintained, was not simply a matter of automating existing criminal justice practices. It was also regarded as the birth of a new vision of criminal justice in which activities, machines, personnel, and procedures were modular components in a rationally administered process. The corporation asserted that increased crime rates would lead to increased criminal justice activities, which required increased computing power. In the late 1960s, the Alameda County Data Processing Center unveiled a real-time “police information network” that involved almost seventy law enforcement agencies. Lockheed Missiles and Space Company, a California-based subsidiary of what later became Lockheed Martin, inventoried data items in each state’s criminal justice agencies and offered to analyze them to maximize efficiency. Hughes Aircraft Company designed an automated filing system replete with addresses, aliases, distinguishing marks, driver’s licenses, fingerprints, racial descriptors, and Social Security numbers. The Department of Defense also aided these corporations and California in rolling out an information management system to respond to rapid urbanization, increased individual mobilities, and an “aggressive sensitivity for civil rights.”
Such developments were not confined to the West Coast. As the War on Crime kicked into gear, the New Jersey–based firm Computer Technology Incorporated developed a digitized ballistics identification system for investigative units throughout urban departments. The system combined computers, electromechanical scanning, and applications that assigned numerical values to ballistics markings. It stored the values on magnetic tape and could match new markings against previous files. In Connecticut, the Woodley Company explored equipping patrol units with car locator devices to further optimize the dispatching system. The company evaluated how efficient patrol units were—in terms of arrests, fuel, and response times—by analyzing police incident data, call box sensors, and car-borne position reporting. Some of these systems were fully automated, while others required the active participation of humans. One example of the latter type involved using geographic information systems to divide jurisdictions into quarter-square-mile patrol areas. Each area was assigned a number, which patrol units were required to report manually as they moved about the city. IT was also used to optimize this movement. In Maryland, the IIT Research Institute was contracted by the LEAA to design a semiautomated dispatch system for the Washington, D.C., Metropolitan Police Department. The result linked nearly four hundred cranes, cruisers, harbor patrol boats, motorcycles, patrol wagons, and scout cars, each of whose routes was partially determined by computers in control rooms.
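The manual reporting scheme amounts to a grid index. The sketch below shows how a position maps to a numbered quarter-square-mile cell; the grid origin, width, and numbering order are assumed for illustration.

```python
GRID_ORIGIN = (0.0, 0.0)  # assumed southwest corner of the jurisdiction, in miles
CELL_SIDE = 0.5           # 0.5 mi x 0.5 mi = one quarter square mile per cell
GRID_WIDTH = 20           # assumed number of cells per row

def area_number(x_miles, y_miles):
    """Map a position to the numbered patrol area a unit would report."""
    col = int((x_miles - GRID_ORIGIN[0]) / CELL_SIDE)
    row = int((y_miles - GRID_ORIGIN[1]) / CELL_SIDE)
    return row * GRID_WIDTH + col + 1  # 1-based area numbers, counted row by row

print(area_number(3.2, 1.7))  # the area number reported from position (3.2, 1.7)
```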
IBM also figured prominently in police computerization. Specifically, it demonstrated the far-reaching powers of the database to police departments gearing up for the War on Crime. Record repositories date back to 1829 and Scotland Yard, the first professional police department in the Western world. For more than a century, repositories were used to store information about prosecuted cases and incarcerated persons. The New Orleans Police Department was the first to use an electronic arrest and warrant data processor, in 1951, which was made from a vacuum tube–operated calculator connected to a punch card sorter and collator. But following the popular insurrections of the 1960s, IBM showcased a database that could provide patrol forces access to digitized files on people (e.g., criminal records and warrants) and places (e.g., property owners and building blueprints). In prescient fashion, IBM outlined a future where digital computers analyzed geographic, temporal, and type-of-crime patterns in incident data; evaluated personnel performance; managed the allocation of manpower; and processed inquiries sent from remote data terminals. It also prototyped a system that applied military command system technologies and techniques to apprehend criminals, prevent crimes, and inhibit social unrest. The system revolved around a Communications Input-Output Center that circulated information to and from geographically tracked patrol units. But the application of such systems at the time was limited, as they had to be maintained manually, could only be afforded by large departments, and had to be shared with other state entities.
Biometric technologies also evolved during the 1960s, most of all through advanced criminalistics. Three decades prior, the FBI’s Identification Division had developed a punch card system for searching fingerprint files based on the same Henry system used by British colonial administrators. But the system indexed each print by ulnar loops, a pattern found on only 60 percent of all fingerprints, and so proved inadequate for the War on Crime. One of the LEAA’s main tasks was to develop a new system based on ten-digit fingerprint files. Moreover, the administration sought to automate the entry and retrieval of high-resolution fingerprints given the impending expansion of offender processing. In achieving this, the FBI and the New York State Identification and Intelligence System teamed with titans of engineering, energy, and research, including General Electric Company and IBM’s Center for Exploratory Studies, to investigate digital fingerprint recognition technology. The FBI also collaborated with the Bureau of Standards, Department of Commerce, and U.S. Air Force to design a device that automatically identified the position and orientation of bifurcations and ridge endings in fingerprint cards and converted the information into digital form. Alongside these endeavors, the Bureau of Standards outlined a plan to develop a computer-aided system to automatically read, categorize, serialize, and store ink-based fingerprint cards. Other biometric technologies were developed in Illinois, where the Argonne National Laboratory prototyped automated optical scanners that scanned fingerprint files, translated them into code, and then fed them into datasets. During the same period, a division of Litton Industries prototyped the Fingerprint Automatic Classification Technique, which linked fingerprint card scanners, criminal record databases, data terminals, mainframe computers, and printers into an integrated machine. General Electric went so far as to explore ways of using holography to enhance fingerprint recognition. In 1963, a special agent named Carl Voelker realized that the national fingerprint database was so vast that it was no longer humanly manageable. As a result, he approached the National Bureau of Standards to help automate it. Cornell Aeronautical Labs Inc. and North American Aviation Inc. were awarded government contracts to develop digital processes for detecting, encoding, measuring, comparing, and matching fingerprint minutiae. Development continued until the mid-1970s, when the bureau digitized its mushrooming fingerprint database.
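Minutiae-based matching of the kind these projects pursued can be sketched as follows: encode each print as (position, orientation, type) triples and count the minutiae of two prints that align within tolerances. The tolerances and sample prints below are invented, and real systems must also correct for rotation, translation, and skin distortion, which this sketch omits.

```python
import math

# A minutia as (x, y, orientation in degrees, type); sample values are invented.
print_on_file = [(10, 12, 45, "ending"), (20, 30, 90, "bifurcation"),
                 (33, 8, 130, "ending")]
latent_print = [(11, 12, 48, "ending"), (21, 29, 92, "bifurcation")]

def matches(a, b, dist_tol=2.0, angle_tol=10.0):
    """Two minutiae match if same type, close in position, and close in angle."""
    (x1, y1, t1, k1), (x2, y2, t2, k2) = a, b
    return (k1 == k2
            and math.hypot(x1 - x2, y1 - y2) <= dist_tol
            and abs(t1 - t2) <= angle_tol)

score = sum(any(matches(m, n) for n in print_on_file) for m in latent_print)
print(f"{score} of {len(latent_print)} minutiae matched")  # 2 of 2
```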
As arrest rates escalated toward the end of the 1960s, attention within the state–technology nexus turned to finding ways that technology could expedite the processing of felony defendants through the courts. The vanguards of digital restructuring looked to computer simulation techniques to explore how court procedures might be enhanced. Juridical technologists were determined to find ways that information and communications systems could slash the time it took offenders to move from initial appearances to grand jury indictments to final dispositions. Professors from Harvard’s medical and law schools sought ways to integrate scientifically irrefutable proof into the judicial process. The National Center for State Courts’ Court Improvement through Applied Technology initiative and the Criminal Courts Technical Assistance Project explored how database information about felony offenders could be used to expedite criminal processing and how it could be analyzed to determine which attorneys were best suited to serve as trial and appellate counsel.
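As an illustration of the kind of court-flow simulation these technologists pursued, the following is a hedged Monte Carlo sketch in Python; the stage names, durations, and exponential waiting times are invented for the example, not drawn from any historical study.

```python
import random

random.seed(1)  # reproducible toy run

def mean_case_time(mean_indictment_wait_days, n_cases=10_000):
    """Estimate mean days from initial appearance to disposition,
    with each stage drawn from an exponential waiting time."""
    total = 0.0
    for _ in range(n_cases):
        appearance_to_indictment = random.expovariate(1 / mean_indictment_wait_days)
        indictment_to_disposition = random.expovariate(1 / 60)  # assumed 60-day mean
        total += appearance_to_indictment + indictment_to_disposition
    return total / n_cases

# Compare a baseline against a hypothetical expedited grand jury schedule.
print(round(mean_case_time(45)))  # roughly 105 days
print(round(mean_case_time(20)))  # roughly 80 days
```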
Applications of IT in corrections also gained headway following the rise of the LEAA. Technocrats throughout parole agencies devised disciplinary reporting systems to assess the outcomes of correctional supervision programs. In 1964, the interstate Advisory Council on Parole of the National Council on Crime and Delinquency devised a standardized data collection system for correctional assessment. Some advisors suggested that dataphones be installed in each decision maker’s office to provide information on whether an offender was a latent recidivist. The rising enthusiasm for categorizing offenders according to demographic and latent characteristics marked the first step toward the database becoming the central technology of the punitive state. In fact, the rise of this trend roughly paralleled the rise of Edgar F. Codd’s relational database management system, whose key innovation was to organize data in relations, that is, tables of rows and columns (Figure 6). Before Codd, database systems arranged data in trees or graphs that required high levels of technical competence to navigate. Different databases structured data in different ways, making data sharing a laborious task. To complicate things further, before relational databases, criminal justice agencies relied on city- or state-controlled data-processing centers. These higher administrative levels were responsible for distributing processing power throughout public agencies, and they generally did not privilege police departments.
Codd’s invention changed this state of affairs. Much like Hollerith’s machine, the relational database tabulated data such that organizing, querying, and retrieving records became dramatically easier. Codd’s relational model laid the foundation for Structured Query Language (SQL), a programming language that allows one to add, search, combine, and manipulate data tables within and across multiple databases. With SQL, creating new datasets and cross-tabulating old ones became nearly effortless. Indeed, relational systems organized data so efficiently that even databases on modest hardware could manage tables of millions of rows and columns. The size of database files became an afterthought, which allowed for the production of structured data on unprecedented scales. Such technologies allowed agencies to interface with each other’s databases and even populate each other’s data fields. The technology was a revelation for the FBI’s identification record system: it allowed users to search for and extract data irrespective of the location or structure of the underlying files. This provided a technical nucleus for the National Crime Information Center (NCIC), a centralized database that absorbs data from enforcement agencies all over the country (see chapter 4). The NCIC database houses information on property (e.g., articles, guns, securities, vehicles) and persons (e.g., gang affiliates, fugitives, immigrants, missing persons, sex offenders, terrorist suspects). These files were hailed by the ascendant generation of criminal justice technologists for containing the “most valuable information currently available for the differentiation of offenders,” including type of offense, age, race, and sex. Urban police departments also began implementing such systems for warrants in the 1960s, as with the Los Angeles Police Department’s Automated Wants and Warrant System and San Francisco’s Police Information Network.
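To see why SQL felt like a revelation for record systems of this kind, consider a minimal sketch using Python’s built-in sqlite3 module; the table and column names are invented for illustration and do not reflect the NCIC’s actual schema.

```python
import sqlite3

# Toy schema with invented names; real systems are far larger, but the
# relational principle is the same: flat tables of rows and columns
# joined by shared keys at query time.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE persons (person_id INTEGER PRIMARY KEY, name TEXT, birth_year INTEGER);
CREATE TABLE warrants (warrant_id INTEGER PRIMARY KEY, person_id INTEGER,
                       offense TEXT, issued TEXT,
                       FOREIGN KEY (person_id) REFERENCES persons (person_id));
INSERT INTO persons VALUES (1, 'J. Doe', 1948), (2, 'R. Roe', 1951);
INSERT INTO warrants VALUES (10, 1, 'larceny', '1972-03-04');
""")

# One declarative statement cross-tabulates both tables; no knowledge of
# file layout, storage location, or access paths is required of the user.
rows = conn.execute("""
    SELECT p.name, w.offense, w.issued
    FROM persons AS p
    JOIN warrants AS w ON w.person_id = p.person_id
    WHERE p.birth_year < 1950
""").fetchall()
print(rows)  # [('J. Doe', 'larceny', '1972-03-04')]
```

The point of the relational model is visible in the final query: the user declares what is wanted, and the system resolves where and how the rows are stored.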
For correctional apparatuses, the appearance of relational databases came during a period when “theoreticians, practitioners and researchers increasingly [sought] some classification system—some meaningful grouping of offenders into categories” so as to manage them more effectively. Parole administrators developed scientific methods for evaluating parole board decisions. In 1964, a consensus emerged from the National Advisory Council that parole boards needed to develop common data-reporting systems, vocabularies, and standards for evaluating programs. This took place at a time when technocrats began to conceptualize the organizational structure of the correctional apparatus as an integrated, multimodal system. And while computer-aided evaluation and tabulation slowly permeated the correctional field at the dawn of the War on Crime, a series of research articles from the decade’s latter half offers a glimpse into the rational kernel of an increasingly digitized mode of social management. In a revelatory treatise toward the end of the 1960s, criminologist Harland Hill laid out a prophetic vision of digitized correctional power. At a 1967 meeting of the Commission on Law Enforcement Science and Technology, he lamented how far corrections departments lagged behind the private sector with respect to IT. Hill proposed several ways of closing the gap. One of the more interesting applications of information systems he imagined involved computerizing correctional case management. He painted a picture of how digital computers could optimize receiving, diagnosing, disciplining, monitoring, and evaluating the burgeoning offender population. This included perfunctory tasks like assessing custodial, counseling, educational, and medical practices. It also included feeding correctional decision makers, diagnosticians (educators, social workers, psychiatrists, psychologists), and staff continuous updates on evaluations, recommendations, and relevant details for each individual prisoner. But the most intriguing part of Hill’s document is where it outlines a digitized form of correctional control inspired by air defense systems. He imagined the day when the correctional apparatus would comprise a “sensing system which will maintain surveillance over all offenders, whether in institution or in the community, so that . . . those who are deviating from expected behavior, either in a positive or negative sense, will be called to the attention of correctional authorities.”
Hill’s vision represented a new way of thinking about criminal justice, whereby the criminalized human was but an element of an increasingly automated logistical apparatus designed to shuttle human beings from one institutional node to another. Hill advanced the concept of the “correctional sequence” to illustrate how crime control could be understood as logistical management. In this sequence, the criminalized subject was an object to be secured, transported, processed, and distributed with maximum efficiency. Just like the economic field, surmised the new technologists of racialized punishment, crime control involved collecting, bulk processing, packaging, labeling, storing, and transferring human-made products. This convergence—or, better, collision—of systems analysis and criminalization reflected what sociologist Alain Touraine called social rationalization, as it articulated criminal justice as complex machinery in which each individual component (decision maker, correctional officer, criminal suspect, police officer, prisoner) became progressively interchangeable. This established the conditions for an interlocking chain of state apparatuses steered by bureaucrats and technology corporations. Such a state of affairs was touted as the beginning of an unprecedentedly efficient capacity to intercept, classify, and micromanage the people, populations, and places targeted in the War on Crime. One is almost tempted to think that the overproduction of racialized criminality, which crystallized in the form of mass incarceration, was inevitable given the newfound emphasis on efficiently patrolling, intercepting, processing, and transporting human subjects.
The Emergence of Computational Criminology
Plans to import digital computers into criminal justice included more than representing criminal justice processes as mathematical abstractions in order to maximize their functionality. They also included mathematically representing criminalized territories. Thus the slow introduction of digital computing into the War on Crime went hand in hand with the quantitative modeling of the “black ghetto.” Enter a new generation of architects, criminologists, sociologists, and urban planners devoted to modeling, analyzing, and representing criminalized communities in aid of War on Crime initiatives.
One of the most impactful transformations during the early phases of criminal justice computerization occurred in the sphere of police knowledge production. Academic researchers across disciplines were mobilized as appendages of the police apparatus, tasked with finding elegant solutions for patrolling criminalized territories. As the War on Crime kicked into gear, the living spaces of poor minorities were dissected and analyzed by social scientists with unprecedented granularity. Streets, sidewalks, and housing units slowly but surely eclipsed the city scale and concentric zones as the focus of criminological knowledge production. It is no coincidence that geographic information systems made a forceful entrance into the criminal justice apparatus at the onset of the War on Crime. Using grant money made available under the Law Enforcement Assistance Act (1965), the St. Louis Police Department’s Resource Allocation Research Unit was among the first to use geographic information systems to organize beat structures according to statistical analysis of incident report data (Figure 7). This involved creating new nine- to twelve-block areas for collecting and assessing emergency call and incident data. These so-called Pauly Areas, named after the unit’s project implementation officer, were then correlated with street locations using a computerized street file system in which each street had a unique code number tied to recorded incident activity. The unit then turned to Harvard’s Laboratory for Computer Graphics and Spatial Analysis to translate the data into contour and flat-tone maps, a task made possible by the laboratory’s vaunted Synagraphic Mapping Program (SYMAP).
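A hedged reconstruction of the street file idea, with fabricated codes and tallies rather than St. Louis’s actual records, might look like the following in Python: each segment carries a unique code, and incident counts keyed to those codes drive beat design.

```python
# Fabricated street file: unique segment codes mapped to descriptive
# fields and recorded incident tallies (all values invented).
street_file = {
    "0412": {"street": "N. Grand Blvd, 1200 block", "incidents": 31},
    "0413": {"street": "N. Grand Blvd, 1300 block", "incidents": 9},
    "0978": {"street": "Delmar Blvd, 4900 block", "incidents": 22},
}

# Rank segments by recorded activity to apportion patrol attention.
ranked = sorted(street_file.items(), key=lambda kv: kv[1]["incidents"], reverse=True)
for code, seg in ranked:
    print(code, seg["street"], seg["incidents"])
```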
The St. Louis Police Department eventually adopted IBM host and communication computers, IBM 1050 terminals, and remote teletype terminals to enter and retrieve data on adult arrests, bench warrants, and vehicles. The department also developed a centralized mainframe computer that generated statistical reports on arrests, dispatch call frequencies, and patrol response times. Because the system analyzed police data at scales smaller than the department’s existing beat structure, it became possible to create a new grid for policing. Each cell in the grid was defined by its number of reported incidents, which in turn was supposed to dictate how intensely it was patrolled. The spread of digital patrol maps preceded geographer Stan Openshaw’s articulation of the modifiable areal unit problem, which showed how patterns in spatial data are partly artifacts of classification schemes and geographic scale. For instance, analyzing incident data at the level of precincts will yield one count of “high-incident areas,” whereas analyzing the same data at the level of census tracts will yield another. Thus high-incident areas, their size, shape, and density, are partially determined by the idiosyncrasies of geographic information science.
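The scale dependence Openshaw described can be demonstrated in a few lines of Python. The sketch below aggregates one fabricated set of incident coordinates at two grid scales and counts “high-incident” cells under each; the data never change, yet the number of hot spots does.

```python
from collections import Counter

# Fabricated incident coordinates on a small grid of city blocks.
incidents = [(1, 1), (1, 2), (2, 1), (2, 2), (7, 7), (8, 8), (11, 1)]

def high_incident_cells(points, cell_size, threshold=2):
    """Tally incidents per square cell and count cells at or above
    the threshold; only the aggregation scale varies between calls."""
    tally = Counter((x // cell_size, y // cell_size) for x, y in points)
    return sum(1 for n in tally.values() if n >= threshold)

print(high_incident_cells(incidents, cell_size=6))  # coarse units: 2 hot cells
print(high_incident_cells(incidents, cell_size=2))  # fine units: 0 hot cells
```

Whether the clustered corner reads as two hot spots or none at all is an artifact of the grid, which is exactly the point of the modifiable areal unit problem.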
The practice of reducing criminalized territories to matrices of policing and capture brought about new modes of criminological knowledge production. This conceptual shift occurred alongside the birth of crimes-of-place theory and hot spot criminology, both based on criminologists’ discovery that incident reports are not spread evenly across urban space. Echoing Claude Shannon’s disregard for the meaning of the information at hand, these criminologists deliberately abandoned sociological understandings of crime rates. Understanding why incident reports were geographically clustered was of no interest to them. From the start, the social function of computational criminology was to produce targets for patrol forces; it is one-dimensional thought operating in two-dimensional space. Thus, by way of Reagan’s War on Crime, the spread of personal computers and user-friendly mapping software such as MapInfo established technology as the unmoved mover of criminological knowledge. “Quite soon,” criminologists predicted, “[digital] crime mapping will become as much an essential tool of criminological research as statistical analysis is at present.” Banishing all context from consideration and ignoring four decades of discoveries in human geography, these theories introduced their own scalar terminology suited to the geographically concentrated War on Crime. Macro denoted census tracts, neighborhoods, and square blocks; meso denoted block faces and street segments; and micro denoted facilities, specific addresses, and street corners. And it is the microscale that reigns supreme in computational criminology. The fetish for the micro—which rippled across biology, materials science, and physics—not only proved a lucrative source for criminologists documenting the details of criminalized microspaces; preoccupation with the micro also formed a shield against structural analysis. No thought was given to the relations between hot spot microareas and the externally imposed social and punitive policies used to manage them. All sources of illegal activity were treated as intrinsic characteristics of microspaces. Even the physical characteristics of these communities were construed as if they were causes of criminogenic behavior (Figure 8).
Criminologists’ affinity for mathematical abstraction seemed inversely correlated with their interest in the social forces behind rising crime rates. Their veneration of geometry provided an intellectual framework for a pseudo-physics that attributed rising levels of criminalization to attractive forces operating along vectors between offenders’ residences and crime locations. Some social scientists invoked astrophysics to explain patterns in crime datasets. Such approaches are, of course, absurd applications of Newtonian mechanics and explain virtually nothing about the geographic patterns found in police datasets in the age of mass criminalization. By completely ignoring the influence that criminal law, crime control policies, and criminal justice data generation techniques have on incident report rates, crime science forecloses its own scientific pretense. But we would do well not to underestimate arbitrary uses of science to legitimize apparatuses of state violence.
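For concreteness, the gravity-style formula such pseudo-physics borrows can be stated in one line; the variable names and constant below are illustrative, not a reproduction of any published model.

```python
def crime_attraction(origin_mass, destination_mass, distance, k=1.0):
    """Newtonian caricature: 'attraction' between an offender residence
    area and a target area, proportional to their 'masses' and inverse
    to squared distance (all quantities and units invented)."""
    return k * origin_mass * destination_mass / distance ** 2

print(crime_attraction(origin_mass=40, destination_mass=25, distance=5.0))  # 40.0
```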
The political function underlying the steady march of digital computing into criminological knowledge is twofold. The process is conservative, in that it ignores the conditions under which populations are criminalized, and liberal, in that it does so in the name of scientific objectivity. What is more, the application of programming languages to criminological knowledge production maintains a barrier against critical thought. Like any language, a programming language such as SQL cannot help but warp the social phenomena it represents. The inherent differences among social phenomena get lost in databases as they are dissected and organized into classification systems, data dictionaries, data models, and the like. Such systems must, as a matter of technical necessity, divide what they catalog into predefined categories. Real differences give way to false equivalences in this process of translating social reality into programming language. “Equivalence itself becomes a fetish.” As such, computational criminology is incapable of grasping internal differentiations within the classes of offenders it tabulates. In its system of reference, no distinction is made between the individual who sells narcotics to support a family and the individual who sells them for a gun cartel; both are coded as targets for heightened police scrutiny. Nor can this criminology grasp the qualitative differences among the crime events it analyzes. No meaningful distinction is made among a chemically addicted person who commits a drug offense, a homeless person who steals food, and a youth who murders for sport. Each subject is an interchangeable target for differential patrolling and punishment. This is extreme positivism: that which is posited, that which is given, namely the state’s datasets, establishes the limits of intelligibility. By simply sticking to the facts supplied by police departments, computerized criminology “repels recognition of the factors behind the facts, and thus repels recognition of the facts.” Between 1975 and 1990, the quinquennial growth index of carceral populations in county jails and in state and federal prisons tripled. From 1980 to 1989, the total number of drug arrests increased by nearly 133 percent. From 1980 to 2000, the number of mandatory minimum sentences quintupled. But such processes fall beyond the purview of corporatized-bureaucratized criminology, whose essential function is to assist population management. This radical erasure of the state’s influence on the demographic and geographic patterns in criminal justice datasets is so conspicuous as to be informative.
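The flattening described above is easy to exhibit. In the hypothetical schema below (invented codes and field names, not any agency’s data dictionary), two incommensurable situations arrive in the database as identical rows because the schema has no field for context.

```python
# Invented data dictionary: the schema admits only predefined categories.
OFFENSE_CODES = {"drug_sale": 3510, "larceny": 2300}

def encode(record):
    """Reduce a narrative record to the schema's fields; whatever the
    schema cannot represent is silently discarded."""
    return {"offense_code": OFFENSE_CODES[record["offense"]], "tract": record["tract"]}

a = {"offense": "drug_sale", "tract": "017", "context": "selling to support a family"}
b = {"offense": "drug_sale", "tract": "017", "context": "selling for a cartel"}
print(encode(a) == encode(b))  # True: the two contexts are indistinguishable
```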
The convergence of state power, information capital, and positivist social science begat a distinct form of racialized information power in which the incessant measurement, classification, and evaluation of criminalized populations and territories became central to statecraft. In the field of knowledge production, computerization carried forward the century-old practice of decontextualizing the outcomes of racialized governance, this time through digital computing architectures. Criminal justice policy, criminal law, discriminatory profiling, and a century of racist criminal justice practices had no place whatsoever in computational criminology, even as state expenditures on corrections increased by over 150 percent from 1985 to 1990. By claiming that the causes behind rising crime indexes were internal to criminalized communities and unrelated to the detonation of punitive legislation, policies, and practices, bureaucratic criminologists, themselves functionaries of the crime war, erased the criminal justice system from criminological thought. In its disregard for the social processes that determine who is labeled a criminal, one can draw a straight line from deindustrialization-era criminology back to the demonological theories of early modern Europe. But the Americans also made criminality out to be an intrinsic, measurable property of social space, in particular the social spaces typical of superfluous and socially stigmatized labor power. This set the grounds for an entire criminological knowledge industry based on calculating the attributes of criminalized urban areas at ever smaller scales. The more data this industry produces, the more reasons there are to flood such areas with patrol forces.
The use of statistical data to rationalize racist deployments of criminal justice is not new. Khalil Muhammad has powerfully demonstrated how the statistical method allowed the racial state to move from explicitly racist rationalizations of racial criminalization to covertly racist ones. But in computerizing the criminal justice field, the tripartite force of bureaucrats, technology corporations, and intellectuals did more than reproduce the legacy of racialized criminal justice. Statistical data were not merely signifiers produced and circulated to legitimize anti-blackness. Once geocoded and time-stamped, the data were reborn as the lifeblood of a digitized complex for patrolling, monitoring, cataloging, evaluating, predicting the behavior of, and intercepting unwanted elements in the most marginal sections of cities.