Two years before Rudolph Giuliani was sworn in as mayor of New York City, the Committee on Information Technology of the New York Police Department (NYPD) outlined a vision for a “fully automated police department.” While hagiographic accounts single out Giuliani, police commissioner William J. Bratton, and deputy commissioner Jack Maple as the catalysts behind the digital restructuring of the NYPD, a much wider constellation of forces set the process in motion. To be sure, what the NYPD and other police departments have realized is nowhere near fully automated. The problems with data entry, inefficiencies, and malfunctions associated with police technology are well documented. But the eruption of information technology in urban policing is an event that demands attention, for it signaled a new way of normalizing racialized policing, driven in part by a new partner: the IT sector.
Whatever public officials, technocrats, and technology corporations claim about computer software producing more precise policing tactics, cities have used it to justify policies of blanket control over entire populations and places. For instance, geographic information systems are used to justify the differential patrol of block corners, deployment areas, drug consumers, homeless encampments, “impact zones,” night clubs, “notorious adult use locations,” panhandlers, public housing units, public schools, open-air markets, sex workers, street intersections, subway stops, “squeegee men,” the visibly indigent, and the visibly mentally ill. The technology has thus become a force in the production of what geographer Rashad Shabazz calls prisonized landscapes, which are managed in the first instance by the police apparatus. Such was certainly the case in Chicago and New York City.
Catalysts of Police Automation in New York and Chicago
Information capital began to rise around the same time northern cities were upended by deindustrialization. Where industrial capital ceased to find low-income minorities exploitable, information capital found a lucrative opportunity. Marx’s analysis of value in motion devoted a great deal of attention to the relations between the movement of capital and differentiation. He illustrated how the continuous valorization of capital requires the differentiation of circulation periods and of forms (money, constant, fixed, fluid, variable, etc.) invested in industries that constantly change over time. It is no wonder that Marx developed a preoccupation with differential calculus in his later years. But he critically overlooked how the movement and valorization of capital also rely on differentiating populations. Capital does so, we now know, primarily along axes of gendered, racialized, regionalized, nationalized, and physical difference. Different groups are ascribed different levels of value, be it positive or negative, with respect to rates of profit and, ultimately, the right to live. But what happens when entire communities become bearers of antivalue, that is, hindrances to the movement and valorization of capital? What modes of sociospatial differentiation rectify the situation?
Restructuring Global North urban economies toward the end of the twentieth century required transforming the police apparatus to manage devalued populations. The most problematic of the lot had to be identified, quarantined, even relocated, to make way for postindustrial forms of accumulation. Not only did this project call into being new political discourses, ordinances, and policies but it also gave rise to digital infrastructures of policing and punishment. Under such conditions, a digitally enhanced police apparatus programmed to securitize nodes of global capital from the growing mass of surplus populations began to take shape.
New York City
In New York City, police computerization emerged alongside city projects to rebuild its political economic system. Until then, police were primarily deployed to monitor and suppress drug violence and the organization of radical groups, including the Nation of Islam, the Young Lords, and the Youth International Party. But more than a decade of social dislocations wrought by deindustrialization, multiple recessions, and stock market crashes catapulted the NYPD to the front lines of an indefinite War on Crime. From the early 1990s onward, the department would be characterized by biopolitical “quality-of-life” measures for the privileged and necropolitical “zero-tolerance” measures for stigmatized others.
The extent to which the NYPD was transformed in the 1990s reflected the extent to which industrial and public workforces lost their economic utility. The resolution of this crisis was largely dictated by the financial sector. The firms that bailed out the city did so while imposing stringent conditions that reshaped its economy of space. The financial sector obtained first claim over tax revenues, which it used to pay bondholders; implemented wage freezes; rolled back public employment and services (education, housing, transportation); and required municipal unions to invest pension funds in city bonds. This also led to an ever-increasing reliance on credit-rating agencies, municipal bond markets, and their concomitant “debt machine” governance to feed the finance sector. But this rising tide of financialization proceeded in lockstep with the backwash of deproletarianization. More industrial jobs were lost in New York during the early 1990s than in any other major U.S. city, save Philadelphia. Middle-wage jobs began a decade-long decline, while low-wage jobs began a decade-long increase. Between 1990 and 1991, losses in state revenue prompted a twenty-one-thousand-person cut in the public workforce. Despite improvements in the national economy, the census tallied the city’s unemployment rate at 12 percent, its highest since the 1975 financial crisis. By the mid-1990s, one-quarter of New Yorkers received incomes below the federal poverty line, and almost 45 percent of children lived at or below it. Of the almost 2 million New Yorkers living beneath the poverty line, 33 percent were classified as Hispanic, 25 percent as black, and 10 percent as white. Estimates suggest that up to fifty thousand people were homeless at some point in the early 1990s—62 percent of them black, 25 percent Hispanic, and 8 percent white—an almost fivefold increase from the mid-1980s.
Despite the sudden amplification of economic inequality, the city’s private sector experienced its largest economic boom in a half century as it approached the new millennium. New York City’s IT industry grew considerably during this period. By the mid-1980s, the city had the third highest location quotient for information-intensive employment, measured by the absolute size of the IT sector and the relative share of IT industries in the region. As a central node of global financial power, the city required a vast information communications infrastructure, which was supplied by AT&T, Motorola, Teleport Communications Inc., and other technology corporations. Internet-ready real estate like the Rudin Management Company’s New York Information Technology Center proliferated in the mid-1990s, providing interactive media enclaves for the CD-ROM developers, web designers, and virtual reality artists who would come to populate emerging labor markets. From 1996 to 2000, the city’s private-sector employment and wage and salary earnings rose at a faster pace than the national average. Service-sector employment expanded considerably as the century drew to a close, adding 160,000 new jobs in the business and consumer sectors and 110,000 in health and social services. Wages and salaries in the securities industry increased at more than double the rate of the rest of the labor market during this time.
But things looked different for the negatively racialized poor, who had come to exercise a negative economic function in the emergent economy. By the turn of the century, the unemployment rate of New Yorkers classified as black was more than double that of those classified as white, and it was only slightly less than double for those classified as Hispanic. Poverty rates among black and Hispanic subjects were approximately double and triple that of whites, respectively. In the Bronx, Hispanic poverty was nearly five times greater than white poverty, whereas black poverty was about four times greater. And with the rise of Clintonian workfare programs in New York City, well over a half million New Yorkers were cut off from public assistance. Removing these remnants of a bygone era from the urban core became a prime objective of the city’s power structure. Crime commissions founded by the business community began to monitor the effects of crime on economic growth. Commercial and real estate capitalists were among the first advocates of hyperaggressive policing tactics to eliminate social disorder inhibiting commerce. Sociologist Alex S. Vitale illustrates how Business Improvement Districts (BIDs) concerned with the effects of visible homelessness on commerce and rent values hired private police forces. The Grand Central Partnership, an influential BID comprising property owners, tenants, and public officials, was especially determined to remove the disorder pervading Midtown Manhattan. Private patrols coordinated by the NYPD appeared in Rockefeller Center, South Street Seaport, and Roosevelt Island. In 1991, the NYPD’s legal bureau launched its Civil Enforcement Initiative, which engaged merchants and attorneys to eradicate business-threatening, low-level offenses, including visible signs of sex work, drug sales, public consumption, intemperance, and loud music, through nuisance abatement, forfeiture, and loitering laws.
Such were the material conditions in which the digitized NYPD was born.
Chicago
The catalysts of police computerization were somewhat different in Chicago. Responding to the hyperviolent narcotics trade, Mayor Richard M. Daley’s administration (1989–2011) proposed a flurry of ultrapunitive anticrime policies that targeted gang members. Moving into the 1990s, Daley, like his mayoral counterpart in New York City, cast anticrime initiatives in the mold of the Clinton administration’s Violent Crime Control and Law Enforcement Act, which eventually passed in 1994. The administration tied Chicago’s fate to the omnibus bill, as it provided Chicago almost a half billion dollars to upgrade criminal justice institutions. Daley’s own legislation introduced a variety of provisions, including wiretaps of gang leaders, harsher sanctions for gang-related crime, and loitering ordinances that authorized police to disperse gang suspects from street corners. The mayor also proposed to implement the three-strikes rule and capital punishment for subjects convicted of federal crimes. As in New York, these laws, and the objectives they were designed to achieve, were mediated by the interests of commercial, information, financial, and real estate capital. Once a central hub for industrial labor power, Chicago was soon trying to restructure its economy around condominium construction, finance, gentrification, start-ups, and tourism.
But as the 1980s unfolded, Chicago and its core counties also came to hold the fourth highest concentration of employment in information-intensive industries in the metropolitan United States. The dot-com boom gave rise to numerous, albeit ephemeral, consumer-facing firms. The early 1990s saw cloud services, cybersecurity, online insurance services, software consulting firms, trading technologies, and web design firms crop up throughout the city. Chicago was also home to various professional service industries for management consultation, IT/business process outsourcing, and technology consultancies that helped major corporations around the country build digital infrastructures. The slow and steady growth of the new sectors stood in direct opposition to the decline of the old. Once an exemplar of Fordist accumulation, Chicago underwent a slow industrial decline spanning the 1960s to the 1980s. From the late 1960s to 1990, Chicago hemorrhaged 60 percent of its manufacturing jobs. Philadelphia was the only city to lose more. Between 1970 and 1990, the poverty rate increased nearly 20 percent, and the population declined by the same percentage. Saskia Sassen notes that during this time, the city lost over 25 percent of its factories and 45 percent of its manufacturing jobs. Unlike New York, Chicago saw rapid declines in service industries, losing more than thirty thousand service jobs between 1991 and 1992. Its financial sector also contracted, evidenced by a 20 percent office vacancy rate during the period.
These contradictions gave rise to heterogeneous social responses. In the register of formal politics, Jesse Jackson’s National Rainbow Coalition powered the 1984 campaign that made him the first black candidate to win major-party presidential primary contests. Black nationalism also found rejuvenation in a resurgent Chicago-based Nation of Islam, which sponsored the Million Man March in 1995. Narcocapitalism and self-medication also emerged to mitigate the material and psychological effects of deindustrialization in Chicago’s Black Belt during this period (much as in post–Great Recession rural opioid markets). Throughout the 1980s, the total number of recorded offenses increased by nearly 65 percent. In the next decade, the total number of recorded murders rose by nearly 40 percent. The police superintendent argued that the department should adopt Chinese policing strategies and curtail constitutional rights where they encumbered law enforcers. Ironically, what the Chicago Police Department (CPD)—and the NYPD—helped create was a digitized mode of policing that was eventually replicated by China’s ruling Communist Party.
Data Processing in the NYPD
A historically detailed account of digital computing in the police apparatus might begin in New York. The NYPD’s engagement with computers dates to the 1960s and 1970s, decades characterized mostly by the application of emerging computer technologies to internal administration. The department’s data processing started in 1963, when its Electronic Data Processing Division first used an IBM 1401 mainframe computer for fingerprint files, personnel records, and Uniform Crime Report (UCR) data. The division, later renamed the Management Information Systems Division (MISD), underwent modest expansions during the following decade through portable data terminals for the patrol fleet, an online booking system, and an online personnel system. MISD, which oversaw research on emerging trends in the IT industry, eventually came to manage nine mainframe databases and more than two dozen applications for administering personnel. The databases included an automated property management system to log and track property vouchered as evidence, a fleet management system to help administer the department’s automobiles, an automated alert system that notified the District Attorney’s Office when former misdemeanants were arrested, and a narcotics recidivist database that connected narcotics databases throughout the five boroughs. Through MISD, police personnel, recidivists, and suspects were incrementally drawn into the NYPD’s slowly evolving data ecology. The department’s Special Police Radio Inquiry Network (SPRINT), which automatically assigned patrol units to incident areas, prioritized emergency calls, and kept track of available patrol forces, also underwent modest expansion during the decade. In 1975, mobile terminals were installed in a small number of the Radio Motor Patrol fleet so that officers could perform license plate and name inquiries against state and federal file systems.
Teletype terminals in precincts were replaced by the Field Administrative Terminal Network, a closed system that allowed NYPD personnel to send messages between commands and to the New York State Information Network and the National Crime Information Center. However, the city’s descent into bankruptcy during this decade prevented the widespread adoption of new technology by bureaucratic agencies.
The arrival of commercial microcomputers in the 1980s established conditions more conducive to NYPD computerization. But in New York City, fiscal conservatism posed a formidable obstacle to upgrading the NYPD’s digital repertoire. In fact, the decade commenced at the end of a six-year period in which the police department shed nearly 30 percent of its personnel. The NYPD’s technological advancement in the 1980s was defined mostly by a new telecommunications network, FINEST, which connected data terminals across borough command centers, patrol precincts, and specialized units. In the middle of the decade, the department was using 50 personal computers and 204 mobile data terminals in radio motor patrols. Near the end of the decade, the department began prototyping an online complaint system that captured data from incident report forms and missing persons reports. Coincidentally, one of the computers used during prototyping had geographic mapping software, and the Committee on Information Technology casually suggested that the software could replace pin maps. So influential was the committee’s report that the mayor commissioned a private-sector survey to find out what other technologies on the market could be used for policing.
The forces of globalization encircled the NYPD and metamorphosed it in the early 1990s. This had several repercussions in terms of police technology. The early part of the decade saw the NYPD deployed to surgically remove criminalized populations from Manhattan’s core. Homeless sweeps were conducted in Central Park, along FDR Drive, and around Madison Square Garden, Penn Station, and the Manhattan sides of the Brooklyn and Williamsburg Bridges. Mayor David Dinkins (1990–93) passed his omnibus anticrime bill, Safe Streets, Safe Cities, which anticipated Clinton’s Violent Crime Control and Law Enforcement Act (1994). The federal act, the largest crime bill in U.S. history, allocated nearly $11 billion in federal matching funds to police departments. Tapping into this ultrapunitive zeitgeist, Dinkins’s Safe Streets was designed to enlarge and energize each component of the criminal justice apparatus, speed up arraignment processes, and make sentencing protocols harsher. The crux of the bill involved assembling the largest patrol force in U.S. history. It made provisions for hiring thirty-five hundred officers in addition to the three thousand already slated for the next fiscal year. Moreover, patrol car deployment was to increase by 80 percent, and transit and public housing police were to increase substantially as well. Some 80 percent of these new forces were to be concentrated in about a dozen precincts with the highest crime rates. The strategy involved block-by-block patrolling of these precincts so as to produce twenty-five hundred extra arrests in the first six months and nearly fifty thousand in the first four years.
But while the administration rattled its saber by quoting arrest statistics in public discourse, it sought other, subtler ways of reconfiguring the police apparatus. It began to expand its technical repertoire and adopt computerized modes of human and resource management. However, the NYPD’s data infrastructure was far from optimal. Institutionally, police data were generated at this time to satisfy FBI reporting requirements rather than to make the department run more efficiently. At this point, NYPD headquarters received compilations of precinct data in the form of quarterly Management Information History reports. All NYPD data were between three and six months old at the time of collation and analysis. To complicate matters, data visualization techniques were limited at the time to handmade pin maps of drug and serious crimes, including grand larceny, murder, robbery, and shootings. Different precincts mapped different crimes at different time scales. Such a lack of shared standards was unsuited to a project as massive as urban restructuring. Although Rudolph Giuliani and William J. Bratton feuded over credit for the NYPD’s technological transformation, the Committee on Information Technology had declared two years prior to their arrival that the NYPD’s “vision for information technology consists of a fully automated police department.” The committee insisted that MISD could significantly enhance community patrol forces through new relational database architectures that allowed patrol officers to access and analyze complaint, arrest, warrant, and previous incident data. Digital computers were also regarded as mediums for enhancing police administration, providing access to crime analysis, resource planning, beat redesign, and rapid response planning applications through handheld computers, mobile digital terminals in vehicles, and local area networks.
By the mid-1990s, the NYPD had launched several projects to develop database management systems for accounting, complaints, firearm ownership, fleet administration, fuel monitoring, and property tracking. It tasked its Committee on Information Technology with comprehensively upgrading the department’s digital infrastructure. The committee’s recommendations were manifold. It suggested that the NYPD obtain an integrated relational database for all incident reports; a mainframe central processing unit to support all databases; an online complaint system to support crime analysis, criminal investigations, crime reporting, and resource planning; a precinct-to-precinct microcomputer network linked into the central mainframe; and a local area network for NYPD headquarters. The committee also recommended that the department establish automated and online warrant systems, automated communications systems, digitized beat books, and digitized court orders of protection, among other technologies. It also requested funding from the state for dozens of additional projects, including, but not limited to, bioelectrical impedance analyzers, digitized department forms, driving simulators, cellular phones, statewide automated fingerprint systems, vehicle tracking devices, and videoconferencing equipment.
To rationalize the use of tax dollars for this ambitious plan, the department presented police technology to the public as a means of improving NYPD–civilian communication. IT was said to be necessary to support the analysis of local problems and the subsequent development of appropriate solutions. The technological core of the project was a “computerized database of community-related information . . . made available through designated precinct terminals. Data would include local crime statistics, [and] procedures on how to report crime incidents.” MISD also sought to equip patrol officers and detectives with access to state liquor authority databases, cellular phones, microwave video transmission devices, pagers, and photo imaging systems, among numerous other devices. Most of the recommendations during this period revolved around the internal operations of the police. But the technocratic spirit of the Committee on Information Technology eventually came to define the NYPD’s relation to urban space.
Automating Gang War in Chicago
While the application of IT to police patrols is widely viewed as an invention of the NYPD, the CPD was ahead of the curve in several ways. In contrast to the NYPD, computing technology was conceived by CPD personnel from the outset as a tool to establish a logistical police apparatus for the city’s war against gangs. As early as the mid-1980s, the Illinois Criminal Justice Information Authority was funded by the Bureau of Justice Statistics to develop Spatial and Temporal Analysis of Crime (STAC) software. STAC laid overlapping circles on city maps and then counted the number of reported incidents in each circle (see Figure 14). Police targets emerged at the center of circles with the highest rates of reported incidents. The program also used probability theory and geometric techniques to predict future high-incident areas. This was a predictive policing prototype during the era of Reagan’s War on Drugs, and it was first tested in the poorest sectors of Chicago’s Black Belt.
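The circle-counting logic attributed to STAC can be illustrated with a minimal sketch in Python. Everything here is hypothetical for illustration: the incident coordinates, the grid of candidate circle centers, and the radius are invented, and the real program's probabilistic forecasting of future high-incident areas is not shown. The sketch simply lays candidate circles over a coordinate plane, counts the incidents falling inside each, and ranks the centers by count.

```python
import math

def stac_hot_spots(incidents, centers, radius, top_n=3):
    """Count incidents within a fixed radius of each candidate circle
    center, then rank centers by count. The highest-count circles mark
    candidate hot spot targets, as in STAC's circle-overlay method."""
    counts = []
    for cx, cy in centers:
        n = sum(1 for x, y in incidents
                if math.hypot(x - cx, y - cy) <= radius)
        counts.append(((cx, cy), n))
    # Highest-count circles first; ties keep grid order (stable sort)
    counts.sort(key=lambda item: item[1], reverse=True)
    return counts[:top_n]

# Hypothetical incident coordinates on an arbitrary grid
incidents = [(1, 1), (1, 2), (2, 1), (2, 2), (8, 8), (9, 8)]
# Overlapping candidate circles laid evenly over the grid
centers = [(x, y) for x in range(0, 10, 2) for y in range(0, 10, 2)]

print(stac_hot_spots(incidents, centers, radius=2.0))
# → [((2, 2), 4), ((0, 2), 3), ((2, 0), 3)]
```

In a real deployment the circles would be georeferenced and the counts drawn from incident report databases; the ranking step is what turned maps into target lists.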
The CPD’s implementation of geographic information systems was far from a seamless affair. It was retrofitted into the department in fits and starts, owing to both happenstance and institutional rigidity. In the late 1980s, the CPD collaborated with the National Institute of Justice (NIJ), the Chicago Alliance for Neighborhood Safety, and Chicagoland university professors to pilot the Microcomputer-Assisted Police Analysis and Deployment System (MAPADS) in the West Side community of Austin. At the time, the city classified Austin’s residents as 85 percent black, 30 percent of whom were living in poverty. The MAPADS pilot was an unintended consequence of patrol officers in the CPD’s Twenty-Fifth District tinkering with STAC and coming across correlations between hate crimes and stolen vehicles. The officers ended up exposing an entire auto theft ring, and the district commander was instantly convinced of the revelatory power of geographic information systems. Before long, these police-cum-technocrats boasted that geographic information systems could be used to generate “institutional memories,” or criminogenic rankings for each police beat in the district. “The maps,” a task force member later declared, “are the only place where one can see everything that is going on in an area. . . . Only on a map can the entire beat experience be put together and the pattern discerned from the individual incidents.” This epiphany was the spark that eventually lit digital transformations throughout the entire department.
Projects such as MAPADS accelerated as Mayor Richard M. Daley ratcheted up Chicago’s Gang War. Daley stressed that Chicago’s fight against gang violence required technological upgrades to cut off international flows of drugs and firearms. Lobbying for military technology, he likened the most violent parts of Chicago to battlegrounds of the Colombian drug war. Aldermen promising constituents high-tech policing compared violent parts of the city to Vietnam. Citizens were encouraged to organize Block Clubs to assist the CPD in the street war against gangs and ultraviolent drug markets. But tooling up for the Gang War first required harnessing the power of computer technology. “For all the advances in criminal science and technology,” Daley bemoaned, “ours is still a police department that is mired in the past.”
Soon after taking power, Daley’s administration began acquiring land to construct a $61 million high-tech police headquarters with the Illinois Institute of Technology. The request came on the heels of the department revamping its incident database for beat officers. City officials spun the move as a community policing initiative meant to assist a “wholesale transformation of the department, from a largely centralized, incident-driven, crime suppression agency to a more decentralized, customer-driven organization.” “Computer-aided dispatch systems, onboard computers, in-car video systems, personal computers for report writing and other computer technologies,” the state’s Information Authority exclaimed, “are becoming part of the mainstream in law enforcement. The cyberworld of the information highway also has become part of the law enforcement technological age.” Using a grant from a private insurance company, the CPD began assembling its Information Collection for Automated Mapping (ICAM) system, which was completed in 1995 (see chapter 4). The first public mapping interface to use Esri’s digital mapping software, ICAM not only represented CPD incident data cartographically but also overlaid buildings, citizen complaints, churches, liquor stores, schools, taverns, and other facilities. Soon after launch, ICAM emerged as a central part of CPD strategy. It was quickly expanded so that detectives, narcotics officers, and the general public could access and map data. Subsequent versions were installed on laptops in each patrol vehicle, allowing patrol officers to conduct temporal analysis and map hot spots, incidents according to different categories, and incidents according to distance from specified locations. The CPD also used ICAM to help School Patrol Units determine where to deploy security dogs, metal detectors, and school patrol cars.
As the new millennium drew closer, the CPD sought consultation from Oracle Corporation to upgrade its central database, the Criminal History Records Information System (CHRIS). Oracle was founded in 1977 in California by an entrepreneur who, perusing the IBM Journal of Research and Development, came across an article on relational databases. But the corporation that emerged was less an oracle than a priest, as it both pronounced the cause of evil (hot spots) and proposed a way of exorcising it (hot spot policing). At the time the CPD approached the corporation, Oracle commanded roughly three-fourths of the federal database market and was the CPD’s lone consultant for computer-related matters. Oracle had also built CHRIS, which caused the CPD several problems owing to its lack of user-friendliness and hefty training requirements. Moreover, rank-and-file officers had had no input in the development of its interface, and the templates on data entry screens differed greatly from incident case report sheets. After determining the potential market value of an updated database system, the CPD and Oracle set out to modernize CHRIS. The corporation was given full access to CPD operations, the CPD was given ownership of the software’s proprietary version, and Oracle retained ownership of the generic version. Oracle contributed $35 million to bringing the system up to date. It also committed ninety thousand consulting hours and five hundred hours of basic training for CPD staff at Oracle University. The CPD, for its part, reallocated $9 million from its Community Oriented Policing Services funds to the project. Over the course of research and development, the CPD stressed that the new database needed to be more user-friendly, online compatible, and equipped with geographic information functionality. The result was the Citizen and Law Enforcement Analysis and Reporting (CLEAR) system, whose crown jewel was its mapping application, CLEARmap.
For CLEARmap’s architects, the main benefit of the program was its ability to produce targets for patrol units at unprecedentedly small geographic scales. It could also generate new categories of hot spots by correlating CPD data with data taken from other bureaucratic datasets. CLEARmap was even linked up with surveillance cameras so that maps could be supplemented with video feeds in schools, hospitals, street intersections, transportation hubs, and vanity buildings, such as the Daley Center, Sears Tower, and Shedd Aquarium, at a resolution of five hundred square feet.
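The kind of cross-dataset correlation described above can be illustrated with a hedged sketch. The records, block identifiers, and threshold here are all invented: the point is only the mechanism, in which incident counts from one dataset are joined against another bureaucracy's records to yield a derived category of hot spot that exists in neither dataset alone.

```python
from collections import Counter

# Hypothetical police incident records, each tagged with a block identifier
incidents = [
    {"block": "100-W-MAIN", "type": "robbery"},
    {"block": "100-W-MAIN", "type": "battery"},
    {"block": "200-S-ELM", "type": "theft"},
    {"block": "100-W-MAIN", "type": "theft"},
]

# Hypothetical records from another bureaucratic dataset
# (building-code violations), keyed by the same block identifiers
violations = [
    {"block": "100-W-MAIN", "code": "vacant building"},
    {"block": "300-N-OAK", "code": "unsafe porch"},
]

def derived_hot_spots(incidents, violations, min_incidents=2):
    """Flag blocks that both exceed an incident threshold and appear in
    the second dataset -- a new category of hot spot produced purely by
    correlating the two sources."""
    incident_counts = Counter(rec["block"] for rec in incidents)
    violation_blocks = {rec["block"] for rec in violations}
    return sorted(
        block for block, n in incident_counts.items()
        if n >= min_incidents and block in violation_blocks
    )

print(derived_hot_spots(incidents, violations))  # → ['100-W-MAIN']
```

Each additional dataset joined in this way multiplies the categories of target a system can emit, which is what made the approach attractive to its architects.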
Digital mapping was not CLEAR’s only function. Its Gang Book application provided a registry of known gang members’ activity spaces, alliances, and symbols. CLEAR also sported a web-based arrest management system, the Automated Incident Reporting Application (AIRA), which was originally used by custodial personnel to enter information about prisoners during intake. Before CLEAR, CPD bureaucrats were tasked with recording arrest information. But AIRA offered patrols the ability to use wireless data portals to enter information about arrests, digital mugshots, and follow-up reports directly into the department’s case reporting database. CLEAR included an arrestee-detainee tracking function, which gave officers instant access to biometric data (fingerprints, hair color, height, weight), central booking numbers, criminal history reports, demographic information, and mugshots. One of CLEAR’s server subsystems allowed rank-and-file officers to view and edit data in real time. The technology effectively made patrol squads into mobile criminal processing units, laying the groundwork for new ways of administering criminalized populations and places.
The arrival of upgraded database technologies coincided with a blitzkrieg of police sweeps, or dragnet operations carried out by patrols on street segments, at street intersections, and in housing complexes targeted by CLEAR. One such tactic, called hot spot saturation, involved infusing upward of 250 tactical officers into hot spot blocks targeted by STAC software. In 2002, the department introduced an initiative in which hot spot saturation teams were deployed for the express purpose of conducting mass arrests on a monthly basis. The CPD also began to deploy personnel to talk to residents in every house or building located on hot spot blocks. Initiatives like these were used eight times within the first four months in MAPADS’s home neighborhood of Austin.
The CPD’s computer-aided sweeps occurred around the same time that it first implemented a mapping application on its closed intranet, which was managed by the Deployment Operations Center (DOC). The DOC introduced a logistical mode of racialized policing that formed around a software application that generated a ceaseless stream of targets for patrol units. The targets emerged from mining data on arrests, convictions, gang affiliations, incident reports, juvenile records, warrants, and vehicle registration and biometric data on more than 17 million entries. DOC predicted future violence by analyzing data on parolees, probationers, youth offenders, and sex offenders. DOC targets appeared in the form of Level II Deployment Areas, areas where violence was deemed probable. Every week, DOC analysts identified these areas by scouring data on gang activity, local stakeholders’ concerns, and the whereabouts of persons of interest, such as ex-convicts, high-level gang members, relatives of gang members, and relatives of victims of violence. The center produced hundreds of deployment areas ranging in area from 0.06 to 4.08 square miles—about 60 percent larger than police beats—on an annual basis. After identifying deployment areas, the DOC also made recommendations on patrol tactics. These included aggressive enforcement of low-level disorders, the permanent patrolling of suspected drug sites, and the issuance of loitering dispersals to people in targeted spaces. Another tactic was strategic traffic enforcement, which involved stopping vehicles for traffic law violations and establishing checkpoints in deployment areas to conduct car searches. In 2003, the DOC unleashed a fifty-person Targeted Response Unit (TRU) to saturate deployment areas and exhibit zero tolerance on drug, gang, and gun suspects. In 2005, the number of TRU personnel was increased by 50 percent, and the TRU conducted 4,774 missions that generated a total of 7,402 arrests. 
While the unit was originally assembled to focus on violence, guns, and car theft, nearly one-third of all arrests in 2005 fell into the “other” crime category. Three years later, the department launched its Mobile Strike Force, a citywide unit tasked with supplementing ordinary patrol forces according to spatial analyses. In a little over a year, the Mobile Strike Force had conducted 1,190 missions, made 4,271 arrests, and impounded 934 vehicles.
The DOC was used to rationalize not only uneven distributions of patrol units but also uneven distributions of police–civilian interaction. Some deployment areas were marked specifically by DOC bureaucrats for increased “contacts.” In such areas, specialized units were encouraged to proactively engage civilians on the street. In these instances, patrol units were required to document interactions that did not lead to arrest on field contact cards that included the subject’s name, nickname(s), vehicle information, and gang affiliation. Once digitized, the field contact cards were supplemented with fingerprints and mug shots and given a serial number. Between 2003 and 2014, the production of nongang contact cards increased by 80 percent and gang contact cards by 260 percent. These increases resonated with other instances of the CPD using geospatial technology to normalize its warlike approach to suppressing narcotic activity. The department’s 2003 Operation Just Cause, named after the 1989 U.S. invasion of Panama, targeted latinx gang members for minor infractions. “If they are caught drinking, urinating, or throwing a candy wrapper out the window, they are subject to arrest,” a CPD spokesperson explained. Computer targeting assisted similar sweeps in the Near South Side’s Harold Ickes public housing, Bronzeville’s Dearborn Homes, Little Village, and the Far South Side. Many of these surges took place during the Safe Summer initiative, which involved the police establishing ninety roadside safety checkpoints on Friday nights. The police credited the initiative with a 17 percent increase in gang dispersals, a 30 percent increase in curfew arrests, and a 47 percent increase in graffiti arrests. An invention of the Gang War in the 1980s, the DOC’s geographic database technology was central to rationalizing a new cartography of racialized administrative power.
Much ink has been spilled on the NYPD’s data-driven administrative system, CompStat. It has been praised in police circles as the “single most important organizational innovation in policing during the latter half of the 20th Century” and a “revolutionary paradigm shift” in U.S. policing. Nearly every major city in the United States now uses some form of CompStat, and it is spreading to urban administrations across the planet. But to be certain, CompStat was only one component of a much larger project to manage the sociospatial contradictions of urban restructuring. In New York City, its ascendance signified a drive to establish a degree of territorial order against the deterritorializing effects of capital circulation. As with the urban renewal programs and organized revolt three decades prior, the city found itself confronted with the problem of populations whose hindrance on profit rates was increasing. City officials responded in part by eviscerating public aid while revolutionizing the instruments of policing.
The computerization of the NYPD gained momentum during a period when tough-on-crime hyperbole was in full effect. At the time, mayoral hopeful Rudolph Giuliani, a former U.S. attorney, propelled himself into office by promising to establish social order through a militarized police apparatus. In his 1994 inauguration speech, Giuliani conveyed the image of a revitalized city “built around stricter enforcement of the law to reverse the growing trend of ever-increasing tolerance for lawless behavior.” Quality-of-life enforcement, zero tolerance, and broken windows theory were the policies introduced to achieve this. From the outset, one of quality-of-life policing’s most vocal proponents was a business association masquerading under the name of the Citizens Crime Commission. Since 1990, the commission had implored the city to place greater emphasis on regulating low-level incivilities and victimless infractions. Its main concern was how visible drug addiction, homelessness, indigence, and sex workers hurt businesses. With support from Business Improvement Districts and residential associations, the commission recommended that municipal assistance corporations or special levy taxes fund NYPD reforms, the lion’s share being allocated toward expanding the department by five thousand patrol officers who would be “permanently assigned to the same small pieces of the city, day after day, making themselves, knowing and being known by the residents, upholding the standards of civilization.”
Zero-tolerance policing was predicated on subjecting criminalized people and places to extraordinary levels of patrol forces. Its essence was distilled in the NYPD’s Police Strategy 5 to “reclaim public spaces,” which targeted lower-grade offenses across core areas in Manhattan. Police Strategy 5 was directed at “symbols of disorder,” such as beggars, loiterers, visibly mentally ill persons, truants, subway fare cheaters, and disruptive motorists. Upon taking office, Giuliani announced that the NYPD would begin to observe the disorderly conduct statute with renewed vigor, which, pursuant to §240.20 of New York State’s penal law, stated that a person is in violation of public order for a variety of vague behaviors, including failure to disperse while in a group, unreasonable noise, or obscene language or gestures. Urban geographer Neil Smith famously called the program behind these changes revanchism, named after a French group organized against the liberalism of the Second Empire and the Paris Commune in the late 1800s. Giuliani recited all the platitudes of French revanchists—respect for traditional authority, fear of progressivism, resentment for disadvantaged classes. The mayor attributed the city’s seven-year economic decline to gender nonconformists, homeless people, immigrants, racial minorities, pornographers, and putative “welfare queens.” Giuliani claimed that these groups, along with Mayor Dinkins, the city’s first black mayor, were taking the city from the white middle class. Revanchist policies, with the NYPD at the tip of the spear, were means of reversing the trend. And like twenty-first-century surges of U.S. white nationalism, the 1990s revanchist movement spread virally through heartless shaming, scapegoating, and extraordinary cruelty. 
The city swiftly arranged itself to cut public school funding, dismantle rent control, eliminate homeless shelters, and shutter mental clinics, on one hand, and conduct militarized homeless sweeps, squatter evictions, and subway sweeps, on the other.
Broken windows theory, which posits that criminality is a function of urban blight, was the third core logic of Giuliani-era NYPD reform. First articulated in criminologist George L. Kelling and political scientist James Q. Wilson’s 1982 article in the Atlantic, the theory explains crime as an effect of exclusively neighborhood-level phenomena, including social disorder (public intoxication, profanity), physical disorder (untended property, graffiti, litter), and anomie. The theory was based on a New Jersey police department’s Safe and Clean Neighborhoods initiative in the mid-1970s that found that foot patrols, in contrast to motorized patrols, increased citizens’ perceptions of safety and willingness to assist police. Kelling and Wilson also drew on psychologist Philip Zimbardo’s experiments in the late 1960s, which hypothesized that untended property in public spaces attracts vandalism and more serious criminal behaviors. The monumental errors characterizing broken windows methodology are well documented. But what often escapes scrutiny is the racial project for which the theory was wrought. Broken windows theory was another example of corporate-bureaucratic intellectuals explaining criminal justice datasets by completely sidestepping analysis of the criminal justice apparatus—the very same apparatus that renders subpopulations unemployable, rips family units apart, and cultivates true hatred for juridical authority. Such a radically reified frame of thought translated seamlessly into computer code.
To make quality-of-life, zero-tolerance, and broken windows principles amenable to programming languages, the department had first to mathematicize the theories. This occurred in the mid-1990s through spatial statistical science. In most accounts, the NYPD’s embrace of computer-aided spatial analysis is attributed to a deputy commissioner, Jack Maple, who ordered precincts to make pin maps of burglaries, car thefts, gun-related crimes, murders, narcotics activity, robberies, and shootings in 1993. At this point, the NYPD used acetate overlay maps of different crimes at the crime meetings. Budget cuts, however, made this approach too costly and time consuming. To defray costs, the deputy commissioner purchased a Hewlett-Packard 360 with funds from the Police Foundation, an organization founded by the Ford Foundation to research law enforcement innovation and science. At first, the NYPD’s patrol borough staff used the computer to log hand counts of UCR crimes and civilian complaints. The staff compiled data for the first six weeks of 1993 and 1994 in a computer file called both “compare statistics” and “computer statistics” by department staff. Soon after its inception, the small file was renamed “CompStat Book” and integrated with the Online Booking System, a centralized repository of data coming from precincts across the department. The file was also merged with a geographic information application that extrapolated spatial patterns from precinct datasets that were not coterminous with existing administrative boundaries. These labors resulted in the translation of quality-of-life, zero-tolerance, and broken windows theory into digital dialects, which ensured that War on Crime logics persisted even if policing policy and rhetoric changed.
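The comparative logic of the early “compare statistics” file can be sketched schematically: tally offenses for the same six-week window in consecutive years and compute the percentage change for each category. All figures and category names below are invented for illustration; nothing here reproduces actual NYPD data or software.

```python
# A minimal sketch, assuming a hand-counted tally per offense category
# for the same six-week window of two consecutive years (figures invented).

def percent_change(current: int, prior: int) -> float:
    """Year-over-year percentage change, as displayed in CompStat books."""
    if prior == 0:
        return float("inf") if current else 0.0
    return 100.0 * (current - prior) / prior

# Hypothetical hand counts for the first six weeks of each year.
counts_1993 = {"burglary": 410, "robbery": 322, "car theft": 518}
counts_1994 = {"burglary": 369, "robbery": 350, "car theft": 466}

changes = {
    offense: round(percent_change(counts_1994[offense], counts_1993[offense]), 1)
    for offense in counts_1993
}
print(changes)  # {'burglary': -10.0, 'robbery': 8.7, 'car theft': -10.0}
```

The point of the sketch is the simplicity of the underlying operation: the “revolutionary” CompStat file began as little more than side-by-side counts and percentage differences.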
Soon after the birth of CompStat, the NYPD’s Office of Management Analysis and Planning enlisted the expertise of the Center for Urban Research and the Center for Applied Studies of the Environment at the City University of New York. The team’s objective was to build a new geographic information system capable of organizing, analyzing, and visualizing a greater variety of NYPD data. The fruit of these labors was the Crime Mapping and Analysis Application, which integrated software applications from MapBasic, MapInfo Professional, Microsoft Excel, and Vertical Mapper SDK. In designing the software, the group started by vectorizing NYPD datasets to encode the longitude and latitude coordinates of reported incidents, which it then superimposed onto police beats, precincts, census block groups, and street grids. The group visualized police data through maps that depicted aggregations of incidents by precinct and sector; geographic distributions of incidents overlain on maps of the city; and clusters, distributions, and pathways of reported incidents. The spatial statistical analysis centralized NYPD internal operations. “Very few operations outside the Soviet Union,” marveled a policy expert working with the NYPD, “were so centralized. [It was] like something dropped from outer space in terms of what it could do.”
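The geocoding-and-aggregation step described above can be illustrated in miniature: each incident’s longitude/latitude pair is assigned to an administrative unit and tallied. Real systems test points against true precinct polygons; this sketch simplifies precincts to bounding boxes, and every coordinate and precinct number is hypothetical.

```python
# A minimal sketch, assuming precincts simplified to bounding boxes
# (real GIS software uses full polygon geometries).
from collections import Counter

# precinct id -> (min_lon, min_lat, max_lon, max_lat); invented values
precincts = {
    "Pct 1": (-74.02, 40.70, -74.00, 40.72),
    "Pct 5": (-74.00, 40.71, -73.99, 40.73),
}

def locate(lon, lat):
    """Return the precinct containing the point, or None if unmatched."""
    for pct, (x0, y0, x1, y1) in precincts.items():
        if x0 <= lon <= x1 and y0 <= lat <= y1:
            return pct
    return None

# Hypothetical geocoded incidents as (lon, lat) pairs.
incidents = [(-74.01, 40.71), (-73.995, 40.72), (-74.015, 40.705), (-73.95, 40.80)]
tally = Counter(locate(lon, lat) for lon, lat in incidents)
print(tally)  # per-precinct incident counts, plus unmatched points under None
```

Superimposing point data onto beats, sectors, and census blocks is just this operation repeated against different boundary layers.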
For NYPD administrators, computer analysis also provided an instrument for human resource procedures that revolved around processing all types of data to evaluate commanding officers. Commanding officers were made to defend their precincts’ performance in front of the top NYPD officials and under a neon canopy of charts, graphs, and tables on a biweekly basis. During the notoriously grueling meetings, commanders’ profiles showing their dates of appointment, educational levels, specialized training, and years in rank were displayed for all to see. They also displayed statistics on the commander’s precinct, including available resources, average response time, community demographics, crime statistics, domestic violence incidents, integrity monitors, officer absences, and unfounded radio runs. NYPD headquarters began to keep digital score cards on commanders, which cultivated a new age of predatory policing. Commanders (and officers) were now tasked with producing numbers. In 2013, it was revealed that rank-and-file officers had stop-and-frisk quotas that were verified through case management data. The ordeal revealed that the practical function of NYPD data was not to document crime but to provide proof of police aggression to department brass under pain of demotion or job termination. “The CompStat system,” Bratton later explained, “was police Darwinism; the fittest survived and thrived.” Put simply, CompStat was developed in part to governmentalize police behavior.
In addition to its uses in centralization, CompStat was embraced by NYPD technocrats to decentralize decision-making. It was a key piece of the commissioner’s “reengineering” initiatives, which were characterized by applying post–World War II management theories to NYPD administration. Just as police reformers of the Progressive Era adopted Taylorist principles of management, Commissioner Bratton adopted the gospel of horizontal decision-making that gained traction during the 1970s in Silicon Valley companies such as Apple, Atari, and Oracle. The NYPD thus “decentralized its operations, not to the police officer on the beat . . . but rather to the precinct commander,” who, because of CompStat’s ability to track and analyze precinct performance, was afforded greater discretion in planning, staffing, and implementing initiatives. In a 1996 speech titled “Decentralizing and Establishing Accountability,” the commissioner explained that a decentralized NYPD meant precinct commanders “were empowered to assign officers as they saw fit, to focus on the priorities of the neighborhood. Whatever was generating the fear in their precinct, [commanders] were empowered to address it by prioritizing their response.” Decentralized decision-making shifted the way that the NYPD allocated resources from a logic of functional specialization (detectives, forensics, youth officers) to one of territorial specialization (census blocks, hot spots, street segments). Beginning with CompStat, the department began to conceptualize the NYPD as part of a larger social management infrastructure overseen by digital machines.
As management science tells us, an organization’s turn to data analytics increases its demand for additional data. As a consequence, CompStat was eventually used to manage a vast and diverse ecosystem of data. In 1995, the department created the Pattern Identification Module (PIM), a team made up of the detective, housing, organized crime, patrol, and transit bureaus that scoured data in search of geographic trends. PIM used CompStat’s mapping function to carve up the city into two dozen precincts with the highest quality-of-life offenses, twenty-nine precincts with the highest sex work offenses, and more than 250 hot spot areas with “metastasized disorder.” Data on active bench warrants, civilian complaints, daily summonses, parole residences, parole warrants, and desk appearance tickets for minor violations and misdemeanors were mapped and scrutinized. Data on drug sales, homelessness, littering, unlicensed vending, panhandling, and public consumption came next. The city also used a geographic information system in efforts to establish a “rational distribution” of unlicensed street vendors across the city. Furthermore, it built geographic databases to track the movements of registered chronic misdemeanants, disorderly youth, disruptive students, drug offenders, sex workers, shoplifters, truants, and violent offenders across the city. In sum, the computerization of NYPD administration was deeply wound up in dividing urban space according to varying levels of policeability. Moreover, it allowed the city to rationalize differential law enforcement through the newspeak of spatial analysis and postindustrial management theory.
Calculating Problem Populations and Places in New York City
As the millennium dawned, New York City’s Mayor Michael Bloomberg (2002–13) announced that the city would take full advantage of digital technologies to expand the frontiers of the Giuliani-era NYPD apparatus. Of course, the biopolitical policy of improving life was shadowed by its necropolitical counterpart, zero tolerance. Bloomberg announced that managing “problem people and problem places” was the new centerpiece of his administration’s crime control policy. One is tempted to think that the policy was the product of some mischievous reading of Foucault, who identified the problem of populations, administered through statistics, as the definitive feature of modern governance. For the NYPD, the problem people–problem places strategy was designed to exploit the “ability to mine, refine, and use data, from CompStat and other sources, about problem people and problem places.” The police were to do so by identifying disorderly areas, increasing tactical forces in these areas by the thousands, and tracking disorderly persons as they maneuvered about the city.
Part of the problem people–problem places initiative revolved around continuously tracking targeted individuals. In early 2002, city officials announced an initiative to intensify the surveillance of repeat quality-of-life offenders convicted of drug possession, property crime, prostitution, sex offenses, subway fare evasion, or trespassing. The NYPD also began making digital identification cards of persons of interest that, although the cards were not authorized by courts, were at times used like arrest and bench warrants. But it was the NYPD’s digitally generated impact zones, or areas identified as criminogenic through spatial statistical analysis, that positioned police as permanent administrators of impoverished areas. In 2002, impact zoning was made operational through deployment of fifteen hundred patrollers in two dozen areas across the city. Through the program, police were tasked with probing areas for minor offenses, including disorderly conduct, loitering, and a range of imprecisely defined behaviors. The prime objectives of the operation included increasing summonses for homeless encamping, defecation, marijuana consumption, panhandling, public alcohol consumption, public urination, and unlicensed window washing. Between 2003 and 2006, thirty zones were marked for this type of enforcement. Two years after its launch, the initiative was proudly credited by the mayor for generating more than a half million summonses and fifty thousand arrests.
Impact zones provided not only a scientific pretext for inundating negatively racialized communities in patrol units but also a rationale for micromanaging them through hyperactive tactics. In point of fact, geographic information systems supplied the police with spatial coordinates to deploy the stop-and-frisk technique, the erstwhile centerpiece of the digital age NYPD apparatus. The number of times that minorities were halted, queried, and frisked by patrol officers multiplied following the advent of impact zoning. From 2005 to 2006, there were nearly 510,000 stops in impact zones, a 500 percent increase from the prior year. A network of checkpoints and chokepoints also crystallized in CompStat’s computers. In 2010, the ten precincts with the most stops registered almost as many stops as the other sixty-six precincts combined. In the Brownsville section of Brooklyn, an area with the densest concentration of public housing in the city, upward of fifty-two thousand stops were executed. Males between the ages of fifteen and thirty-four in this area were stopped on average five times a year. Between 2006 and 2010, the NYPD conducted 329,446 stops on suspicion of trespassing, only 7.5 percent of which resulted in an arrest and 5 percent in a summons. Over 80 percent of people stopped were classified as black or Hispanic, though they made up a combined 53 percent of the population. Only 10 percent of those stopped were issued summonses or arrested. Between 2002 and 2012, more than 85 percent of all stops were performed on black and Hispanic subjects. Patrol units in impact zones were tasked with doling out punishments for graffiti writing, littering, loud music, marijuana smoking, panhandling, and unlicensed vending. Of the twenty precincts with the largest number of frisks, eleven were majority black, six were majority Hispanic, and three were majority white.
The NYPD also analyzed emergency calls to identify “high noise zones” because “noise nuisances [were] increasingly an indicator of a lack of civility and urban disorder.” Patrol forces were systematically deployed to eliminate noise pollution from bars, boom boxes, car alarms, clubs, disorderly persons, incomplete construction sites, and vehicle engines and horns. Moreover, the police used several technologies to enforce noise nuisance ordinances. The city proudly announced that the noise elimination program yielded eighteen thousand arrests in its first thirty months.
This seemingly self-expanding apparatus of racialized policing penetrated public housing, which secreted a quasi-correctional architecture around the living spaces of the city’s castaways. Housing complexes located in impact zones were subjected to “vertical patrols.” Drawing from the 1991 Operation Clean Halls, these patrols comprised police deployed in at least 3,895 low-rent private housing complexes and generated more than 20,000 arrests and 209,000 summonses in their inaugural year. The city justified the patrols by invoking the New York City Housing Authority’s do-not-enter bylaws, which prohibited nonresidents from entering public housing uninvited. Vertical patrols stopped trespassing suspects, asked whether they lived in the building or were visiting a resident, and requested identification. NYPD analysts also identified impact schools, which further carceralized marginalized communities. Impact schools were patrolled by the School Safety Division (SSD), a unit led by Patrol Borough Commands that enforced school disciplinary codes and securitized school grounds. At the height of impact zoning, the SSD had increased school safety agents by over 60 percent, making the total head count of safety agents and police in New York public schools one of the largest enforcement entities in the country. Agents and police were deployed to perform a series of patrol maneuvers inside schools, such as regulating flows of students through entrances and exits, clearing school perimeters of unauthorized individuals, creating checkpoints for ID cards, conducting searches and seizures for contraband, maintaining safe and orderly cafeterias, and reporting suspicious persons and activities. Agents also performed building sweeps in bathrooms, classrooms, exits, offices, and stairwells for suspicious activity, unauthorized persons, and disorderly behavior.
With the exception of two, impact schools were located in zip codes that were on average at least 13 percentage points more black, 4 percentage points more Hispanic, and 8 percentage points less white than the city overall. Students classified as black and Hispanic made up 94 percent of all school-related arrests and were fourteen and five times more likely to be arrested than white students, respectively. After seven years of impact enforcement, suspensions increased by 130 percent, which paralleled an overall decline in the black student population. More than half of black students arrested were documented as having a disability.
At roughly the same time that impact schools appeared, the NYPD began developing body-scanning technology that utilized terahertz waves to detect metal objects. “If something is obstructing the flow of radiation, for example a weapon,” the police commissioner explained, “the device will highlight that object.” Upon detecting an anomaly, algorithms determined its risk level, and levels that exceeded predefined thresholds alerted the patrol units closest to the point of detection. This “sentient” dimension established conditions in which machines began to play a part, however nominal, in organizing distributions of racialized police encounters by themselves. This is because the machines autonomously transmitted information that, in some instances, influenced patrol behavior. This diffusion of surveillance through automated alert systems invoked Deleuze’s control society, as it very much resembled a sieve whose mesh spread throughout physical space-time. The mesh is not only figuratively gaseous but also literally electromagnetic. But the NYPD’s body and environment scanning did not change the fact that the negatively racialized poor are prodded into internment. When it comes to the racial administrative state, computerization hardly equated to the disappearance of enclosures or violence.
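The alerting pipeline described above — a score, a threshold, a dispatch to the nearest unit — can be sketched in a few lines. The threshold, unit names, coordinates, and scores below are all hypothetical; this is an illustration of the decision logic, not the NYPD’s system.

```python
# A minimal sketch, assuming scanner output reduced to a single risk score
# and patrol positions reduced to planar coordinates (all values invented).
import math

RISK_THRESHOLD = 0.8  # hypothetical cutoff

units = {"Unit A": (0.0, 0.0), "Unit B": (3.0, 4.0)}  # unit -> (x, y) position

def alert(risk_score, detection_point):
    """Dispatch the closest unit if the score exceeds the threshold, else no alert."""
    if risk_score <= RISK_THRESHOLD:
        return None
    return min(units, key=lambda u: math.dist(units[u], detection_point))

print(alert(0.95, (2.5, 3.5)))  # Unit B — closer to the detection point
print(alert(0.40, (2.5, 3.5)))  # None — below threshold, no alert
```

Even in this toy form, the machine’s output alone determines whether and where an officer is sent — the nominal autonomy the paragraph above describes.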
The National Emergence of Predictive Policing
If impact zoning was publicized by city officials as a way of locking down problematic geographies, predictive policing was publicized as a way of regulating their temporality. Though there is no consensus definition of predictive policing, it obtained a semblance of coherence after a 2009 NIJ symposium spearheaded by former NYPD chief William Bratton and broken windows theorist George L. Kelling. One of the symposium’s core goals was to forge a “connection [between] corporate ideas and methods to policing.” A recurring theme at the symposium was that criminal justice trailed far behind the business sector in the application of data analytics. And trailed it did. Data analytics had its infancy in the 1956 Dartmouth Summer Research Project on Artificial Intelligence, attended by W. Ross Ashby, John Nash, and Claude Shannon, among numerous other luminaries in the budding computer sciences. The workshop revolved around the idea that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” It took a revolution in technology and decades of information capitalists exerting influence on the administrative state for the fruits of this workshop to reach the police.
Infusing market logics into police departments was an explicit objective in many early predictive policing initiatives. In fact, the first projects used software that adapted customer-targeting and supply chain management logics to crime prediction. One such example was developed by Los Angeles police and psychologists from the business technology corporation MC2 Solutions. MC2 pointed to the success e-commerce and marketing firms had with predictive analytics as a testament to its crime-fighting potential. It also proposed the application of business analytics as a technical means of shrinking police budgets while enlarging their public presence and versatility. Similar initiatives quickly replicated across the country—East Orange, New Jersey; Memphis, Tennessee; Chicago, Illinois; Dallas, Texas; Palm Beach, Florida; Santa Cruz, California; Glendale, Arizona; New Orleans, Louisiana; Baltimore, Maryland; Charlotte–Mecklenburg, North Carolina; Nashville, Tennessee; Philadelphia, Pennsylvania; New Castle, Delaware; Miami, Florida; Lincoln, Nebraska; New York City, New York; and Minneapolis, Minnesota.
From the viewpoint of the police apparatus, early predictive policing projects centered on attempts to “design a computer model that could replicate an officer’s intuition.” The idea was that by transferring high-, middle-, and low-level decision-making functions to machines, the entire department would operate more fluidly. To achieve this, a mishmash of anthropologists, biomedical engineers, business solution firms, machine-learning experts, police personnel, psychologists, and technology corporations combined forces following the NIJ’s 2009 symposium. One product to emerge from these efforts was offender-based predictive software, which isolated social networks in criminalized areas and ranked the criminal inclination of each individual member. The algorithms probed criminal justice datasets for connections between victims of violence, people with violent records, and people who had been arrested at the same time as former violent offenders. Individuals with close connections to violent offenders, who had been arrested around the same time as violent offenders, or who had been victims of violence were assigned higher risk ratings. Persons with high ratings populated “heat lists,” which designated them for heightened police surveillance.
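The offender-based scoring just described reduces, schematically, to weighting a handful of network features: a violent record, prior victimization, and co-arrest links to people with violent records. The names, links, and weights below are entirely invented; the sketch illustrates the scoring logic, not any department’s actual algorithm.

```python
# A minimal sketch, assuming three risk factors with hypothetical weights.
violent_record = {"B"}                                 # people with violent records
violence_victim = {"C"}                                # people victimized by violence
co_arrests = [("A", "B"), ("B", "C"), ("C", "D")]      # pairs arrested together

W_RECORD, W_VICTIM, W_LINK = 4.0, 2.0, 1.0             # invented weights

people = sorted({p for pair in co_arrests for p in pair})
scores = {p: 0.0 for p in people}
for p in people:
    if p in violent_record:
        scores[p] += W_RECORD
    if p in violence_victim:
        scores[p] += W_VICTIM
for a, b in co_arrests:  # each co-arrest link to a violent record adds risk
    if b in violent_record:
        scores[a] += W_LINK
    if a in violent_record:
        scores[b] += W_LINK

# The "heat list": individuals ranked by descending risk score.
heat_list = sorted(people, key=lambda p: scores[p], reverse=True)
print(heat_list)  # ['B', 'C', 'A', 'D']
```

Note what the sketch makes visible: person D accrues a place on the list purely through association, having done nothing that appears in any of the input sets.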
A mass of civil rights litigators mobilized against the compiling of heat lists. In reaction, many cities, criminologists, and software corporations invested in predictive policing spurned offender-based predictive analytics for geographic-based ones. The latter excluded individual data from analysis and relied instead on spatial analysis of police datasets. This approach also included a temporal dimension, as it analyzed the time coordinates of past reported incidents (e.g., time of day, day of week, month). This type of geographic profiling had existed since the 1990s. It was then that crime geographic-targeting software was first designed to predict the probable spatial behavior of violent serial offenders by mapping their “hunting areas.” The software was employed by the Royal Canadian Mounted Police; the National Crime and Operations Faculty in the United Kingdom; and the Bureau of Alcohol, Tobacco, Firearms, and Explosives in the United States. Similar geography-based predictive algorithms have been devised by university professors to determine the probability of arrests for specific categories of crime in specific time frames, identify irregular concentrations of arrests, and correlate crime reports with physical characteristics of the streets on which they occur. PredPol, a California-based software company spearheaded by anthropologists, mathematicians, and the Los Angeles Police Department, was founded upon the premise that seismology could be wielded to predict spatial and temporal distributions of crime. Azavea, a Philadelphia-based firm, produced software that performs risk terrain modeling by analyzing environmental data, land uses, moon phases, school schedules, and weather patterns. There is also predictive policing software that looks for correlations between criminality and abandoned buildings, broken streetlights, liquor stores, and civilian complaints about garbage disposal and road conditions.
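At its simplest, the geographic-and-temporal profiling described above buckets past incidents by grid cell and time slot and directs patrols at the densest buckets. The cell size, coordinates, and incident times below are invented for illustration; commercial systems layer statistical models on top of this counting, but the counting is the core.

```python
# A minimal sketch, assuming incidents bucketed by grid cell and hour of day
# (all coordinates, times, and the cell size are invented).
from collections import Counter

CELL = 0.01  # hypothetical grid resolution in degrees

def bucket(lon, lat, hour):
    """Map an incident to a (cell_x, cell_y, hour) bucket."""
    return (int(lon // CELL), int(lat // CELL), hour)

incidents = [  # (lon, lat, hour of reported incident)
    (-74.012, 40.713, 22), (-74.011, 40.714, 22),
    (-74.013, 40.712, 23), (-74.011, 40.713, 22),
    (-73.981, 40.752, 9),
]

counts = Counter(bucket(lon, lat, hr) for lon, lat, hr in incidents)
target, n = counts.most_common(1)[0]
print(target, n)  # the densest cell-hour bucket becomes the next patrol target
```

The output is a place and an hour — precisely the “where and when” that vendors market as prediction, derived entirely from where and when incidents were previously recorded.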
And even when criminalized areas do not have high arrest rates, there is “cold spotting” prediction software that produces z-scores to show police that such areas are criminogenic nonetheless.
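The z-score arithmetic behind such hot- and cold-spot designations is elementary: each grid cell's incident count is standardized against the citywide mean. A minimal sketch, assuming counts have already been aggregated per cell:

```python
# Standardizing per-cell incident counts into z-scores, the arithmetic
# behind hot- and cold-spot maps. Illustrative only; real systems layer
# far richer spatial statistics on top of this.
from statistics import mean, pstdev

def cell_z_scores(counts):
    """z = (x - mean) / population standard deviation, per grid cell."""
    mu = mean(counts)
    sigma = pstdev(counts)
    return [(x - mu) / sigma for x in counts]
```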
Predictive policing software is knowledge power written in code, as it ordains differential identification, monitoring, and intervention on human subjects. Precrime maps also belong in the company of Borges’s Cartographers Guild, as they precede territorial practices. Whether offender or geographic based, the proliferation of these maps goes hand in hand with normalizing policies of encircling, evaluating, and intervening in the everyday lives of populations and places bearing certain coefficients. The coefficients are the same ones found in Frederick L. Hoffman’s work well over a century ago. This is why today’s digital maps contribute nothing to understanding urban criminality. Against mounting suspicions that the software has no tangible effect on crime rates, clearance rates, or productivity, technology corporations have contrived selling points of all sorts. IBM promotes its crime-forecasting software on the grounds that crime is becoming more sophisticated; Motorola on the grounds that the criminal world is becoming increasingly complex; and PredPol on the grounds that it saves departments money.
No harbinger of Leviathan, Urstaat, or even instrumental rationality anticipated the creation of artificial intelligence for the purpose of calculating appropriate distributions of state violence. Dead labor animates necropolitics with predictive policing, one of the IT sector’s contributions to racial governance. Predictive policing’s algorithms are products of the radically empiricist ideologies of corporate-bureaucratic intellectuals and computer scientists content to describe facts (e.g., arrest rates) without understanding their conditions of possibility (e.g., the War on Crime). From this perspective, social relations do not exist, only mathematical relations. No thought is given to how different crime-reduction policies, crime legislation, profiling tendencies, or sentencing biases influence the patterns found by algorithms in the data. Only negatively racialized and poor human targets exist in the functionalist logics of predictive policing software, which lays bare the scopic viewpoint of police officers.
Discursively, geographic-based crime prediction furnished police with an ostensibly evidence-based justification to saturate devalued areas with patrol units. But its replacement of individual-based software is of no practical significance. Analyzing spatial data is practically no different from analyzing individual data because urban zip codes, not to mention home addresses, are overwhelmingly correlated with racial classification. And inasmuch as geographic-based crime-prediction models look for correlations in official police datasets, they are bound to yield the same practical results as offender-based models. Moreover, geographic prediction is rooted in the tautologies of criminology’s near-repeat theory, which posits that areas are criminogenic because crimes occurred in those areas. Such tautological reasoning, which expresses in the predicate that which was already stated in the subject, is foundational to crime science dogma. However, the real-world function of predictive policing is not to see crime before it happens but to graft scientific authority onto entrenched forms of racialized policing. But it is pointless to belabor all of this, for predictive policing was not developed to solve first-order problems. Crime diffusion risk, hot spot matrices, nearest neighborhood hierarchical clusters, near-repeat patterns—“whatever the name used, whatever the latest expression,” the point is to assist the state in managing stigmatized populations.
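The near-repeat premise can be stated in miniature: a past incident raises the predicted risk of nearby cells for a short window afterward, so the model's "prediction" is little more than a restatement of where police already recorded incidents. The radius, window, and boost parameters below are illustrative assumptions:

```python
# The near-repeat premise in miniature: a past incident inflates predicted
# risk in nearby cells for a short window afterward. The radius, window,
# and boost values are illustrative assumptions.
def near_repeat_risk(incidents, cell, t, radius=1, window=7, boost=1.0):
    """Sum boosts from incidents within `radius` cells and `window` days before t."""
    return sum(boost for (cx, cy, ct) in incidents
               if abs(cx - cell[0]) <= radius
               and abs(cy - cell[1]) <= radius
               and 0 < t - ct <= window)
```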
Chicago’s Path to Predictive Policing
The City of Chicago’s foray into predictive policing came during a moment of socioeconomic crisis. The Great Recession (2007–9) increased the monthly average unemployment rate in the Chicago metropolitan region by a staggering three percentage points. The rates for black and latinx labor fractions were, respectively, 7 and 3 percentage points higher than the city average (the white rate was 3 points lower), and both stood almost 1 point above the corresponding national averages. From 2007 to 2010, the Institute for Housing Studies reported, the share of Illinois’s unemployed who had been out of work for twenty-seven weeks or more rose from 23 to about 50 percent. The census calculated that almost 33 percent of residents registered as black lived below the poverty line, compared to 24 percent of those registered as Hispanic and 15 percent of whites. Illinois teen employment dropped to about 25 percent, the lowest in the state’s recorded history, with joblessness reaching 92 percent among teens classified as black males and 80 percent among those classified as Hispanic.
As part of wider initiatives to turn Chicago into the Silicon Valley of the Midwest, or the Silicon Prairie, Mayor Rahm Emanuel (2011–19) proposed big data analytics as a remedy to the social wreckage of the recession. Digitally enhanced law enforcement, he proclaimed, could stop the most destitute of neighborhoods from turning into a “lost generation that slides into crime and poverty.” By 2010, Chicago’s chief information officer, chief technology officer, and chief data officer became the public face of the city’s computer-enhanced fight against crime. The trio announced that it could reduce crime rates by applying simple spatial and temporal principles, empiricism, and statistical models to law enforcement. The chief data officer advanced predictive policing as the leading edge of this initiative. To be sure, Chicago had been at the forefront of crime prediction for nearly a century and of GIS crime mapping for twenty-five years. In 1927, the Illinois parole board appointed Ernest W. Burgess and other University of Chicago sociology and law professors to apply actuarial statistics to criminal law. The initiative was part of an attempt to rectify problems arising from the state’s indeterminate sentences for juvenile offenders. Such sentences gave rise to overcrowding, which prompted the Department of Public Welfare to form a parole board to find ways of decreasing the number of juvenile detainees. To do this, the board needed standardized criteria to decide who was eligible for release. Enter the Burgess method, which, by 1935, was the sole parole-prediction method used in the United States. At the heart of the method were twenty-one variables related to the potential parolee’s “social type” and “psychiatric personality.” Social types included drunkard, drug addict, farm boy, hobo, mean citizen, and recent immigrant, while psychiatric types included egocentric, emotionally unstable, and socially inadequate.
Once it was determined which variables held the strongest associations with successful parole, Burgess et al. devised a scorecard system for the parole boards.
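The Burgess method's unit-weight scoring can be sketched in a single function: a candidate earns one point for every factor on which they fall into a subgroup with above-average parole success, and the total places them in an expectancy class. The factor names below are illustrative, not Burgess's actual coding:

```python
# Hedged reconstruction of the Burgess unit-weight method: one point per
# factor on which the candidate falls into a favorable subgroup. Factor
# names are illustrative, not Burgess's actual coding.
def burgess_score(candidate, favorable):
    """Count factors on which the candidate matches a favorable value."""
    return sum(1 for factor, value in candidate.items()
               if value in favorable.get(factor, set()))
```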
In 2011, the CPD introduced its Predictive Analytics Group, which brought an actuarial system to the police. The group started to coalesce two years prior, during an NIJ-funded collaboration with machine-learning experts from Carnegie Mellon’s Event and Pattern Detection Laboratory and the Illinois Institute of Technology. The main objectives of the collaborators included designing algorithms capable of predicting crime events a week in advance, establishing spatiotemporal resolution at the scales of blocks and days, and developing the ability to predict at scales much smaller than neighborhoods. In its early phase, the group focused on identifying statistical relationships within CPD data on 911 calls and shooting statistics. The team maintained that it could extrapolate where and when future shootings would occur based on the assumption that time series in past data could be used to infer shootings in the immediate days afterward. But algorithms have a tendency to go rogue. They find new datasets to comb for correlations, new methodologies to assimilate, new hardware inside of which to embed themselves. Thus the Predictive Analytics Group’s software, dubbed CrimeScan, was expanded to include curfew offenses, disorderly conduct, driving under the influence, drug offenses, liquor offenses, loitering, prostitution, public drunkenness, runaways, simple assault, vandalism, and vagrancy, among others. It identified twelve indicators of future shootings, including assaults, minor crimes, gang-related emergency calls, and gun-related emergency calls. Beyond CPD datasets, the Predictive Analytics Group also probed for spatial correlations between reported incidents and data on liquor stores, physical disrepair, and untended garbage. The group also began to use CrimeScan to test for indicators of future crime using the city’s Data Portal, one of the largest public databanks in the country.
The Data Portal consisted of more than nine hundred datasets on municipal departments, facilities, and services. The Predictive Analytics Group adopted an open-source database to incorporate unstructured data, such as an area’s number of abandoned buildings, complaints about garbage disposal and road conditions, or liquor licenses. But using the data only reproduced rationalizations for flooding the Black Belt with patrol forces, as geographies of dilapidation and disrepair are unmistakably racialized.
Owing to the racialized character of Chicago’s War on Drugs, the expansion of CrimeScan amounted to the expansion of calculating racial proxies as indicators of future criminality. In point of fact, the CPD’s own datasets for victimless crimes (vandalism, gambling, drunkenness, and disorderly conduct) between 1991 and 2009 showed remarkably consistent disparities between subjects cataloged as black and white. From 1991 to 1999, 70 percent of arrestees in these categories were black, compared to 29 percent white. According to the CPD’s own data, Chicagoans categorized as black and Hispanic accounted for 83 percent of all vandalism arrestees, 93 percent of narcotics arrestees, and 99 percent of all gambling arrestees. In 2009, 99 percent of gambling violations recorded by the CPD involved individuals classified as black, 93 percent of recorded narcotics violations involved individuals classified as black and Hispanic, 85 percent of instances of vandalism involved individuals classified as black and Hispanic, and 67 percent of simple assaults involved individuals classified as black.
In addition to geographic profiling, the CPD made a brief excursion into offender-based prediction. The individual-based approach emerged through the CPD’s strategic subjects list (SSL), which was generated by analyzing the social characteristics of narcotic arrestees and registered gang members. The SSL also used social network analysis to estimate a subject’s relative risk of engaging in future violent crime. This technique determined one’s likelihood of committing future crime according to one’s links to homicide victims. The SSL was inaugurated by conducting network analyses on sixty gangs and six hundred factions to predict potential acts of retaliatory violence in the aftermath of homicides. In 2013, the CPD began producing trimonthly lists of the five hundred highest-risk individuals in the city, that is, each police district’s twenty riskiest subjects. Toward the end of 2013, the department created an additional list of the city’s 426 riskiest subjects to circulate to district commanders. By the end of the program’s first two years, the total number of subjects identified as at risk had increased by more than 200 percent, to more than fourteen hundred names. Some heatlisted subjects were enrolled in call-in sessions, which were originally designed by police, federal and county authorities, and the Illinois Department of Corrections. The sessions involved rounding up putative gang members to warn them that they would be targeted in the event of a spike in violent crime. In 2015, the department began calling in groups of high-risk subjects identified on the SSL who lived in close proximity to one another. Call-ins functioned as unequivocal admissions on the part of the police that certain individuals, geographically concentrated in certain places, were permanently subjected to differential police supervision.
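The network-analytic idea behind the SSL's victim-link scoring can be sketched as a graph-distance computation: the closer a subject sits to a homicide victim in a co-arrest network, the higher their presumed risk. This is a hypothetical reconstruction, not the CPD's actual model:

```python
# Sketch of the victim-link idea behind the SSL: risk read off a subject's
# graph distance to homicide victims in a co-arrest network. A hypothetical
# reconstruction, not the CPD's actual model.
from collections import deque

def distance_to_victims(links, victims, person):
    """BFS distance from `person` to the nearest victim; None if unreachable."""
    seen, queue = {person}, deque([(person, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in victims:
            return dist
        for nbr in links.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None
```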
Predictive policing therefore not only lent the credence of data science shamans to the age-old practice of discriminatory profiling but also lent itself to new rationalizations for extending such profiling.
In 2014, the tech magazine The Verge released a widely read article on the CPD’s predictive policing program that compared it to a film based on Philip K. Dick’s short story “The Minority Report.” The story revolved around a federal agency that predicted future crimes by analyzing the prophetic utterances of three “mutants,” utterances speckled with hints of future crimes up to two weeks in advance. “Every incoherent utterance, every random syllable, was analyzed, compared, reassembled in the form of visual symbols, transcribed on conventional punchcards, and ejected into various coded slots. All day long the [mutants] babbled, imprisoned in their special high-backed chairs, held in one rigid position by metal bands, and bundles of wiring, clamps.” Fragments of the enslaved workers’ babble were examined and assembled into coherent form by data receptors and computers in the agency’s analytical wing. Once clues to future crimes were distilled from the trio’s ramblings, computers generated cards with the names of precriminals, previctims, and the dates and times of future offenses. The Precrime Agency worked in tandem with police units to capture precriminals before they committed crimes and send them to detention camps. The system was extolled for establishing a prophylactic precrime structure that cut down on felonies by over 99 percent, prevented murders for five years, and abolished the “post-crime punitive system of jails and fines.” In Dick’s story, the Precrime Agency was captured by political interests through an extraordinary turn of events. Dick’s main criticism revolved around the potential incongruities between precrime methodology and individual liberties. However, the Precrime Agency’s real-world counterpart in Chicago tells a different story. Instead of an extraordinary event, Chicago’s Predictive Analytics Group was born of the normalized policies of the crime and drug wars.
Moreover, real-world predictive policing does not operate through preemptively criminalized individuals. Whether using offender- or geographic-based algorithms, it identifies entire demographics and neighborhoods as criminally predisposed.