A new study from New York University School of Law and NYU’s AI Now Institute concludes that predictive policing systems run the risk of exacerbating discrimination in the criminal justice system if they rely on “dirty data.”


Law enforcement has come under scrutiny in recent years for practices resulting in disproportionate aggression toward minority suspects, causing some to ask whether technology – specifically, predictive policing software – might diminish discriminatory actions.

However, a new study from New York University School of Law and NYU’s AI Now Institute concludes that predictive policing systems, in fact, run the risk of exacerbating discrimination in the criminal justice system if they rely on “dirty data” – data created from flawed, racially biased, and sometimes unlawful practices.

The researchers illustrate this phenomenon with case study data from Chicago, New Orleans, and Arizona’s Maricopa County. Their paper, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” is available on SSRN.

“We chose these sites because we found an overlap between extensively documented evidence of corrupt or unlawful police practices and significant interest, development, and current or prior use of predictive policing systems. This led us to examine the risks that one would influence the other,” explains Jason Schultz, a professor of clinical law and one of the paper’s co-authors.

The authors, who include Rashida Richardson, director of policy research at the AI Now Institute, and Kate Crawford, co-director of the AI Now Institute, identified 13 jurisdictions (including the three case-study sites above) with documented instances of unlawful or biased police practices that also explored or deployed predictive policing systems during the periods of unlawful activity.

The Chicago Police Department, for example, was under federal investigation for unlawful police practices when it implemented a computerized system that identifies people at risk of becoming a victim or offender in a shooting or homicide. The study found that the demographic of residents identified by the Department of Justice as targets of Chicago’s policing bias overlapped with those flagged by the predictive system.

Other examples showed significant risks of overlap, but because government use of predictive policing systems is often secret and hidden from public oversight, the extent of those risks remains unknown, according to the study.

“In jurisdictions that have well-established histories of corrupt police practices, there is a substantial risk that data generated from such practices could corrupt predictive computational systems. In such circumstances, robust public oversight and accountability are essential,” Schultz said.

Lead author Richardson added, “Even though this study was limited to jurisdictions with well-established histories of police misconduct and discriminatory police practices, we know that these concerns about policing practices and policies are not limited to these jurisdictions, so greater scrutiny regarding the data used in predictive policing technologies is necessary globally.”

Press Contact

Jade McClain
(646) 469-8496