NYU Law’s Policing Project evaluates new surveillance and data collection tools for potential civil rights concerns

Current policing technology provides time-stamped video and audio of police encounters, identifies gunshot locations within seconds, and even uses artificial intelligence to predict where and when a crime will occur. But while these methods may make people safer, their data collection and surveillance techniques raise myriad questions about the potential for violating privacy, making mistakes, and perpetuating bias.

Surveillance camera with police in background

Photo credit: pixinoo/Getty Images

“For the most part, those tools are completely unregulated,” says Farhang Heydari, executive director of the Policing Project at NYU Law. “We’ve got statutes from decades ago about wiretapping phones, but there’s no statutes about face recognition, iris scanning, predictive policing, or any other kinds of big data analytics.”

Faced with what Heydari calls an “explosion” of powerful new law enforcement tools, Barry Friedman, Policing Project faculty director and the Jacob D. Fuchsberg Professor of Law, established the Policing Project in 2015 to promote the mission of “just and effective policing” and “democratic accountability.” The Project team works closely with police departments and communities across the country, including Camden, New Jersey, where they helped rewrite the city’s use-of-force policy. In California, they helped the Los Angeles Police Commission write policy for the LAPD that later became state law, requiring police departments to publicly release body camera footage of police shootings or use-of-force deaths within 45 days. In Chicago, they launched a neighborhood policing and engagement initiative in which members of the community remain in active communication with the police department about urgent issues.

While making these inroads into the policing process, the Project team noticed that one significant stakeholder was largely absent from conversations around policing: tech companies. With approximately 18,000 police departments in the country and only a few major tech companies specializing in law enforcement, the team realized they could make a substantial impact if they could provide insight into tech rollouts. As Friedman says, “One of the best means to ensure policing technology is compliant with concerns about civil liberties and racial justice is to enlist vendors in designing the technology that way.”

An opportunity to do just that arose in April 2018 when Axon, a prominent body camera supplier, created an AI and Policing Technology Ethics Board to advise the company on the implications of its products and services. Friedman, one of the board’s members, worked with the Policing Project team to publish a 42-page report on Axon’s facial recognition technology, which they concluded is “not currently reliable enough to justify its use on body-worn cameras.” They proposed that face recognition should not be deployed until it “performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups.”

The findings and recommendations were compelling enough for Axon to announce its decision not to incorporate facial recognition technology into its body cameras.

In July 2019, the Policing Project published its second technology audit, this time for ShotSpotter’s gunshot detection technology. The effort followed a rejection of ShotSpotter’s product by the Toronto Police Department, which was concerned about possible privacy violations. ShotSpotter currently has sensors placed in more than 100 cities in the United States that detect, locate, and analyze gunshots and alert police to them within seconds.

Focusing purely on the privacy-related aspects of the technology in its assessment, the Policing Project determined that ShotSpotter’s sensors “present relatively limited privacy risks.” But in its report, the team states that there is a small possibility that the technology “could capture voices of individuals near the sensors, and conceivably could be used for deliberate voice surveillance.” They went on to recommend that ShotSpotter reduce the audio stored on its sensors, commit to denying requests and subpoenas for audio data, commit to not sharing sensor locations, and improve controls and supervision regarding audio access.

Overall, the report lauded ShotSpotter for working to adopt those recommendations despite the costs or challenges to the company’s production process, noting, “Other policing technology companies should follow [ShotSpotter’s] leadership and proactively embrace their responsibility in protecting individual liberty.”

In September 2019, Microsoft became the latest company to engage with the Policing Project when its president, Brad Smith, visited NYU Law for a public conversation on regulating emerging technologies, with a focus on policing products. The event was just the beginning of a new endeavor by the Policing Project to gather industry experts and academics to explore the issues around regulating new tech. Heydari is hopeful that this consultation points to a newly emerging era of thoughtfulness around designing effective policing methods that still protect civil liberties.

“There are places in this country that are gripped by real violence and so police want to use every technology available to them to fight crime,” Heydari notes. “But it’s important to slow down and think through social costs, racial equity issues, privacy issues, and impacts on First Amendment rights. That’s the concern that we’re fighting against—to not rush headfirst without thinking through possible costs.”