For the last four years, the Chicago Police Department has kept a list of the people it believes are most likely to be involved in a shooting. The list—known as the "heat list" or the Strategic Subject List—was developed using a secret algorithm and contains the names of over a thousand people at any given time.
In a record-breaking year for gun violence, Superintendent Eddie Johnson has praised the list and the department's use of big data to predict crime. In May, the CPD reported that three out of four shooting victims in 2016 were on the Strategic Subject List. The figure for people arrested in connection with shootings was even more striking: 80 percent were on the SSL, the department says.
Though the department has been quick to tout the list's accuracy, there is no way to independently verify its claims. The names on the list are private, and the department won't even explain what variables are used to determine a person's ranking. In other words, we've had to take the department at its word that its big data program works.
That changed this week with the release of a report by the RAND Corporation in the Journal of Experimental Criminology. In the first independent audit of the department's SSL, researchers found that a 2013 version of the list was not nearly as valuable as the department claims.
“Individuals on the SSL are not more or less likely to become a victim of a homicide or shooting than the [control] group,” the authors write. Police use of the list also had no effect on citywide violence levels in Chicago.
While the study's authors found that individuals on the SSL were indeed more likely to be arrested for a shooting, the researchers surmised that this was because officers were using the list as a source of leads to close cases. Superintendent Johnson, however, said as recently as last month that the list is not being used to target people for arrest.
“One of the major findings [of the study] was that the police on the ground, the people in the field, do not get a lot of training about how to use this list and what it means,” says lead author Jessica Saunders.
When asked for a comment yesterday afternoon, CPD spokesman Anthony J. Guglielmi said he was unaware of the report. In a statement released today, which includes a point-by-point response, police emphasize that the SSL has changed significantly since the 2013 version that is the subject of RAND's analysis.
"The evaluation was conducted on an early model of the algorithm that is no longer in use today… We are currently using SSL Version 5, which is more than 3 times as accurate as the version reviewed by RAND," the statement says.
But Saunders says that her findings can still apply to the tool CPD is using today.
“The findings of this study are probably not changed by making the list better,” says Saunders. “What we really found was that they didn’t know what to do with the list and there was no intervention tied to the list. So in my opinion, it almost doesn’t matter how good the list is, if you don’t know what to do with it.”
Saunders says the CPD must carefully consider which interventions it applies to people on the list in order to prevent crime. Tactics such as call-ins and home visits, which the CPD sometimes uses in conjunction with the list, cannot be effective unless they are applied consistently across the board.
In its official statement, the CPD says its intervention strategy has likewise evolved along with the SSL since 2013: interventions are now used in every police district, and metrics on them are "fully integrated within our CompStat accountability framework and weekly Compstat meetings."
Still, those who study big data policing say this week’s report from RAND is troubling.
“I think there’s a real question now after [the] RAND [report],” says Andrew Ferguson, a law professor at the University of the District of Columbia in Washington. “We don’t know how effective these lists are except for what the police tell us. This is one of the first analyses of the risk factors.”
Police departments and criminal justice organizations across the country are increasingly using algorithms like Chicago’s to predict the locations and perpetrators of future crimes. And in an era marked by the police shootings of young black men, big data has been held up as a way to avoid racial profiling and reduce violence.
But few cities make their algorithms available to the public or to organizations that work with the communities most at risk for violence. This week's RAND study is one of only two independent evaluations of predictive policing programs conducted nationwide.
Given the shroud of secrecy that covers big data policing, many have questioned the algorithms' accuracy and fairness. A ProPublica investigation earlier this year found that a risk assessment algorithm used in Florida produced significant racial disparities and was only slightly more accurate than a coin flip.
The Electronic Frontier Foundation and the American Civil Liberties Union of Illinois have both voiced concerns about how Chicago's Strategic Subject List handles race. The Chicago Police Department has said that race is not one of the 11 weighted variables used to determine a person's ranking on the list, but other variables it uses may code for race in less explicit ways. In a city as segregated as Chicago, for example, a person's address alone could indicate their wealth and race.
“The RAND analysis should be the beginning, not the end of determining whether or not these systems work,” says Ferguson. “The underlying idea of prioritizing police resources on those most at risk makes a ton of sense. The downside of getting that prediction wrong means a lot of wasted resources. So I think we need to figure out whether it’s possible to prioritize risk and then really decide whether police are really the right remedy once we’ve identified risks through big data policing.”