Scientists are looking for a way to predict crime using, you guessed it, artificial intelligence.
There are loads of studies showing that using AI to predict crime produces consistently racist outcomes. For instance, an AI crime-prediction model that the Chicago Police Department piloted in 2016 was intended to eliminate racist bias but had the opposite effect: it predicted who might be most at risk of being involved in a shooting, and 56 percent of Black men in the city aged 20 to 29 appeared on the list.
Despite it all, scientists are still trying to use the technology to find out when and where crime might occur. And this time, they say, it's different.
Researchers at the University of Chicago used an AI model to analyze historical crime data from 2014 to 2016 and predict crime levels in the city for the weeks that followed. The model predicted the likelihood of crimes across the city a week in advance with nearly 90 percent accuracy; it performed similarly well in seven other major U.S. cities.
The study, published in Nature Human Behaviour, not only attempted to predict crime but also let the researchers examine how police respond to crime patterns.
Co-author and professor James Evans told Science Daily that the research allows them “to ask novel questions, and lets us evaluate police action in new ways.” Ishanu Chattopadhyay, an assistant professor at the University of Chicago, told Insider that their model found that crimes in higher-income neighborhoods resulted in more arrests than crimes in lower-income neighborhoods did, suggesting bias in police responses to crime.
“Such predictions enable us to study perturbations of crime patterns that suggest that the response to increased crime is biased by neighborhood socio-economic status, draining policy resources from socio-economically disadvantaged areas, as demonstrated in eight major U.S. cities,” according to the report.
Chattopadhyay told Science Daily that the research found that when “you stress the system, it requires more resources to arrest more people in response to crime in a wealthy area and draws police resources away from lower socioeconomic status areas.”
Chattopadhyay also told New Scientist that, while the data used by his model might itself be biased, the researchers worked to reduce that effect by not identifying suspects and instead identifying only the sites of crimes.
But there’s still some concern about racism within this AI research. Lawrence Sherman of the Cambridge Centre for Evidence-Based Policing told New Scientist that because crimes are recorded only when people call the police or when the police go looking for them, the whole system of data is susceptible to bias. “It could be reflecting intentional discrimination by police in certain areas,” he told the outlet.
Meanwhile, Chattopadhyay told Insider he hopes the AI’s predictions will be used to inform policy, not to direct police activity.
“Ideally, if you can predict or pre-empt crime, the only response is not to send more officers or flood a particular community with law enforcement,” Chattopadhyay told the news outlet. “If you could preempt crime, there are a host of other things that we could do to prevent such things from actually happening so no one goes to jail, and helps communities as a whole.”