A new computer algorithm can predict the location and rate of crime in urban areas up to a week in advance with about 90 percent accuracy. But there are worries that systems of this kind can reinforce bias.
Algorithms have become dramatically more prevalent over the last few years. They are used in everything from weather forecasting and self-driving cars to shopping recommendations and the discovery of new medical treatments.
Before the Olympics, the Tokyo police wanted to use artificial-intelligence-based technology to predict crimes before they happened. That came as little surprise, since the force was not accustomed to handling severe crime cases.
Chicago’s Crime and Victimization Risk Model
The new algorithm and its scientific details were published in the journal Nature Human Behaviour.
Previous attempts to forecast crime using AI have generated controversy because they have the potential to reinforce racial bias. The “Crime and Victimization Risk Model” was implemented by the Chicago Police Department in 2012 with the assistance of some academic researchers.

The model created a list of individuals thought to be most likely to be involved in a shooting, either as a victim or a perpetrator, based on variables such as age and arrest history. It also assigned each person a score indicating how urgently they needed to be monitored: the higher the score, the more likely the person was judged to become either the perpetrator or the victim of a firearms crime.
Although the model may seem intriguing, it was found to unfairly target particular individuals based on a narrow set of criteria. Details of the algorithm and the list were initially kept secret, but when the list was finally released, it turned out to include 56 percent of all Black men in the city between the ages of 20 and 29.
The senior author, Ishanu Chattopadhyay, an assistant professor of medicine at the University of Chicago, acknowledges that the data his model uses may be biased but says that steps have been taken to lessen the effect of that bias. He explained that the AI identifies only potential crime scenes, not suspects. “It’s not Minority Report,” he said.
“Resources available to law enforcement are finite. Thus, you should make the best use of that. It would be fantastic if you could predict where killings will occur,” he said.
After a protracted legal battle, a 2017 Chicago Sun-Times investigation found that about half of those identified by the model as possible perpetrators had never been charged with illegal possession of a weapon, while 13 percent had never been charged with a serious criminal offense. Similarly, a 2019 MIT Technology Review investigation described how risk-assessment algorithms used to decide whether someone should be jailed were trained on historically biased data. The University of Chicago researchers wanted to avoid such past mistakes when they built their algorithm.
The new algorithm
Using historical crime data from Chicago, Illinois, covering 2014 to the end of 2016, Chattopadhyay and his colleagues developed an AI model that forecasts crime levels for the weeks following this training period. The model estimated the likelihood of crime occurring at a specific time and place from hundreds of thousands of social patterns.
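As a rough illustration of that setup, the sketch below splits an incident log chronologically so that a model would be trained on 2014 through 2016 and scored only on later weeks. The file name and column names are hypothetical stand-ins for whatever records the study actually used.

```python
# Minimal sketch of the chronological split described above. The file
# "chicago_incidents.csv" and its columns are hypothetical stand-ins
# for the incident logs the study actually used.
import pandas as pd

events = pd.read_csv("chicago_incidents.csv", parse_dates=["date"])

# Train on 2014 through the end of 2016; later weeks are held out, so
# the model is only ever evaluated on periods it has never seen.
train = events[(events["date"] >= "2014-01-01") & (events["date"] <= "2016-12-31")]
test = events[events["date"] > "2016-12-31"]

print(f"training incidents: {len(train)}, held-out incidents: {len(test)}")
```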

Chattopadhyay asserts that, rather than being used to allocate police resources directly, “AI’s predictions could be more safely employed to inform policy at a high level.”
How does the new algorithm work?
To test the model, the researchers used historical data on violent crimes and property crimes from Chicago. The model forecasts the likelihood of specific crimes occurring across the city: it divides the city into tiles roughly 1,000 feet across and looks for patterns over time within those tiles. According to Bloomberg’s report, the algorithm also successfully predicted crime in eight different American cities, including Los Angeles, Atlanta, and Philadelphia.
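The tiling step can be pictured as bucketing incident coordinates into a grid and counting events per tile per week. The sketch below assumes planar coordinates in feet and hypothetical column names; the paper’s actual tiling and event streams may differ.

```python
# Sketch of the tiling idea: bucket incident coordinates into grid cells
# roughly 1,000 feet on a side, then count events per cell per week.
# Column names (x_ft, y_ft, date) are assumptions for illustration.
import pandas as pd

TILE_FT = 1_000  # approximate tile width in feet

events = pd.read_csv("chicago_incidents.csv", parse_dates=["date"])

# x_ft / y_ft are planar coordinates in feet (e.g., a state-plane projection).
events["tile_x"] = (events["x_ft"] // TILE_FT).astype(int)
events["tile_y"] = (events["y_ft"] // TILE_FT).astype(int)
events["week"] = events["date"].dt.to_period("W")

# One weekly event-count series per tile: the raw signal in which a
# forecasting model would search for temporal patterns.
weekly_counts = (
    events.groupby(["tile_x", "tile_y", "week"])
    .size()
    .rename("events")
    .reset_index()
)
print(weekly_counts.head())
```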

The new algorithm contrasts with earlier prediction models, which treat crime as emerging from hotspots and spreading to other locations. According to the researchers, such approaches leave room for bias because they ignore the complex social environment of cities and the intricate relationship between crime and the effects of police enforcement. The new system, by contrast, anticipated the likelihood of crime in Chicago with 90 percent accuracy by analyzing past crime records together with numerous other indicators.
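The article does not spell out how the 90 percent figure is computed, but one common way to score this kind of one-week-ahead forecast is to treat “at least one event in this tile next week” as a binary label and measure the area under the ROC curve. The sketch below uses toy arrays, not the study’s outputs.

```python
# Illustrative scoring of a one-week-ahead forecast: binary per-tile
# labels versus predicted risk, evaluated with ROC AUC. The arrays are
# synthetic stand-ins, not the study's data or results.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

y_true = rng.integers(0, 2, size=1_000)               # did a crime occur in the tile?
noise = rng.normal(scale=0.35, size=1_000)
y_score = np.clip(y_true * 0.8 + 0.1 + noise, 0, 1)   # model's predicted risk

print(f"AUC: {roc_auc_score(y_true, y_score):.2f}")
```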
“When people sit down and choose which patterns to look at to forecast crime, it is difficult to deny bias because these patterns are meaningless,” said Chattopadhyay. “But now, you may pose challenging queries to the algorithm, such as ‘what will happen to the rate of violent crime if property crimes increase?’”
Additional studies
The researchers also used the data to look for areas where policing is affected by human bias. They examined the number of arrests following crimes in Chicago neighborhoods of different socioeconomic status. Crimes in wealthier areas resulted in more arrests than those in poorer areas, suggesting bias in the police response.
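In spirit, that check amounts to comparing arrest rates per recorded crime across neighborhoods grouped by socioeconomic status, as in the sketch below. The file and its columns are hypothetical; the study’s actual grouping and statistics may differ.

```python
# Sketch of the bias check described above: compare how often recorded
# crimes lead to arrests in richer vs. poorer neighborhoods. The CSV
# and its columns are hypothetical stand-ins.
import pandas as pd

crimes = pd.read_csv("chicago_crimes_with_outcomes.csv")

# 'ses'    = a socioeconomic-status bucket per neighborhood
# 'arrest' = whether the recorded crime led to an arrest (0/1)
arrest_rate = crimes.groupby("ses")["arrest"].mean()
print(arrest_rate)  # a higher rate in wealthier buckets suggests biased response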
“I am worried about the study’s inclusion of both reactive and proactive policing data: crimes that get recorded because people report them, and crimes that get documented because the police go looking for them,” said Lawrence Sherman of the Cambridge Centre for Evidence-Based Policing in the UK. According to him, the latter kind of data is especially prone to bias. He also suggested that the findings might reflect deliberate discrimination by police in particular neighborhoods.