HORIZONS

NEW ALGORITHM PREDICTS WHEN AND WHERE A CRIME WILL HAPPEN BEFORE IT TAKES PLACE

The AI model was tested across eight cities in the US and predicts future crimes with 80 to 90 per cent accuracy, without falling foul of bias

A new algorithm could help predict when and where a crime will take place
YOUR ALGORITHM SUCCESSFULLY PREDICTED CRIMES IN US CITIES A WEEK BEFORE THEY HAPPENED. HOW DID YOU BUILD THE ALGORITHM?

The city of Chicago and the seven other cities that we looked at have started putting out crime event logs in the public domain. In Chicago, these are actually updated daily with a week’s delay.

These event logs contain information about what happened: what type of crime it was, where it happened – the latitude and longitude – and a timestamp. In Chicago, we also have information about whether any arrests were made when people interacted with police officers.
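
To make the shape of those records concrete, here is a minimal sketch of one event-log entry in Python; the field names are illustrative assumptions, not the actual schema of the Chicago data portal.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CrimeEvent:
    """One row of a public crime event log (hypothetical field names)."""
    primary_type: str    # what kind of crime, e.g. "HOMICIDE" or "BURGLARY"
    latitude: float      # where it happened
    longitude: float
    timestamp: datetime  # when it happened
    arrest: bool         # Chicago also records whether an arrest followed

event = CrimeEvent("BURGLARY", 41.8781, -87.6298,
                   datetime(2022, 3, 1, 14, 30), False)
```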

So we start with this event log and then digitise the city into small areas of about two blocks by two blocks – about 1,000 feet [300 metres] across.
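
As a rough sketch of what that digitisation could look like, the helper below maps a latitude/longitude pair onto a grid of roughly 300-metre tiles using a flat-earth approximation around a reference point; this is an assumed implementation for illustration, not the authors' code.

```python
import math

TILE_METRES = 300.0  # roughly two city blocks, about 1,000 feet

def latlon_to_tile(lat: float, lon: float,
                   lat0: float, lon0: float) -> tuple[int, int]:
    """Map a latitude/longitude to integer tile indices on a ~300 m grid.

    Uses a local flat-earth approximation around a reference point
    (lat0, lon0), which is adequate at the scale of a single city.
    """
    metres_per_deg_lat = 111_320.0
    metres_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))
    x = (lon - lon0) * metres_per_deg_lon
    y = (lat - lat0) * metres_per_deg_lat
    return int(x // TILE_METRES), int(y // TILE_METRES)

# Example: a downtown Chicago event, relative to a reference corner of the city
print(latlon_to_tile(41.8781, -87.6298, 41.64, -87.94))
```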

In each of those tiles, we see a time series for each type of event – violent crimes, property crimes, homicides and so on. Across the whole city, this results in tens of thousands of time series that are coevolving.
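
Here is a hedged sketch of that aggregation step, assuming pandas, the hypothetical column names from the event-log sketch above and the latlon_to_tile helper: it turns the raw log into one daily count series per (tile, crime type) pair.

```python
import pandas as pd

def build_time_series(events: pd.DataFrame,
                      lat0: float, lon0: float) -> pd.DataFrame:
    """Aggregate an event log into daily counts per (tile, crime type).

    Expects columns: latitude, longitude, timestamp, primary_type.
    Returns a DataFrame indexed by date with one column per
    (tile, crime type) pair -- the coevolving time series.
    """
    tiles = [latlon_to_tile(lat, lon, lat0, lon0)
             for lat, lon in zip(events["latitude"], events["longitude"])]
    events = events.assign(tile=tiles,
                           date=pd.to_datetime(events["timestamp"]).dt.date)
    return (events.groupby(["date", "tile", "primary_type"])
                  .size()
                  .unstack(["tile", "primary_type"], fill_value=0))
```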

What our algorithm does is look at these coevolving time series and figure out how they depend on one another and how they constrain one another – how they shape one another. That produces a really complex model.

You can then make predictions about what’s going to happen, say, a week in advance at a particular tile, plus or minus one day. In Chicago, for example, today is Wednesday. Using our algorithm, you can say that next Wednesday, at the intersection of 37th Street and Southwestern Avenue, there will be a homicide.
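
The paper's actual inference method is considerably more involved; purely to illustrate the idea of letting coevolving series predict one another a week ahead, here is a stand-in sketch that fits one sparse linear model per series on week-old values of all the other series. This is an assumed approach for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

def predict_week_ahead(counts: np.ndarray, horizon: int = 7) -> np.ndarray:
    """counts: shape (n_days, n_series), one column per (tile, crime type).

    For each target series, fit a sparse linear model on the values of
    every series `horizon` days earlier, then predict one week beyond the
    last observed day. The non-zero coefficients hint at which series
    constrain -- shape -- which others.
    """
    X = counts[:-horizon]                # all series at day t
    predictions = np.zeros(counts.shape[1])
    for j in range(counts.shape[1]):
        y = counts[horizon:, j]          # series j at day t + horizon
        model = Lasso(alpha=0.1).fit(X, y)
        predictions[j] = model.predict(counts[-1:])[0]
    return predictions                   # expected counts one week out
```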

HOW DO YOU ENVISAGE THE WAYS YOUR ALGORITHM COULD BE USED?

People have concerns that this will be used as a tool to put people in jail before they commit crimes. That’s not going to happen, as it doesn’t have any capability to do that. It just predicts an event at a particular location. It doesn’t tell you who is going to commit the event or the exact dynamics or mechanics of the events.

It cannot be used in the same way as in the film Minority Report.

In Chicago, most of the lives lost to violent crime are lost to gang violence. It is not like a Sherlock Holmes movie where some convoluted murder is happening. It is actually very actionable if you know about it a week in advance – you can intervene.

This does not just mean stepping up enforcement and sending police officers there; there are other ways of intervening socially so that the odds of the crime occurring actually go down and, ideally, it never happens.

What we would like to do is enable a kind of policy optimisation. My colleagues and I have been very vocal that we don’t want this to be used as a purely predictive policing tool. We want policy optimisation to be the main use of it. We have to enable that, as just putting out a paper and having the algorithm there isn’t enough. We want mayors and city administrators to use the generated models to run simulations and inform policy.

“It doesn’t tell you who is going to commit the event or the exact dynamics or mechanics of the events. It cannot be used in the same way as in Minority Report”

PREVIOUS ALGORITHMS OF THIS KIND HAVE BEEN HEAVILY CRITICISED FOR PRODUCING BIAS – RACIAL PROFILING, FOR EXAMPLE. HOW DO YOU ACCOUNT FOR THIS?

Approaches that have been tried before use straight-up machine learning: off-the-shelf tools where you take a giant data set, decide which features are important, then feed those features into a standard complex neural network to try to make predictions.
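
For contrast, here is a minimal sketch of that conventional pipeline: hand-picked features fed into a generic neural network classifier. The features and numbers are invented purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hand-chosen features per person or area -- the step the interviewee argues
# can bake bias in: e.g. number of arrests, prior victimisations, age.
X = np.array([[3, 1, 24],
              [0, 0, 41],
              [7, 2, 19],
              [1, 0, 35]], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = flagged as "high risk" in past data

# An off-the-shelf feed-forward network trained on those curated features.
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([[2, 0, 30]]))  # predictions inherit whatever bias the features encode
```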

The issue with that approach is that as soon as you say certain features are important, you’re probably going to miss things, so you will get misleading results. That happened in the Chicago Police Department [in 2014-2016].

They were putting people on a list of those deemed likely to be perpetrators or victims of gun violence, using an equation based on characteristics like arrest histories. And that resulted in a large proportion of the black population being on the list.

We are trying to start only from the event logs. There are no humans sitting down figuring out what the features are, or what attributes are important. There’s very little manual input going on, other than the event log that is coming in. We have tried to reduce bias as much as possible.

That’s how our model is different from other models that have come before.

A LOT OF PEOPLE ARE WORRIED ABOUT THE LACK OF TRANSPARENCY IN THE AI DECISION-MAKING PROCESS. IS THERE AN ISSUE WITH THIS?

AI systems have been used to model more and more complex systems, so it’s not surprising that many of them tend to seem like a black box. Compare them to how things worked before. Back then, we just had a tiny differential equation for a system, which gave us the feeling that we understood it. If we have a giant neural network, we just can’t understand what’s going on. So that’s an issue, and a lot of work is going into explainable AI.

We have a really complex model, one that you can’t just look at and read the factors off. But the way to think about it is to look at all of the event logs. They are observations of a complex social system interacting with socioeconomic factors, enforcement, demographics, economics and all of these things. All of that feeds into and shapes the social system you’re modelling. You can’t expect a simple kind of pattern to come out of all this data.

PROF ISHANU CHATTOPADHYAY

Ishanu leads the ZeD Lab at the University of Chicago, where he studies algorithms and data.