Elisabeth Dietz investigates the economics of pre-crime policing.

One day the police come knocking on your door and tell you that you are under suspicion. Not because they suspect you have engaged in illegal activities, or because they have evidence that you are planning to commit a crime. Instead, an algorithm has calculated that you are a ‘high risk’ individual, likely to be involved in a crime in the near future. Is this an elaborate role-play, enacting the plot of the film Minority Report? Or some dystopian vision of a technology-driven security state? Neither. This scenario is a reality under pre-crime policing, a predictive crime prevention strategy that is on the rise in countries like the UK and the US. With the increasing popularity of pre-crime software, the question becomes: do these strategies really prevent crime, and at what cost?

Data-driven predictions
The use of data analysis to identify crime patterns and prevent future offences is hardly a novelty in police practice. However, the pre-crime trend marks a radical shift towards technology, with advanced algorithm-based software doing work previously performed by humans. For instance, the police force in the UK county of Kent has begun using PredPol, an internationally recognised prediction software that relies on past criminal records. PredPol attempts to predict where and when future crimes will happen, creating ‘hotspots’ on interactive maps available to the police. And whilst PredPol is an example of geographically oriented software, other systems feed their algorithms different information and offer different sets of predictions. The Chicago Police Department, for instance, has compiled a ‘heat list’ of those judged most likely to be involved in a shooting, based on data about people’s locations and social affiliations. Hitachi’s Visualization Predictive Crime Analytics includes social media data such as tweets, and IBM’s systems analyse information on big local events and proximity to payday. HunchLab, meanwhile, combines crime statistics with socioeconomic data, weather information and business locations, all in order to predict the locations of crime.

While some software companies do offer limited information about their algorithms, much remains unknown to the public. Some critics also warn of a lack of transparency from police departments about their new methods. What is the software trying to predict, and with what data? How accurate are the predictions, and how do we even define accuracy? How are the predictions analysed, and who makes use of them? With pre-crime software booming in popularity, these questions have largely been sidelined, yet they will inevitably re-emerge as algorithms become more advanced and more widespread.
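Part of what makes these questions hard to answer is that even the simplest version of such a system rests on a stack of discretionary choices: grid size, how quickly old incidents stop counting, how many cells to flag. The sketch below is a minimal, hypothetical illustration of a grid-based hotspot model in the spirit of the geographic systems described above; it is not a reconstruction of PredPol’s proprietary algorithm, and every parameter in it is an invented assumption.

```python
from collections import defaultdict
from datetime import datetime, timedelta

CELL_SIZE = 0.005    # grid cell size in degrees (~500 m); illustrative choice
HALF_LIFE_DAYS = 30  # an incident's weight halves every 30 days (assumption)

def cell_of(lat, lon):
    """Map a coordinate to a discrete grid cell."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hotspot_scores(incidents, now):
    """Score each cell by a recency-weighted count of past incidents.

    `incidents` is a list of (lat, lon, timestamp) tuples drawn from
    historical crime records -- the kind of data systems like PredPol
    are said to rely on.
    """
    scores = defaultdict(float)
    for lat, lon, when in incidents:
        age_days = (now - when).days
        weight = 0.5 ** (age_days / HALF_LIFE_DAYS)  # exponential decay
        scores[cell_of(lat, lon)] += weight
    return scores

def top_hotspots(incidents, now, k=10):
    """Return the k highest-scoring cells -- the 'hotspots' on the map."""
    scores = hotspot_scores(incidents, now)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: two burglaries near the same corner last week outweigh
# one burglary elsewhere three months ago.
now = datetime(2016, 5, 1)
history = [
    (51.27, 1.08, now - timedelta(days=3)),
    (51.27, 1.08, now - timedelta(days=6)),
    (51.30, 1.10, now - timedelta(days=90)),
]
print(top_hotspots(history, now, k=1))  # -> [(10254, 216)] (cell indices)
```

Note that the only input is the incident history itself: change the half-life or the cell size and the ‘hotspots’ move, without any change in the underlying crime.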

The costs of crime
Despite the uncertainty associated with letting algorithms into a profession as sensitive as policing, it is not difficult to understand the appeal of pre-crime strategies. The economic and social costs of crime are hard to measure, but they are potentially colossal. The UK Peace Index estimated the total cost of violence to the UK economy at £124 billion in 2012, or 7.7% of GDP, and non-violent crime would only add to this figure. Intervening before a crime has happened is more cost-efficient for both the police and society as a whole, and even small crime reductions could alleviate massive human costs. Prediction software also promises to help the police use their resources more efficiently, a tempting offer for departments under pressure from budget cuts and performance targets.

However, such software does not come for free. PredPol cost the Kent Police Force around £100,000 in 2014 alone. The Force itself estimated that it would only need to achieve a 0.35% reduction in crime to offset these costs, yet as the cost of crime is notoriously hard to measure, it is difficult to know how accurate this estimate is. Nevertheless, it appears that while prediction software may be expensive, the costs of crime are much larger: even small crime reductions would bring economic benefit. From this perspective, predictive software appears to be a rational choice for police departments. Besides, isn’t predictive software just a more effective way of making use of the resources the police already possess?
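To see what the Force’s 0.35% figure implies, write the break-even reduction as the ratio of the software’s cost to the total annual cost of crime in the policed area. The rearrangement below is a back-of-envelope inference from the numbers quoted above; the resulting figure is not taken from the Kent report itself:

```latex
r_{\text{break-even}} = \frac{C_{\text{software}}}{K_{\text{crime}}}
\quad\Longrightarrow\quad
K_{\text{crime}} = \frac{C_{\text{software}}}{r_{\text{break-even}}}
= \frac{\pounds 100{,}000}{0.0035} \approx \pounds 28.6 \text{ million per year}
```

In other words, the 0.35% claim is only as solid as an implied £28.6 million estimate of Kent’s annual cost of crime, and such estimates are exactly what is notoriously hard to pin down.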

Ambiguous accuracies
However, economist Peter van Wijck at the University of Leiden argues that pre-crime policing will enhance welfare only when certain criteria are fulfilled: there must be a good chance of accurately predicting and influencing future behaviour, the harm prevented must be substantial, and the costs of intervening must be low. One might argue that the harm of violent crime is indeed substantial, and that while such software is expensive today, costs may fall as technology advances and more companies enter the market. Yet can the police actually predict crime accurately and prevent it from happening? Many police forces report favourably on the efficiency of pre-crime strategies: in one Los Angeles trial, the software predicted crime accurately 6% of the time, compared with 3% for human analysts. Similarly, the Trafford division of Greater Manchester Police reports having cut burglaries by 26.6% in May 2011, compared with a 9.8% decline in the rest of the city. On the other hand, a report from the American research institute the RAND Corporation indicates that the Chicago ‘heat list’ has not been effective in preventing crime. At best, critics say, it has been less effective than traditional ‘most wanted’ lists; at worst, it has singled people out for unwarranted police attention. As for Kent, although continued use of PredPol was recommended, the force’s own report is ambiguous about crime reduction results.
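Van Wijck’s criteria can be compressed into a single stylised cost-benefit condition. The formalisation is mine, not his notation: a pre-emptive intervention enhances welfare only if the expected harm averted exceeds its full cost,

```latex
p \cdot \Delta H \;>\; C
```

where \(p\) is the probability that the prediction is correct and the intervention actually changes behaviour, \(\Delta H\) is the harm the prevented crime would have caused, and \(C\) includes the software, the police time, and the burden imposed on those wrongly flagged. Inaccurate predictions cut \(p\) and inflate \(C\) through false positives at the same time, which is why accuracy is the pivot on which the whole welfare case turns.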

Justice ensured?
Critics attribute these mixed results to the limitations of prediction software. As long as the software is fed data on past crimes, it is bound to keep detecting the types and locations of crime that are already on the police’s radar. It may also amplify the human biases that shaped policing priorities in the past.
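This feedback loop is easy to demonstrate with a deliberately oversimplified simulation (all numbers are invented for illustration). Two districts have identical true crime rates, but one starts with more recorded incidents; a dispatcher that always patrols wherever the records show the most crime, and that only records crime where it patrols, turns the initial gap into a self-confirming prophecy:

```python
import random

random.seed(0)

TRUE_RATE = 0.5               # both districts have the SAME true daily crime probability
recorded = {"A": 5, "B": 10}  # district B starts ahead in the historical records

for day in range(365):
    # A purely data-driven dispatcher: patrol wherever the records
    # say crime is highest.
    patrolled = max(recorded, key=recorded.get)
    for district in recorded:
        crime_happened = random.random() < TRUE_RATE   # same odds everywhere
        if crime_happened and district == patrolled:
            recorded[district] += 1                    # only patrolled crime is logged

print(recorded)  # e.g. {'A': 5, 'B': ~190}: the initial gap is now 'confirmed' by the data
```

The toy is a caricature of any real deployment, but the structural point stands: when a model is trained on records that its own predictions help generate, fitting the records ever more closely is not the same as fitting reality.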

Given recent accusations of racism against police departments in countries like the US, the risk of systemic injustices being magnified by algorithms should not be dismissed lightly. Indeed, software might be blind to the fact that some crimes, such as white-collar crime or rape, are consistently under-reported, or that some groups are overrepresented in arrests due to racial profiling. While some claim that algorithms can in fact reduce human bias, bias appears difficult to avoid completely as long as software is programmed by humans and fed ‘human’ data. The risk of biased algorithms is not the only concern that has been raised, however. John Bartlett from the British think-tank Demos argues that we need clearer legal guidelines on privacy in the age of pre-crime. Should the police be allowed to use data from social media posts in their software? And what will happen if social media companies begin selling private user data, such as messages, to governments or other agencies? The Chinese government, for example, has taken pre-crime software a step further and is reportedly developing a Social Credit System that rates people’s trustworthiness on the basis of Big Data.

So what is the verdict on pre-crime policing? It appears that pre-crime software can bring economic benefits, but only if it succeeds in accurately predicting crime, which may not always be the case. As van Wijck points out, pre-emptive interventions may in fact reduce welfare if they are inaccurate or expensive to carry out. The social costs to privacy are likely to be immense, especially if the software ends up in the wrong hands. And since the workings of most pre-crime software are secret, it is difficult to know how the algorithms are programmed and what judgements they make. In a justice system riddled with accusations of bias, how can we make sure that algorithms do not contribute to further inequality? If these concerns are not enough to rally resistance against prediction software, perhaps its all-encompassing and deterministic nature is. What do you do the day the software makes a prediction about you? Perhaps we would all be better off if predictive policing tactics stayed in the movies.

