Alec Edgecliffe-Johnson explores how police departments employ predictive methods to compensate for limited resources.

In 1994, at a time of increased gang violence and crime, the NYPD introduced CompStat, the forerunner of many modelling tools for predictive policing. Since then, justice systems around the world have incorporated predictive policing technology at every level, from investigation to patrolling, sentencing and rehabilitation. Advocates cite the potential for efficient resource allocation and better outcomes at a time when the scope of criminal activity has expanded and resources are overstretched. Critics, however, question both the effectiveness of, and the underlying bias in, these systems, as well as their place in justice systems that have not developed in line with the technology.

Location, Location, Location
One of the most common forms of predictive policing technology is geographic crime prediction software. PredPol, a program used in hundreds of cities in the US and UK, generates 500-by-500-foot expected “high-crime areas” (HCAs). At the heart of the program is an “Epidemic Type Aftershock Sequence” (ETAS) algorithm, which combines recent instances of particular crimes, areas of past and repeated activity, and multiple other sources of geographic data to predict future hot spots.
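The vendors do not publish their exact models, but the general shape of an ETAS-style prediction can be sketched: a cell’s expected crime intensity is a background rate plus contributions from recent nearby incidents that decay in time and space. The sketch below is a minimal illustration under assumed parameter values (the mu, k, omega and sigma figures are invented for the example), not PredPol’s actual implementation.

```python
from math import exp, hypot

def etas_intensity(t, x, y, past_events,
                   mu=0.2,      # background rate for the cell (assumed)
                   k=0.8,       # triggering strength (assumed)
                   omega=0.05,  # temporal decay per hour (assumed)
                   sigma=500):  # spatial decay in feet (assumed)
    """Toy ETAS-style conditional intensity: a constant background rate
    plus contributions from past events that decay in time and space."""
    rate = mu
    for (ti, xi, yi) in past_events:
        if ti < t:
            dt = t - ti
            d = hypot(x - xi, y - yi)
            rate += k * exp(-omega * dt) * exp(-(d / sigma) ** 2)
    return rate

# Rank 500-by-500-foot cells by predicted intensity for the next shift
past = [(10.0, 250, 250), (30.0, 1250, 750)]   # (hour, x_feet, y_feet)
cells = [(250, 250), (750, 250), (1250, 750)]
print(sorted(cells, key=lambda c: -etas_intensity(48.0, *c, past)))
```

In practice such parameters are fitted to historical incident data; the point here is only that the predictions are driven by clustering around past recorded events.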

While PredPol confines itself mainly to property crime and drug use, a similar program, HunchLab, focuses on more serious violent crime. HunchLab’s models integrate sophisticated geographic data, including the locations of high-footfall areas such as bars and bus stops, as well as historic seasonal crime patterns, weather and temperature data, and a number of other metrics, many of which have not been publicly disclosed.

HunchLab has yet to report on its effectiveness, but PredPol has reported fairly substantial results. Instances of criminal activity fell by approximately 13% during a four-month trial of PredPol in the Foothill Division of Los Angeles; in neighbouring divisions that had not implemented the software, criminal activity rose slightly or showed no change. Similar positive results were reported in two districts in Atlanta that introduced the software, and even more significant reductions have been seen during use in smaller cities.
It is, however, difficult to determine whether the relationship is causal or merely correlational. It could simply be that officers spend more time policing predicted HCAs. Furthermore, this focus on specific areas increases the likelihood of over-policing and, in turn, the threat of fraught relationships between law enforcement and community members. Officers deployed in these areas may be more inclined to see normal behaviour as suspicious and to take action on it. Recent shootings in a number of US cities serve as a stark reminder of the consequences of armed officers who perceive heightened levels of threat.

There is also enormous potential for bias in the systems themselves. While neither PredPol nor HunchLab uses racial data explicitly in its predictions, by incorporating historical inputs such as arrest records, and by relying on human operators, both may inadvertently encode racial bias. Equally, geographic data may be skewed by reporting bias: wealthier neighbourhoods tend to report criminal activity more readily than lower-income neighbourhoods that may face significantly more crime.
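The effect of reporting bias on the data these models consume can be shown with a toy simulation: two neighbourhoods with identical underlying crime rates but different reporting rates produce very different “observed” crime counts, and it is only the observed counts that the software ever sees. The rates below are invented purely for illustration.

```python
import random
random.seed(0)

# Two neighbourhoods with the SAME underlying crime rate, but different
# likelihoods that any given incident is reported to (or recorded by) police.
TRUE_RATE = 10                                            # incidents per week (assumed)
report_rate = {"wealthier": 0.8, "lower-income": 0.4}     # reporting rates (assumed)

observed = {n: 0 for n in report_rate}
for week in range(52):
    for n, p in report_rate.items():
        observed[n] += sum(1 for _ in range(TRUE_RATE) if random.random() < p)

# A model trained on 'observed' counts would flag the wealthier area far more
# often, even though the underlying crime rates are identical by construction.
print(observed)   # roughly {'wealthier': ~416, 'lower-income': ~208}
```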

Predicting the Individual


While programs like PredPol and HunchLab focus on geographic data, other initiatives in the legal system focus on individuals or groups of individuals. For example, some police departments have deployed sophisticated analysis tools that examine social media activity and assign threat levels to certain groups based on what they write or post. These tools help them identify particularly dangerous groups, such as gangs and trafficking rings, with varying degrees of success.

Additionally, certain social media websites have developed algorithms that scan conversations between individuals for indications of paedophilic activity. The companies are then able to report this information directly to the authorities, removing the need for a warrant to search the conversations. Most would agree that preventing this sort of activity is in the public interest, but the picture is less clear when we consider programs developed by companies like ECM Universe, which identify potential extremists online, or software that scans social media for signs of riots or mass gatherings. To what extent should we be preempting potential crime, and at what cost to personal privacy?

Perhaps the most troubling advancement in the realm of predictive policing is the application of machine learning algorithms to risk assessment scores. Tools like Northpointe’s COMPAS, which is widely used in the US, aggregate a series of data points on individuals, including attitudes and personality type, relationships, association with criminals, educational attainment, employment history and history of violence, to predict the risk of future criminal activity.
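The actual COMPAS items and weights are proprietary, but the basic shape of a questionnaire-based risk score can be sketched as a weighted sum of answers squashed into a decile. Everything below — the feature names, the weights and the offset — is an illustrative assumption, not Northpointe’s model.

```python
from math import exp

# Toy risk score in the spirit of questionnaire-based tools: each answer
# contributes a weight, and the total is mapped onto a 1-10 "decile".
WEIGHTS = {
    "prior_arrests":        0.30,
    "age_under_25":         0.80,
    "unstable_employment":  0.50,
    "peers_with_records":   0.60,
    "history_of_violence":  0.90,
}

def risk_decile(answers):
    """Map questionnaire answers (0/1 flags or counts) to a 1-10 risk decile."""
    score = sum(WEIGHTS[k] * v for k, v in answers.items())
    probability = 1 / (1 + exp(-(score - 2.0)))   # logistic squash; offset assumed
    return max(1, min(10, round(probability * 10)))

print(risk_decile({"prior_arrests": 3, "age_under_25": 1,
                   "unstable_employment": 1, "peers_with_records": 0,
                   "history_of_violence": 0}))   # prints a mid-range decile
```

Notice that none of the inputs mentions race or income directly, yet several of them correlate strongly with both — which is exactly the concern raised below.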

Historically, the use of these scores was largely restricted to post-sentence decisions about the allocation of time and resources in rehabilitation. Individuals with lower risk scores, for example, might have received lighter parole conditions and fewer police check-ups than those with higher ones. In recent years, however, algorithmic risk assessments have been increasingly integrated into sentencing decisions themselves, prompting judges to hand down more stringent sentences to individuals with higher risk scores. This was exactly the fate of Wisconsin resident Paul Zilly, whose prosecutor recommended a year in county jail and a short period of supervision, but whose judge, upon seeing his risk assessment, imposed a two-year sentence in state prison and a three-year period of supervision.
From the outside, the use of mathematically rigorous processes appears to eliminate human bias, and so we are quick to overlook the considerable potential for bias in the human-made programs. While demographic details like race and income are not assessed directly, they are often incorporated into the data through correlated variables. An independent analysis of COMPAS results found that while the program predicted recidivism with approximately 61% accuracy, its predictions failed for whites and African Americans in different ways. African Americans were roughly twice as likely as whites to be labelled higher risk and yet not go on to commit a crime (44.9% versus 23.5%), while whites were roughly twice as likely as African Americans to be labelled low risk and then re-offend (47.7% versus 28%). Additionally, a staggering number of the inputs to programs such as COMPAS relate to economic background, so poorer individuals are, in theory, more likely to receive unfavourable scores than wealthier ones.
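The disparity described above is a difference in error rates between groups. Below is a short sketch of how such rates are computed from predicted labels and observed outcomes, using hypothetical counts chosen only to roughly reproduce the figures cited, not the underlying data of any study.

```python
def error_rates(records):
    """records: list of (predicted_high_risk, reoffended) booleans.
    Returns (false positive rate, false negative rate)."""
    fp = sum(1 for hi, re in records if hi and not re)   # labelled high risk, did not re-offend
    fn = sum(1 for hi, re in records if not hi and re)   # labelled low risk, re-offended
    no_reoffend = sum(1 for _, re in records if not re)
    reoffend = sum(1 for _, re in records if re)
    return fp / no_reoffend, fn / reoffend

# Hypothetical counts for two groups, constructed for illustration only
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 72 + [(False, True)] * 28
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 52 + [(False, True)] * 48
print(error_rates(group_a))   # (0.45, 0.28)
print(error_rates(group_b))   # (0.23, 0.48)
```

The point of the comparison is that a model can have similar overall accuracy for both groups while distributing its mistakes between them very differently.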

The Threat To Legitimacy


Clearly there are still substantial concerns about bias in the models to be addressed, but if the trajectory of predictive policing continues, these models will become increasingly commonplace. The question then becomes: what does this mean for our justice system? The core characteristic of a functioning and just legal system is legitimacy, and legitimacy is underpinned by both societally favourable outcomes and a just process. For members of society to assess whether a process is just, transparency is required, and predictive policing adds further layers of obscurity that detract from the legitimacy of the system.
Many of these programs, including PredPol, HunchLab and COMPAS, are proprietary, and the exact mechanisms of their underlying algorithms are considered trade secrets. Equally, the justice system itself has not developed compensatory measures of transparency. As such, law enforcement agencies are rarely forthcoming with their data, and risk assessment scores are often not available to their subjects; even when they are, they are often incontestable.

Predictive models pose an ever-greater challenge to transparency as they integrate greater degrees of machine learning, with unspecified rules for finding patterns in increasingly complex data sets. Experts anticipate a point in the development of these algorithms at which humans will be genuinely incapable of interpreting their mechanics. We would then face a situation in which algorithms that hold substantial influence over criminal justice systems lie beyond human comprehension.
There are, however, changes that could partially offset this growing lack of transparency: independent review of policing procedures, more information about the algorithmic mechanisms chosen by human designers, and greater scrutiny and contestability of risk assessment scores. It does not, however, seem advantageous, on either an outcome or a procedural level, to halt the use of inherently incomprehensible machine learning algorithms altogether, especially as data collection expands and we demand ever greater accuracy. Their role in crime reduction is clearly beneficial to society, and there is a clear procedural benefit to well-functioning algorithms that remove the influence of human bias from various policing decisions. One can imagine a time, for example, when controversial policing methods like stop and frisk are permitted only in algorithmically defined HCAs.
To design and implement accurate, unbiased algorithms that produce better outcomes and fewer instances of human bias, we will need to sacrifice ever larger degrees of transparency and, with it, legitimacy as we know it. This is the true cost of certainty about criminal activity in a world that relies increasingly on algorithmic prediction. Unless we preempt these technological changes with adaptations to our justice systems, there is a substantial threat that this loss of legitimacy will result in a loss of trust, accountability and human control in our justice systems.

