A failure of prediction? - October 2016 issue preview

It was more than a decade ago, while working as a reporter on a market research magazine, that I first encountered predictive policing. At the time, if I recall correctly, the term “predictive policing” was not in widespread use – nor was the technology we have now. However, software vendors were starting to promote their tools to police departments, explaining how “predictive analytics” could be used to solve crimes, spot correlations between incident types, and optimise the deployment of officers.

A press release dating back to January 2004, publicising the Richmond, Virginia police department’s use of the SPSS Clementine software, explains how critical it is that crime fighters are able to analyse “the thousands of incident reports, crime tips, calls for service and other data resources that police departments receive every day”. “Within this deluge of data lies the key to increasing the safety of the public,” said SPSS.

Twelve years on, I think most people would agree with that comment. Policing can, and should, get better and smarter through effective use of data. But it would seem that the predictive policing we have now is not the sort of predictive policing we had in mind all those years ago.

As we were working on our October 2016 issue, and our cover story, a statement signed by civil rights and privacy groups was published, expressing concerns about the state of predictive policing in the USA. This statement echoes many of the points made in Kristian Lum and William Isaac’s article: by relying on police-recorded data, which are inherently biased, predictive policing tools are “further concentrat[ing] enforcement activities in communities that are already over-policed”. Lum and Isaac explain exactly why this is and how it happens.

We will, of course, be following this story over the coming months, as I’m sure many of our readers will be. Indeed, the civil rights groups would seem to envisage an active role for the statistical community going forward: in their statement, they say that “rigorous, independent, expert assessment of the statistical validity and operational impact of any new [predictive policing] system” is “essential” before any widespread deployment.

Elsewhere in our October issue, we look back 350 years to the Eyam plague with an article exploring whether the quarantine of this small village in the North of England was a heroic sacrifice or a tragic mistake. And we publish the winning article from our Young Statisticians Writing Competition. Congratulations to Adam Kashlak of the University of Cambridge.