Study Finds That Police “Crime Predicting” AI Fails Miserably at Predicting Crimes

Broken Algo

Using an algorithm similar to those that predict earthquake aftershocks, police have for more than a decade used "predictive policing" software to forecast where crimes will take place before they happen. As a joint investigation by The Markup and Wired shows, the approach appears to have failed miserably. Geolitica, predictive policing software used by police in Plainfield, New Jersey (the only one of 38 departments willing to provide the publications with information) was so bad at predicting crimes that its success rate was less than half a percent.

Formerly known as PredPol, the machine learning software purchased by authorities in Plainfield and other police departments promised to help cops fight crime before it happens, like Philip K. Dick's classic novella "Minority Report," except with computers instead of psychics. Unsurprisingly, these prediction models, which have been the subject of ample reporting over the years, are riddled with ethical concerns given the discriminatory and racist tendencies of both artificial intelligence and law enforcement. And, as this new investigation reveals, they are shockingly ineffective at predicting crime as well.

Hilariously Bad

To reach this conclusion, The Markup and Wired examined 23,631 of Geolitica's predictions made between February and December 2018 and found that fewer than 100 of those predictions aligned with an actual crime, a success rate of less than half a percent. While the algorithm was slightly better at predicting some crimes than others, it correctly identified 0.6 percent…
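For readers who want to sanity-check that figure, here is a minimal back-of-the-envelope sketch in Python. It assumes roughly 100 correct predictions out of the 23,631 reviewed; the investigation only says "fewer than 100," so this is an upper-bound estimate, not an exact count from the report.

    # Rough check of the reported success rate.
    # ~100 correct predictions is an assumed upper bound; the article
    # states only that fewer than 100 predictions matched an actual crime.
    total_predictions = 23_631
    correct_predictions = 100

    hit_rate = correct_predictions / total_predictions
    print(f"Hit rate: {hit_rate:.2%}")  # prints "Hit rate: 0.42%"

Even at that generous upper bound, the overall hit rate comes out to roughly 0.42 percent, consistent with the article's "less than half a percent" characterization.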
