AI predicts crimes a week in advance with 90 percent accuracy — but can also perpetuate racial bias

RoboCop may be getting a 21st-century reboot, after scientists developed an algorithm that predicts future crimes a week in advance with 90 percent accuracy.

The artificial intelligence (AI) tool predicts crime by learning temporal and geographic patterns of violent and property crime.

Data scientists from the University of Chicago trained the computer model using public data from eight major US cities.

However, it has proved controversial because the model fails to account for systemic biases in policing and their complex relationship to crime and society.

Similar systems have been shown to actually perpetuate racial bias in policing, which this model could replicate in practice.

But the researchers claim that their model could also be used to uncover these biases, and say it should only be used to inform current policing strategies.

For example, it has been found that socio-economically disadvantaged areas may receive disproportionately less police attention than more affluent neighborhoods.

A new artificial intelligence (AI) tool developed by scientists in Chicago, USA, predicts crime by learning temporal and geographic patterns of violent and property crime

Violent crimes (left) and property crimes (right) recorded in Chicago over the two-week period April 1-15, 2017. These incidents were used to train the computer model

Accuracy of model predictions of violent (left) and property crime (right) in Chicago. The prediction is made one week in advance, and the event is registered as a successful prediction if a crime is recorded within ± a day of the predicted date
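
To make the scoring rule in the caption above concrete, here is a minimal sketch in Python (not the study’s published code; the function name and day-number representation are illustrative assumptions) of how such a ± one day hit window might be checked:

```python
# A minimal sketch, not the study's code: a forecast made one week in advance
# counts as a successful prediction if a crime of the predicted type is
# recorded within +/- one day of the predicted date.
def is_successful_prediction(predicted_day, recorded_days, tolerance_days=1):
    """Return True if any recorded crime falls inside the tolerance window."""
    return any(abs(day - predicted_day) <= tolerance_days for day in recorded_days)

# A prediction for day 100 is a hit if a crime occurs on day 99, 100 or 101.
assert is_successful_prediction(100, [97, 101])
assert not is_successful_prediction(100, [97, 103])
```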

The Chicago Police Department tested an algorithm in 2016 that created a list of people most at risk of being involved in a shooting, either as a victim or perpetrator.

Initially, details of the results were kept secret, but it eventually turned out that 56 percent of black men in Chicago between the ages of 20 and 29 were on the list.

Lawrence Sherman of the Cambridge Centre for Evidence-Based Policing told New Scientist he was concerned that the model relied on data prone to bias.

He said: “It could reflect deliberate discrimination by the police in certain areas.”

HOW DOES THE AI WORK?

The model was trained on historical data from crime incidents in Chicago from 2014 to late 2016.

It then forecast crime levels for the weeks following the training period.

The incidents it was trained on fell into two categories: violent crimes and property crimes.

It takes into account the temporal and spatial coordinates of individual crimes and recognizes patterns in them in order to predict future events.

It divides the city into spatial tiles about 1,000 feet across and forecasts crime in those areas.

The computer model was trained using historical data from criminal incidents in the city of Chicago from 2014 to late 2016.

It then predicted the crime rate for the weeks following that training period.

The incidents it was trained on fell into two broad categories of events that are less susceptible to enforcement bias.

These included violent crimes such as homicide, assault and battery, and property crimes such as burglary, theft and motor vehicle theft.

These incidents were also more likely to be reported to police, even in urban areas where there is a history of distrust and a lack of cooperation with law enforcement.

The model also takes into account the temporal and spatial coordinates of individual crimes and recognizes patterns in them in order to predict future events.

It divides the city into spatial tiles about 1,000 feet across and forecasts crime in those areas.

This is in contrast to viewing areas as crime “hotspots” that spread to surrounding areas, as previous studies have done.

The hotspots often rely on traditional neighborhood or political boundaries, which are also fraught with prejudice.
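
To illustrate the tile-based approach in broad strokes, the sketch below (a hypothetical reconstruction in Python, not the authors’ released code; the field names and coordinate units are assumptions) bins incident records into roughly 1,000-foot grid squares and builds the per-tile event streams that a sequence model could learn patterns from:

```python
from collections import defaultdict

TILE_FEET = 1000  # approximate tile size reported in the article

def tile_id(x_feet, y_feet):
    """Map projected coordinates (in feet) to a grid-tile identifier."""
    return (int(x_feet // TILE_FEET), int(y_feet // TILE_FEET))

def build_event_streams(incidents):
    """Group hypothetical (x, y, day, category) incident records into
    per-tile daily event streams, one stream per crime category."""
    streams = defaultdict(list)
    for x, y, day, category in incidents:
        if category in ("violent", "property"):  # the two broad classes used
            streams[(tile_id(x, y), category)].append(day)
    return streams

# Example: two incidents fall in the same ~1,000 ft tile, one per category.
demo = [(1200.0, 480.0, 10, "violent"), (1350.0, 900.0, 12, "property")]
print(build_event_streams(demo))
```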

Co-author Dr. James Evans said: “Spatial models ignore the natural topology of the city.

“Transportation networks respect roads, sidewalks, train and bus routes, and communications networks respect areas of similar socioeconomic background.

“Our model enables the discovery of these connections.

“We demonstrate the importance of discovering city-specific patterns in predicting reported crime, which creates a new perspective on city neighborhoods, allows us to ask new questions and evaluate policing in new ways.”

The model performed just as well on data from seven other major US cities as it did on Chicago, according to results published yesterday in Nature Human Behaviour.

Graphic showing the modeling approach of the AI tool. A city is divided into small spatial tiles roughly 1.5 times the size of an average city block, and the model computes patterns in the sequential streams of events recorded at different tiles

These were Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland and San Francisco.

The researchers then used the model to study police response to incidents in areas with different socioeconomic backgrounds.

They found that crime in affluent neighborhoods drew a greater police response and led to more arrests than crime in deprived neighborhoods.

This indicates a bias in police response and enforcement.

Lead author Dr. Ishanu Chattopadhyay said: “What we’re seeing is that when you strain the system, it takes more resources to arrest more people in response to crimes in a wealthy area, which siphons police resources from areas of lower socioeconomic status.”

The model also found that crimes that took place in more affluent neighborhoods attracted more police resources and led to more arrests than in deprived neighborhoods

Accuracy of the model’s predictions of property and violent crime in major US cities. a: Atlanta, b: Philadelphia, c: San Francisco, d: Detroit, e: Los Angeles, f: Austin. All of these cities show a comparably high forecast performance

The use of computer modeling in law enforcement has proved controversial due to concerns that it could reinforce existing police bias.

However, the tool is not intended to direct officers to areas where crime is predicted, but rather to inform current policing strategies and policies.

The data and algorithms used in the study have been made public to allow other researchers to examine the results.

Dr. Chattopadhyay said: “We have created a digital twin of urban environments. If you feed it data from the past, it will tell you what will happen in the future.

“It’s not magic, there are limitations, but we’ve validated it and it works really well.

“Now you can use this as a simulation tool to see what happens when crime increases in one area of the city or law enforcement increases in another.

“If you apply all these different variables, you can see how the systems evolve in response.”
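
As a loose illustration of the kind of what-if experiment Dr. Chattopadhyay describes, the toy Python snippet below (entirely hypothetical; the real model is far more sophisticated than this stand-in forecaster) perturbs one tile’s event counts and compares forecasts before and after:

```python
# Toy what-if experiment, not the researchers' tool: bump one tile's daily
# crime counts and compare a stand-in forecaster's output before and after.
def naive_forecast(daily_counts, window=7):
    """Stand-in predictor: mean daily event count over the last `window` days."""
    recent = daily_counts[-window:]
    return sum(recent) / len(recent)

baseline = [2, 3, 2, 4, 3, 2, 3]        # hypothetical daily counts in one tile
perturbed = [c + 2 for c in baseline]   # simulate a local increase in crime

print(naive_forecast(baseline))    # expected events/day before the change
print(naive_forecast(perturbed))   # expected events/day after the change
```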

Could a face-reading AI “lie detector” tell police when suspects aren’t telling the truth?

Forget the old “good cop, bad cop” routine – soon police could turn to artificial intelligence systems that can reveal a suspect’s true feelings during interrogations.

Face-scanning technology would rely on microexpressions: tiny, involuntary facial movements that can betray a person’s true feelings even when they are lying.

London-based startup Facesoft has trained an AI using microexpressions seen on real people’s faces and a database of 300 million expressions.

The company has spoken to both the UK and Mumbai police about possible practical applications of the AI technology.
