Predictive policing

Predictive policing is the use of mathematics, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. [1] A report published by the RAND Corporation identified four general categories of predictive policing methods: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime. [2]

Methodology

Predictive policing uses data on the times, locations, and nature of past crimes to give police strategists insight into where, and at what times, patrols should operate or maintain a presence in order to make the best use of resources and have the greatest chance of deterring or preventing future crimes. This type of policing detects signals and patterns in crime reports to anticipate whether crime will spike, when a shooting may occur, where the next car will be broken into, and who the next crime victim will be. Algorithms are built from these factors, which draw on large amounts of analyzable data. [3] Automating the analysis speeds up the process, since an algorithm can quickly weigh many variables to produce a prediction. The predictions an algorithm generates should then be coupled with a prevention strategy, which typically sends an officer to the predicted time and place of the crime. [4] Proponents argue that automated prediction is more accurate and efficient than relying on officers' instincts alone, because decisions are backed by data. With this information, police can anticipate the concerns of communities, allocate resources to the right times and places, and work to prevent victimization. [5]
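The hot spot element of this process can be illustrated with a minimal sketch: divide the map into grid cells, count past incidents per cell, and rank the densest cells for patrol attention. The grid resolution, coordinates, and ranking below are illustrative assumptions, not any vendor's actual algorithm.

```python
from collections import Counter

def hot_spots(incidents, cell_size=0.01, top_n=3):
    """Rank map-grid cells by historical incident count.

    incidents: list of (latitude, longitude) pairs for past crimes.
    cell_size: grid resolution in degrees (an arbitrary assumption).
    Returns the top_n (cell, count) pairs with the most incidents.
    """
    cells = Counter(
        (int(lat // cell_size), int(lon // cell_size))
        for lat, lon in incidents
    )
    return cells.most_common(top_n)

# Hypothetical example: three reports cluster in one cell, one elsewhere.
reports = [(40.712, -74.006), (40.713, -74.005),
           (40.7125, -74.0055), (40.800, -73.950)]
ranked = hot_spots(reports, top_n=2)
print(ranked[0][1])  # the densest cell holds 3 of the 4 reports
```

A real deployment would weight recent incidents more heavily and feed the ranked cells into a patrol-allocation strategy rather than printing them.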

Police may also use accumulated data on shootings and the sounds of gunfire to identify the locations of shootings. The city of Chicago uses data blended from population mapping and crime statistics to improve monitoring and identify patterns. [6]
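Acoustic gunshot location of this kind generally works by comparing the arrival times of the same sound at several sensors. The following is a minimal, hypothetical sketch using a brute-force grid search over candidate origins; the sensor layout and search parameters are assumptions for illustration, not any real system's design.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def locate_shot(sensors, arrival_times, grid=200, extent=500.0):
    """Estimate a gunshot's origin from sensor arrival times.

    Grid search minimizing the mismatch between observed arrival-time
    differences and those predicted for each candidate point.
    sensors: list of (x, y) positions in metres; arrival_times: seconds.
    """
    t0 = arrival_times[0]
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            x = -extent + 2 * extent * i / grid
            y = -extent + 2 * extent * j / grid
            d0 = math.hypot(x - sensors[0][0], y - sensors[0][1])
            err = 0.0
            for (sx, sy), t in zip(sensors[1:], arrival_times[1:]):
                predicted = (math.hypot(x - sx, y - sy) - d0) / SPEED_OF_SOUND
                err += (predicted - (t - t0)) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Synthetic example: a shot at (100, 50) heard by four corner sensors.
sensors = [(0, 0), (400, 0), (0, 400), (400, 400)]
true = (100.0, 50.0)
times = [math.hypot(true[0] - sx, true[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate_shot(sensors, times))  # recovers (100.0, 50.0) on this grid
```

Production systems solve the same time-difference-of-arrival problem with far more efficient estimators, but the grid search makes the principle visible.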

Other approaches

Rather than predicting crime, predictive policing techniques can also be used to prevent it. The "AI Ethics of Care" approach recognizes that some locations have higher crime rates as a result of negative environmental conditions, and holds that artificial intelligence can be used to minimize crime by addressing those identified needs. [7]

History

Iraq

After the end of major combat operations in April 2003, improvised explosive devices (IEDs) [8] were placed throughout the streets of Iraq to attack US military forces. The area over which the IEDs were spread was too large for the military to counter every emplacement. This problem introduced the concept of actionable hot spots: areas with high levels of activity that were nonetheless too large to control in full. Identifying these hot spots helped determine where best to focus surveillance, position snipers, and patrol the routes being observed and seeded with IEDs.

China

The roots of predictive policing in China can be traced to the policy approach of social governance, which Chinese Communist Party leader Xi Jinping announced at a security conference in 2016 as the regime's agenda to promote a harmonious and prosperous country through an extensive use of information systems. [9] A prominent instance of social governance is the development of the social credit system, in which big data is used to digitize identities and quantify trustworthiness. No comparably comprehensive and institutionalized system of citizen assessment exists in the West. [10]

The increase in collecting and assessing aggregate public and private information by China's police force to analyze past crime and forecast future criminal activity is part of the government's mission to promote social stability by converting intelligence-led policing (i.e., effectively using information) into informatization (i.e., using information technologies) of policing. [9] The increased use of big data through the police geographical information system (PGIS) is part of China's promise to better coordinate information resources across departments and regions, transforming the analysis of past crime patterns and trends into automated prevention and suppression of crime. [11] [12] PGIS was first introduced in the 1970s and was originally used by internal government management and research institutions for city surveying and planning. Since the mid-1990s it has been introduced into the Chinese public security industry to empower law enforcement by promoting police collaboration and resource sharing. [11] [13] Current applications of PGIS remain confined to public map services, spatial queries, and hot spot mapping. Its application to crime trajectory analysis and prediction is still exploratory; however, the promotion of the informatization of policing has encouraged cloud-based upgrades to PGIS design, fusion of multi-source spatiotemporal data, and developments in police spatiotemporal big data analysis and visualization. [14]

Although there is no nationwide predictive policing program in China, local projects were undertaken between 2015 and 2018 in regions and cities such as Zhejiang, Guangdong, Suzhou, and Xinjiang that are either advertised as predictive policing systems or serve as building blocks toward one. [9] [15]

Zhejiang and Guangdong established the prediction and prevention of telecommunication fraud through the real-time collection and surveillance of suspicious online and telecommunication activity, and through collaboration with private companies such as the Alibaba Group to identify potential suspects. [16] The operation involves forewarning specific victims: the Zhongshan police force made 9,120 warning calls in 2018, and directly intercepted over 13,000 telephone calls and over 30,000 text messages in 2017. [9]

Substance-related crime has also been investigated in Guangdong. In 2017, the Zhongshan police force became the first in the country to use wastewater analysis, together with data models incorporating water and electricity usage, to locate hotspots for drug crime. By 2019 this method had led to the arrest of 341 suspects across 45 criminal investigations. [17]

In China, the Suzhou Police Bureau has used predictive policing since 2013, and between 2015 and 2018 several other Chinese cities adopted it as well. [18] China has also used predictive policing to identify and target people for detention in Xinjiang internment camps. [19] [20]

The Integrated Joint Operations Platform (IJOP) predictive policing system is operated by the Central Political and Legal Affairs Commission. [21]

Europe

In Europe there has been significant pushback against predictive policing and the broader use of artificial intelligence in policing on both a national and European Union level. [22]

The Danish POL-INTEL project has been operational since 2017 and is based on the Gotham system from Palantir Technologies. The Gotham system has also been used by German state police and Europol. [22]

Predictive policing has been used in the Netherlands. [22]

United States

In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois. [23] [24]

In New York, the NYPD has begun implementing a crime tracking program called Patternizr. Patternizr is designed to help police officers identify commonalities in crimes committed by the same offender or group of offenders. It saves officers time and improves efficiency by generating possible "patterns" of related crimes. An officer then manually reviews the suggested patterns to determine whether the generated crimes are related to the current suspect; if they match, the officer launches a deeper investigation into the pattern of crimes. [25]
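Patternizr itself is proprietary, but the underlying idea, scoring how similar a new complaint is to past complaints, can be sketched with a simple weighted similarity function. The fields and weights below are illustrative assumptions, not the NYPD's actual trained model.

```python
def similarity(a, b, weights=None):
    """Score how alike two crime complaints are, from 0.0 to 1.0.

    a, b: dicts with hypothetical complaint fields. The weights are
    arbitrary assumptions standing in for a trained model's learned
    feature importances.
    """
    weights = weights or {"offense": 0.5, "precinct": 0.3, "method": 0.2}
    score = 0.0
    if a["offense"] == b["offense"]:
        score += weights["offense"]
    if a["precinct"] == b["precinct"]:
        score += weights["precinct"]
    if a["method"] == b["method"]:
        score += weights["method"]
    return score

new = {"offense": "burglary", "precinct": 44, "method": "forced door"}
old = {"offense": "burglary", "precinct": 44, "method": "window"}
print(similarity(new, old))  # 0.8: same offense and precinct, different method
```

In practice, past complaints scoring above some threshold would be surfaced to an analyst as a candidate pattern; the human review step described above remains essential.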

Concerns

Predictive policing faces several issues that affect its effectiveness. Mugari and Obioha note that high costs and limited availability prevent more widespread use, especially in poorer countries. Predictive policing also relies on human input to determine patterns, and flawed data can lead to biased and possibly racist results. [26] Critics argue that the technology cannot predict crime and can only "weaponize proximity to policing": although the data is claimed to be unbiased, communities of color and low-income communities are the most heavily targeted. [27] In addition, not all crime is reported, which makes the underlying data incomplete and inaccurate.
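The data concern can be made concrete with a toy simulation: if crime is only recorded where officers patrol, and patrols are assigned wherever recorded crime is highest, a small initial imbalance between two otherwise identical districts snowballs. This is an illustrative model with arbitrary parameters, not a study of any real deployment.

```python
import random

def simulate(days=50, true_rates=(0.5, 0.5), seed=1):
    """Toy feedback loop over two districts with EQUAL true crime rates.

    Each day the district with more recorded crime receives the patrol,
    and crime is only recorded where the patrol is present. The rates
    and horizon are arbitrary assumptions for illustration.
    """
    random.seed(seed)
    recorded = [1, 0]  # a single extra historical report in district 0
    for _ in range(days):
        patrol = 0 if recorded[0] >= recorded[1] else 1
        if random.random() < true_rates[patrol]:
            recorded[patrol] += 1  # crime is only seen where police are
    return recorded

print(simulate())  # district 0 accumulates reports; district 1 records none
```

Despite identical true crime rates, the recorded data ends up entirely one-sided, which is the sense in which biased inputs can launder themselves into apparently objective predictions.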

In 2020, following protests against police brutality, a group of mathematicians published a letter in Notices of the American Mathematical Society urging colleagues to stop work on predictive policing. Over 1,500 other mathematicians joined the proposed boycott. [28]

Some applications of predictive policing have targeted minority neighborhoods and lack feedback loops. [29]

Cities throughout the United States are enacting legislation to restrict the use of predictive policing technologies and other “invasive” intelligence-gathering techniques within their jurisdictions.

Following the introduction of predictive policing as a crime reduction strategy, using the results of an algorithm in the software PredPol, the city of Santa Cruz, California experienced a decline in burglaries of almost 20% during the program's first six months. Despite this, in late June 2020, in the aftermath of the murder of George Floyd in Minneapolis, Minnesota and amid growing calls for accountability among police departments, the Santa Cruz City Council voted in favor of a complete ban on the use of predictive policing technology. [30]

Accompanying the ban on predictive policing was a similar prohibition of facial recognition technology. Facial recognition technology has been criticized for its reduced accuracy on darker skin tones, which can contribute to cases of mistaken identity and, potentially, wrongful convictions. [31]

In 2019, Michael Oliver of Detroit, Michigan was wrongfully accused of larceny when the DataWorks Plus software registered his face as a "match" to the suspect in a video taken by the victim of the alleged crime. Oliver spent months in court arguing his innocence; once the judge supervising the case viewed the video footage of the crime, it was clear that Oliver was not the perpetrator. In fact, the perpetrator and Oliver did not resemble each other at all, apart from both being African-American, a demographic for which facial recognition technology is more likely to make an identification error. [31]

With regard to predictive policing technology, the mayor of Santa Cruz, Justin Cummings, is quoted as saying, "this is something that targets people who are like me," referencing the patterns of racial bias and discrimination that predictive policing can perpetuate rather than stop. [32]

As Dorothy Roberts explains in her academic journal article, Digitizing the Carceral State, the data entered into predictive policing algorithms to predict where crimes will occur, or who is likely to commit criminal activity, tends to contain information that has been shaped by racism. The inclusion of arrest or incarceration history, neighborhood of residence, level of education, membership in gangs or organized crime groups, and 911 call records, among other features, can produce algorithms that suggest the over-policing of minority or low-income communities. [31]


References

  1. Rienks R. (2015). "Predictive Policing: Taking a chance for a safer future".
  2. Perry, Walter L.; McInnis, Brian; Price, Carter C.; Smith, Susan; Hollywood, John S. (25 September 2013), The Role of Crime Forecasting in Law Enforcement Operations
  3. "Predictive Policing Explained | Brennan Center for Justice". www.brennancenter.org. 2020-04-01. Retrieved 2020-11-19.
  4. National Academies of Sciences, Engineering (2017-11-09). Proactive Policing: Effects on Crime and Communities. National Academies Press. ISBN   978-0-309-46713-1.
  5. National Academies of Sciences, Engineering (2017-11-09). Weisburd, David; Majimundar, Malay K (eds.). Proactive Policing: Effects on Crime and Communities. doi:10.17226/24928. ISBN   978-0-309-46713-1. S2CID   158608420.
  6. "Violent crime is down in Chicago". The Economist. 5 May 2018. Retrieved 2018-05-31.
  7. Alikhademi, Kiana; Drobina, Emma; Prioleau, Diandra; Richardson, Brianna; Purves, Duncan; Gilbert, Juan E. (2021-04-15). "A review of predictive policing from the perspective of fairness". Artificial Intelligence and Law. 30: 1–17. doi:10.1007/s10506-021-09286-4. ISSN   0924-8463. S2CID   234806056.
  8. Perry, Walter; McInnis, Brian; Price, Carter; Smith, Susan; Hollywood, John (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. p. 104. doi:10.7249/rr233. ISBN   9780833081483. S2CID   169831855.
  9. Sprick, Daniel (2020-09-22). "Predictive Policing in China". Naveiñ Reet: Nordic Journal of Law and Social Research (9): 299–324. doi:10.7146/nnjlsr.v1i9.122164. ISSN 2246-7807. S2CID 254483644.
  10. Wong, Karen Li Xan; Dobson, Amy Shields (June 2019). "We're just data: Exploring China's social credit system in relation to digital platform ratings cultures in Westernised democracies". Global Media and China. 4 (2): 220–232. doi: 10.1177/2059436419856090 . hdl: 20.500.11937/81128 . ISSN   2059-4364. S2CID   197785198.
  11. He, Rixing; Xu, Yanqing; Jiang, Shanhe (2022-06-01). "Applications of GIS in Public Security Agencies in China". Asian Journal of Criminology. 17 (2): 213–235. doi:10.1007/s11417-021-09360-5. ISSN 1871-014X. S2CID 255163408.
  12. Schwarck, Edward (2018-07-01). "Intelligence and Informatization: The Rise of the Ministry of Public Security in Intelligence Work in China". The China Journal. 80: 1–23. doi:10.1086/697089. ISSN   1324-9347. S2CID   149764208.
  13. Chen, Jun; Li, Jing; He, Jianbang; Li, Zhilin (2002-01-01). "Development of geographic information systems (GIS) in China: An overview". Photogrammetric Engineering and Remote Sensing. 68 (4): 325–332. ISSN   0099-1112.
  14. Zhang, Lili; Xie, Yuxiang; Xidao, Luan; Zhang, Xin (May 2018). "Multi-source heterogeneous data fusion". 2018 International Conference on Artificial Intelligence and Big Data (ICAIBD). pp. 47–51. doi:10.1109/ICAIBD.2018.8396165. ISBN   978-1-5386-6987-7. S2CID   49540273.
  15. "法制日报--2014年10月09日--视点--"大数据"给公安警务改革带来了什么" (in Chinese). 2018-12-21. Archived from the original on 21 December 2018. Retrieved 2022-05-08.
  16. "大数据背景下跨境电信网络诈骗犯罪的预警与反制——以冒充公检法诈骗为例". lzlib.cglhub.com (in Chinese). Retrieved 2022-05-08.
  17. "中山市公安局:建"智慧公安"保中山平安_政务频道_中山网". www.zsnews.cn (in Chinese). Retrieved 2022-05-08.
  18. ""大数据"给公安警务改革带来了什么" (in Chinese (China)). 2014-10-09. Archived from the original on 2018-12-21. Retrieved 2015-04-21.
  19. "Exposed: China's Operating Manuals For Mass Internment And Arrest By Algorithm". ICIJ . 2019-11-24. Retrieved 2019-11-26.
  20. "'Big data' predictions spur detentions in China's Xinjiang: Human Rights Watch". Reuters . 2018-02-26. Retrieved 2019-11-26.
  21. Davidson, Helen; Ni, Vincent (19 October 2021). "Chinese effort to gather 'micro clues' on Uyghurs laid bare in report". The Guardian . Retrieved 2 November 2021.
  22. Neslen, Arthur (20 October 2021). "FEATURE-Pushback against AI policing in Europe heats up over racism fears". www.reuters.com. Reuters. Retrieved 1 November 2021.
  23. Friend, Zach. "Predictive Policing: Using Technology to Reduce Crime". FBI Law Enforcement Bulletin. Federal Bureau of Investigation . Retrieved 8 February 2018.
  24. Levine, E. S.; Tisch, Jessica; Tasso, Anthony; Joy, Michael (February 2017). "The New York City Police Department's Domain Awareness System". Interfaces. 47 (1): 70–84. doi:10.1287/inte.2016.0860.
  25. Griffard, Molly (December 2019). "A Bias-Free Predictive Policing Tool?: An Evaluation of the Nypd's Patternizr". Fordham Urban Law Journal. 47 (1): 43–83 via EBSCO.
  26. Mugari, Ishmael; Obioha, Emeka E. (2021-06-20). "Predictive Policing and Crime Control in The United States of America and Europe: Trends in a Decade of Research and the Future of Predictive Policing". Social Sciences. 10 (6): 234. doi: 10.3390/socsci10060234 . ISSN   2076-0760.
  27. Guariglia, Matthew (2020-09-03). "Technology Can't Predict Crime, It Can Only Weaponize Proximity to Policing". Electronic Frontier Foundation. Retrieved 2021-12-13.
  28. Linder, Courtney (2020-07-20). "Why Hundreds of Mathematicians Are Boycotting Predictive Policing". Popular Mechanics. Retrieved 2022-06-03.
  29. "Where in the World is AI? Responsible & Unethical AI Examples". map.ai-global.org. Retrieved 2022-06-03.
  30. Sturgill, Kristi (2020-06-26). "Santa Cruz becomes the first U.S. city to ban predictive policing". Los Angeles Times. Retrieved 2022-06-03.
  31. 1 2 3 "Predictive policing in the United States", Wikipedia, 2022-06-03, retrieved 2022-06-03
  32. "In a U.S. first, California city set to ban predictive policing". Reuters. 2020-06-17. Retrieved 2022-06-03.
