Predictive policing is the use of mathematics, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. [1] [2] [3] A report published by the RAND Corporation identified four general categories into which predictive policing methods fall: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime. [4]
Predictive policing uses data on the times, locations, and nature of past crimes to advise police strategists on where and when patrols should maintain a presence, in order to make the best use of resources or to maximize the chance of deterring or preventing future crimes. This type of policing detects signals and patterns in crime reports to anticipate whether crime will spike, when a shooting may occur, where the next car will be broken into, and who the next crime victim will be. Algorithms are built from large volumes of such data, and they speed up the process by quickly weighing many variables to produce an automated prediction. [5] [6] The predictions an algorithm generates are then coupled with a prevention strategy, which typically sends an officer to the predicted time and place of the crime. [7] Proponents argue that automated predictive policing is more accurate and efficient than relying on officers' instincts alone, because decisions are backed by data; with this information, police can anticipate the concerns of communities, allocate resources to particular times and places, and prevent victimization. [8]
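The place-based part of this process can be sketched in a few lines. The grid cells, incident data, and half-life below are invented for illustration; deployed systems use far richer models, but the core step of ranking areas by recency-weighted past crime is the same:

```python
from collections import Counter

# Hypothetical past incidents: (x_cell, y_cell, days_ago) on a coarse
# city grid. All values are invented for illustration.
incidents = [
    (2, 3, 1), (2, 3, 4), (2, 4, 2), (7, 1, 30),
    (2, 3, 10), (5, 5, 3), (5, 5, 60), (2, 4, 8),
]

def hotspot_scores(incidents, half_life_days=14.0):
    """Score each grid cell by a recency-weighted incident count.

    Recent incidents count more than old ones (exponential decay), a
    common ingredient of hot-spot and near-repeat models.
    """
    scores = Counter()
    for x, y, days_ago in incidents:
        scores[(x, y)] += 0.5 ** (days_ago / half_life_days)
    return scores

def top_cells(incidents, k=2):
    """Return the k highest-scoring cells: candidate patrol areas."""
    return [cell for cell, _ in hotspot_scores(incidents).most_common(k)]

print(top_cells(incidents))  # highest-priority cells for patrol coverage
```

A prevention strategy would then direct a patrol to the top-ranked cells during the predicted high-risk window.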
Police may also use data accumulated on shootings and the sounds of gunfire to identify locations of shootings. The city of Chicago uses data blended from population mapping and crime statistics to improve monitoring and identify patterns. [9]
Beyond predicting crime, predictive policing techniques can be used to prevent it. The "AI Ethics of Care" approach recognizes that some locations have higher crime rates as a result of negative environmental conditions, and artificial intelligence can be used to minimize crime by addressing those identified needs. [10]
At the conclusion of intense combat operations in April 2003, improvised explosive devices (IEDs) were dispersed throughout Iraq's streets. These devices were deployed to monitor and counter U.S. military activity, and predictive-policing tactics were applied in response. However, the extensive areas covered by these IEDs made it impractical to respond to every incident within the region. This challenge led to the concept of "actionable hot spots": zones experiencing high levels of activity yet too vast for effective control, which made it difficult to select optimal locations for surveillance, sniper placements, and route patrols along areas threatened by IEDs.[ citation needed ]
The roots of predictive policing in China can be traced to the policy approach of social governance, which Chinese Communist Party leader Xi Jinping announced at a security conference in 2016 as the regime's agenda to promote a harmonious and prosperous country through extensive use of information systems. [11] A common instance of social governance is the development of the social credit system, in which big data is used to digitize identities and quantify trustworthiness. There is no comparably comprehensive and institutionalized system of citizen assessment in the West. [12]
The increase in the collection and assessment of aggregate public and private information by China's police force, used to analyze past crime and forecast future criminal activity, is part of the government's mission to promote social stability by converting intelligence-led policing (i.e. effectively using information) into the informatization (i.e. use of information technologies) of policing. [11] The growing employment of big data through the police geographical information system (PGIS) falls within China's promise to better coordinate information resources across departments and regions, transforming analysis of past crime patterns and trends into automated prevention and suppression of crime. [13] [14] PGIS was first introduced in the 1970s and was originally used by internal government management and research institutions for city surveying and planning. Since the mid-1990s, PGIS has been introduced into the Chinese public security industry to empower law enforcement by promoting police collaboration and resource sharing. [13] [15] The current applications of PGIS are still confined to public map services, spatial queries, and hot spot mapping, and its use in crime trajectory analysis and prediction remains exploratory; however, the promotion of the informatization of policing has encouraged cloud-based upgrades to PGIS design, fusion of multi-source spatiotemporal data, and developments in police spatiotemporal big data analysis and visualization. [16]
Although there is no nationwide police prediction program in China, local projects were undertaken between 2015 and 2018 in regions such as Zhejiang, Guangdong, Suzhou, and Xinjiang that are either advertised as, or serve as building blocks towards, a predictive policing system. [11] [17]
Zhejiang and Guangdong have established prediction and prevention of telecommunication fraud through the real-time collection and surveillance of suspicious online and telecommunication activities, in collaboration with private companies such as the Alibaba Group to identify potential suspects. [18] The predictive policing and crime prevention operation involves forewarning specific victims: the Zhongshan police force made 9,120 warning calls in 2018 and directly intercepted over 13,000 telephone calls and over 30,000 text messages in 2017. [11]
Substance-related crime is also investigated in Guangdong, specifically by the Zhongshan police force, which in 2017 made Zhongshan the first city to utilize wastewater analysis and data models incorporating water and electricity usage to locate hotspots of drug crime. This method led to the arrest of 341 suspects in 45 different criminal investigations by 2019. [19]
In China, the Suzhou Police Bureau has employed predictive policing since 2013, and during 2015–2018 several other Chinese cities adopted it as well. [20] China has used predictive policing to identify and target people to be sent to Xinjiang internment camps. [21] [22]
The integrated joint operations platform (IJOP) predictive policing system is operated by the Central Political and Legal Affairs Commission. [23]
In Europe there has been significant pushback against predictive policing and the broader use of artificial intelligence in policing on both a national and European Union level. [24]
The Danish POL-INTEL project has been operational since 2017 and is based on the Gotham system from Palantir Technologies. The Gotham system has also been used by German state police and Europol. [24]
Predictive policing has been used in the Netherlands. [24]
In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois. [25] [26]
In New York, the NYPD has begun implementing a new crime-tracking program called Patternizr, designed to help officers identify commonalities among crimes committed by the same offender or group of offenders. Patternizr saves officers time and improves efficiency by generating possible "patterns" of different crimes; an officer then manually searches through the suggested patterns to see whether the generated crimes are related to the current suspect. If the crimes do match, the officer launches a deeper investigation into the pattern crimes. [27]
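Patternizr's actual model is proprietary, so the following is only a hypothetical sketch of the general idea: score pairwise similarity between crime records on a few shared fields, then surface the best candidates for manual review. The field names, weights, threshold, and records below are all invented:

```python
def similarity(crime_a, crime_b, weights=None):
    """Toy pairwise similarity between two crime records.

    Hand-weighted field matches stand in for Patternizr's trained
    classifier; the weights are illustrative assumptions only.
    """
    weights = weights or {"method": 0.5, "premise": 0.3, "weapon": 0.2}
    score = 0.0
    for field, w in weights.items():
        if crime_a.get(field) == crime_b.get(field):
            score += w
    return score

def rank_candidates(seed, past_crimes, threshold=0.4):
    """Return IDs of past crimes most similar to the seed crime,
    best first, for an analyst or officer to review."""
    scored = [(similarity(seed, c), c["id"]) for c in past_crimes]
    return [cid for s, cid in sorted(scored, reverse=True) if s >= threshold]

seed = {"id": 0, "method": "forced entry", "premise": "bodega", "weapon": None}
past = [
    {"id": 1, "method": "forced entry", "premise": "bodega", "weapon": None},
    {"id": 2, "method": "shoplifting", "premise": "bodega", "weapon": None},
    {"id": 3, "method": "forced entry", "premise": "warehouse", "weapon": "crowbar"},
]
print(rank_candidates(seed, past))  # candidate "pattern" crimes, best first
```

The ranked output corresponds to the suggested pattern an officer would then verify by hand.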
In India, various state police forces have adopted AI technologies to enhance their law enforcement capabilities. For instance, the Maharashtra Police have launched Maharashtra Advanced Research and Vigilance for Enhanced Law Enforcement (MARVEL), the country's first state-level police AI system, to improve crime prediction and detection. [28] Additionally, the Uttar Pradesh Police utilize the AI-powered mobile application 'Trinetra' for facial recognition and criminal tracking. [29]
Predictive policing faces issues that limit its effectiveness. Obioha notes several concerns: high costs and limited availability prevent more widespread use, especially among poorer countries, and the approach relies on human input to determine patterns, so flawed data can lead to biased and possibly racist results. [30] Critics argue that the technology cannot truly predict crime and can only weaponize a community's proximity to policing: although the data is claimed to be unbiased, communities of color and low-income communities are the most heavily targeted. [31] In addition, not all crime is reported, which makes the underlying data incomplete[ further explanation needed ] and inaccurate.[ citation needed ]
In 2020, following protests against police brutality, a group of mathematicians published a letter in Notices of the American Mathematical Society urging colleagues to stop work on predictive policing. Over 1,500 other mathematicians joined the proposed boycott. [32]
Some applications of predictive policing have targeted minority neighborhoods and lack feedback loops. [33]
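The consequence of a missing feedback loop can be made concrete with a small simulation (all rates and counts below are invented): if patrols are always sent where past records point, and only patrolled areas generate new records, a small initial bias in the data is self-reinforcing even when the true crime rates of two areas are identical.

```python
import random

def simulate(true_rates, initial_counts, steps=1000, seed=1):
    """Runaway-feedback sketch: patrols go where past records point,
    and only patrolled areas generate new records.

    Both areas here have the SAME true crime rate; the small initial
    bias in the records decides where patrols (and thus all future
    records) concentrate. Illustrative only.
    """
    rng = random.Random(seed)
    counts = list(initial_counts)
    for _ in range(steps):
        # send the single patrol to the area with the most records
        area = counts.index(max(counts))
        # crime is recorded only where an officer is present to see it
        if rng.random() < true_rates[area]:
            counts[area] += 1
    return counts

# Equal true rates, but area 0 starts with one extra recorded incident.
print(simulate(true_rates=[0.3, 0.3], initial_counts=[5, 4]))
```

Area 1's count never changes: without a feedback mechanism correcting for where data was collected, the system never learns that the unpatrolled area has the same underlying rate.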
Cities throughout the United States are enacting legislation to restrict the use of predictive policing technologies and other “invasive” intelligence-gathering techniques within their jurisdictions.
Following the introduction of predictive policing as a crime-reduction strategy, via an algorithm produced by the software PredPol, the city of Santa Cruz, California experienced a decline in burglaries of almost 20% in the program's first six months. Despite this, in late June 2020, in the aftermath of the murder of George Floyd in Minneapolis, Minnesota, and amid growing calls for accountability among police departments, the Santa Cruz City Council voted in favor of a complete ban on the use of predictive policing technology. [34]
Accompanying the ban on predictive policing was a similar prohibition of facial recognition technology. Facial recognition technology has been criticized for its reduced accuracy on darker skin tones, which can contribute to cases of mistaken identity and, potentially, wrongful convictions. [35]
In 2019, Michael Oliver of Detroit, Michigan, was wrongfully accused of larceny when the DataWorks Plus software registered his face as a "match" to the suspect identified in a video taken by the victim of the alleged crime. Oliver spent months in court arguing his innocence, and once the judge supervising the case viewed the video footage of the crime, it was clear that Oliver was not the perpetrator. In fact, the perpetrator and Oliver did not resemble each other at all, except that both are African-American, a group for which facial recognition technology is more likely to make an identification error. [35]
With regards to predictive policing technology, the mayor of Santa Cruz, Justin Cummings, is quoted as saying, “this is something that targets people who are like me,” referencing the patterns of racial bias and discrimination that predictive policing can continue rather than stop. [36]
As Dorothy Roberts explains in her academic journal article "Digitizing the Carceral State", the data entered into predictive policing algorithms to predict where crimes will occur, or who is likely to commit criminal activity, tends to contain information that has been shaped by racism. For example, including arrest or incarceration history, neighborhood of residence, level of education, membership in gangs or organized crime groups, and 911 call records, among other features, can produce algorithms that suggest the over-policing of minority or low-income communities. [35]
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Advances in the field of deep learning have allowed neural networks to surpass many previous approaches in performance.
A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image.
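The matching step can be sketched as nearest-neighbour search over feature vectors. Real systems derive these vectors ("embeddings") from trained networks and calibrate the threshold empirically; the vectors, names, and threshold below are invented:

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.6):
    """Match a probe face embedding against a database of known faces.

    Returns the closest identity if it falls within the threshold,
    otherwise None (no match). In a real system the embeddings come
    from a trained network; these toy vectors are illustrative only.
    """
    best_id, best_d = None, float("inf")
    for person_id, vec in gallery.items():
        d = euclidean(probe, vec)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= threshold else None

gallery = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
print(identify([0.12, 0.88, 0.31], gallery))
```

The threshold choice governs the trade-off between false matches and missed matches, which is where the accuracy disparities discussed above become consequential.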
Legal informatics is an area within information science.
Crime analysis is a law enforcement function that involves systematic analysis for identifying and analyzing patterns and trends in crime and disorder. Information on patterns can help law enforcement agencies deploy resources in a more effective manner, and assist detectives in identifying and apprehending suspects. Crime analysis also plays a role in devising solutions to crime problems, and formulating crime prevention strategies. Quantitative social science data analysis methods are part of the crime analysis process, though qualitative methods such as examining police report narratives also play a role.
Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.
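A minimal instance of the idea, assuming nothing beyond ordinary least squares: fit a trend to equally spaced historical counts and extrapolate one step ahead. The monthly counts are invented; real predictive-analytics pipelines use much richer models, but the structure of learning from past data to estimate an unknown future value is the same:

```python
def linear_trend(history):
    """Ordinary least-squares fit of y = a + b*t over equally spaced points."""
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a, b

def forecast_next(history):
    """Extrapolate the fitted line one step past the observed data."""
    a, b = linear_trend(history)
    return a + b * len(history)

monthly_burglaries = [30, 28, 27, 25, 24, 22]  # illustrative counts
print(round(forecast_next(monthly_burglaries), 1))
```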
Pre-crime is the idea that the occurrence of a crime can be anticipated before it happens. The term was coined by science fiction author Philip K. Dick, and is increasingly used in academic literature to describe and criticise the tendency in criminal justice systems to focus on crimes not yet committed. Precrime intervenes to punish, disrupt, incapacitate or restrict those deemed to embody future crime threats. The term precrime embodies a temporal paradox, suggesting both that a crime has not yet occurred and that it is a foregone conclusion.
Artificial intelligence marketing (AIM) is a form of marketing that uses artificial intelligence concepts and models such as machine learning, natural language processing (NLP), and computer vision to achieve marketing goals. The main difference between AIM and traditional forms of marketing resides in the reasoning, which is performed by a computer algorithm rather than a human.
Artificial intelligence (AI) has been used in applications throughout industry and academia. In a manner analogous to electricity or computers, AI serves as a general-purpose technology. AI programs emulate perception and understanding, and are designed to adapt to new information and new situations. Machine learning has been used for various scientific and commercial purposes including language translation, image recognition, decision-making, credit scoring, and e-commerce.
In network theory, link analysis is a data-analysis technique used to evaluate relationships between nodes. Relationships may be identified among various types of nodes, including organizations, people and transactions. Link analysis has been used for investigation of criminal activity, computer security analysis, search engine optimization, market research, medical research, and art.
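The simplest link-analysis measure is degree centrality: count the relationships each node participates in and flag the most connected entities for closer inspection. The records below are invented; real investigative tools layer weighting, path analysis, and community detection on top of this idea:

```python
from collections import defaultdict

# Hypothetical call/transaction records between entities (names invented).
edges = [
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
    ("D", "E"), ("C", "F"), ("C", "G"),
]

def degree_centrality(edges):
    """Count the links touching each node."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

def most_connected(edges):
    """Entity sitting at the center of the most relationships."""
    degree = degree_centrality(edges)
    return max(degree, key=degree.get)

print(most_connected(edges))
```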
The Classification System for Serial Criminal Patterns (CSSCP) is an artificial intelligence computer system that assists law enforcement officials in identifying links between serial crimes. Working in conjunction with a neural network called a Kohonen network, CSSCP finds patterns in law enforcement databases by analyzing the characteristics of an offender, the criminal activities that have occurred, and the objects used in a crime. Once CSSCP has identified links between crimes, law enforcement officials can use the data it produces to build leads or solve criminal cases. Because it can run autonomously, the CSSCP can operate non-stop without human interaction and can achieve results with greater accuracy and efficiency than a human analyst.
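CSSCP itself is not publicly documented in detail, but the Kohonen mechanism it works with, competitive learning that pulls map units toward clusters of similar feature vectors, can be sketched in miniature. All data, sizes, and schedules below are invented:

```python
import math
import random

def dist2(u, x):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, x))

def train_som(data, n_units=3, epochs=200, seed=0):
    """Tiny 1-D self-organizing (Kohonen) map over 2-D feature vectors.

    Each input pulls its best-matching unit (and, early on, that
    unit's neighbours) toward itself, so units drift to clusters of
    similar crime-feature vectors. A toy sketch, not CSSCP's model.
    """
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                  # decaying learning rate
        radius = max(1.0 * (1 - epoch / epochs), 0.01)   # neighbourhood width
        for x in data:
            bmu = min(range(n_units), key=lambda i: dist2(units[i], x))
            for i, u in enumerate(units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                for d in range(2):
                    u[d] += lr * h * (x[d] - u[d])
    return units

def best_matching_unit(units, x):
    """Index of the trained unit closest to input x."""
    return min(range(len(units)), key=lambda i: dist2(units[i], x))

# Two hypothetical clusters of crime-feature vectors (e.g. scaled
# time-of-day and method codes).
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
units = train_som(data)
# Crimes from the same cluster should map to the same unit.
print(best_matching_unit(units, [0.1, 0.1]), best_matching_unit(units, [0.9, 0.9]))
```

Crimes that land on the same map unit share similar features, which is the cue an analyst would follow up as a possible serial pattern.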
The Domain Awareness System, the largest digital surveillance system in the world, is part of the Lower Manhattan Security Initiative in partnership between the New York Police Department and Microsoft to monitor New York City. It allows the NYPD to track surveillance targets and gain detailed information about them, and is overseen by the NYPD Counterterrorism Bureau.
Smart cities seek to implement information and communication technologies (ICT) to improve the efficiency and sustainability of urban spaces while reducing costs and resource consumption. In the context of surveillance, smart cities monitor citizens through strategically placed sensors around the urban landscape, which collect data regarding many different factors of urban living. From these sensors, data is transmitted, aggregated, and analyzed by governments and other local authorities to extrapolate information about the challenges the city faces in sectors such as crime prevention, traffic management, energy use and waste reduction. This serves to facilitate better urban planning and allows governments to tailor their services to the local population.
Artificial intelligence in healthcare is the application of artificial intelligence (AI) to analyze and understand complex medical and healthcare data. In some cases, it can exceed or augment human capabilities by providing better or faster ways to diagnose, treat, or prevent disease.
Explainable AI (XAI), often overlapping with interpretable AI, or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods that provide humans with the ability of intellectual oversight over AI algorithms. The main focus is on the reasoning behind the decisions or predictions made by the AI algorithms, to make them more understandable and transparent. This addresses users' requirement to assess safety and scrutinize the automated decision making in applications. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
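One way such outcomes are audited is to compare a system's flag rates across groups (the "demographic parity" criterion, one of several competing fairness definitions). All records below are invented:

```python
def rate(outcomes, group, label):
    """Fraction of records in `group` whose flag equals `label`."""
    rows = [o for o in outcomes if o["group"] == group]
    return sum(o["flagged"] == label for o in rows) / len(rows)

# Hypothetical audit records: whether a model flagged each person,
# by (illustrative) group membership. All data invented.
outcomes = [
    {"group": "a", "flagged": True},  {"group": "a", "flagged": True},
    {"group": "a", "flagged": False}, {"group": "a", "flagged": True},
    {"group": "b", "flagged": True},  {"group": "b", "flagged": False},
    {"group": "b", "flagged": False}, {"group": "b", "flagged": False},
]

# Demographic parity gap: difference in flag rates between the groups.
gap = rate(outcomes, "a", True) - rate(outcomes, "b", True)
print(gap)
```

A nonzero gap does not by itself prove unfairness, but it is the kind of systematic, repeatable disparity the term describes.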
Cynthia Diane Rudin is an American computer scientist and statistician specializing in machine learning and known for her work in interpretable machine learning. She is the director of the Interpretable Machine Learning Lab at Duke University, where she is a professor of computer science, electrical and computer engineering, statistical science, and biostatistics and bioinformatics. In 2022, she won the Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI) for her work on the importance of transparency for AI systems in high-risk domains.
Government by algorithm is an alternative form of government or social ordering in which computer algorithms are applied to regulations, law enforcement, and generally any aspect of everyday life, such as transportation or land registration. The term "government by algorithm" appeared in academic literature in 2013 as an alternative to "algorithmic governance". A related term, algorithmic regulation, is defined as setting standards and monitoring and modifying behaviour by means of computational algorithms; automation of the judiciary is within its scope. In the context of blockchain, it is also known as blockchain governance.
Rashida Richardson is a visiting scholar at Rutgers Law School and the Rutgers Institute for Information Policy and the Law and an attorney advisor to the Federal Trade Commission. She is also an assistant professor of law and political science at the Northeastern University School of Law and the Northeastern University Department of Political Science in the College of Social Sciences and Humanities.
Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.