Cyberpolitik: AI and Crime Prevention: Is it a force multiplier?
— Satya Sahu
Crime prevention is based on the idea that crime can be reduced or eliminated by modifying the factors that influence its occurrence or consequences. We can classify “prevention” into three main types: primary, secondary, and tertiary. Primary prevention addresses the root causes of crime or deters potential offenders before they commit a crime. Secondary prevention aims to intervene with at-risk groups or individuals to prevent them from becoming involved in crime. Finally, tertiary prevention efforts seek to rehabilitate or punish offenders to prevent them from reoffending. (This, however, is beyond the scope of today’s discussion.)
On the other side of the coin, policing is based on the idea that law enforcement and public order can be maintained by enforcing the law and responding to crimes or incidents. Policing also lends itself to being classified into two main types: reactive and proactive. Reactive policing responds to reported crimes or incidents after they occur. Proactive policing anticipates or prevents crimes or incidents before they occur. On the face of it, AI can help us prevent and fight crime by enhancing both types of crime prevention and policing.
AI can digest and analyse petabytes of data from disparate sources, such as social media, CCTV footage, sensors used in our Smart Cities™, and boring old digitised government records, to identify patterns, trends, and anomalies that can indicate potential criminal activity. For example, the police in Vancouver use predictive models to identify areas where robberies are expected to occur and then post officers there to deter potential thieves or other criminals. Similarly, the police in Los Angeles used a system called PredPol that generated maps of hotspots where crimes were likely to happen based on past data. Such systems can help the police allocate their resources more efficiently and effectively, and reduce crime rates and response times.
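To make the idea concrete, here is a minimal sketch of the kind of hotspot mapping such systems perform: a naive grid-count over hypothetical incident coordinates. The data, cell size, and thresholds are all assumptions for illustration; production tools like PredPol use far richer spatio-temporal models.

```python
# Minimal grid-based "hotspot" sketch (hypothetical data and thresholds;
# real predictive-policing systems use far richer models).
from collections import Counter

# Historical incidents as (latitude, longitude) pairs -- illustrative only.
incidents = [
    (49.2827, -123.1207), (49.2830, -123.1210),
    (49.2610, -123.1139), (49.2828, -123.1205),
]

CELL = 0.005  # grid cell size in degrees (roughly 500 m); an assumption


def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / CELL), round(lon / CELL))


# Count past incidents per cell.
counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Flag the most frequent cells as candidate hotspots for patrol allocation.
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents")
```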
When it comes to collecting and processing evidence such as fingerprints, DNA, facial recognition, voice recognition, and digital forensics, we can look at the EU-funded VALCRI project, which uses AI to analyse large volumes of data from different sources, such as crime reports, witness statements, CCTV footage, and social media posts, to generate hypotheses and leads for investigators. For example, the police in India used ML-backed facial recognition technology to reunite thousands of missing children with their families. Moreover, AI can help the police present evidence and arguments in court, for instance by using natural language processing to generate concise summaries or transcripts of testimonies or documents.
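As a rough illustration of the summarisation use case, the sketch below leans on the open-source Hugging Face transformers library and an invented snippet of testimony. This is an assumption for illustration, not the tooling any police force is confirmed to use.

```python
# Minimal summarisation sketch using the open-source Hugging Face
# `transformers` library -- illustrative only, not any force's actual stack.
from transformers import pipeline

summariser = pipeline("summarization")  # downloads a default model

# Hypothetical witness testimony, invented for this example.
testimony = (
    "The witness stated that she saw two individuals leave the premises "
    "at around 11 p.m., that one of them carried a large duffel bag, and "
    "that a grey hatchback was parked across the street with its engine "
    "running for roughly twenty minutes beforehand."
)

# Generate a concise summary of the testimony.
result = summariser(testimony, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```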
AI could also augment efforts to monitor and evaluate police performance and conduct, such as using dashcams, bodycams, or drones to record officers' interactions with the public and with suspects. For example, the police in New Orleans developed a program called EPIC that uses AI to analyse bodycam footage to identify instances of misconduct or excessive force by officers. AI can also help the police engage with the public and build trust and confidence, for instance through chatbots or social media platforms that communicate with citizens and provide critical information services, hopefully unlike the chatbot on my bank's beleaguered website.
However, all this has enormous implications for the jurisprudential underpinnings of crime prevention and policing. One such implication arises when AI itself changes the nature and scope of crime and criminality. AI can enable new forms of crime that exploit its capabilities and vulnerabilities, such as cyberattacks, biometric spoofing, deepfakes, autonomous weapons, or social engineering. Leveraging AI allows these crimes to be more sophisticated, scalable, and anonymous than their conventional counterparts. The legal and ethical frameworks that govern our efforts to control crime must therefore evolve to address them. It is a foregone conclusion that without putting AI at the forefront of these efforts, it will be impossible to counter AI-enabled crimes; hence the concomitant need to update the legal and ethical norms guiding society's conceptions of policing and crime prevention.
Yet another implication is that AI transforms the roles and responsibilities of police officers and other actors involved in crime prevention or response. As the examples show, AI can augment or automate some of the tasks that police officers perform, such as data collection, analysis, or evidence processing. AI can also assist with or replace some of the decisions that police officers make, such as risk assessment, resource allocation, or intervention selection. To ensure that the concerns of effectiveness and responsibility surrounding Mx. Robo-Cop are adequately balanced, clear and consistent standards and regulations for police and state actors must be established alongside the development and deployment of such systems.
This is not to say that we should disavow the use of AI in policing and crime prevention. It is precisely AI's potential and limitations, and the skills and knowledge needed to use it effectively and responsibly, that make it so versatile and so terrifying. However, it remains a tool to be wielded by the legitimate holder of the state's punitive power: the police.
The use of AI to identify young people who are vulnerable to gang exploitation or violence, and to mount efforts to prevent them from becoming involved in crime, is already a burning question in the UK. This recognises the value of leveraging AI to provide better-targeted and tailored state support and services to at-risk groups and individuals. On the face of it, any enhancement to the state's performance, efficiency, and accountability in this regard will be applauded. But given what we know about the pitfalls surrounding AI, the opposite also holds: violating the privacy, dignity, or rights of individuals or communities will erode the trust and legitimacy that state actors and the police need in order to police under the social contract.
Referring back to my previous post here, we know that AI can create or exacerbate the digital divide and systemic social inequalities among different groups and individuals. Any conversation about using AI in a field where even the slightest deviation from the limited scope of policing is undesirable must address the processes involved as well as the outcomes visited upon the population being policed. AI must therefore be used in ways that respect and protect the interests and values of individuals and communities. AI is a powerful tool that can help us understand the causes of crime, prevent it, and reduce it. Still, it is not a substitute for human judgment or responsibility. It is not merely a technology but also a socio-cultural phenomenon, to be embraced with a healthy mixture of curiosity and caution.
(I use the term 'AI' here to include machine learning, natural language processing, etc., for brevity.)
Matsyanyaaya: Why a local Indian rickshaw app should worry Big Tech
— Shailesh Chitnis
Digital platforms, such as Google and Facebook in advertising and Amazon in e-commerce, derive their power from bringing sellers and buyers together in one place. Over time, "network effects" ensure that these platforms achieve monopoly power in the market. Regulators have tried different methods to limit the reach of these platforms: the European Union prefers a rule-based approach to reining in these companies, while United States M&A policy focuses on preventing market concentration.
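A toy calculation shows why network effects are so decisive. Under a Metcalfe-style assumption (my simplification, not a claim from the piece) that a platform's value scales with the number of possible buyer-seller connections, a tenfold lead in users translates into roughly a hundredfold lead in value:

```python
# Toy illustration of network effects: value grows with the number of
# possible pairwise connections, n * (n - 1) / 2, under a Metcalfe-style
# assumption -- a crude proxy, not a real valuation model.
def platform_value(users: int) -> int:
    """Number of possible buyer-seller connections on the platform."""
    return users * (users - 1) // 2

for users in (1_000, 10_000, 100_000):
    print(f"{users:>7} users -> {platform_value(users):>13,} possible connections")
```

Each order-of-magnitude gain in users yields roughly two orders of magnitude in connection value, which is why the incumbent's lead tends to compound.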
Neither has worked particularly well.
Namma Yatri, a small ride-hailing app in Bangalore, may point in another direction. Since its launch last November, the app has enrolled almost a third of the city's 150,000-odd rickshaw drivers on its network and now routes 40% of all rickshaw rides. It is now a viable competitor to Ola and Uber, the dominant apps.
Namma Yatri is unique in that it is entirely funded and run by the community. The app is based on the Open Network for Digital Commerce (ONDC), an open-source platform run by a non-profit supported by the Indian government. The app itself was created by a private company, Juspay Technologies, and there is no commission fee.
ONDC's concept is to create a common platform where buyers and sellers can easily transact. This is essentially a technological solution that deconstructs a marketplace (see