| Established | 2017 |
| --- | --- |
| Location | The Hague, Netherlands |
| Key people | Irakli Beridze (Head of Centre) |
| Website | www.unicri.it |
The Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI) was established to advance understanding of artificial intelligence (AI), robotics and related technologies, with a special focus on crime, terrorism and other threats to security. Its goal is to support and assist UN Member States in understanding the risks and benefits of these technologies and to explore their use in contributing to a future free of violence and crime. [1] [2]
UNICRI launched its programme on AI and Robotics in 2015, making it one of the first such initiatives within the UN system. In September 2017, UNICRI signed a host country agreement with the Ministry of Foreign Affairs of the Netherlands to establish the UNICRI Centre for AI and Robotics in The Hague. On 10 July 2019, the Centre held a formal celebration of its launch at the Peace Palace in The Hague, Netherlands. [3]
The Centre connects a large network of stakeholders, including governmental entities, industry, academia, think tanks, foundations and civil society. [1] [4] It has signed strategic partnerships with the Kay Family Foundation and with industry partners, such as 1QB Information Technologies, Inc. (1QBit), to support the Centre's work. [5]
The Centre has also established cooperative relationships with the United Nations Office of Information and Communications Technology (OICT), [6] the Organization for Security and Co-operation in Europe (OSCE), [7] INTERPOL [8] and other entities.
Together with INTERPOL, the Centre co-organizes the annual Global Meeting on AI for Law Enforcement. [8] [9] The first INTERPOL-UNICRI Global Meeting was held on 11–12 July 2018 at INTERPOL's Global Complex for Innovation (IGCI) [8] in Singapore, and the second took place within the framework of INTERPOL World 2019, on 3–4 July 2019, also in Singapore. The third Global Meeting will take place in The Hague, Netherlands, in 2020.
UNICRI and INTERPOL released the report AI and Robotics for Law Enforcement at the United Nations in New York in April 2019. [10] A follow-up report, Towards Responsible AI Innovation, was released by UNICRI and INTERPOL in May 2020. [11]
The Centre conducts action-oriented research, training and technical cooperation programmes. It is also exploring the conceptual design and development of AI-based tools. Current priorities for this include, inter alia, tools for combating human trafficking, child sexual abuse material, corruption and bribery, the financing of terrorism, terrorist use of the Internet and social media, and identifying programmatically manipulated voice or video content (deepfakes).
In November 2020, UNICRI, through its Centre for Artificial Intelligence and Robotics, and the Ministry of Interior of the United Arab Emirates launched the "Artificial Intelligence for Safer Children" initiative. The initiative seeks to combat online child sexual abuse material (CSAM) by exploring new technological solutions, specifically artificial intelligence and machine learning, jointly with law enforcement agencies. [12] On 23 March, the advisory board of the "AI for Safer Children" initiative held its first meeting virtually. The board is composed of global leaders in child protection, law enforcement and AI, including representatives from: Aarambh India; the Bracket Foundation; the Canadian Centre for Child Protection; World Childhood Foundation; ECPAT; the European Commission Directorate-General for Migration and Home Affairs; Europol; the Fund to End Violence Against Children; Griffeye; the Gucci Children's Foundation; International Justice Mission; INTERPOL; the National Center for Missing and Exploited Children; Red Papaz; SafeToNet; Thorn; UNICEF; University of Massachusetts Amherst; the Virtual Global Taskforce; and the WePROTECT Alliance. [13]
In October 2021, the Centre for AI and Robotics at UNICRI, in partnership with the INTERPOL Innovation Centre, launched the Toolkit for the Responsible Use of AI by Law Enforcement. The toolkit aims to provide practical support for the use of AI by law enforcement worldwide. Conceived as a collection of practical insights, use cases, principles, recommendations and resources, it is intended to guide law enforcement agencies in using AI to support their strategic and operational objectives while abiding by universal principles for the trustworthy, lawful and responsible use of AI. While AI is a transformative technology with immense potential to support law enforcement work, it presents numerous pitfalls that could undermine fundamental freedoms and human rights, such as the rights to privacy, equality and non-discrimination, as well as public trust in law enforcement as an institution. Failure to adequately navigate these pitfalls could limit law enforcement's ability to transform police work and practices with AI. On 12 November 2020, a Core Group meeting of experts from the law enforcement and AI communities served as a forum for initial discussions on the possible objectives, structure, target audience and key points of the toolkit.
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory. Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research.
Europol, officially the European Union Agency for Law Enforcement Cooperation, is the law enforcement agency of the European Union (EU). Established in 1998, it is based in The Hague, Netherlands, and serves as the central hub for coordinating criminal intelligence and supporting the EU's member states in their efforts to combat various forms of serious and organized crime, as well as terrorism.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
Transnational organized crime (TOC) is organized crime coordinated across national borders, involving groups or markets of individuals working in more than one country to plan and execute illegal business ventures. In order to achieve their goals, these criminal groups use systematic violence and corruption. Common transnational organized crimes include conveying drugs, conveying arms, trafficking for sex, toxic waste disposal, materials theft and poaching.
The International Centre for Missing & Exploited Children (ICMEC), headquartered in Alexandria, Virginia, with a regional presence in Brazil, Singapore, and Australia, is a private 501(c)(3) non-governmental, nonprofit global organization. It combats child sexual exploitation, child pornography, and child abduction.
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.
The United Nations Interregional Crime and Justice Research Institute (UNICRI) is one of the five United Nations Research and Training Institutes. The Institute was founded in 1968 to assist the international community in formulating and implementing improved policies in the field of crime prevention and criminal justice. Its work currently focuses on Goal 16 of the 2030 Agenda for Sustainable Development, which is centred on promoting peaceful, just and inclusive societies, free from crime and violence.
Stephen K. Ibaraki has been a teacher, industry analyst, writer and consultant in the IT industry, and is a former president of the Canadian Information Processing Society.
PhotoDNA is a proprietary image-identification and content filtering technology widely used by online service providers.
The International Centre for Counter-Terrorism (ICCT) is an independent think-and-do tank providing multidisciplinary policy advice and practical support focused on three important parts of effective counter-terrorism work: prevention, the rule of law, and current and emerging threats. ICCT's work focuses on themes at the intersection of countering violent extremism and criminal justice sector responses, as well as human rights-related aspects of counter-terrorism. The major project areas concern countering violent extremism, rule of law, foreign fighters, country and regional analysis, rehabilitation, civil society engagement and victims' voices.
1QB Information Technologies, Inc. (1QBit) is a quantum computing software company, based in Vancouver, British Columbia. 1QBit was founded on December 1, 2012 and has established hardware partnerships with Microsoft, IBM, Fujitsu and D-Wave Systems. While 1QBit develops general purpose algorithms for quantum computing hardware, the organization is primarily focused on computational finance, materials science, quantum chemistry, and the life sciences.
Fintech, a portmanteau of "financial technology", refers to firms using new technology to compete with traditional financial methods in the delivery of financial services. Artificial intelligence, blockchain, cloud computing, and big data are regarded as the "ABCD" of fintech. The use of smartphones for mobile banking, investing, borrowing services, and cryptocurrency are examples of technologies designed to make financial services more accessible to the general public. Fintech companies consist of both startups and established financial institutions and technology companies trying to replace or enhance the usage of financial services provided by existing financial companies.
WeProtect Global Alliance is an alliance that brings together experts from government, the private sector and civil society to protect children from sexual abuse online.
Kriti Sharma is an artificial intelligence technologist, business executive and humanitarian. As of 2018, she is the vice president of artificial intelligence and ethics at UK software company Sage Group. Sharma is the founder of AI for Good UK, which works to make artificial intelligence tools more ethical and equitable. Sharma has been named to Forbes magazine's 30 Under 30 Europe: Technology list, and appointed as a United Nations Young Leader. In 2018, she was appointed as an advisor to the UK's Department for Digital, Culture, Media and Sport. Sharma's initiatives include Pegg, an accounting chatbot, and rAInbow, a platform to support survivors of domestic violence. She has called for a philosophy of "embracing botness", arguing that artificial intelligence should prioritize utility over human resemblance.
A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between global superpowers for better military AI, driven by increasing geopolitical and military tensions. An AI arms race is sometimes placed in the context of an AI Cold War between the US and China.
Government by algorithm is an alternative form of government or social ordering where the usage of computer algorithms, especially of artificial intelligence and blockchain, is applied to regulations, law enforcement, and generally any aspect of everyday life such as transportation or land registration. The term "government by algorithm" appeared in academic literature as an alternative for "algorithmic governance" in 2013. A related term, algorithmic regulation, is defined as setting the standard, monitoring and modifying behaviour by means of computational algorithms – automation of judiciary is in its scope. In the context of blockchain, it is also known as blockchain governance.
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but it is challenging. Another emerging topic is the regulation of blockchain algorithms, which is often discussed alongside the regulation of AI algorithms. Many countries have enacted regulations of high-frequency trading, which is shifting into the realm of AI algorithms as technology progresses.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.
Anja Kaspersen is a senior fellow at Carnegie Council for Ethics in International Affairs. She is the former Director of the United Nations Office for Disarmament Affairs in Geneva and Deputy Secretary General of the Conference on Disarmament (UNODA). Previously, she held the role as the head of strategic engagement and new technologies at the International Committee of the Red Cross (ICRC). Prior to joining the ICRC she served as a senior director for geopolitics and international security and a member of the executive committee at the World Economic Forum.
The Technology Innovation Institute (TII) is an Abu Dhabi government funded research institution that operates in the areas of artificial intelligence, quantum computing, autonomous robotics, cryptography, advanced materials, digital science, directed energy and secure systems. The institute is a part of the Abu Dhabi Government’s Advanced Technology Research Council (ATRC).