| AI for Good | |
| --- | --- |
| Begins | 6 July 2023 |
| Ends | 7 July 2023 |
| Venue | ITU-T |
| Location(s) | Geneva |
| Country | Switzerland |
| Years active | 6 |
| Most recent | 2023 |
| Attendance | 20,000 (2022) |
| Organised by | ITU-T |
| Sponsor | Swiss Confederation, immersion4, Technology Innovation Institute, Monash University, ZTE |
| Website | aiforgood.itu.int |
AI for Good is an ongoing webinar series organized by the Telecommunication Standardization Sector (ITU-T) of the International Telecommunication Union, where AI innovators and problem owners learn, discuss and connect to identify AI solutions that advance the Sustainable Development Goals. [1] The impetus for organizing action-oriented global summits came from the existing discourse in artificial intelligence (AI) research being dominated by narrowly scoped challenges such as the Netflix Prize (improving a movie-recommendation algorithm).
AI for Good aims to bring forward artificial intelligence research topics that contribute to solving global problems, [2] [3] in particular those framed by the Sustainable Development Goals.
AI for Good [4] grew out of the AI for Good Global Summit 2020, which was moved online due to the COVID-19 pandemic. AI for Good is organized by the Standardization Sector of the ITU (ITU-T). Since moving online, AI for Good has developed into three main program streams: Learn, Build, and Connect. AI for Good also helps organize ITU's Global Standards Symposium. [5]
The 2023 events received some publicity due to the large gathering of humanoid robots, including Ai-Da, the Nadine social robot, Geminoid, and Sophia. [6]
In 2020, the Global Summit became an online-only event. In 2022, the summit moved to the "Neural Network" community platform. [7] Speakers include: [8]
The third AI for Good Global Summit took place from 28 to 31 May 2019 and gave rise to the ITU Focus Group on Artificial Intelligence for Autonomous and Assisted Driving; several Day 0 workshops and VIP events took place on 27 May. [9] Some of the speakers included:
The second AI for Good Global Summit took place from 15 to 17 May 2018 at the ITU headquarters in Geneva, Switzerland, and generated 35 AI project proposals. [10] [11] [12] Speakers included: [13]
The first AI for Good Global Summit took place from 7 to 9 June 2017. Speakers at the event included: [14] [15]
One of the outcomes of the 2017 Global Summit was the creation of an ITU-T Focus Group on Machine Learning for 5G.
The ITU-T Focus Group on Machine Learning for 5G Networks (FG-ML5G) was created following discussions at the 2017 AI for Good Global Summit. The FG-ML5G produced several technical standards in this domain, including Y.3172, Y.3173, and Y.3176, which were adopted by ITU-T Study Group 13. The FG-ML5G also created the impetus for a new ITU-T Focus Group on Autonomous Networks, which is responsible for, among other things, Y.3181.
The 2018 Global Summit led to the creation of the ITU-WHO Focus Group on Artificial Intelligence for Health with the World Health Organization, which created the AI for Health Framework. [16]
Together with ITU-T Study Groups 16 and 17, AI for Good has been developing technology specifications under Trustworthy AI, including items on homomorphic encryption, secure multi-party computation, and federated learning.
The ITU relaunched its Journal ICT Discoveries during the 2018 Global Summit, with the first edition being a special on Artificial Intelligence. [17]
The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) is one of the three Sectors (branches) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and information and communication technology, such as X.509 for cybersecurity, Y.3172 and Y.3173 for machine learning, and H.264/MPEG-4 AVC for video compression, between its Member States, Private Sector Members, and Academia Members.
Cynthia Breazeal is an American robotics scientist and entrepreneur. She is a former chief scientist and chief experience officer of Jibo, a company she co-founded in 2012 that developed personal assistant robots. Currently, she is a professor of media arts and sciences at the Massachusetts Institute of Technology and the director of the Personal Robots group at the MIT Media Lab. Her most recent work has focused on the theme of living everyday life in the presence of AI, and gradually gaining insight into the long-term impacts of social robots.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics.
Stephen K. Ibaraki has been a teacher, industry analyst, writer and consultant in the IT industry, and is a former president of the Canadian Information Processing Society.
Artificial intelligence in healthcare is a term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to emulate human cognition in the analysis, presentation, and understanding of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data.
The World Governments Summit is an annual event held in Dubai, United Arab Emirates. It brings together leaders in government for a global dialogue about governmental process and policies with a focus on the issues of futurism, technology innovation and other topics. The summit acts as a knowledge exchange hub between government officials, thought leaders, policy makers and private sector leaders and as an analysis platform for the future trends, issues and opportunities facing humanity. The summit hosts over 90 speakers from 150 participating countries, along with over 4000 attendees.
Shinjini Kundu is an Indian American physician and computer scientist at The Johns Hopkins Hospital in Baltimore, Maryland. Her research focuses on designing artificial intelligence systems to detect diseases that may be imperceptible to humans. She was named one of Forbes 30 under 30, MIT Technology Review's 35 innovators under 35, a World Economic Forum Young Global Leader, and a winner of the Carnegie Science Award.
Aimee van Wynsberghe is Alexander von Humboldt professor for "Applied Ethics of Artificial Intelligence" at the University of Bonn, Germany. As founder of the Bonn Sustainable AI Lab and director of the Institute for Science and Ethics, van Wynsberghe hosts the biennial Bonn Sustainable AI Conference.
Chaesub Lee PhD is a telecommunication executive who served as the Director of ITU Telecommunication Standardization Bureau, the permanent secretariat of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) from 2015 until 2022.
Y.3172 is an ITU-T Recommendation specifying an architecture for machine learning in future networks including 5G (IMT-2020). The architecture describes a machine learning pipeline in the context of telecommunication networks that involves the training of machine learning models and their deployment using methods such as containers and orchestration.
H.870 "Guidelines for safe listening devices/systems" is an ITU-T Recommendation, developed in collaboration with the World Health Organization. It specifies standards for safe listening to prevent hearing loss and was first approved in 2018. In March 2022, version 2 was approved and published.
The Center for Human-Compatible Artificial Intelligence (CHAI) is a research center at the University of California, Berkeley focusing on advanced artificial intelligence (AI) safety methods. The center was founded in 2016 by a group of academics led by Berkeley computer science professor and AI expert Stuart J. Russell. Russell is known for co-authoring the widely used AI textbook Artificial Intelligence: A Modern Approach.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.
The ITU-T Study Group 16 (SG16) is a statutory group of the ITU Telecommunication Standardization Sector (ITU-T) concerned with multimedia coding, systems and applications, such as video coding standards. It is responsible for standardization of the "H.26x" line of video coding standards, the "T.8xx" line of image coding standards, and related technologies, as well as various collaborations with the World Health Organization, including on safe listening (H.870) and accessibility of e-health (F.780.2). It is also the parent body of VCEG and of various Focus Groups, such as the ITU-WHO Focus Group on Artificial Intelligence for Health and its AI for Health Framework.
The ITU-WHO Focus Group on Artificial Intelligence for Health is an inter-agency collaboration between the World Health Organization and the ITU, which created a benchmarking framework to assess the accuracy of AI in health.
Anja Kaspersen is a director for Global Markets Development, New Frontiers and Emerging Spaces at IEEE, the world's largest technical professional organisation. Kaspersen is also a senior fellow at the Carnegie Council for Ethics in International Affairs, where she co-directs the Artificial Intelligence Equality Initiative with Wendell Wallach. With scholars and thinkers in the field of technology governance, supported by the Carnegie Council for Ethics in International Affairs and IEEE, Kaspersen and Wallach published a proposal for the international governance of AI.
Wendell Wallach is a bioethicist and author focused on the ethics and governance of emerging technologies, in particular artificial intelligence and neuroscience. He is a scholar at Yale University's Interdisciplinary Center for Bioethics, a senior advisor to The Hastings Center, and a Carnegie/Uehiro Senior Fellow at the Carnegie Council for Ethics in International Affairs, where he co-directs the "Artificial Intelligence Equality Initiative" with Anja Kaspersen. Wallach is also a fellow at the Center for Law and Innovation at the Sandra Day O'Connor College of Law at Arizona State University. He has written two books on the ethics of emerging technologies: "Moral Machines: Teaching Robots Right from Wrong" (2010) and "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control" (2015). He has discussed his professional, personal and spiritual journey, as well as some of the biggest conundrums facing humanity in the wake of the bio/digital revolution, in a podcast published by the Carnegie Council for Ethics in International Affairs (CCEIA).
Y.3181 is an ITU-T Recommendation specifying an architectural framework for a machine learning sandbox in future networks. The standard describes the requirements and architecture for a machine learning sandbox in future networks including IMT-2020.
Trustworthy AI is a programme of work of the ITU under its AI for Good programme. The programme advances the standardization of a number of privacy-enhancing technologies (PETs), including homomorphic encryption, federated learning, secure multi-party computation, differential privacy, and zero-knowledge proofs.
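One of the techniques named above, federated learning, can be illustrated with a minimal sketch: each client trains a model on its own private data and shares only the resulting model parameters, which a coordinator averages into a global model. The code below is a toy example of federated averaging for a one-parameter linear model; all names and values are illustrative and do not reflect any ITU specification.

```python
# Toy federated averaging (FedAvg) sketch. Raw data never leaves
# a client; only model weights are shared and averaged.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on a client's private data for
    the 1-D linear model y = w * x (squared-error loss)."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(global_w, client_datasets):
    """Each client trains locally on its own data; the coordinator
    averages the returned weights into a new global model."""
    client_weights = [local_update(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Three clients, each holding private samples of the line y = 2x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(round(w, 2))  # converges toward the true slope 2.0
```

The privacy benefit shown here is structural: `federated_average` sees only the per-client weights, never the `(x, y)` samples themselves. Production systems layer further protections (for example secure aggregation or differential privacy) on top of this basic scheme.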