| Abbreviation | AI for Health |
| --- | --- |
| Formation | 2018 |
| Type | Standards organization |
| Purpose | Benchmarking framework for AI in health diagnostic aids |
| Location | |
| Region served | Worldwide |
| Chairman | Thomas Wiegand |
| Vice-Chairman | Stephen Ibaraki |
| Secretariat | Simao Campos, Bastiaan Quast |
| Parent organization | ITU-T Study Group 16 |
| Subsidiaries | Working Groups and Topic Groups |
| Affiliations | ITU-T, World Health Organization |
| Website | www |
The ITU-WHO Focus Group on Artificial Intelligence for Health (AI for Health) is an inter-agency collaboration between the World Health Organization (WHO) and the International Telecommunication Union (ITU) that has created a benchmarking framework to assess the accuracy of AI in health. [1] [2]
This organization convenes an international network of experts and stakeholders from fields such as research, clinical practice, regulation, ethics, and public health, which develops guideline documentation and code. The documents address the ethics, assessment/evaluation, handling, and regulation of AI for health solutions, covering specific use cases including AI in ophthalmology, histopathology, dentistry, malaria detection, radiology, and symptom checker applications. FG-AI4H has established an ad hoc group concerned with digital technologies for health emergencies, including COVID-19. All documentation is public. [3]
The idea for the Focus Group came out of the Health Track of the 2018 AI for Good Global Summit. [3] Administratively, FG-AI4H was created by ITU-T Study Group 16. Under ITU-T's framework, participation in Focus Groups is open to anyone from an ITU Member State. The secretariat is provided by the Telecommunication Standardization Bureau (under Director Chaesub Lee). The Focus Group was created at the July 2018 meeting with a lifetime of two years; [4] at the July 2020 meeting this mandate was extended for another two years, and the Focus Group also submitted its deliverables to its parent body. [5] It was also presented at the NeurIPS 2020 health workshop. [6]
The outline of the benchmarking framework was published in a commentary in The Lancet. [1]
The Deliverables (outputs) of the Focus Group AI for Health are structured in the AI for Health Framework, which roughly corresponds to the dashed-line area in the above ecosystem diagram. Depending on whether their primary domain is health or ICT, the individual components of the AI for Health Framework are ratified by the corresponding United Nations specialized agency, as WHO Guidelines or ITU Recommendations respectively.
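To illustrate the kind of evaluation a benchmarking framework for diagnostic AI performs, the sketch below scores a binary classifier against ground-truth diagnoses using sensitivity and specificity. This is a hypothetical illustration of the general technique, not the FG-AI4H framework's actual code; the function name and example data are assumptions.

```python
# Hypothetical sketch: scoring a binary diagnostic classifier on a held-out
# test set, the kind of evaluation a health-AI benchmarking framework runs.
# (Illustration only; not the actual FG-AI4H benchmarking code.)

def benchmark(predictions, labels):
    """Return sensitivity, specificity, and accuracy for binary outcomes."""
    tp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 1)
    return {
        "sensitivity": tp / (tp + fn),       # true positive rate
        "specificity": tn / (tn + fp),       # true negative rate
        "accuracy": (tp + tn) / len(labels),
    }

# Example: model predictions vs. ground-truth diagnoses (1 = disease present)
labels      = [1, 1, 1, 1, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 0, 0, 1, 0]
scores = benchmark(predictions, labels)
print(scores)  # sensitivity 0.75, specificity 0.75, accuracy 0.75
```

In practice such frameworks keep the test set undisclosed so that submitted models cannot overfit to it; the metric computation itself stays as simple as above.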
Standards drawn up by FG-AI4H are titled as:
The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) is one of the three Sectors (branches) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and information and communication technology (ICT), such as X.509 for cybersecurity, Y.3172 and Y.3173 for machine learning, and H.264/MPEG-4 AVC for video compression, among its Member States, Private Sector Members, and Academia Members.
The International Telecommunication Union (ITU) is a specialized agency of the United Nations responsible for many matters related to information and communication technologies. It was established on 17 May 1865 as the International Telegraph Union, significantly predating the UN and making it the oldest UN agency. Doreen Bogdan-Martin is the Secretary-General of ITU, the first woman to serve as its head.
Thomas Wiegand is a German electrical engineer who substantially contributed to the creation of the H.264/AVC, H.265/HEVC, and H.266/VVC video coding standards. For H.264/AVC, Wiegand was one of the chairmen of the Joint Video Team (JVT) standardization committee that created the standard and was the chief editor of the standard itself, as well as a very active technical contributor. Wiegand also holds a chairmanship position in the ITU-T VCEG of ITU-T Study Group 16 and previously held positions in the ISO/IEC MPEG standardization organization. In July 2006, the video coding work of the ITU-T, which Gary J. Sullivan and Wiegand had jointly led for the preceding six years, was voted the most influential area of the standardization work of the CCITT and ITU-T in their 50-year history. Since 2018, Wiegand has served as chair of the ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H). Since 2014, Thomson Reuters has named Wiegand in its list of "The World's Most Influential Scientific Minds" as one of the most cited researchers in his field.
The ethics of artificial intelligence is the branch of the ethics of technology specific to artificial intelligence (AI) systems. The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks. Some application areas may also have particularly important ethical implications, like healthcare, education, or the military.
Machine ethics is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. Machine ethics differs from other ethical fields related to engineering and technology. Machine ethics should not be confused with computer ethics, which focuses on human use of computers. It should also be distinguished from the philosophy of technology, which concerns itself with the grander social effects of technology.
MPEG-H is a group of international standards under development by the ISO/IEC Moving Picture Experts Group (MPEG). It has various "parts" – each of which can be considered a separate standard. These include a media transport protocol standard, a video compression standard, an audio compression standard, a digital file format container standard, three reference software packages, three conformance testing standards, and related technologies and technical reports. The group of standards is formally known as ISO/IEC 23008 – High efficiency coding and media delivery in heterogeneous environments. Development of the standards began around 2010, and the first fully approved standard in the group was published in 2013. Most of the standards in the group have been revised or amended several times to add additional extended features since their first edition.
Artificial intelligence in healthcare is a term used to describe the use of machine-learning algorithms and software, or artificial intelligence (AI), to emulate human cognition in the analysis, presentation, and understanding of complex medical and health care data, or to exceed human capabilities by providing new ways to diagnose, treat, or prevent disease. Specifically, AI is the ability of computer algorithms to arrive at approximate conclusions based solely on input data.
AI for Good is an ongoing webinar series organized by the Standardization Bureau (ITU-T) of the International Telecommunication Union, where AI innovators and problem owners learn, discuss, and connect to identify AI solutions to advance the Sustainable Development Goals. The impetus for organizing action-oriented global summits came from the existing discourse in artificial intelligence (AI) research being dominated by research streams such as the Netflix Prize.
The AI Now Institute is an American research institute studying the social implications of artificial intelligence and policy research that addresses the concentration of power in the tech industry. AI Now has partnered with organizations such as the Distributed AI Research Institute (DAIR), Data & Society, Ada Lovelace Institute, New York University Tandon School of Engineering, New York University Center for Data Science, Partnership on AI, and the ACLU. AI Now has produced annual reports that examine the social implications of artificial intelligence. In 2021–22, AI Now's leadership served as Senior Advisors on AI to Chair Lina Khan at the Federal Trade Commission. Its executive director is Amba Kak.
Aimee van Wynsberghe is Alexander von Humboldt professor for "Applied Ethics of Artificial Intelligence" at the University of Bonn, Germany. As founder of the Bonn Sustainable AI Lab and director of the Institute for Science and Ethics, Aimee van Wynsberghe hosts the biennial Bonn Sustainable AI Conference.
Fondation Botnar is a philanthropic foundation based in Basel, Switzerland. The foundation was founded in 2003 by Marcela Botnar, wife of businessman and philanthropist Octav Botnar. It is one of the largest foundations in Switzerland, holding CHF 3.8 billion in assets. Fondation Botnar champions the use of AI and digital technologies to improve the health and wellbeing of children and young people in growing urban environments. The foundation provides a range of funding opportunities to enable research and innovative projects that fit within its strategic focus.
Chaesub Lee, PhD, is a telecommunication executive who served as Director of the ITU Telecommunication Standardization Bureau, the permanent secretariat of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), from 2015 until 2022.
Y.3172 is an ITU-T Recommendation specifying an architecture for machine learning in future networks, including 5G (IMT-2020). The architecture describes a machine learning pipeline in the context of telecommunication networks that involves the training of machine learning models as well as their deployment using methods such as containers and orchestration.
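The pipeline concept can be sketched as a chain of processing nodes, with data flowing from a source through a pre-processor and a model to a sink. The sketch below is an illustrative assumption only; Y.3172 specifies an architecture for networks, not Python code, and the toy "nodes" here are invented for the example.

```python
# Minimal sketch of an ML pipeline as an ordered chain of nodes, loosely
# following the roles Y.3172 describes (source -> pre-processor -> model -> sink).
# Illustration only; the Recommendation defines an architecture, not this code.

class Pipeline:
    def __init__(self, *stages):
        self.stages = stages          # ordered pipeline nodes

    def run(self, data):
        for stage in self.stages:     # pass data through each node in turn
            data = stage(data)
        return data

# Hypothetical nodes for a network-traffic example:
source       = lambda _: [12.0, 15.0, 11.0, 40.0]       # collect link loads (Mbit/s)
preprocessor = lambda xs: [x / max(xs) for x in xs]     # normalize to [0, 1]
model        = lambda xs: ["high" if x > 0.5 else "low" for x in xs]  # toy threshold "model"
sink         = lambda ys: ys                            # hand results onward

pipeline = Pipeline(source, preprocessor, model, sink)
print(pipeline.run(None))  # ['low', 'low', 'low', 'high']
```

In a real deployment each stage would run as its own container, with an orchestrator wiring the stages together, which is the separation of concerns the Recommendation's architecture is meant to enable.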
H.870 "Guidelines for safe listening devices/systems" is an ITU-T Recommendation, developed in collaboration with the World Health Organization. It specifies standards for safe listening to prevent hearing loss and was first approved in 2018. In March 2022, version 2 was approved and published.
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD, and others. Since 2016, a wave of AI ethics guidelines has been published in order to maintain social control over the technology. Regulation is considered necessary both to encourage AI and to manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as a social means to approach the AI control problem.
The ITU-T Study Group 16 (SG16) is a statutory group of the ITU Telecommunication Standardization Sector (ITU-T) concerned with multimedia coding, systems, and applications, such as video coding standards. It is responsible for standardization of the "H.26x" line of video coding standards, the "T.8xx" line of image coding standards, and related technologies, as well as various collaborations with the World Health Organization, including on safe listening (H.870) and accessibility of e-health (F.780.2). It is also the parent body of VCEG and of various Focus Groups, such as the ITU-WHO Focus Group on Artificial Intelligence for Health and its AI for Health Framework.
Anja Kaspersen is a director for Global Markets Development, New Frontiers and Emerging Spaces at IEEE, the world's largest technical professional organisation. Kaspersen is also a senior fellow at Carnegie Council for Ethics in International Affairs, where she co-directs the Artificial Intelligence Equality Initiative with Wendell Wallach. With scholars and thinkers in the field of technology governance, supported by Carnegie Council for Ethics in International Affairs and IEEE, Kaspersen and Wallach provided a proposal for the international governance of AI.
Latifa Mohammed Al-Abdulkarim is a Saudi Arabian computer scientist and professor working on AI ethics, legal technology, and explainable AI. She is currently an assistant professor of computer science at King Saud University and visiting researcher in artificial intelligence and law at the University of Liverpool. Al-Abdulkarim has been recognized by Forbes as one of the “women defining the 21st century AI movement” and was selected as one of the 100 Brilliant Women in AI Ethics in 2020.
The ITU-T Study Group 17 (SG17) is a statutory group of the ITU Telecommunication Standardization Sector (ITU-T) concerned with security. The group is concerned with a broad range of security-related standardization issues such as cybersecurity, security management, security architectures and frameworks, countering spam, identity management, biometrics, protection of personally identifiable information, and the security of applications and services for the Internet of Things (IoT). It is responsible for standardization of, inter alia, ASN.1 and X.509, and is also the parent body of the Focus Group on Quantum Information Technology (FG-QIT). The group is currently chaired by Heung Youl Youm of South Korea.
Trustworthy AI is a programme of work of the ITU under its AI for Good programme. The programme advances the standardization of a number of privacy-enhancing technologies (PETs), including homomorphic encryption, federated learning, secure multi-party computation, differential privacy, and zero-knowledge proofs.
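As an illustration of one of the PETs named above, differential privacy, the sketch below releases a count query with Laplace noise scaled to sensitivity/epsilon. This is a standard textbook mechanism shown here for illustration; the function names and data are assumptions, and the code is not part of any ITU deliverable.

```python
import math
import random

# Illustrative sketch of differential privacy via the Laplace mechanism.
# A count query has sensitivity 1 (adding/removing one record changes the
# count by at most 1), so noise is drawn with scale = 1 / epsilon.
# (Assumption for illustration; not ITU code.)

def laplace_noise(scale):
    """Draw a sample from the Laplace(0, scale) distribution (inverse CDF)."""
    u = random.random() - 0.5
    return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy (sensitivity = 1)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: how many patients in a cohort are 50 or older?
ages = [34, 51, 29, 62, 47, 58]
noisy = private_count(ages, lambda a: a >= 50, epsilon=0.5)
print(round(noisy, 2))  # true count is 3; released value is 3 plus Laplace noise
```

Smaller values of epsilon mean more noise and stronger privacy; the analyst trades accuracy for a formal guarantee that any individual record has limited influence on the output.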