Trustworthy AI

Trustworthy AI refers to artificial intelligence systems designed and deployed to be transparent, robust and respectful of data privacy.

Trustworthy AI makes use of a number of privacy-enhancing technologies (PETs), including homomorphic encryption, federated learning, secure multi-party computation, differential privacy, and zero-knowledge proofs. [1] [2]

The concept of trustworthy AI also encompasses the need for AI systems to be explainable, accountable, and robust. Transparency in AI involves making the processes and decisions of AI systems understandable to users and stakeholders. Accountability ensures that there are protocols for addressing adverse outcomes or biases that may arise, with designated responsibilities for oversight and remediation. Robustness and security aim to ensure that AI systems perform reliably under various conditions and are safeguarded against malicious attacks. [3]

ITU standardization

Trustworthy AI is also a work programme of the International Telecommunication Union, an agency of the United Nations, initiated under its AI for Good programme. [2] Its origin lies in the ITU-WHO Focus Group on Artificial Intelligence for Health, where the strong need for privacy, combined with the need for analytics, created demand for a standard covering these technologies.

When AI for Good moved online in 2020, the TrustworthyAI seminar series was launched to begin discussion of this work, which eventually led to the standardization activities. [4]

Multi-Party Computation

Secure multi-party computation (MPC) is being standardized under "Question 5" (the incubator) of ITU-T Study Group 17. [5]
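
MPC lets several parties jointly compute a function over their private inputs while revealing only the output. Below is a minimal sketch of additive secret sharing, one common MPC building block; it illustrates the general technique only and is not drawn from any ITU-T deliverable.

```python
import random

MOD = 2**61 - 1  # work modulo a large number so individual shares reveal nothing

def share(secret, n_parties):
    """Split a secret into n additive shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# Three parties each hold a private input.
inputs = [12, 30, 7]

# Every party splits its input and sends one share to each peer.
all_shares = [share(x, 3) for x in inputs]

# Each party locally sums the shares it received (one per input) ...
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]

# ... and only the combined partial sums reveal the joint result.
print(sum(partial_sums) % MOD)  # 49, with no party seeing another's input
```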

Homomorphic Encryption

Homomorphic encryption allows computation on encrypted data, where the result remains encrypted and unknown to those performing the computation, but can be deciphered by the original encryptor. It is often developed with the goal of enabling data use in jurisdictions different from the one where the data was created (e.g., under the GDPR).[citation needed]
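
As an illustration of the homomorphic property (a toy sketch, not the scheme used in any particular standard): unpadded RSA is multiplicatively homomorphic, so the product of two ciphertexts decrypts to the product of the plaintexts.

```python
# Toy RSA parameters: p=61, q=53, n=3233, e=17, d=2753.
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c1, c2 = enc(6), enc(7)

# Multiply the ciphertexts without ever decrypting them ...
c_product = (c1 * c2) % n

# ... and the decryption of the product is the product of the plaintexts
# (exact as long as the plaintext product stays below n).
assert dec(c_product) == 6 * 7
print(dec(c_product))  # 42
```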

ITU has collaborated with the HomomorphicEncryption.org standardization meetings since their early stages; that community has developed a standard on homomorphic encryption. The fifth homomorphic encryption meeting was hosted at ITU headquarters in Geneva.[citation needed]

Federated Learning

Zero-sum masks, as used by federated learning for privacy preservation, are used extensively in the multimedia standards of ITU-T Study Group 16 (VCEG), such as JPEG, MP3, and H.264/H.265 (also known as MPEG).[citation needed]
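
A minimal sketch of how zero-sum masking preserves privacy in aggregation (the pairwise-masking construction used in secure aggregation for federated learning; the three-client setup and values are illustrative): each pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the aggregate while hiding individual updates.

```python
import random

n_clients = 3
updates = [0.5, -1.2, 0.9]  # each client's private model update

# Each pair (i, j) with i < j agrees on a shared random mask.
masks = {(i, j): random.uniform(-10, 10)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    """Client i adds masks toward higher-indexed peers and subtracts the rest."""
    m = updates[i]
    for (a, b), r in masks.items():
        if a == i:
            m += r
        elif b == i:
            m -= r
    return m

sent = [masked_update(i) for i in range(n_clients)]

# Individual values are randomized, but the masks sum to zero:
print(sum(sent))     # ~0.2, equal to sum(updates) up to float error
print(sum(updates))  # 0.2
```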

Zero-knowledge proof

Pre-standardization work on zero-knowledge proofs has been conducted in the ITU-T Focus Group on Digital Ledger Technologies.[citation needed]
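
As a toy illustration of the concept (Schnorr's identification protocol with deliberately tiny parameters, unrelated to any Focus Group output): a prover can demonstrate knowledge of a discrete logarithm x with y = g^x mod p without revealing x.

```python
import random

# Toy group: p = 23, with a subgroup of prime order q = 11 generated by g = 4.
p, q, g = 23, 11, 4

x = 7             # prover's secret
y = pow(g, x, p)  # public key

# Prover commits to a random nonce.
r = random.randrange(q)
t = pow(g, r, p)

# Verifier sends a random challenge.
c = random.randrange(q)

# Prover responds; without r, the response s reveals nothing about x.
s = (r + c * x) % q

# Verifier checks g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```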

Differential privacy

The application of differential privacy to the preservation of privacy was examined at several of the "Day 0" machine learning workshops at AI for Good Global Summits.[citation needed]
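
A minimal sketch of the core mechanism (the standard Laplace mechanism; the dataset and epsilon are illustrative): a count query can be released with noise calibrated to its sensitivity.

```python
import math
import random

def laplace(scale):
    """Sample Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release a counting query with epsilon-differential privacy.

    A count has sensitivity 1: adding or removing one record changes it
    by at most 1, so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace(1.0 / epsilon)

ages = [34, 29, 51, 46, 38]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count near 2
```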

Related Research Articles

The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) is one of the three Sectors (branches) of the International Telecommunication Union (ITU). It is responsible for coordinating standards for telecommunications and information and communication technology, such as X.509 for cybersecurity, Y.3172 and Y.3173 for machine learning, and H.264/MPEG-4 AVC for video compression, between its Member States, Private Sector Members, and Academia Members.

Center for Democracy & Technology (CDT) is a Washington, D.C.-based 501(c)(3) nonprofit organisation that advocates for digital rights and freedom of expression. CDT seeks to promote legislation that enables individuals to use the internet for well-intentioned purposes while reducing its potential for harm. It advocates for transparency, accountability, and limiting the collection of personal information.

Microsoft Research (MSR) is the research subsidiary of Microsoft. It was created in 1991 by Richard Rashid, Bill Gates and Nathan Myhrvold with the intent to advance state-of-the-art computing and solve difficult world problems through technological innovation in collaboration with academic, government, and industry researchers. The Microsoft Research team has more than 1,000 computer scientists, physicists, engineers, and mathematicians, including Turing Award winners, Fields Medal winners, MacArthur Fellows, and Dijkstra Prize winners.

Homomorphic encryption is a form of encryption that allows computations to be performed on encrypted data without first having to decrypt it. The resulting computations are left in an encrypted form which, when decrypted, result in an output that is identical to that produced had the operations been performed on the unencrypted data. Homomorphic encryption can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and outsourced to commercial cloud environments for processing, all while encrypted.

Disease Informatics (also infectious disease informatics) studies the knowledge production, sharing, modeling, and management of infectious diseases. It became a more studied field as a by-product of the rapid increases in the amount of biomedical and clinical data widely available, and to meet the demands for useful data analyses of such data.

Privacy-enhancing technologies (PET) are technologies that embody fundamental data protection principles by minimizing personal data use, maximizing data security, and empowering individuals. PETs allow online users to protect the privacy of their personally identifiable information (PII), which is often provided to and handled by services or applications. PETs use techniques to minimize an information system's possession of personal data without losing functionality. Generally speaking, PETs can be categorized as either hard or soft privacy technologies.

Private biometrics is a form of encrypted biometrics, also called privacy-preserving biometric authentication methods, in which the biometric payload is a one-way, homomorphically encrypted feature vector that is 0.05% the size of the original biometric template and can be searched with full accuracy, speed and privacy. The feature vector's homomorphic encryption allows search and match to be conducted in polynomial time on an encrypted dataset, and the search result is returned as an encrypted match. One or more computing devices may use an encrypted feature vector to verify an individual person or identify an individual in a datastore without storing, sending or receiving plaintext biometric data within or between computing devices or any other entity. The purpose of private biometrics is to allow a person to be identified or authenticated while guaranteeing individual privacy and fundamental human rights by only operating on biometric data in the encrypted space. Private biometrics include fingerprint authentication methods, face authentication methods, and identity-matching algorithms based on bodily features. Private biometrics are constantly evolving based on the changing nature of privacy needs, identity theft, and biotechnology.

Cloud computing security or, more simply, cloud security, refers to a broad set of policies, technologies, applications, and controls utilized to protect virtualized IP, data, applications, services, and the associated infrastructure of cloud computing. It is a sub-domain of computer security, network security, and, more broadly, information security.

Data in use is an information technology term referring to active data which is stored in a non-persistent digital state, typically in computer random-access memory (RAM), CPU caches, or CPU registers.

<span class="mw-page-title-main">ITU AI for Good</span>

AI for Good is an ongoing webinar series organized by the Standardization Bureau (ITU-T) of the International Telecommunication Union, where AI innovators and problem owners learn, discuss, and connect to identify AI solutions that advance the Sustainable Development Goals. The impetus for organizing action-oriented global summits came from existing discourse in artificial intelligence (AI) research being dominated by research streams such as the Netflix Prize.

Local differential privacy (LDP) is a model of differential privacy with the added requirement that if an adversary has access to the personal responses of an individual in the database, that adversary will still be unable to learn much of the user's personal data. This is contrasted with global differential privacy, a model of differential privacy that incorporates a central aggregator with access to the raw data.
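
A classic local mechanism is randomized response (a minimal sketch, assuming a yes/no attribute; the truth probability p is illustrative): each user perturbs their own answer before reporting, so any single response is deniable, yet the aggregate can be debiased.

```python
import random

def randomize(truth, p=0.75):
    """Report the truth with probability p, else the opposite answer."""
    return truth if random.random() < p else not truth

# Each user perturbs locally; the aggregator never sees raw answers.
true_answers = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomize(a) for a in true_answers]

# Debias: E[reported "yes" rate] = p*f + (1-p)*(1-f); solve for f.
p = 0.75
observed = sum(reports) / len(reports)
estimate = (observed - (1 - p)) / (2 * p - 1)
print(round(estimate, 3))  # close to the true rate of ~0.3
```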

<span class="mw-page-title-main">Flux (machine-learning framework)</span> Open-source machine-learning software library

Flux is an open-source machine-learning software library and ecosystem written in Julia. Its current stable release is v0.14.5. It has a layer-stacking-based interface for simpler models and strong support for interoperability with other Julia packages, rather than a monolithic design. For example, GPU support is implemented transparently by CuArrays.jl. This is in contrast to some other machine learning frameworks that are implemented in other languages with Julia bindings, such as TensorFlow.jl, and are thus more limited by the functionality present in the underlying implementation, which is often in C or C++. Flux joined NumFOCUS as an affiliated project in December 2021.

<span class="mw-page-title-main">Federated learning</span> Decentralized machine learning

Federated learning is a sub-field of machine learning focusing on settings in which multiple entities collaboratively train a model while ensuring that their data remains decentralized. This stands in contrast to machine learning settings in which data is centrally stored. One of the primary defining characteristics of federated learning is data heterogeneity. Due to the decentralized nature of the clients' data, there is no guarantee that data samples held by each client are independently and identically distributed.
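
A minimal sketch of federated averaging over a toy one-parameter model (all names and data are illustrative; real systems train neural networks and layer privacy mechanisms on top): clients train locally on their own, heterogeneous data, and the server averages only the resulting weights.

```python
def local_step(w, data, lr=0.1, epochs=5):
    """Each client fits w to its own data by gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

# Heterogeneous (non-IID) client datasets stay on-device.
clients = [[1.0, 1.2, 0.8], [5.0, 5.5], [2.9, 3.1, 3.0, 3.2]]

w_global = 0.0
for _ in range(20):
    # Each client trains locally from the current global model ...
    local_ws = [local_step(w_global, data) for data in clients]
    # ... and the server averages the weights, weighted by dataset size.
    sizes = [len(d) for d in clients]
    w_global = sum(w * s for w, s in zip(local_ws, sizes)) / sum(sizes)

print(round(w_global, 2))  # approaches the mean of all client data (~2.86)
```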

<span class="mw-page-title-main">Microsoft SEAL</span>

Simple Encrypted Arithmetic Library, or SEAL, is a free and open-source, cross-platform software library developed by Microsoft Research that implements various forms of homomorphic encryption.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks. Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.

<span class="mw-page-title-main">ITU-T Study Group 16</span> Standardization body focused on multimedia standards, such as video coding standards (e.g. MP4)

The ITU-T Study Group 16 (SG16) is a statutory group of the ITU Telecommunication Standardization Sector (ITU-T) concerned with multimedia coding, systems, and applications, such as video coding standards. It is responsible for standardization of the "H.26x" line of video coding standards, the "T.8xx" line of image coding standards, and related technologies, as well as various collaborations with the World Health Organization, including on safe listening (H.870) and the accessibility of e-health (F.780.2). It is also the parent body of VCEG and various focus groups, such as the ITU-WHO Focus Group on Artificial Intelligence for Health and its AI for Health Framework.

<span class="mw-page-title-main">ITU-WHO Focus Group on Artificial Intelligence for Health</span>

The ITU-WHO Focus Group on Artificial Intelligence for Health is an inter-agency collaboration between the World Health Organization and the ITU, which created a benchmarking framework to assess the accuracy of AI in health.

<span class="mw-page-title-main">Jean-Pierre Hubaux</span> Swiss-Belgian computer scientist spezialised in security and privacy

Jean-Pierre Hubaux is a Swiss-Belgian computer scientist specialised in security and privacy. He is a professor of computer science at EPFL and is the head of the Laboratory for Data Security at EPFL's School of Computer and Communication Sciences.

Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability. She is currently an Assistant Professor at the Harvard Business School and is also affiliated with the Department of Computer Science at Harvard University. Lakkaraju is known for her work on explainable machine learning. More broadly, her research focuses on developing machine learning models and algorithms that are interpretable, transparent, fair, and reliable. She also investigates the practical and ethical implications of deploying machine learning models in domains involving high-stakes decisions such as healthcare, criminal justice, business, and education. Lakkaraju was named as one of the world's top Innovators Under 35 by both Vanity Fair and the MIT Technology Review.

<span class="mw-page-title-main">ITU-T Study Group 17</span> Standardization body focused on cyberseucirty standards, (e.g. X.509)

The ITU-T Study Group 17 (SG17) is a statutory group of the ITU Telecommunication Standardization Sector (ITU-T) concerned with security. The group addresses a broad range of security-related standardization issues, such as cybersecurity, security management, security architectures and frameworks, countering spam, identity management, biometrics, protection of personally identifiable information, and the security of applications and services for the Internet of Things (IoT). It is responsible for the standardization of, among others, ASN.1 and X.509, and is the parent body of the Focus Group on Quantum Information Technology (FG-QIT). The group is currently chaired by Heung Youl Youm of South Korea.

References

  1. "Advancing Trustworthy AI - US Government". National Artificial Intelligence Initiative. Retrieved 2022-10-24.
  2. 1 2 "TrustworthyAI". ITU. Archived from the original on 2022-10-24. Retrieved 2022-10-24.
    Creative Commons by small.svg  This article incorporates text from this source, which isby the International Telecommunication Union available under the CC BY 4.0 license.
  3. "'Trustworthy AI' is a framework to help manage unique risk". MIT Technology Review. Retrieved 2024-06-01.
  4. "TrustworthyAI Seminar Series". AI for Good. Retrieved 2022-10-24.
  5. Shulman, R.; Greene, R.; Glynne, P. (2006-03-21). "Does implementation of a computerised, decision-supported intensive insulin protocol achieve tight glycaemic control? A prospective observational study". Critical Care. 10 (1): P256. doi:10.1186/cc4603. ISSN 1364-8535. PMC 4092631.