Domain adaptation

Figure: Distinction between the usual machine learning setting and transfer learning, and the positioning of domain adaptation.

Domain adaptation [1] [2] [3] is a field of machine learning closely associated with transfer learning. It addresses the scenario in which a model is learned from a source data distribution and then applied to a different (but related) target data distribution. For instance, one task of the common spam-filtering problem consists of adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial for learning from unrelated sources. [4] Note that, when more than one source distribution is available, the problem is referred to as multi-source domain adaptation. [5]


Overview

Domain adaptation is the ability to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain". Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains all have the same feature space (but different distributions); in contrast, transfer learning includes cases where the target domain's feature space is different from the source feature space or spaces. [6]

Domain shift

A domain shift, [7] or distributional shift, [8] is a change in the data distribution between an algorithm's training dataset and a dataset it encounters when deployed. Such domain shifts are common in practical applications of artificial intelligence. Conventional machine-learning algorithms often adapt poorly to them, and the machine-learning community has developed many strategies to achieve better domain adaptation. [7]
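
To make the effect concrete, the following is a minimal sketch (assuming NumPy and scikit-learn; the two-Gaussian toy data and the size of the shift are illustrative, not taken from the cited works) in which a classifier fit on a source domain loses accuracy on a shifted target domain:

    # Toy demonstration of domain shift: train on a "source" distribution,
    # evaluate on a shifted "target" distribution.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_domain(shift, n=2000):
        # Two Gaussian classes; `shift` translates the inputs, simulating
        # a change in the input distribution (covariate shift).
        X0 = rng.normal(loc=[0.0, 0.0], size=(n // 2, 2)) + shift
        X1 = rng.normal(loc=[2.0, 2.0], size=(n // 2, 2)) + shift
        X = np.vstack([X0, X1])
        y = np.array([0] * (n // 2) + [1] * (n // 2))
        return X, y

    Xs, ys = make_domain(shift=0.0)   # source domain
    Xt, yt = make_domain(shift=1.5)   # target domain: same task, shifted inputs

    clf = LogisticRegression().fit(Xs, ys)
    print("source accuracy:", clf.score(Xs, ys))   # high
    print("target accuracy:", clf.score(Xt, yt))   # markedly lower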

Examples

Beyond the spam-filtering scenario above, applications include Wi-Fi localization and many aspects of computer vision. [6]

Formalization

Let $X$ be the input space (or description space) and let $Y$ be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) $h : X \to Y$ able to attach a label from $Y$ to an example from $X$. This model is learned from a learning sample $S = \{(x_i, y_i)\}_{i=1}^{m}$.

Usually in supervised learning (without domain adaptation), we suppose that the examples are drawn i.i.d. from a distribution $D_S$ of support $X \times Y$ (unknown and fixed). The objective is then to learn $h$ (from $S$) such that it commits the least error possible when labelling new examples coming from the distribution $D_S$.

The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions $D_S$ and $D_T$ on $X \times Y$ [citation needed]. The domain adaptation task then consists of the transfer of knowledge from the source domain $D_S$ to the target one $D_T$. The goal is then to learn $h$ (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain $D_T$ [citation needed].

The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
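
This question can be stated precisely by comparing the source and target risks of a hypothesis; a standard formulation, following the notation above and the flavor of the theory in [3], is:

    % Risks of a hypothesis h : X -> Y under the 0-1 loss.
    \epsilon_S(h) = \Pr_{(x,y) \sim D_S}\left[ h(x) \neq y \right],
    \qquad
    \epsilon_T(h) = \Pr_{(x,y) \sim D_T}\left[ h(x) \neq y \right]
    % Domain adaptation seeks a hypothesis h with small target risk
    % \epsilon_T(h), even though labeled examples come mainly (or only)
    % from the source distribution D_S.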

The different types of domain adaptation

There are several contexts of domain adaptation. They differ in the information considered for the target task.

  1. Unsupervised domain adaptation: the learning sample contains a set of labeled source examples, a set of unlabeled source examples, and a set of unlabeled target examples.
  2. Semi-supervised domain adaptation: in this situation, a "small" set of labeled target examples is also available.
  3. Supervised domain adaptation: all the examples considered are supposed to be labeled.

Four algorithmic principles

Reweighting algorithms

The objective is to reweight the labeled source sample so that it "looks like" the target sample (in terms of the error measure considered). [14] [15]
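
As a minimal sketch of this principle (assuming scikit-learn, and reusing the toy Xs, ys, Xt from the domain-shift example above), one simple estimator of the weights trains a "domain discriminator" to separate source from target inputs and uses its odds as an approximate density ratio; this is a lightweight stand-in for methods such as kernel mean matching [14] or likelihood-ratio weighting [15]:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def importance_weights(X_source, X_target):
        # Estimate w(x) ~ p_target(x) / p_source(x) via a classifier that
        # predicts P(domain = target | x); its odds approximate the ratio.
        X = np.vstack([X_source, X_target])
        d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
        disc = LogisticRegression().fit(X, d)
        p = disc.predict_proba(X_source)[:, 1]
        return p / np.clip(1.0 - p, 1e-6, None)

    # Train on the source sample, but weighted so it "looks like" the target.
    w = importance_weights(Xs, Xt)
    clf_weighted = LogisticRegression().fit(Xs, ys, sample_weight=w)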

Iterative algorithms

A method for adapting consists in iteratively "auto-labeling" the target examples. [16] The principle is simple:

  1. a model $h$ is learned from the labeled examples;
  2. $h$ automatically labels some target examples;
  3. a new model is learned from the new labeled examples.

Note that there exist other iterative approaches, but they usually require labeled target examples. [17] [18]
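
A generic self-training loop illustrating the three steps above is sketched below (assuming scikit-learn; the number of rounds and the confidence-based selection rule are illustrative choices, not the specific algorithm of [16]):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_source, y_source, X_target, rounds=5, top_frac=0.2):
        # Labels are assumed to be integers 0..K-1.
        X_lab, y_lab = X_source.copy(), y_source.copy()
        pool = X_target.copy()
        model = LogisticRegression().fit(X_lab, y_lab)      # step 1
        for _ in range(rounds):
            if len(pool) == 0:
                break
            proba = model.predict_proba(pool)
            k = max(1, int(top_frac * len(pool)))
            idx = np.argsort(-proba.max(axis=1))[:k]        # most confident
            X_lab = np.vstack([X_lab, pool[idx]])           # step 2
            y_lab = np.concatenate([y_lab, proba[idx].argmax(axis=1)])
            pool = np.delete(pool, idx, axis=0)
            model = LogisticRegression().fit(X_lab, y_lab)  # step 3
        return model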

Search of a common representation space

The goal is to find or construct a common representation space for the two domains. The objective is to obtain a space in which the domains are close to each other while maintaining good performance on the source labeling task. This can be achieved through adversarial machine learning techniques, in which feature representations of samples from the different domains are encouraged to be indistinguishable. [19] [20]
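
A minimal sketch of this adversarial idea, in the spirit of domain-adversarial training [19], is shown below in PyTorch (the layer sizes, the two-dimensional inputs, and the training-step structure are assumptions for illustration): a gradient-reversal layer lets the domain classifier train normally while pushing the shared feature extractor to make the two domains indistinguishable.

    import torch
    import torch.nn as nn
    from torch.autograd import Function

    class GradReverse(Function):
        # Identity in the forward pass; negated (scaled) gradient backward.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    features = nn.Sequential(nn.Linear(2, 16), nn.ReLU())  # shared representation
    label_head = nn.Linear(16, 2)    # source labeling task
    domain_head = nn.Linear(16, 2)   # source-vs-target discriminator

    params = (list(features.parameters()) + list(label_head.parameters())
              + list(domain_head.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)

    def train_step(xs, ys, xt, lambd=1.0):
        zs, zt = features(xs), features(xt)
        loss_label = nn.functional.cross_entropy(label_head(zs), ys)
        z = torch.cat([zs, zt])
        d = torch.cat([torch.zeros(len(xs)), torch.ones(len(xt))]).long()
        # Reversed gradients make `features` adversarial to the domain head.
        loss_domain = nn.functional.cross_entropy(
            domain_head(GradReverse.apply(z, lambd)), d)
        opt.zero_grad()
        (loss_label + loss_domain).backward()
        opt.step()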

Hierarchical Bayesian model

The goal is to construct a hierarchical Bayesian model, which is essentially a factorization model for counts, to derive domain-dependent latent representations that allow both domain-specific and globally shared latent factors. [4]
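
As an illustrative schematic only (not the exact construction of [4]), the idea can be written as a count factorization in which each domain $d$ combines globally shared factors $W$ with domain-specific factors $W^{(d)}$:

    % Schematic: counts X^{(d)} from domain d are explained by a mixture of
    % shared and domain-specific latent factors (illustrative only, not the
    % precise model of [4]).
    X^{(d)}_{ij} \sim \operatorname{Poisson}\!\Big( \big[ (W + W^{(d)})\, H^{(d)} \big]_{ij} \Big)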

Software

Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades:

  1. ADAPT, a Python toolbox for domain adaptation [21]
  2. Transfer-learning-library [22]
  3. Domain adaptation toolbox [23]


References

  1. Redko, Ievgen; Morvant, Emilie; Habrard, Amaury; Sebban, Marc; Bennani, Younès (2019). Advances in Domain Adaptation Theory. ISTE Press - Elsevier. p. 187. ISBN 9781785482366.
  2. Bridle, John S.; Cox, Stephen J. (1990). "RecNorm: Simultaneous normalisation and classification applied to speech recognition" (PDF). Conference on Neural Information Processing Systems (NIPS). pp. 234–240.
  3. Ben-David, Shai; Blitzer, John; Crammer, Koby; Kulesza, Alex; Pereira, Fernando; Wortman Vaughan, Jennifer (2010). "A theory of learning from different domains" (PDF). Machine Learning. 79 (1–2): 151–175. doi:10.1007/s10994-009-5152-4.
  4. Hajiramezanali, Ehsan; Dadaneh, Siamak Zamani; Karbalayghareh, Alireza; Zhou, Mingyuan; Qian, Xiaoning (2018). "Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data". arXiv:1810.09433 [stat.ML].
  5. Crammer, Koby; Kearns, Michael; Wortman, Jennifer (2008). "Learning from Multiple Sources" (PDF). Journal of Machine Learning Research. 9: 1757–1774.
  6. Sun, Shiliang; Shi, Honglei; Wu, Yuanbin (July 2015). "A survey of multi-source domain adaptation". Information Fusion. 24: 84–92. doi:10.1016/j.inffus.2014.12.003. S2CID 18385140.
  7. Sun, Baochen; Feng, Jiashi; Saenko, Kate (2016). "Return of frustratingly easy domain adaptation". Thirtieth AAAI Conference on Artificial Intelligence.
  8. Amodei, Dario; Olah, Chris; Steinhardt, Jacob; Christiano, Paul; Schulman, John; Mané, Dan (2016). "Concrete problems in AI safety". arXiv:1606.06565.
  9. Daumé III, Hal (2009). "Frustratingly easy domain adaptation". arXiv:0907.1815.
  10. Ben-David, Shai; Blitzer, John; Crammer, Koby; Pereira, Fernando (2007). "Analysis of representations for domain adaptation". Advances in Neural Information Processing Systems. pp. 137–144.
  11. Hu, Yipeng; Jacob, Joseph; Parker, Geoffrey J. M.; Hawkes, David J.; Hurst, John R.; Stoyanov, Danail (June 2020). "The challenges of deploying artificial intelligence models in a rapidly evolving pandemic". Nature Machine Intelligence. 2 (6): 298–300. arXiv:2005.12137. doi:10.1038/s42256-020-0185-2. ISSN 2522-5839.
  12. Matthews, Dylan (26 March 2019). "AI disaster won't look like the Terminator. It'll be creepier". Vox. Retrieved 21 June 2020.
  13. "Our weird behavior during the pandemic is messing with AI models". MIT Technology Review. 11 May 2020. Retrieved 21 June 2020.
  14. Huang, Jiayuan; Smola, Alexander J.; Gretton, Arthur; Borgwardt, Karsten M.; Schölkopf, Bernhard (2006). "Correcting Sample Selection Bias by Unlabeled Data" (PDF). Conference on Neural Information Processing Systems (NIPS). pp. 601–608.
  15. Shimodaira, Hidetoshi (2000). "Improving predictive inference under covariate shift by weighting the log-likelihood function". Journal of Statistical Planning and Inference. 90 (2): 227–244. doi:10.1016/S0378-3758(00)00115-4. S2CID 9238949.
  16. Gallego, A.J.; Calvo-Zaragoza, J.; Fisher, R.B. (2020). "Incremental Unsupervised Domain-Adversarial Training of Neural Networks" (PDF). IEEE Transactions on Neural Networks and Learning Systems. PP (11): 4864–4878. doi:10.1109/TNNLS.2020.3025954. hdl:20.500.11820/72ba0443-8a7d-4cdd-8212-38682d4f0730. PMID 33027004. S2CID 210164756.
  17. Arief-Ang, I.B.; Salim, F.D.; Hamilton, M. (2017-11-08). DA-HOC: semi-supervised domain adaptation for room occupancy prediction using CO2 sensor data. 4th ACM International Conference on Systems for Energy-Efficient Built Environments (BuildSys). Delft, Netherlands. pp. 1–10. doi:10.1145/3137133.3137146. ISBN 978-1-4503-5544-5.
  18. Arief-Ang, I.B.; Hamilton, M.; Salim, F.D. (2018-12-01). "A Scalable Room Occupancy Prediction with Transferable Time Series Decomposition of CO2 Sensor Data". ACM Transactions on Sensor Networks. 14 (3–4): 21:1–21:28. doi:10.1145/3217214. S2CID 54066723.
  19. Ganin, Yaroslav; Ustinova, Evgeniya; Ajakan, Hana; Germain, Pascal; Larochelle, Hugo; Laviolette, François; Marchand, Mario; Lempitsky, Victor (2016). "Domain-Adversarial Training of Neural Networks" (PDF). Journal of Machine Learning Research. 17: 1–35.
  20. Wulfmeier, Markus; Bewley, Alex; Posner, Ingmar (2017). "Addressing Appearance Change in Outdoor Robotics with Adversarial Domain Adaptation". arXiv:1703.01461 [cs.RO].
  21. de Mathelin, Antoine; Deheeger, François; Richard, Guillaume; Mougeot, Mathilde; Vayatis, Nicolas (2020). "ADAPT: Awesome Domain Adaptation Python Toolbox".
  22. Long, Mingsheng; Jiang, Junguang; Fu, Bo (2020). "Transfer-learning-library".
  23. Yan, Ke (2016). "Domain adaptation toolbox".