Exploration-exploitation dilemma


The exploration-exploitation dilemma, also known as the explore-exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. [1] [2] It describes the balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which may be incomplete or misleading), while exploration involves trying out new options that may lead to better outcomes in the future at the expense of an exploitation opportunity. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making problems whose goal is to maximize long-term benefits. [3]


Application in machine learning

In the context of machine learning, the exploration-exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. [4] The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance.

Multi-armed bandit methods

The multi-armed bandit (MAB) problem is a classic example of the tradeoff, and many methods have been developed for it, such as epsilon-greedy, Thompson sampling, and the upper confidence bound (UCB). See the page on multi-armed bandits for details.
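
As an illustration of one of these methods, the following is a minimal sketch of the UCB1 rule on a toy Bernoulli bandit. The arm success probabilities, horizon, and exploration constant here are arbitrary choices for the example, not part of the method's definition.

```python
import math
import random

# Hypothetical true success probabilities of each arm (unknown to the agent).
true_probs = [0.2, 0.5, 0.7]

counts = [0] * len(true_probs)    # times each arm has been pulled
values = [0.0] * len(true_probs)  # running mean reward of each arm

for t in range(1, 1001):
    if 0 in counts:
        # Pull every arm once before applying the UCB rule.
        arm = counts.index(0)
    else:
        # UCB1: mean reward plus a confidence bonus that shrinks with visits.
        arm = max(range(len(true_probs)),
                  key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if random.random() < true_probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update

print(counts)  # most pulls should concentrate on the best arm
```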

In RL settings more complex than the MAB problem, the agent can treat each choice as a MAB whose payoff is the expected future reward. For example, with the epsilon-greedy method, the agent usually "pulls the best lever" by picking the action with the highest predicted expected reward (exploit), but with probability epsilon it picks a random action instead (explore). Monte Carlo tree search, for example, uses a variant of the UCB method. [5]
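
A minimal sketch of epsilon-greedy action selection over a table of predicted action values; the value estimates and the value of epsilon below are placeholders.

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """Pick the action with the highest predicted value (exploit),
    but with probability epsilon pick a uniformly random action (explore)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Example: hypothetical action-value estimates for three actions.
print(epsilon_greedy([1.2, 0.4, 2.9], epsilon=0.1))
```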

Exploration problems

There are some problems that make exploration difficult. [5] One is sparse reward: in environments such as the Atari game Montezuma's Revenge, rewarding events are so rare that random exploration almost never stumbles upon them. [6] Another is the noisy TV problem: an agent that is rewarded for seeking unpredictable outcomes can become fixated on a source of irreducible randomness (a "noisy TV"), which keeps producing prediction error without yielding any useful information.

Exploration reward

This section is based on [8].

Exploration reward (also called exploration bonus) methods convert the exploration-exploitation dilemma into a balance of exploitations. That is, instead of trying to get the agent to balance exploration and exploitation, exploration is simply treated as another form of exploitation, and the agent attempts to maximize the sum of rewards from exploration and exploitation. The exploration reward can be treated as a form of intrinsic reward. [9]

We write these as $r^i_t$ and $r^e_t$, meaning the intrinsic and extrinsic rewards at time step $t$.

However, the exploration reward differs from the extrinsic reward in two regards: it is generated by the agent itself rather than supplied by the environment, and it is non-stationary, shrinking as the agent becomes more familiar with the environment.

Count-based exploration uses $N_t(s)$, the number of visits to a state $s$ during the time-steps $1, \dots, t$, to calculate the exploration reward. This is only possible in small and discrete state spaces. Density-based exploration extends count-based exploration by using a density model $\rho(s)$. The idea is that, if a state has been visited, then nearby states are also counted as partly visited. [10]
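
A minimal sketch of a count-based exploration bonus for a small discrete state space; the bonus form $1/\sqrt{N_t(s)}$ and the weight beta are common choices rather than the only ones, and the state labels are hypothetical.

```python
from collections import defaultdict
import math

visit_counts = defaultdict(int)  # N(s): number of visits to each discrete state
beta = 0.1                       # weight of the intrinsic reward (assumed)

def reward_with_exploration_bonus(state, extrinsic_reward):
    """Total reward = extrinsic reward + count-based exploration bonus."""
    visit_counts[state] += 1
    intrinsic_reward = 1.0 / math.sqrt(visit_counts[state])  # decays with visits
    return extrinsic_reward + beta * intrinsic_reward

# The first visit to a state earns a larger bonus than later visits.
print(reward_with_exploration_bonus("s0", 0.0))
print(reward_with_exploration_bonus("s0", 0.0))
```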

In maximum entropy exploration, the entropy of the agent's policy is included as a term in the intrinsic reward. That is, $r_t = r^e_t + \beta H(\pi(\cdot \mid s_t))$. [11]
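
A small sketch of the idea, assuming a discrete action distribution; the weight beta is an arbitrary coefficient for the example.

```python
import math

def entropy(policy_probs):
    """Shannon entropy of a discrete action distribution."""
    return -sum(p * math.log(p) for p in policy_probs if p > 0)

def shaped_reward(extrinsic_reward, policy_probs, beta=0.01):
    # Maximum-entropy exploration: the policy's entropy is added as an intrinsic term.
    return extrinsic_reward + beta * entropy(policy_probs)

# A uniform policy earns a larger entropy bonus than a near-deterministic one.
print(shaped_reward(1.0, [1/3, 1/3, 1/3]))
print(shaped_reward(1.0, [0.98, 0.01, 0.01]))
```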

Prediction-based

This section is based on [8].

The forward dynamics model is a function $f$ that predicts the next state from the current state and the current action: $f(s_t, a_t) \approx s_{t+1}$. The forward dynamics model is trained as the agent plays, becoming better at predicting state transitions for state-action pairs that have been performed many times.

A forward dynamics model can define an exploration reward by $r^i_t = \|f(s_t, a_t) - s_{t+1}\|^2$. That is, the reward is the squared error of the prediction compared to reality. This rewards the agent for performing state-action pairs that have not been performed many times. It is, however, susceptible to the noisy TV problem.
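
A minimal PyTorch sketch of a prediction-error exploration reward; the network sizes, optimizer, and toy state and action dimensions are assumptions, and the same transition is used both to compute the reward and to train the model.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2  # assumed toy dimensions

# Forward dynamics model: predicts the next state from (state, action).
dynamics = nn.Sequential(
    nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))
optimizer = torch.optim.Adam(dynamics.parameters(), lr=1e-3)

def exploration_reward(state, action, next_state):
    """Intrinsic reward = squared prediction error of the dynamics model.
    The model is also trained on the same transition, so the error (and hence
    the reward) shrinks for state-action pairs that have been seen many times."""
    prediction = dynamics(torch.cat([state, action]))
    loss = ((prediction - next_state) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

s, a, s_next = torch.randn(state_dim), torch.randn(action_dim), torch.randn(state_dim)
print(exploration_reward(s, a, s_next))
```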

The dynamics model can also be run in latent space. That is, $f(\phi(s_t), a_t) \approx \phi(s_{t+1})$ for some featurizer $\phi$. The featurizer can be the identity function (i.e. $\phi(s) = s$), a randomly generated network, the encoder half of a variational autoencoder, etc. A good featurizer improves forward dynamics exploration. [12]

The Intrinsic Curiosity Module (ICM) method trains a forward dynamics model and a featurizer simultaneously. The featurizer is trained through an inverse dynamics model, a function that predicts the current action from the features of the current and next states: $g(\phi(s_t), \phi(s_{t+1})) \approx a_t$. Optimizing the inverse dynamics improves both the inverse dynamics model and the featurizer, and the improved featurizer in turn improves the forward dynamics model, which improves the agent's exploration. [13]
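
A simplified PyTorch sketch of the ICM idea. It assumes a continuous action vector and uses a squared-error inverse loss (the original work uses a softmax over discrete actions), and the network sizes, toy dimensions, and loss weighting are illustrative.

```python
import torch
import torch.nn as nn

state_dim, action_dim, feat_dim = 4, 2, 8  # assumed toy dimensions

featurizer = nn.Sequential(nn.Linear(state_dim, feat_dim), nn.ReLU())
forward_model = nn.Linear(feat_dim + action_dim, feat_dim)   # predicts phi(s_{t+1})
inverse_model = nn.Linear(2 * feat_dim, action_dim)          # predicts a_t
params = (list(featurizer.parameters()) + list(forward_model.parameters())
          + list(inverse_model.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

def icm_step(s, a, s_next, beta=0.2):
    phi, phi_next = featurizer(s), featurizer(s_next)
    # Forward loss: error predicting the next feature vector (= exploration reward).
    # Features are detached here so the forward loss does not train the featurizer.
    forward_loss = ((forward_model(torch.cat([phi.detach(), a]))
                     - phi_next.detach()) ** 2).sum()
    # Inverse loss: error predicting the action; this is what trains the featurizer.
    inverse_loss = ((inverse_model(torch.cat([phi, phi_next])) - a) ** 2).sum()
    loss = beta * forward_loss + (1 - beta) * inverse_loss  # illustrative weighting
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return forward_loss.item()  # used as the exploration reward

s, a, s_next = torch.randn(state_dim), torch.randn(action_dim), torch.randn(state_dim)
print(icm_step(s, a, s_next))
```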

The Random Network Distillation (RND) method attempts to avoid the noisy TV problem by teacher-student distillation. Instead of a forward dynamics model, it uses two networks: a teacher $f$ and a student $\hat{f}$. The teacher is fixed, and the student is trained to minimize $\|\hat{f}(s) - f(s)\|^2$ on the states $s$ that the agent visits. As a state is visited more often, the student becomes better at predicting the teacher on it. Meanwhile, the prediction error also serves as an exploration reward for the agent, so the agent learns to perform actions that lead to states with high prediction error. Thus the student network attempts to minimize the prediction error while the agent attempts to maximize it, which results in exploration.

The states are normalized by subtracting a running average and dividing by a running standard deviation, which is necessary because the teacher model is frozen. The rewards are normalized by dividing by a running standard deviation. [7] [14]
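
A minimal PyTorch sketch of RND; the network sizes are arbitrary, and the observation and reward normalization described above is omitted for brevity.

```python
import torch
import torch.nn as nn

state_dim, out_dim = 4, 16  # assumed toy dimensions

teacher = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
student = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is a fixed, randomly initialized network
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def rnd_reward(state):
    """Intrinsic reward = prediction error of the student against the frozen teacher.
    The student is trained on visited states, so often-visited states yield small
    errors (low reward) while novel states yield large errors (high reward)."""
    error = ((student(state) - teacher(state)) ** 2).mean()
    optimizer.zero_grad()
    error.backward()
    optimizer.step()
    return error.item()

print(rnd_reward(torch.randn(state_dim)))
```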

Exploration by disagreement trains an ensemble of forward dynamics models, each on a random subset of all $(s_t, a_t, s_{t+1})$ tuples. The exploration reward is the variance of the models' predictions. [15]
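
A minimal PyTorch sketch of the disagreement reward; training of the ensemble members on their respective data subsets is omitted, and the ensemble size and architectures are arbitrary.

```python
import torch
import torch.nn as nn

state_dim, action_dim, n_models = 4, 2, 5  # assumed toy sizes

# Ensemble of forward dynamics models; each would normally be trained on its own
# random subset of (s_t, a_t, s_{t+1}) tuples (training loop omitted here).
ensemble = [nn.Sequential(nn.Linear(state_dim + action_dim, 32), nn.ReLU(),
                          nn.Linear(32, state_dim)) for _ in range(n_models)]

def disagreement_reward(state, action):
    """Intrinsic reward = variance of the ensemble's next-state predictions."""
    with torch.no_grad():
        preds = torch.stack([m(torch.cat([state, action])) for m in ensemble])
    return preds.var(dim=0).mean().item()

print(disagreement_reward(torch.randn(state_dim), torch.randn(action_dim)))
```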

Noise

For neural-network-based agents, the NoisyNet method replaces some of the network's modules with noisy versions. That is, some network parameters are random variables drawn from a probability distribution, and the parameters of that distribution are themselves learnable. [16] For example, in a linear layer $y = Wx + b$, both $W$ and $b$ are sampled from Gaussian distributions at every step, and the parameters of those Gaussians are learned via the reparameterization trick. [17]
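
A simplified PyTorch sketch of a noisy linear layer. It samples independent Gaussian noise for every parameter (the NoisyNet paper also describes a cheaper factorised-noise variant), and the initialization constants are illustrative.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer whose weight and bias are sampled from learned Gaussians:
    y = (mu_W + sigma_W * eps_W) x + (mu_b + sigma_b * eps_b)."""
    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        bound = 1 / math.sqrt(in_features)
        self.mu_w = nn.Parameter(
            torch.empty(out_features, in_features).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(
            torch.full((out_features, in_features), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.zeros(out_features))
        self.sigma_b = nn.Parameter(torch.full((out_features,), sigma0 * bound))

    def forward(self, x):
        # Fresh noise on every forward pass; gradients flow to mu and sigma
        # through the reparameterization trick.
        eps_w = torch.randn_like(self.sigma_w)
        eps_b = torch.randn_like(self.sigma_b)
        weight = self.mu_w + self.sigma_w * eps_w
        bias = self.mu_b + self.sigma_b * eps_b
        return F.linear(x, weight, bias)

layer = NoisyLinear(4, 2)
x = torch.randn(4)
print(layer(x))  # outputs differ between calls because the weights are resampled
print(layer(x))
```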

Related Research Articles

Reinforcement learning (RL) is an interdisciplinary area of machine learning and optimal control concerned with how an intelligent agent should take actions in a dynamic environment in order to maximize a reward signal. Reinforcement learning is one of the three basic machine learning paradigms, alongside supervised learning and unsupervised learning.

Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.

Q-learning is a model-free reinforcement learning algorithm that teaches an agent to assign values to each action it might take, conditioned on the agent being in a particular state. It does not require a model of the environment, and it can handle problems with stochastic transitions and rewards without requiring adaptations.


In probability theory and machine learning, the multi-armed bandit problem is a problem in which a decision maker iteratively selects one of multiple fixed choices when the properties of each choice are only partially known at the time of allocation, and may become better understood as time passes. A fundamental aspect of bandit problems is that choosing an arm does not affect the properties of the arm or other arms.


An autoencoder is a type of artificial neural network used to learn efficient codings of unlabeled data. An autoencoder learns two functions: an encoding function that transforms the input data, and a decoding function that recreates the input data from the encoded representation. The autoencoder learns an efficient representation (encoding) for a set of data, typically for dimensionality reduction, to generate lower-dimensional embeddings for subsequent use by other machine learning algorithms.

State–action–reward–state–action (SARSA) is an algorithm for learning a Markov decision process policy, used in the reinforcement learning area of machine learning. It was proposed by Rummery and Niranjan in a technical note with the name "Modified Connectionist Q-Learning" (MCQ-L). The alternative name SARSA, proposed by Rich Sutton, was only mentioned as a footnote.


Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that address the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.

Adversarial machine learning is the study of the attacks on machine learning algorithms, and of the defenses against such attacks. A survey from May 2020 exposes the fact that practitioners report a dire need for better protecting machine learning systems in industrial applications.

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process, which must be configured before the process starts.


A Tsetlin machine is an artificial intelligence algorithm based on propositional logic.

Deep reinforcement learning is a subfield of machine learning that combines reinforcement learning (RL) and deep learning. RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs and decide what actions to perform to optimize an objective. Deep reinforcement learning has been used for a diverse set of applications including but not limited to robotics, video games, natural language processing, computer vision, education, transportation, finance and healthcare.


A transformer is a deep learning architecture that was developed by researchers at Google and is based on the multi-head attention mechanism, which was proposed in the 2017 paper "Attention Is All You Need". Text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.


Multi-agent reinforcement learning (MARL) is a sub-field of reinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment. Each agent is motivated by its own rewards, and does actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complex group dynamics.

Intrinsic motivation in the study of artificial intelligence and robotics is a mechanism for enabling artificial agents to exhibit inherently rewarding behaviours such as exploration and curiosity, grouped under the same term in the study of psychology. Psychologists consider intrinsic motivation in humans to be the drive to perform an activity for inherent satisfaction – just for the fun or challenge of it.


MuZero is a computer program developed by artificial intelligence research company DeepMind to master games without knowing their rules. Its release in 2019 included benchmarks of its performance in go, chess, shogi, and a standard suite of Atari games. The algorithm uses an approach similar to AlphaZero. It matched AlphaZero's performance in chess and shogi, improved on its performance in Go, and improved on the state of the art in mastering a suite of 57 Atari games, a visually-complex domain.


In representation learning, knowledge graph embedding (KGE), also referred to as knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction.

Proximal policy optimization (PPO) is a reinforcement learning (RL) algorithm for training an intelligent agent's decision function to accomplish difficult tasks. PPO was developed by John Schulman in 2017, and had become the default RL algorithm at the US artificial intelligence company OpenAI. Since 2018, PPO has seen success in a wide variety of applications, such as controlling a robotic arm, beating professional players at Dota 2, and excelling in Atari games. Many experts called PPO the state of the art, due to its ability to strike a balance between performance and comprehension. Compared with other algorithms, the three main advantages of PPO are simplicity, stability, and sample efficiency.

In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of three major components: the forward process, the reverse process, and the sampling procedure. The goal of diffusion models is to learn a diffusion process for a given dataset, such that the process can generate new elements that are distributed similarly as the original dataset. A diffusion model models data as generated by a diffusion process, whereby a new datum performs a random walk with drift through the space of all possible data. A trained diffusion model can be sampled in many ways, with different efficiency and quality.


In machine learning, reinforcement learning from human feedback (RLHF) is a technique to align an intelligent agent with human preferences. It involves training a reward model to represent preferences, which can then be used to train other models through reinforcement learning.

Imitation learning is a paradigm in reinforcement learning, where an agent learns to perform a task by supervised learning from expert demonstrations. It is also called learning from demonstration and apprenticeship learning.

References

  1. Berger-Tal, Oded; Nathan, Jonathan; Meron, Ehud; Saltz, David (22 April 2014). "The Exploration-Exploitation Dilemma: A Multidisciplinary Framework". PLOS ONE. 9 (4): e95693. Bibcode:2014PLoSO...995693B. doi:10.1371/journal.pone.0095693. PMC 3995763. PMID 24756026.
  2. Rhee, Mooweon; Kim, Tohyun (2018). "Exploration and Exploitation". The Palgrave Encyclopedia of Strategic Management. London: Palgrave Macmillan UK. pp. 543–546. doi:10.1057/978-1-137-00772-8_388. ISBN 978-0-230-53721-7.
  3. Fruit, R. (2019). Exploration-exploitation dilemma in Reinforcement Learning under various form of prior knowledge (Doctoral dissertation, Université de Lille 1, Sciences et Technologies; CRIStAL UMR 9189).
  4. Richard S. Sutton; Andrew G. Barto (2020). Reinforcement Learning: An Introduction (2nd edition). http://incompleteideas.net/book/the-book-2nd.html
  5. Weng, Lilian (2018-01-23). "The Multi-Armed Bandit Problem and Its Solutions". lilianweng.github.io. Retrieved 2024-09-15.
  6. Salimans, Tim; Chen, Richard (2018-07-04). "Learning Montezuma's Revenge from a Single Demonstration". OpenAI Blog. arXiv:1812.03381. Bibcode:2018arXiv181203381S. Retrieved 2019-03-01.
  7. Burda, Yuri; Edwards, Harrison; Storkey, Amos; Klimov, Oleg (2018-10-30). "Exploration by Random Network Distillation". arXiv:1810.12894 [cs.LG].
  8. Weng, Lilian (2020-06-07). "Exploration Strategies in Deep Reinforcement Learning". lilianweng.github.io. Retrieved 2024-09-15.
  9. Şimşek, Özgür; Barto, Andrew G. (2006-06-25). "An intrinsic reward mechanism for efficient exploration". Proceedings of the 23rd international conference on Machine learning - ICML '06. New York, NY, USA: Association for Computing Machinery. pp. 833–840. doi:10.1145/1143844.1143949. ISBN 978-1-59593-383-6.
  10. Bellemare, Marc G.; Srinivasan, Sriram; Ostrovski, Georg; Schaul, Tom; Saxton, David; Munos, Remi (2016-11-07). "Unifying Count-Based Exploration and Intrinsic Motivation". arXiv:1606.01868 [cs.AI].
  11. Hazan, Elad; Kakade, Sham; Singh, Karan; Soest, Abby Van (2019-05-24). "Provably Efficient Maximum Entropy Exploration". Proceedings of the 36th International Conference on Machine Learning. PMLR: 2681–2691.
  12. Burda, Yuri; Edwards, Harri; Pathak, Deepak; Storkey, Amos; Darrell, Trevor; Efros, Alexei A. (2018-08-13). "Large-Scale Study of Curiosity-Driven Learning". arXiv:1808.04355 [cs.LG].
  13. Pathak, Deepak; Agrawal, Pulkit; Efros, Alexei A.; Darrell, Trevor (2017-05-15). "Curiosity-driven Exploration by Self-supervised Prediction". arXiv:1705.05363 [cs.LG].
  14. "Reinforcement Learning with Prediction-Based Rewards". OpenAI Blog. 2018-10-31. Retrieved 2019-03-01.
  15. Pathak, Deepak; Gandhi, Dhiraj; Gupta, Abhinav (2019-06-10). "Self-Supervised Exploration via Disagreement". arXiv:1906.04161 [cs.LG].
  16. Fortunato, Meire; Azar, Mohammad Gheshlaghi; Piot, Bilal; Menick, Jacob; Osband, Ian; Graves, Alex; Mnih, Vlad; Munos, Remi; Hassabis, Demis; Pietquin, Olivier; Blundell, Charles; Legg, Shane (2017). "Noisy Networks for Exploration". arXiv:1706.10295 [cs.LG].
  17. Kingma, Durk P; Salimans, Tim; Welling, Max (2015). "Variational Dropout and the Local Reparameterization Trick". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.