Proximal policy optimization

Proximal policy optimization (PPO) is a reinforcement learning (RL) algorithm for training an intelligent agent. Specifically, it is a policy gradient method, often used for deep RL when the policy network is very large.

The predecessor to PPO, Trust Region Policy Optimization (TRPO), was published in 2015. It addressed the instability issue of another algorithm, the Deep Q-Network (DQN), by using the trust region method to limit the KL divergence between the old and new policies. However, enforcing the trust region requires the Hessian matrix (a matrix of second derivatives), which is computationally inefficient for large-scale problems. [1]

PPO was published in 2017. It is essentially an approximation of TRPO that does not require computing the Hessian. The KL divergence constraint is approximated by simply clipping the probability ratio that appears in the policy objective. [2]

Since 2018, PPO has been the default RL algorithm at OpenAI. [3] PPO has been applied to many areas, such as controlling a robotic arm, beating professional players at Dota 2 (OpenAI Five), and playing Atari games. [4]

TRPO

TRPO, the predecessor of PPO, is an on-policy algorithm. It can be used for environments with either discrete or continuous action spaces.

In outline, each TRPO iteration proceeds as follows (paraphrasing the cited pseudocode): [5]

  1. Collect a set of trajectories by running the current policy in the environment.
  2. Compute rewards-to-go and advantage estimates using the current value function as the baseline.
  3. Estimate the policy gradient from the collected samples.
  4. Use the conjugate gradient algorithm to approximate the natural gradient direction, which avoids forming the Hessian of the sample KL divergence explicitly.
  5. Update the policy with a backtracking line search along that direction, so that the surrogate objective improves while the KL divergence constraint is satisfied.
  6. Fit the value function by regression on the observed returns.

PPO

In outline, each PPO iteration proceeds as follows (paraphrasing the cited pseudocode): [6]

  1. Collect a set of trajectories by running the current policy in the environment.
  2. Compute rewards-to-go and advantage estimates using the current value function as the baseline.
  3. Update the policy by maximizing the clipped surrogate objective (described below), typically with several epochs of minibatch stochastic gradient ascent (for example, using the Adam optimizer).
  4. Fit the value function by regression on the observed returns.

Like all policy gradient methods, PPO trains an RL agent whose actions are determined by a differentiable policy function, updating the policy parameters by gradient ascent.

Intuitively, a policy gradient method takes small policy update steps, so the agent can reach higher and higher rewards in expectation.

Policy gradient methods may be unstable: a step size that is too large may push the policy in a suboptimal direction from which there is little possibility of recovery, while a step size that is too small lowers overall efficiency.

To address this instability, PPO uses a clip function that prevents the policy update from being too large, so that larger step sizes may be used without destabilizing the gradient ascent process. [7]
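
For concreteness, here is a minimal sketch of one unconstrained policy-gradient ascent step in PyTorch (the network architecture, learning rate, and the use of returns as the weighting are assumptions of the example, not prescribed by PPO); the clipped objective described below replaces this unconstrained update:

```python
import torch

# Hypothetical small policy network for a discrete-action environment.
policy_net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2)
)
optimizer = torch.optim.Adam(policy_net.parameters(), lr=3e-4)

def policy_gradient_step(states, actions, returns):
    """One REINFORCE-style ascent step: increase the log-probability of
    actions in proportion to the return that followed them."""
    dist = torch.distributions.Categorical(logits=policy_net(states))
    loss = -(dist.log_prob(actions) * returns).mean()  # negate so a minimizer ascends
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example call with dummy data for a batch of 8 transitions.
policy_gradient_step(torch.randn(8, 4), torch.randint(0, 2, (8,)), torch.randn(8))
```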

Basic concepts

To begin the PPO training process, the agent is set in an environment to perform actions based on its current input. In the early phase of training, the agent can freely explore solutions and keep track of the results. Later, after a certain number of transition samples and policy updates, the agent selects an action by randomly sampling from the probability distribution generated by the policy network. [8] The actions that are most likely to be beneficial have the highest probability of being drawn in this random sample. After the agent arrives at a different scenario (a new state) by acting, it receives a positive or negative reward. The objective of the agent is to maximize the cumulative reward signal across sequences of states, known as episodes.
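
A minimal sketch of this action-selection step (the observation size and network shape are assumptions of the example):

```python
import torch

# Hypothetical policy network mapping a 4-dimensional observation to action logits.
policy_net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2)
)

state = torch.randn(4)                                   # placeholder observation
dist = torch.distributions.Categorical(logits=policy_net(state))
action = dist.sample()                                   # likelier actions are drawn more often
log_prob = dist.log_prob(action)                         # stored for the ratio computation later
print(action.item(), log_prob.item())
```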

Policy gradient laws: the advantage function

The advantage function (denoted as $A$) is central to PPO, as it tries to answer the question of whether a specific action of the agent is better or worse than some other possible action in a given state. By definition, the advantage function is an estimate of the relative value of a selected action. If its output is positive, the action in question is better than the average return in that state, so the probability of selecting that specific action will increase. The opposite holds for a negative advantage output. [1]

The advantage function can be defined as $\hat{A}_t = R_t - b(s_t)$, where $R_t$ is the discounted sum of rewards (the total weighted reward for the completion of an episode) and $b(s_t)$ is the baseline estimate. [9] [1] Since the advantage function is calculated after the completion of an episode, the program records the actual outcome of the episode; fitting the baseline to these recorded returns is essentially a supervised regression problem. The baseline estimate comes from the value function, which outputs the expected discounted return of an episode starting from the current state. In the PPO algorithm, the baseline estimate is noisy (it has some variance), because it is also produced by a neural network, like the policy function itself. With $R_t$ and $b(s_t)$ computed, the advantage is calculated by subtracting the baseline estimate from the actual discounted return. [10] If $\hat{A}_t > 0$, the actual return of the action is better than the expected return from experience; if $\hat{A}_t < 0$, the actual return is worse.
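
A small sketch of this calculation in plain Python (the discount factor and the example numbers are arbitrary; in practice the value estimates would come from the critic network):

```python
def discounted_returns(rewards, gamma=0.99):
    """Discounted sum of rewards R_t for every timestep of one finished episode."""
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

def advantages(rewards, values, gamma=0.99):
    """A_t = R_t - V(s_t): actual discounted return minus the baseline estimate."""
    return [R - v for R, v in zip(discounted_returns(rewards, gamma), values)]

# Example: three-step episode with baseline (value) estimates for each state.
print(advantages([1.0, 0.0, 2.0], values=[1.5, 1.0, 1.8]))
```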

Ratio function

In PPO, the ratio function ($r_t(\theta)$) computes the probability of selecting action $a_t$ in state $s_t$ under the current policy network, divided by the probability of the same action under the old policy. In other words:

$r_t(\theta) = \dfrac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}$

Hence, this ratio function can easily estimate the divergence between old and current policies. [11] [7]
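In implementations, the ratio is typically computed from stored log-probabilities rather than raw probabilities, which is numerically safer; a sketch with made-up numbers:

```python
import torch

# Log-probabilities of the taken actions under the old (data-collecting) policy,
# recorded at rollout time, and under the current policy being optimized.
old_log_probs = torch.tensor([-0.7, -1.2, -0.3])
new_log_probs = torch.tensor([-0.6, -1.5, -0.3])

ratio = torch.exp(new_log_probs - old_log_probs)   # pi_new(a|s) / pi_old(a|s)
print(ratio)   # values near 1 mean the two policies almost agree on these actions
```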

PPO objective function

The objective function of PPO is the clipped surrogate objective

$L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\left[ \min\left( r_t(\theta)\, \hat{A}_t,\; \operatorname{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right) \hat{A}_t \right) \right],$

where the expectation operator $\hat{\mathbb{E}}_t$ denotes an empirical average over batches of sampled timesteps from collected trajectories. The expectation is taken over the minimum of two terms:

  1. $r_t(\theta)\,\hat{A}_t$: the product of the ratio function and the advantage function introduced in TRPO, also known as the normal policy gradient objective. [12]
  2. $\operatorname{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t$: the policy ratio is first clipped to the range $[1-\epsilon,\, 1+\epsilon]$; generally, $\epsilon$ is defined to be 0.2. Then, as before, it is multiplied by the advantage.

The fundamental intuition behind PPO is the same as that of TRPO: conservatism. Clipping results in a conservative advantage estimate of the new policy. The reasoning is that if an agent makes large changes on the basis of high advantage estimates, its policy update will be large and unstable, and the policy may drift far from a good one with little possibility of recovery. [13] There are two common cases for the clipping function: when an action under the new policy happens to be a good choice according to the advantage function, the clipping limits how much credit the new policy can be given for up-weighting that good action; conversely, when an action is judged to be bad, the clipping limits how much the new policy can be credited for down-weighting that bad action. [14] Consequently, the clipping mechanism removes the incentive to push the probability ratio far outside the interval $[1-\epsilon,\, 1+\epsilon]$ in either direction. The advantage of this method is that the objective can be optimized directly with first-order gradient methods, as opposed to enforcing TRPO's strict KL divergence constraint, making the implementation faster and simpler.

After computing the clipped surrogate objective, the agent is left with two weighted terms: one using the unclipped ratio and one using the clipped ratio. By taking the minimum of the two, the final objective becomes a lower (pessimistic) bound on what the agent knows is possible. [14] In other words, the minimum ensures that the agent makes the safest possible update.
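
Here is a minimal sketch of the clipped surrogate loss in PyTorch, following the description above (negating the objective lets a standard minimizer perform the gradient ascent; the tensor values are placeholders):

```python
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, epsilon=0.2):
    """Negative clipped surrogate objective: minimizing this ascends L^CLIP."""
    ratio = torch.exp(new_log_probs - old_log_probs)            # r_t(theta)
    unclipped = ratio * advantages                               # r_t * A_t
    clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon) * advantages
    return -torch.min(unclipped, clipped).mean()                 # pessimistic bound

# Example usage with dummy values for one small batch.
loss = ppo_clip_loss(
    new_log_probs=torch.tensor([-0.6, -1.5, -0.3]),
    old_log_probs=torch.tensor([-0.7, -1.2, -0.3]),
    advantages=torch.tensor([0.5, -1.0, 2.0]),
)
print(loss)
```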

Advantages

Simplicity

PPO approximates what TRPO does, with considerably less computation. It uses first-order optimization (the clipped surrogate objective) to constrain the policy update, whereas TRPO enforces a KL divergence constraint using second-order optimization. Compared to TRPO, PPO is relatively easy to implement and requires fewer computational resources and less time. Therefore, it is cheaper and more efficient to use PPO on large-scale problems. [15]

Stability

While other RL algorithms require extensive hyperparameter tuning, PPO comparatively does not require as much (for example, an epsilon value of 0.2 works in most cases). [16] Also, PPO does not require sophisticated optimization techniques. It can be implemented easily with standard deep learning frameworks and generalizes to a broad range of tasks.

Sample efficiency

Sample efficiency indicates how much data an algorithm needs to train a good policy. PPO achieves sample efficiency through its use of a surrogate objective: the clipped surrogate keeps the new policy from moving too far from the old one, regularizing the policy update and allowing the collected training data to be reused across multiple update epochs. Sample efficiency is especially valuable for complicated and high-dimensional tasks, where data collection and computation can be costly. [17]
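
A rough sketch of this data reuse (the network shape, batch sizes, epoch count, and dummy rollout are all assumptions of the example):

```python
import torch

policy_net = torch.nn.Sequential(torch.nn.Linear(4, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy_net.parameters(), lr=3e-4)

# One collected rollout (dummy data standing in for real transitions).
states = torch.randn(256, 4)
actions = torch.randint(0, 2, (256,))
advantages = torch.randn(256)
with torch.no_grad():
    old_log_probs = torch.distributions.Categorical(logits=policy_net(states)).log_prob(actions)

# The same batch is reused for several optimization epochs; the clip (1 +/- 0.2)
# keeps the updated policy close to the policy that generated the data.
for epoch in range(4):
    for idx in torch.randperm(256).split(64):
        dist = torch.distributions.Categorical(logits=policy_net(states[idx]))
        ratio = torch.exp(dist.log_prob(actions[idx]) - old_log_probs[idx])
        clipped = torch.clamp(ratio, 0.8, 1.2) * advantages[idx]
        loss = -torch.min(ratio * advantages[idx], clipped).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```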

References

  1. Schulman, John; Levine, Sergey; Moritz, Philipp; Jordan, Michael; Abbeel, Pieter (2015-07-06). "Trust region policy optimization". Proceedings of the 32nd International Conference on Machine Learning - Volume 37. ICML'15. Lille, France: JMLR.org: 1889–1897.
  2. Schulman, John; Wolski, Filip; Dhariwal, Prafulla; Radford, Alec; Klimov, Oleg (2017-08-28), Proximal Policy Optimization Algorithms, arXiv: 1707.06347
  3. OpenAI, "Proximal Policy Optimization" Available at: https://openai.com/research/openai-baselines-ppo
  4. Arxiv Insights. "An introduction to Policy Gradient methods," YouTube, Oct 1st, 2018 [Video file]. Available: https://www.youtube.com/watch?v=5P7I-xPq8u8
  5. "Trust Region Policy Optimization — Spinning Up documentation". spinningup.openai.com. Retrieved 2025-01-21.
  6. "Proximal Policy Optimization — Spinning Up documentation". spinningup.openai.com. Retrieved 2025-01-21.
  7. T. Simonini, "Proximal Policy Optimization (PPO)," Hugging Face, https://huggingface.co/blog/deep-rl-ppo
  8. "A Beginner's Guide to deep Reinforcement learning," Pathmind. https://wiki.pathmind.com/deep-reinforcement-learning#reward
  9. OpenAI, "Part 1: Key concepts in RL," Part 1: Key Concepts in RL - Spinning Up documentation, https://spinningup.openai.com/en/latest/spinningup/rl_intro.html
  10. Rohitkumar, "PPO (Proximal Policy Optimization) explained with code examples in PyTorch and TensorFlow," PlainSwipe, https://plainswipe.com/ppo-proximal-policy-optimization-explained-with-code-examples-in-pytorch-and-tensorflow/
  11. W. Heeswijk, "Proximal Policy Optimization (PPO) explained," Medium, https://towardsdatascience.com/proximal-policy-optimization-ppo-explained-abed1952457b
  12. Edan Meyer. "Proximal Policy Optimization Explained," YouTube, May 20th, 2021 [Video file]. Available: https://www.youtube.com/watch?v=HrapVFNBN64
  13. C. M. Wild (Jul 9, 2018). "The Pursuit of (Robotic) Happiness: How TRPO and PPO Stabilize Policy Gradient Methods". towardsdatascience.com.
  14. Zheng, R.; Dou, S.; Gao, S.; Hua, Y.; Shen, W.; Wang, B. (2023). Secrets of RLHF in Large Language Models Part I: PPO. arXiv:2307.04964
  15. J. Nocedal and Y. Nesterov, "Natural, trust region and proximal policy optimization," TransferLab, https://transferlab.ai/blog/trpo-and-ppo/
  16. J. Hui, "RL - reinforcement learning algorithms comparison," Medium, https://jonathan-hui.medium.com/rl-reinforcement-learning-algorithms-comparison-76df90f180cf/
  17. XiaoYang-ElegantRL, "ElegantRL: Mastering PPO Algorithms - towards Data Science," Medium, Nov. 23, 2022. [Online]. Available: https://towardsdatascience.com/elegantrl-mastering-the-ppo-algorithm-part-i-9f36bc47b791