Stochastic approximation

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.

In a nutshell, stochastic approximation algorithms deal with a function of the form $f(\theta) = \operatorname E_\xi [F(\theta,\xi)]$, which is the expected value of a function depending on a random variable $\xi$. The goal is to recover properties of such a function $f$ without evaluating it directly. Instead, stochastic approximation algorithms use random samples of $F(\theta,\xi)$ to efficiently approximate properties of $f$ such as zeros or extrema.

Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with big data. These applications range from stochastic optimization methods and algorithms, to online forms of the EM algorithm, reinforcement learning via temporal differences, deep learning, and others. [1] Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory. [2]

The earliest, and prototypical, algorithms of this kind are the Robbins–Monro and Kiefer–Wolfowitz algorithms introduced respectively in 1951 and 1952.

Robbins–Monro algorithm

The Robbins–Monro algorithm, introduced in 1951 by Herbert Robbins and Sutton Monro, [3] presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a function $M(\theta)$, and a constant $\alpha$, such that the equation $M(\theta) = \alpha$ has a unique root at $\theta^*$. It is assumed that while we cannot directly observe the function $M(\theta)$, we can instead obtain measurements of the random variable $N(\theta)$ where $\operatorname E[N(\theta)] = M(\theta)$. The structure of the algorithm is to then generate iterates of the form:

$$\theta_{n+1} = \theta_n - a_n \bigl( N(\theta_n) - \alpha \bigr)$$

Here, $a_1, a_2, \dots$ is a sequence of positive step sizes. Robbins and Monro proved [3, Theorem 2] that $\theta_n$ converges in $L^2$ (and hence also in probability) to $\theta^*$, and Blum [4] later proved the convergence is actually with probability one, provided that:

  1. $N(\theta)$ is uniformly bounded,
  2. $M(\theta)$ is nondecreasing,
  3. $M'(\theta^*)$ exists and is positive, and
  4. the step sizes satisfy $\sum_{n=1}^{\infty} a_n = \infty$ and $\sum_{n=1}^{\infty} a_n^2 < \infty$.

A particular sequence of steps which satisfies these conditions, and was suggested by Robbins–Monro, has the form $a_n = a/n$, for $a > 0$. Other series are possible, but in order to average out the noise in $N(\theta)$, the above conditions must be met.
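
To make the recursion concrete, here is a minimal Python sketch of the Robbins–Monro iteration with $a_n = a/n$. The oracle `noisy_N`, the quantile-estimation example, and the constant `a=10.0` are illustrative assumptions, not part of the original presentation.

```python
import random

def robbins_monro(noisy_N, alpha, theta0, a=1.0, n_iter=100_000):
    """Minimal Robbins-Monro sketch: theta_{n+1} = theta_n - a_n * (N(theta_n) - alpha)."""
    theta = theta0
    for n in range(1, n_iter + 1):
        a_n = a / n                      # satisfies sum a_n = inf, sum a_n^2 < inf
        theta -= a_n * (noisy_N(theta) - alpha)
    return theta

# Hypothetical use: estimate the 0.95-quantile of a standard normal.
# Here M(theta) = P(X <= theta) and N(theta) is the indicator of one sample, so E[N(theta)] = M(theta).
quantile_oracle = lambda theta: 1.0 if random.gauss(0.0, 1.0) <= theta else 0.0
print(robbins_monro(quantile_oracle, alpha=0.95, theta0=0.0, a=10.0))  # roughly 1.645
```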

Example

Consider the problem of estimating the mean $\theta^*$ of a probability distribution from a stream of independent samples $X_1, X_2, \dots$.

Let $M(\theta) = \theta - \theta^*$; then the unique solution to $M(\theta) = 0$ is the desired mean $\theta^*$. The RM algorithm gives us

$$\theta_{n+1} = \theta_n - a_n (\theta_n - X_n).$$

This is equivalent to stochastic gradient descent with loss function $L(\theta) = \tfrac{1}{2} \| X - \theta \|^2$. It is also equivalent to a weighted average:

$$\theta_{n+1} = (1 - a_n)\theta_n + a_n X_n.$$

In general, if there exists some function $L$ such that $\nabla L(\theta) = M(\theta) - \alpha$, then the Robbins–Monro algorithm is equivalent to stochastic gradient descent with loss function $L(\theta)$. However, the RM algorithm does not require $L$ to exist in order to converge.
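
A short Python sketch of this example; the sample stream (normal with mean 3) is an assumed toy choice. With $a_n = 1/n$ the recursion reproduces the running sample mean exactly.

```python
import random

def rm_mean(samples):
    """RM recursion for the mean: theta_{n+1} = (1 - a_n) * theta_n + a_n * X_n with a_n = 1/n."""
    theta = 0.0
    for n, x in enumerate(samples, start=1):
        a_n = 1.0 / n
        theta = (1.0 - a_n) * theta + a_n * x   # running average of the samples seen so far
    return theta

# Hypothetical stream: normal samples with true mean 3.
stream = [random.gauss(3.0, 1.0) for _ in range(100_000)]
print(rm_mean(stream))  # close to 3
```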

Complexity results

  1. If $f(\theta)$ is twice continuously differentiable and strongly convex, and the minimizer of $f(\theta)$ belongs to the interior of $\Theta$, then the Robbins–Monro algorithm will achieve the asymptotically optimal convergence rate, with respect to the objective function, of $\operatorname E[f(\theta_n) - f^*] = O(1/n)$, where $f^*$ is the minimal value of $f(\theta)$ over $\theta \in \Theta$. [5] [6]
  2. Conversely, in the general convex case, where we lack both the assumption of smoothness and strong convexity, Nemirovski and Yudin [7] have shown that the asymptotically optimal convergence rate, with respect to the objective function values, is $O(1/\sqrt{n})$. They have also proven that this rate cannot be improved.

Subsequent developments and Polyak–Ruppert averaging

While the Robbins–Monro algorithm is theoretically able to achieve $O(1/n)$ under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily because the algorithm is very sensitive to the choice of the step size sequence, and the supposed asymptotically optimal step size policy can be quite harmful in the beginning. [6] [8]

Chung (1954) [9] and Fabian (1968) [10] showed that we would achieve the optimal convergence rate $O(1/\sqrt{n})$ with $a_n = \nabla^2 f(\theta^*)^{-1}/n$ (or $a_n = \frac{1}{n M'(\theta^*)}$). Lai and Robbins [11] [12] designed adaptive procedures to estimate $M'(\theta^*)$ such that $\theta_n$ has minimal asymptotic variance. However, the application of such optimal methods requires much a priori information which is hard to obtain in most situations. To overcome this shortfall, Polyak (1991) [13] and Ruppert (1988) [14] independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky [15] also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps and averaging of the iterates. The algorithm has the following structure:

$$\theta_{n+1} - \theta_n = a_n \bigl( \alpha - N(\theta_n) \bigr), \qquad \bar{\theta}_n = \frac{1}{n} \sum_{i=0}^{n-1} \theta_i.$$

The convergence of $\bar{\theta}_n$ to the unique root $\theta^*$ relies on the condition that the step sequence $\{a_n\}$ decreases sufficiently slowly. That is

A1) $a_n \rightarrow 0, \qquad \dfrac{a_n - a_{n+1}}{a_n} = o(a_n)$

Therefore, the sequence $a_n = n^{-\alpha}$ with $0 < \alpha < 1$ satisfies this restriction, but $\alpha = 1$ does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rate $O(1/\sqrt{n})$ yet with a more robust step size policy. [15] Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin [16] for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate $O(1/\sqrt{n})$.
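
A minimal Python sketch of the averaging idea under the assumptions above; the oracle, the exponent $2/3$ (any exponent in $(0,1)$ satisfies A1), and the constant `a=5.0` are illustrative choices, not prescribed by the sources.

```python
import random

def rm_polyak_ruppert(noisy_N, alpha, theta0, a=1.0, n_iter=100_000):
    """Robbins-Monro with Polyak-Ruppert averaging: longer steps a_n = a * n**(-2/3), report the iterate average."""
    theta = theta0
    theta_bar = 0.0
    for n in range(1, n_iter + 1):
        a_n = a * n ** (-2.0 / 3.0)           # decreases more slowly than 1/n (condition A1)
        theta -= a_n * (noisy_N(theta) - alpha)
        theta_bar += (theta - theta_bar) / n  # running average of the iterates
    return theta_bar

# Hypothetical use: same 0.95-quantile problem as in the earlier sketch.
quantile_oracle = lambda theta: 1.0 if random.gauss(0.0, 1.0) <= theta else 0.0
print(rm_polyak_ruppert(quantile_oracle, alpha=0.95, theta0=0.0, a=5.0))  # roughly 1.645
```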

A more general result is given in Chapter 11 of Kushner and Yin [17] by defining the interpolated time $t_n = \sum_{i=0}^{n-1} a_i$, interpolated process $\theta^n(\cdot)$ and interpolated normalized process $U^n(\cdot)$ as

$$\theta^n(t) = \theta_{n+i}, \quad U^n(t) = (\theta_{n+i} - \theta^*)/\sqrt{a_{n+i}} \quad \text{for } t \in [t_{n+i} - t_n, t_{n+i+1} - t_n),\ i \geq 0.$$

Let the iterate average be $\Theta_n = \frac{a_n}{t} \sum_{i=n}^{n + t/a_n - 1} \theta_i$ and the associated normalized error be $\hat{U}^n(t) = \frac{\sqrt{a_n}}{t} \sum_{i=n}^{n + t/a_n - 1} (\theta_i - \theta^*)$.

With assumption A1) and the following A2)

A2) There is a Hurwitz matrix $A$ and a symmetric and positive-definite matrix $\Sigma$ such that $\{U^n(\cdot)\}$ converges weakly to $U(\cdot)$, where $U(\cdot)$ is the stationary solution to
$$dU = A U \, dt + \Sigma^{1/2} \, dw,$$
where $w(\cdot)$ is a standard Wiener process.

satisfied, and define $\bar{V} = (A^{-1})' \Sigma (A')^{-1}$. Then for each $t$,

$$\hat{U}^n(t) \stackrel{\mathcal{D}}{\longrightarrow} \mathcal{N}(0, V_t), \quad \text{where } V_t = \bar{V}/t + O(1/t^2).$$

The success of the averaging idea is due to the time scale separation of the original sequence $\{\theta_n\}$ and the averaged sequence $\{\Theta_n\}$, with the time scale of the former being faster.

Application in stochastic optimization

Suppose we want to solve the following stochastic optimization problem

$$g(\theta^*) = \min_{\theta \in \Theta} \operatorname E[Q(\theta, X)],$$

where $g(\theta) = \operatorname E[Q(\theta, X)]$ is differentiable and convex; then this problem is equivalent to finding the root $\theta^*$ of $\nabla g(\theta) = 0$. Here $Q(\theta, X)$ can be interpreted as some "observed" cost as a function of the chosen $\theta$ and random effects $X$. In practice, it might be hard to get an analytical form of $\nabla g(\theta)$; the Robbins–Monro method manages to generate a sequence $(\theta_n)_{n \geq 0}$ to approximate $\theta^*$ if one can generate $(X_n)_{n \geq 0}$, in which the conditional expectation of $X_n$ given $\theta_n$ is exactly $\nabla g(\theta_n)$, i.e. $X_n$ is simulated from a conditional distribution defined by

$$\operatorname E[H(\theta, X) \mid \theta = \theta_n] = \nabla g(\theta_n).$$

Here $H(\theta, X)$ is an unbiased estimator of $\nabla g(\theta)$. If $X$ depends on $\theta$, there is in general no natural way of generating a random outcome $H(\theta, X)$ that is an unbiased estimator of the gradient. In some special cases when either IPA or likelihood ratio methods are applicable, one is able to obtain an unbiased gradient estimator $H(\theta, X)$. If $X$ is viewed as some "fundamental" underlying random process that is generated independently of $\theta$, and under some regularization conditions for derivative-integral interchange operations so that $\operatorname E\left[\frac{\partial}{\partial \theta} Q(\theta, X)\right] = \nabla g(\theta)$, then $H(\theta, X) = \frac{\partial}{\partial \theta} Q(\theta, X)$ gives the fundamental unbiased gradient estimate. However, for some applications we have to use finite-difference methods in which $H(\theta, X)$ has a conditional expectation close to $\nabla g(\theta)$ but not exactly equal to it.

We then define a recursion analogously to Newton's method in the deterministic algorithm:

$$\theta_{n+1} = \theta_n - \varepsilon_n H(\theta_n, X_{n+1}).$$
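
A minimal Python sketch of this recursion; the quadratic cost $Q(\theta, X) = (\theta - X)^2$, the sampling distribution, and the step constant are hypothetical choices for illustration.

```python
import random

def stochastic_gradient(H, sample_X, theta0, eps=1.0, n_iter=100_000):
    """Stochastic gradient recursion: theta_{n+1} = theta_n - eps_n * H(theta_n, X_{n+1})."""
    theta = theta0
    for n in range(1, n_iter + 1):
        eps_n = eps / n                       # Robbins-Monro step sizes
        theta -= eps_n * H(theta, sample_X())
    return theta

# Toy problem (an assumption for illustration): Q(theta, X) = (theta - X)^2 with X ~ N(3, 1),
# so g(theta) = E[Q(theta, X)] is minimized at theta* = 3 and H(theta, x) = 2 * (theta - x).
H = lambda theta, x: 2.0 * (theta - x)
sample_X = lambda: random.gauss(3.0, 1.0)
print(stochastic_gradient(H, sample_X, theta0=0.0))  # close to 3
```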

Convergence of the algorithm

The following result gives sufficient conditions on $\theta_n$ for the algorithm to converge: [18]

C1) $\varepsilon_n \geq 0, \ \forall n \geq 0.$

C2) $\sum_{n=0}^{\infty} \varepsilon_n = \infty$

C3) $\sum_{n=0}^{\infty} \varepsilon_n^2 < \infty$

C4) $|X_n| \leq B$, for a fixed bound $B$.

C5) $g(\theta)$ is strictly convex, i.e. $\inf_{\delta \leq |\theta - \theta^*| \leq 1/\delta} \langle \theta - \theta^*, \nabla g(\theta) \rangle > 0$, for every $0 < \delta < 1$.

Then $\theta_n$ converges to $\theta^*$ almost surely.

Here are some intuitive explanations about these conditions. Suppose $H(\theta_n, X_{n+1})$ is a uniformly bounded random variable. If C2) is not satisfied, i.e. $\sum_{n=0}^{\infty} \varepsilon_n < \infty$, then

$$\theta_n - \theta_0 = -\sum_{i=0}^{n-1} \varepsilon_i H(\theta_i, X_{i+1})$$

is a bounded sequence, so the iteration cannot converge to $\theta^*$ if the initial guess $\theta_0$ is too far away from $\theta^*$. As for C3), note that if $\theta_n$ converges to $\theta^*$ then

$$\theta_{n+1} - \theta_n = -\varepsilon_n H(\theta_n, X_{n+1}) \rightarrow 0, \quad \text{as } n \rightarrow \infty,$$

so we must have $\varepsilon_n \downarrow 0$, and the condition C3) ensures it. A natural choice would be $\varepsilon_n = 1/n$. Condition C5) is a fairly stringent condition on the shape of $g(\theta)$; it gives the search direction of the algorithm.

Example (where the stochastic gradient method is appropriate) [8]

Suppose $Q(\theta, X) = f(\theta) + \theta^T X$, where $f$ is differentiable and $X \in \mathbb{R}^p$ is a random variable independent of $\theta$. Then $g(\theta) = \operatorname E[Q(\theta, X)] = f(\theta) + \theta^T \operatorname E X$ depends on the mean of $X$, and the stochastic gradient method would be appropriate in this problem. We can choose

$$H(\theta, X) = \frac{\partial}{\partial \theta} Q(\theta, X) = \frac{\partial}{\partial \theta} f(\theta) + X.$$
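
A short sketch of this example with assumed concrete choices ($f(\theta) = \theta^2$, scalar $X \sim \mathcal{N}(\mu, 1)$), so the minimizer of $g(\theta) = \theta^2 + \theta \mu$ is $-\mu/2$:

```python
import random

# Hypothetical instance of the example above: Q(theta, X) = f(theta) + theta * X
# with f(theta) = theta**2 and X ~ N(mu, 1), minimized at theta* = -mu/2.
mu = 4.0
H = lambda theta, x: 2.0 * theta + x          # H(theta, X) = f'(theta) + X
theta = 0.0
for n in range(1, 100_001):
    theta -= (1.0 / n) * H(theta, random.gauss(mu, 1.0))
print(theta)  # close to -2
```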

Kiefer–Wolfowitz algorithm

The Kiefer–Wolfowitz algorithm was introduced in 1952 by Jacob Wolfowitz and Jack Kiefer, [19] and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function.

Let $M(x)$ be a function which has a maximum at the point $\theta$. It is assumed that $M(x)$ is unknown; however, certain observations $N(x)$, where $\operatorname E[N(x)] = M(x)$, can be made at any point $x$. The structure of the algorithm follows a gradient-like method, with the iterates being generated as

$$x_{n+1} = x_n + a_n \cdot \left( \frac{N(x_n + c_n) - N(x_n - c_n)}{2 c_n} \right),$$

where $N(x_n + c_n)$ and $N(x_n - c_n)$ are independent. At every step, the gradient of $M(x)$ is approximated akin to a central difference method with $h = 2c_n$. So the sequence $\{c_n\}$ specifies the sequence of finite difference widths used for the gradient approximation, while the sequence $\{a_n\}$ specifies a sequence of positive step sizes taken along that direction.

Kiefer and Wolfowitz proved that, if $M(x)$ satisfied certain regularity conditions, then $x_n$ will converge to $\theta$ in probability as $n \to \infty$, and later Blum [4] in 1954 showed that $x_n$ converges to $\theta$ almost surely, provided that:

  1. The variance of the observations is bounded: $\operatorname{Var}(N(x)) \leq S < \infty$.
  2. The function $M(x)$ has a unique point of maximum (minimum) and is strongly concave (convex) in a neighborhood of that point.
  3. The sequences $\{a_n\}$ and $\{c_n\}$ are infinite sequences of positive numbers such that $c_n \rightarrow 0$, $\sum_{n=0}^{\infty} a_n = \infty$, $\sum_{n=0}^{\infty} a_n c_n < \infty$, and $\sum_{n=0}^{\infty} a_n^2 c_n^{-2} < \infty$.

A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would be $a_n = 1/n$ and $c_n = n^{-1/3}$.
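
A minimal Python sketch of the scalar Kiefer–Wolfowitz iteration with these sequences; the noisy oracle `noisy_M` (a quadratic with Gaussian noise) is an assumed toy example.

```python
import random

def kiefer_wolfowitz(noisy_M, x0, n_iter=100_000):
    """Kiefer-Wolfowitz: x_{n+1} = x_n + a_n * (N(x_n + c_n) - N(x_n - c_n)) / (2 c_n)."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n            # step sizes, as recommended by Kiefer and Wolfowitz
        c_n = n ** (-1.0 / 3.0)  # finite-difference widths
        grad_est = (noisy_M(x + c_n) - noisy_M(x - c_n)) / (2.0 * c_n)
        x += a_n * grad_est      # move along the estimated gradient (maximization)
    return x

# Hypothetical oracle: M(x) = -(x - 2)^2 observed with additive noise; maximum at x = 2.
noisy_M = lambda x: -(x - 2.0) ** 2 + random.gauss(0.0, 0.1)
print(kiefer_wolfowitz(noisy_M, x0=0.0))  # close to 2
```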

Subsequent developments and important issues

  1. The Kiefer–Wolfowitz algorithm requires that for each gradient computation, at least $d + 1$ different parameter values must be simulated for every iteration of the algorithm, where $d$ is the dimension of the search space. This means that when $d$ is large, the Kiefer–Wolfowitz algorithm will require substantial computational effort per iteration, leading to slow convergence.
    1. To address this problem, Spall proposed the use of simultaneous perturbations to estimate the gradient. This method would require only two simulations per iteration, regardless of the dimension $d$ (a minimal sketch of such an estimator follows this list). [20]
  2. In the conditions required for convergence, it can be difficult to specify a predetermined compact set that fulfills strong convexity (or concavity) and contains the unique solution. With respect to real world applications, if the domain is quite large, these assumptions can be fairly restrictive and highly unrealistic.
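
A minimal sketch of the simultaneous perturbation gradient estimate mentioned above, in the spirit of Spall's method; the Rademacher perturbation, step sequences, and toy quadratic loss are assumptions for illustration rather than a definitive implementation.

```python
import random

def spsa_step(noisy_L, theta, a_n, c_n):
    """One simultaneous-perturbation step: perturb every coordinate at once with random signs,
    estimate the gradient from just two noisy evaluations, and take a descent step."""
    delta = [random.choice((-1.0, 1.0)) for _ in theta]           # Rademacher perturbation
    plus = noisy_L([t + c_n * d for t, d in zip(theta, delta)])
    minus = noisy_L([t - c_n * d for t, d in zip(theta, delta)])
    grad_est = [(plus - minus) / (2.0 * c_n * d) for d in delta]
    return [t - a_n * g for t, g in zip(theta, grad_est)]

# Hypothetical use: minimize E[sum((theta_i - 1)^2) + noise] in d = 5 dimensions,
# using only two simulations per iteration regardless of d.
noisy_L = lambda th: sum((t - 1.0) ** 2 for t in th) + random.gauss(0.0, 0.1)
theta = [0.0] * 5
for n in range(1, 20_001):
    theta = spsa_step(noisy_L, theta, a_n=0.5 / n, c_n=n ** (-1.0 / 3.0))
print(theta)  # each coordinate close to 1
```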

Further developments

An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on. [21] [22] These methods are also applied in control theory, in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step size should not converge to zero but should be chosen so as to track the function. [21, 2nd ed., chapter 3]

C. Johan Masreliez and R. Douglas Martin were the first to apply stochastic approximation to robust estimation. [23]

The main tool for analyzing stochastic approximation algorithms (including the Robbins–Monro and the Kiefer–Wolfowitz algorithms) is a theorem by Aryeh Dvoretzky published in 1956. [24]

References

  1. Toulis, Panos; Airoldi, Edoardo (2015). "Scalable estimation strategies based on stochastic approximations: classical results and new insights". Statistics and Computing. 25 (4): 781–795. doi:10.1007/s11222-015-9560-y. PMC 4484776. PMID 26139959.
  2. Le Ny, Jerome. "Introduction to Stochastic Approximation Algorithms" (PDF). Polytechnique Montreal. Teaching Notes. Retrieved 16 November 2016.
  3. Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". The Annals of Mathematical Statistics. 22 (3): 400. doi:10.1214/aoms/1177729586.
  4. Blum, Julius R. (1954-06-01). "Approximation Methods which Converge with Probability one". The Annals of Mathematical Statistics. 25 (2): 382–386. doi:10.1214/aoms/1177728794. ISSN 0003-4851.
  5. Sacks, J. (1958). "Asymptotic Distribution of Stochastic Approximation Procedures". The Annals of Mathematical Statistics. 29 (2): 373–405. doi:10.1214/aoms/1177706619. JSTOR 2237335.
  6. Nemirovski, A.; Juditsky, A.; Lan, G.; Shapiro, A. (2009). "Robust Stochastic Approximation Approach to Stochastic Programming". SIAM Journal on Optimization. 19 (4): 1574. doi:10.1137/070704277.
  7. Nemirovski, A.; Yudin, D. (1983). Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience Series in Discrete Mathematics 15. New York: John Wiley.
  8. Spall, J. C. (2003). Introduction to Stochastic Search and Optimization: Estimation, Simulation and Control. Hoboken, NJ: John Wiley.
  9. Chung, K. L. (1954-09-01). "On a Stochastic Approximation Method". The Annals of Mathematical Statistics. 25 (3): 463–483. doi: 10.1214/aoms/1177728716 . ISSN   0003-4851.
  10. Fabian, Vaclav (1968-08-01). "On Asymptotic Normality in Stochastic Approximation". The Annals of Mathematical Statistics. 39 (4): 1327–1332. doi: 10.1214/aoms/1177698258 . ISSN   0003-4851.
  11. Lai, T. L.; Robbins, Herbert (1979-11-01). "Adaptive Design and Stochastic Approximation". The Annals of Statistics. 7 (6): 1196–1221. doi: 10.1214/aos/1176344840 . ISSN   0090-5364.
  12. Lai, Tze Leung; Robbins, Herbert (1981-09-01). "Consistency and asymptotic efficiency of slope estimates in stochastic approximation schemes". Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete. 56 (3): 329–360. doi: 10.1007/BF00536178 . ISSN   0044-3719. S2CID   122109044.
  13. Polyak, B. T. (1991). "New stochastic approximation type procedures. (In Russian.)". Automation and Remote Control. 7 (7).
  14. Ruppert, David (1988). Efficient Estimators from a Slowly Converging Robbins–Monro Process (Technical Report 781). Cornell University School of Operations Research and Industrial Engineering.
  15. Polyak, B. T.; Juditsky, A. B. (1992). "Acceleration of Stochastic Approximation by Averaging". SIAM Journal on Control and Optimization. 30 (4): 838. doi:10.1137/0330046.
  16. Nemirovski, A.; Yudin, D. (1978). "On Cezari's convergence of the steepest descent method for approximating saddle points of convex-concave functions". Dokl. Akad. Nauk SSSR 239 (Russian); Soviet Math. Dokl. 19 (English).
  17. Kushner, Harold; Yin, G. George (2003). Stochastic Approximation and Recursive Algorithms and Applications. Springer. ISBN 9780387008943.
  18. Bouleau, N.; Lepingle, D. (1994). Numerical Methods for stochastic Processes. New York: John Wiley. ISBN   9780471546412.
  19. Kiefer, J.; Wolfowitz, J. (1952). "Stochastic Estimation of the Maximum of a Regression Function". The Annals of Mathematical Statistics. 23 (3): 462. doi: 10.1214/aoms/1177729392 .
  20. Spall, J. C. (2000). "Adaptive stochastic approximation by the simultaneous perturbation method". IEEE Transactions on Automatic Control. 45 (10): 1839–1853. doi:10.1109/TAC.2000.880982.
  21. Kushner, H. J.; Yin, G. G. (1997). Stochastic Approximation Algorithms and Applications. doi:10.1007/978-1-4899-2696-8. ISBN 978-1-4899-2698-2.
  22. Stochastic Approximation and Recursive Estimation, Mikhail Borisovich Nevel'son and Rafail Zalmanovich Has'minskiĭ, translated by Israel Program for Scientific Translations and B. Silver, Providence, RI: American Mathematical Society, 1973, 1976. ISBN   0-8218-1597-0.
  23. Martin, R.; Masreliez, C. (1975). "Robust estimation via stochastic approximation". IEEE Transactions on Information Theory. 21 (3): 263. doi:10.1109/TIT.1975.1055386.
  24. Dvoretzky, Aryeh (1956). "On stochastic approximation". In Neyman, Jerzy (ed.). Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, vol. I. University of California Press. pp. 39–55. MR   0084911.