Scenario optimization

The scenario approach or scenario optimization approach is a technique for obtaining solutions to robust optimization and chance-constrained optimization problems based on a sample of the constraints. It also relates to inductive reasoning in modeling and decision-making. The technique has existed for decades as a heuristic approach and has more recently been given a systematic theoretical foundation.

In optimization, robustness features translate into constraints that are parameterized by the uncertain elements of the problem. In the scenario method,[1][2][3] a solution is obtained by looking only at a random sample of the constraints (heuristic approach), called scenarios, and a deeply grounded theory tells the user how “robust” the corresponding solution is with respect to the constraints that were not sampled. This theory justifies the use of randomization in robust and chance-constrained optimization.
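
As a simple illustration, the sketch below applies the idea to a toy uncertain linear program: the uncertain constraint is replaced by N sampled instances and the resulting deterministic program is solved. The problem data, numbers, and variable names are illustrative assumptions, not taken from the references.

```python
# Sketch of the scenario approach on a toy uncertain linear program:
# maximize x1 + x2 subject to a(delta)^T x <= 1 for an uncertain row a(delta),
# approximated by enforcing the constraint only on N sampled scenarios.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 500                                  # number of sampled scenarios
c = np.array([-1.0, -1.0])               # linprog minimizes, so negate

# Each scenario perturbs the nominal constraint row [1, 1] with random noise.
A_scenarios = np.array([1.0, 1.0]) + 0.1 * rng.standard_normal((N, 2))
b_scenarios = np.ones(N)

# Enforce all N sampled constraints at once and solve the resulting LP.
res = linprog(c, A_ub=A_scenarios, b_ub=b_scenarios, bounds=[(0, None)] * 2)
x_star = res.x   # scenario solution; its robustness against unseen scenarios
                 # is quantified by the scenario theory discussed below
```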

Data-driven optimization

At times, scenarios are obtained as random extractions from a model. More often, however, scenarios are instances of the uncertain constraints that are obtained as observations (data-driven science). In this latter case, no model of uncertainty is needed to generate scenarios. Remarkably, scenario optimization is accompanied by a full-fledged theory in this case too, because all scenario optimization results are distribution-free and can therefore be applied even when a model of uncertainty is not available.

Theoretical results

For constraints that are convex (e.g. in semidefinite problems involving LMIs, Linear Matrix Inequalities), a deep theoretical analysis has been established which shows that the probability that a new constraint is not satisfied follows a distribution dominated by a Beta distribution. This result is tight since it is exact for a whole class of convex problems.[3] More generally, various empirical levels have been shown to follow a Dirichlet distribution, whose marginals are Beta distributions.[4] The scenario approach with regularization has also been considered,[5] and handy algorithms with reduced computational complexity are available.[6] Extensions to more complex, non-convex, set-ups are still the object of active investigation.
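
For convex problems, the bound of [3] can be evaluated numerically. The sketch below, with illustrative function and variable names, computes the violation level ε guaranteed with confidence 1 − β for N scenarios and d optimization variables, exploiting the fact that the bound is a Binomial tail (equivalently, the Beta distribution mentioned above).

```python
# Numerical evaluation of the convex scenario bound of [3]: with confidence
# 1 - beta, the violation probability of the scenario solution does not exceed
# the eps solving
#     sum_{i=0}^{d-1} C(N, i) * eps^i * (1 - eps)^(N - i) = beta.
from scipy.stats import binom

def scenario_epsilon(N, d, beta, tol=1e-10):
    """Smallest eps whose Binomial tail bound does not exceed beta (bisection)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        eps = 0.5 * (lo + hi)
        # binom.cdf(d - 1, N, eps) equals the sum above and is decreasing in eps.
        if binom.cdf(d - 1, N, eps) > beta:
            lo = eps          # guarantee not yet met: eps must grow
        else:
            hi = eps
    return hi

print(scenario_epsilon(N=500, d=2, beta=1e-6))   # roughly 0.033
```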

Within the scenario approach, it is also possible to pursue a risk-return trade-off.[7][8] Moreover, a full-fledged method can be used to apply this approach to control.[9] First, constraints are sampled and then the user starts removing some of them in succession. This can be done in different ways, even according to greedy algorithms. After the elimination of one more constraint, the optimal solution is updated, and the corresponding optimal value is determined. As this procedure moves on, the user constructs an empirical “curve of values”, i.e. the curve of the values achieved after the removal of an increasing number of constraints. The scenario theory provides precise evaluations of how robust the various solutions are.
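
The constraint-removal procedure can be sketched as follows; the code regenerates the same toy LP used in the earlier sketch, and the greedy rule shown is only one of the possible choices mentioned above.

```python
# Greedy scenario removal and the empirical "curve of values" on the toy LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 500
c = np.array([-1.0, -1.0])
A_scenarios = np.array([1.0, 1.0]) + 0.1 * rng.standard_normal((N, 2))
b_scenarios = np.ones(N)

def solve(idx):
    """Solve the scenario LP using only the scenarios listed in idx."""
    return linprog(c, A_ub=A_scenarios[idx], b_ub=b_scenarios[idx],
                   bounds=[(0, None)] * 2)

active = list(range(N))
curve = []
for _ in range(20):                      # remove 20 scenarios in succession
    res = solve(active)
    curve.append(res.fun)                # optimal value with current scenarios
    # Only constraints that are tight at the optimum can improve the value.
    slacks = b_scenarios[active] - A_scenarios[active] @ res.x
    tight = [j for j, s in zip(active, slacks) if s < 1e-8]
    # Greedy step: drop the tight scenario whose removal helps the most.
    best = min(tight, key=lambda j: solve([i for i in active if i != j]).fun)
    active.remove(best)
# 'curve' is the empirical curve of values; the scenario theory attaches a
# precise risk level to each of its points.
```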

A remarkable advance in the theory has been established by the recent wait-and-judge approach:[10] one assesses the complexity of the solution (as precisely defined in the referenced article) and from its value formulates precise evaluations of the robustness of the solution. These results shed light on deeply grounded links between the concepts of complexity and risk. A related approach, named “Repetitive Scenario Design”, aims at reducing the sample complexity of the solution by repeatedly alternating a scenario design phase (with a reduced number of samples) with a randomized check of the feasibility of the ensuing solution.[11]

Example

Consider a function R_d(x) which represents the return of an investment; it depends on the vector x of our investment choices and on the market state d that will be experienced at the end of the investment period.

Given a stochastic model for the market conditions, we consider N of the possible states d_1, ..., d_N (randomization of uncertainty). Alternatively, the scenarios d_i can be obtained from a record of observations.

We set out to solve the scenario optimization program

    max_x  min_{i=1,...,N}  R_{d_i}(x)        (1)

This corresponds to choosing a portfolio vector x so as to obtain the best possible return in the worst-case scenario.[12][13]

After solving (1), an optimal investment strategy x* is achieved along with the corresponding optimal return R*. While R* has been obtained by looking at N possible market states only, the scenario theory tells us that the solution is robust up to a level ε, that is, the return R* will be achieved with probability 1 − ε for other market states.
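
A minimal sketch of program (1) is given below under the illustrative assumption that the return is linear in the portfolio, R_d(x) = d·x, with a budget constraint and no short selling; the scenario returns are synthetic and all names are choices made for this example.

```python
# Program (1) in epigraph form: maximize t subject to d_i^T x >= t for all i.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, n = 300, 5                                   # scenarios, assets
d = 1.0 + 0.05 * rng.standard_normal((N, n))    # synthetic per-scenario returns

# Decision vector z = [x_1, ..., x_n, t]; linprog minimizes, so use -t.
c_lp = np.r_[np.zeros(n), -1.0]
A_ub = np.c_[-d, np.ones(N)]                    # t - d_i^T x <= 0
b_ub = np.zeros(N)
A_eq = np.r_[np.ones(n), 0.0].reshape(1, -1)    # budget: sum(x) = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]       # x >= 0, t free

res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_star, worst_case_return = res.x[:n], -res.fun
```

Under these assumptions, the level ε can be evaluated as in the scenario_epsilon sketch given earlier, with the number of decision variables equal to n + 1 (the portfolio plus the epigraph variable t).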

In quantitative finance, the worst-case approach can be overconservative. One alternative is to discard some odd situations to reduce pessimism;[7] moreover, scenario optimization can be applied to other risk measures, including CVaR (Conditional Value at Risk), adding to the flexibility of its use.[14]
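
As an illustration of the CVaR variant, the sketch below minimizes the scenario-based Conditional Value at Risk of the portfolio loss using the standard Rockafellar–Uryasev linear reformulation (a standard technique used here as an assumption, not a program prescribed by the references). It regenerates the same synthetic scenario returns used in the previous sketch.

```python
# Scenario-based CVaR portfolio optimization via the Rockafellar–Uryasev LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, n = 300, 5                            # scenarios, assets (as in the sketch above)
d = 1.0 + 0.05 * rng.standard_normal((N, n))

alpha = 0.95                             # CVaR level
# Decision vector z = [x (n assets), zeta, u_1, ..., u_N];
# minimize zeta + sum(u) / ((1 - alpha) * N).
c_lp = np.r_[np.zeros(n), 1.0, np.full(N, 1.0 / ((1 - alpha) * N))]
# u_i >= loss_i - zeta with loss_i = -d_i^T x:   -d_i^T x - zeta - u_i <= 0
A_ub = np.c_[-d, -np.ones(N), -np.eye(N)]
b_ub = np.zeros(N)
A_eq = np.r_[np.ones(n), 0.0, np.zeros(N)].reshape(1, -1)   # budget: sum(x) = 1
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N

res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_cvar = res.x[:n]                       # portfolio minimizing the sampled CVaR
```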

Application fields

Fields of application include: prediction, systems theory, regression analysis (interval predictor models in particular), actuarial science, optimal control, financial mathematics, machine learning, decision making, supply chain, and management.

Related Research Articles

<span class="mw-page-title-main">Travelling salesman problem</span> NP-hard problem in combinatorial optimization

The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.

<span class="mw-page-title-main">Mathematical optimization</span> Study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

<span class="mw-page-title-main">Quantization (signal processing)</span> Process of mapping a continuous set to a countable set

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.

<span class="mw-page-title-main">Loss function</span> Mathematical relation assigning a probability event to a cost

In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite, in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.

In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain, but follow known probability distributions. This framework contrasts with deterministic optimization, in which all problem parameters are assumed to be known exactly. The goal of stochastic programming is to find a decision which both optimizes some criteria chosen by the decision maker, and appropriately accounts for the uncertainty of the problem parameters. Because many real-world decisions involve uncertainty, stochastic programming has found applications in a broad range of areas ranging from finance to transportation to energy optimization.

Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function is equivalent to the minimization of the function .

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem. Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal. This fact is called weak duality.

Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.

Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution. It is related to, but often distinguished from, probabilistic optimization methods such as chance-constrained optimization.

Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.

Multi-objective optimization or Pareto optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

In geometry, the Chebyshev center of a bounded set having non-empty interior is the center of the minimal-radius ball enclosing the entire set , or alternatively the center of largest inscribed ball of .

<span class="mw-page-title-main">Matrix completion</span>

Matrix completion is the task of filling in the missing entries of a partially observed matrix, which is equivalent to performing data imputation in statistics. A wide range of datasets are naturally organized in matrix form. One example is the movie-ratings matrix, as appears in the Netflix problem: Given a ratings matrix in which each entry represents the rating of movie by customer , if customer has watched movie and is otherwise missing, we would like to predict the remaining entries in order to make good recommendations to customers on what to watch next. Another example is the document-term matrix: The frequencies of words used in a collection of documents can be represented as a matrix, where each entry corresponds to the number of times the associated term appears in the indicated document.

In the mathematical theory of probability, the drift-plus-penalty method is used for optimization of queueing networks and other stochastic systems.

<span class="mw-page-title-main">Sparse dictionary learning</span> Representation learning method

Sparse dictionary learning is a representation learning method which aims at finding a sparse representation of the input data in the form of a linear combination of basic elements as well as those basic elements themselves. These elements are called atoms and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may be an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than the one of the signals being observed. The above two properties lead to having seemingly redundant atoms that allow multiple representations of the same signal but also provide an improvement in sparsity and flexibility of the representation.

<span class="mw-page-title-main">Marco Claudio Campi</span>

Marco Claudio Campi is an engineer and a mathematician who specializes in data science and inductive methods. He holds a permanent appointment with the University of Brescia, Italy, while also collaborating with various research institutions, universities and NASA. Since 2012, he has been a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and since 2020 a Fellow of the International Federation of Automatic Control.

In regression analysis, an interval predictor model (IPM) is an approach to regression where bounds on the function to be approximated are obtained. This differs from other techniques in machine learning, where usually one wishes to estimate point values or an entire probability distribution. Interval Predictor Models are sometimes referred to as a nonparametric regression technique, because a potentially infinite set of functions are contained by the IPM, and no specific distribution is implied for the regressed variables.

References

  1. Calafiore, Giuseppe; Campi, M. C. (2005). "Uncertain convex programs: Randomized solutions and confidence levels". Mathematical Programming. 102: 25–46. doi:10.1007/s10107-003-0499-y. S2CID 1063933.
  2. Calafiore, G. C.; Campi, M. C. (2006). "The Scenario Approach to Robust Control Design". IEEE Transactions on Automatic Control. 51 (5): 742–753. doi:10.1109/TAC.2006.875041. S2CID 49263.
  3. Campi, M. C.; Garatti, S. (2008). "The Exact Feasibility of Randomized Solutions of Uncertain Convex Programs". SIAM Journal on Optimization. 19 (3): 1211–1230. doi:10.1137/07069821X.
  4. Carè, A.; Garatti, S.; Campi, M. C. (2015). "Scenario Min-Max Optimization and the Risk of Empirical Costs". SIAM Journal on Optimization. 25 (4): 2061–2080. doi:10.1137/130928546. hdl:11311/979283.
  5. Campi, M. C.; Carè, A. (2013). "Random Convex Programs with L1-Regularization: Sparsity and Generalization". SIAM Journal on Control and Optimization. 51 (5): 3532–3557. doi:10.1137/110856204.
  6. Carè, Algo; Garatti, Simone; Campi, Marco C. (2014). "FAST—Fast Algorithm for the Scenario Technique". Operations Research. 62 (3): 662–671. doi:10.1287/opre.2014.1257. hdl:11311/937164.
  7. Campi, M. C.; Garatti, S. (2011). "A Sampling-and-Discarding Approach to Chance-Constrained Optimization: Feasibility and Optimality". Journal of Optimization Theory and Applications. 148 (2): 257–280. doi:10.1007/s10957-010-9754-6. S2CID 7856112.
  8. Calafiore, Giuseppe Carlo (2010). "Random Convex Programs". SIAM Journal on Optimization. 20 (6): 3427–3464. doi:10.1137/090773490.
  9. "Modulating robustness in control design: Principles and algorithms". IEEE Control Systems Magazine. 33 (2): 36–51. 2013. doi:10.1109/MCS.2012.2234964. S2CID 24072721.
  10. Campi, M. C.; Garatti, S. (2018). "Wait-and-judge scenario optimization". Mathematical Programming. 167: 155–189. doi:10.1007/s10107-016-1056-9. hdl:11311/1002492. S2CID 39523265.
  11. Calafiore, Giuseppe C. (2017). "Repetitive Scenario Design". IEEE Transactions on Automatic Control. 62 (3): 1125–1137. arXiv:1602.03796. doi:10.1109/TAC.2016.2575859. S2CID 47572451.
  12. Pagnoncelli, B. K.; Reich, D.; Campi, M. C. (2012). "Risk-Return Trade-off with the Scenario Approach in Practice: A Case Study in Portfolio Selection". Journal of Optimization Theory and Applications. 155 (2): 707–722. doi:10.1007/s10957-012-0074-x. S2CID 1509645.
  13. Calafiore, Giuseppe Carlo (2013). "Direct data-driven portfolio optimization with guaranteed shortfall probability". Automatica. 49 (2): 370–380. doi:10.1016/j.automatica.2012.11.012. S2CID 5762583.
  14. Ramponi, Federico Alessandro; Campi, Marco C. (2018). "Expected shortfall: Heuristics and certificates". European Journal of Operational Research. 267 (3): 1003–1013. doi:10.1016/j.ejor.2017.11.022. S2CID 3553018.