The dynamic lot-size model in inventory theory is a generalization of the economic order quantity (EOQ) model that takes into account that demand for the product varies over time. The model was introduced by Harvey M. Wagner and Thomson M. Whitin in 1958.[1][2]
We have available a forecast of product demand d_t over a relevant time horizon t = 1, 2, ..., N (for example, we might know how many widgets will be needed each week for the next 52 weeks). There is a setup cost s_t incurred for each order and there is an inventory holding cost i_t per item per period (s_t and i_t can also vary with time if desired). The problem is how many units x_t to order in each period so as to minimize the sum of setup costs and inventory holding costs. Let us denote the inventory available at the beginning of period t by

I = I_0 + \sum_{j=1}^{t-1} x_j - \sum_{j=1}^{t-1} d_j \ge 0,

where I_0 is the initial inventory and the nonnegativity requirement expresses that every demand must be met on time (no backlogging).
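For concreteness, the whole problem can be restated compactly as a single minimization over the order quantities. This restatement is an editorial sketch under the notation above, with H(x) = 1 for x > 0 and H(0) = 0 charging the setup cost only in the periods with a positive order:

\min_{x_1, \dots, x_N \ge 0} \ \sum_{t=1}^{N} \big( s_t H(x_t) + i_t I_t \big) \quad \text{subject to} \quad I_t = I_{t-1} + x_t - d_t \ge 0, \quad t = 1, \dots, N.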
The functional equation representing the minimal cost policy is

f_t(I) = \min_{x_t \ge 0} \big[ i_{t-1} I + s_t H(x_t) + f_{t+1}(I + x_t - d_t) \big],

where H(\cdot) is the Heaviside step function, so that the setup cost s_t is charged only in periods with a positive order. Wagner and Whitin [1] proved four theorems about the structure of optimal programs; in particular, there exists an optimal program in which an order is placed only when the inventory is zero (I x_t = 0 for all t) and every positive order covers exactly the demand of an integral number of consecutive future periods; moreover, if the inventory is zero at the beginning of some period t, periods 1 through t - 1 can be planned by themselves.
The preceding theorems are used in the proof of the Planning Horizon Theorem.[1] Let

F(t) = \min_{1 \le j \le t} \Big[ s_j + \sum_{h=j}^{t} \Big( \sum_{k=j}^{h-1} i_k \Big) d_h + F(j-1) \Big], \qquad F(0) = 0,

denote the cost of the minimal cost program for periods 1 to t, where j is the period in which the last order is placed. If at period t* the minimum in F(t*) occurs for j = t** ≤ t*, then in periods t > t* it is sufficient to consider only t** ≤ j ≤ t. In particular, if t* = t**, then it is sufficient to consider programs such that x_{t*} > 0.
Wagner and Whitin gave a forward dynamic-programming algorithm for finding the optimal solution.[1] Start with t* = 1 and, for each successive period t*, consider the t* policies that place the last order in period t** = 1, 2, ..., t* and let that order fill the demands d_t for t = t**, ..., t*; the cost of each such policy is the ordering and holding cost of that last order plus the minimal cost F(t** - 1) of acting optimally in periods 1 through t** - 1 considered by themselves. Select the cheapest of these t* alternatives to obtain F(t*), and proceed to period t* + 1 (or stop if t* = N). A sketch of this recursion in code is given below.
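The following Python sketch implements the forward recursion above; the function name wagner_whitin and the 0-indexed lists d, s, i holding the demands, setup costs and per-unit holding costs are illustrative choices for this example, not notation from the original paper.

def wagner_whitin(d, s, i):
    """Forward Wagner-Whitin recursion.

    d[t], s[t], i[t] are the demand, setup cost and per-unit holding cost
    of period t (0-indexed).  Returns (cost, orders), where cost is the
    minimal total cost and orders[t] is the quantity ordered in period t.
    """
    N = len(d)
    F = [0.0] * (N + 1)          # F[t] = minimal cost of covering the first t periods
    last_order = [0] * (N + 1)   # period (1-based) in which the last order is placed

    for t in range(1, N + 1):
        best, best_j = float("inf"), 0
        for j in range(1, t + 1):            # last order placed in period j
            # holding cost of carrying the demand of period h from period j to h-1
            holding = sum(sum(i[j - 1:h - 1]) * d[h - 1] for h in range(j, t + 1))
            cost = F[j - 1] + s[j - 1] + holding
            if cost < best:
                best, best_j = cost, j
        F[t], last_order[t] = best, best_j

    # Recover the order quantities from the recorded decisions.
    orders = [0] * N
    t = N
    while t > 0:
        j = last_order[t]
        orders[j - 1] = sum(d[j - 1:t])      # the order in period j covers demands j..t
        t = j - 1
    return F[N], orders

Calling wagner_whitin([50, 60, 90, 70], [100, 100, 100, 100], [1, 1, 1, 1]) returns the minimal total cost together with one optimal ordering plan for a four-period instance.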
Because this method was perceived by some as too complex, a number of authors also developed approximate heuristics (e.g., the Silver–Meal heuristic [3]) for the problem.
Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
In the theory of stochastic processes, the Karhunen–Loève theorem, also known as the Kosambi–Karhunen–Loève theorem, states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and the eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.
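As a sketch of the statement, for a zero-mean, square-integrable stochastic process X_t on an interval [a, b] (these are the standard regularity assumptions, not spelled out in the excerpt above), the expansion reads

X_t = \sum_{k=1}^{\infty} Z_k e_k(t),

where the e_k are orthonormal eigenfunctions of the covariance operator of the process and the coefficients Z_k = \int_a^b X_t e_k(t) \, dt are pairwise uncorrelated random variables; the series converges in mean square, uniformly in t.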
Economic order quantity (EOQ), also known as financial purchase quantity or economic buying quantity, is the order quantity that minimizes the total holding costs and ordering costs in inventory management. It is one of the oldest classical production scheduling models. The model was developed by Ford W. Harris in 1913, but the consultant R. H. Wilson applied it extensively, and he and K. Andler are given credit for their in-depth analysis.
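Under the classical EOQ assumptions of constant demand and no shortages, the optimal order quantity has a well-known closed form; the symbols here (D for demand per unit time, K for the fixed cost per order, h for the holding cost per unit per unit time) are a conventional choice rather than notation from the text above:

Q^* = \sqrt{\frac{2 D K}{h}}.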
The Needleman–Wunsch algorithm is an algorithm used in bioinformatics to align protein or nucleotide sequences. It was one of the first applications of dynamic programming to compare biological sequences. The algorithm was developed by Saul B. Needleman and Christian D. Wunsch and published in 1970. The algorithm essentially divides a large problem into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem. It is also sometimes referred to as the optimal matching algorithm and the global alignment technique. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. The algorithm assigns a score to every possible alignment, and the purpose of the algorithm is to find all possible alignments having the highest score.
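As an illustration of the dynamic-programming recurrence, the following Python sketch fills the Needleman–Wunsch score matrix for two sequences and returns the optimal global-alignment score; the scoring scheme (match +1, mismatch -1, gap -1) is an illustrative assumption, and the traceback that recovers the alignments themselves is omitted.

def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global-alignment score of sequences a and b."""
    n, m = len(a), len(b)
    # F[i][j] = best score for aligning the prefixes a[:i] and b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                    # a[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap                    # gaps aligned against b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = F[i - 1][j] + gap           # a[i-1] aligned to a gap
            left = F[i][j - 1] + gap         # b[j-1] aligned to a gap
            F[i][j] = max(diag, up, left)
    return F[n][m]

For example, needleman_wunsch_score("GATTACA", "GCATGCU") computes the optimal global-alignment score for those two short sequences.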
In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s; a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes. They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
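In a common discrete-time formulation (the symbols here, a state x, an action a drawn from a feasible set Γ(x), a payoff F, a discount factor β and a transition function T, are a conventional choice for illustration), the Bellman equation takes the form

V(x) = \max_{a \in \Gamma(x)} \big\{ F(x, a) + \beta \, V\big(T(x, a)\big) \big\},

so the value of the current state equals the best achievable sum of the immediate payoff and the discounted value of the state that the chosen action leads to.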
The economic lot scheduling problem (ELSP) is a problem in operations management and inventory theory that has been studied by many researchers for more than 50 years. The term was first used in 1958 by Professor Jack D. Rogers of Berkeley, who extended the economic order quantity model to the case where there are several products to be produced on the same machine, so that one must decide both the lot size for each product and when each lot should be produced. The method illustrated by Jack D. Rogers draws on a 1956 paper by W. Evert Welch. The ELSP is a mathematical model of a common issue for almost any company or industry: planning what to manufacture, when to manufacture and how much to manufacture.
The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller whose equations are given below.
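As a sketch of the standard infinite-horizon, continuous-time case (the notation A, B, Q, R, K, P is the conventional one and is assumed here rather than taken from the excerpt above), the system \dot{x} = A x + B u with quadratic cost J = \int_0^\infty (x^T Q x + u^T R u) \, dt is optimally controlled by the linear state feedback

u(t) = -K x(t), \qquad K = R^{-1} B^T P,

where P is the solution of the algebraic Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0.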
In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance. A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming.
The vehicle routing problem (VRP) is a combinatorial optimization and integer programming problem which asks "What is the optimal set of routes for a fleet of vehicles to traverse in order to deliver to a given set of customers?" It generalises the travelling salesman problem (TSP). It first appeared in a paper by George Dantzig and John Ramser in 1959, in which the first algorithmic approach was written and was applied to petrol deliveries. Often, the context is that of delivering goods located at a central depot to customers who have placed orders for such goods. The objective of the VRP is to minimize the total route cost. In 1964, Clarke and Wright improved on Dantzig and Ramser's approach using an effective greedy algorithm called the savings algorithm.
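As a brief illustration of the savings idea (with c_{ij} denoting the travel cost between customers i and j, and 0 denoting the depot; this notation is assumed for the sketch), serving customers i and j on one route rather than on two separate out-and-back routes saves

s_{ij} = c_{0i} + c_{0j} - c_{ij},

and the Clarke–Wright algorithm greedily merges pairs of routes in decreasing order of these savings, subject to the vehicle-capacity constraints.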
Material theory is the sub-specialty within operations research and operations management that is concerned with the design of production/inventory systems to minimize costs: it studies the decisions faced by firms and the military in connection with manufacturing, warehousing, supply chains, spare part allocation and so on and provides the mathematical foundation for logistics. The inventory control problem is the problem faced by a firm that must decide how much to order in each time period to meet demand for its products. The problem can be modeled using mathematical techniques of optimal control, dynamic programming and network optimization. The study of such models is part of inventory theory.
The Silver–Meal heuristic is a production planning method in manufacturing, proposed in 1973 by Edward A. Silver and H.C. Meal. Its purpose is to determine production quantities to meet the requirement of operations at minimum cost.
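As a sketch of the criterion (with K the setup cost, h the per-period holding cost per unit and d_t the demand of period t; this notation is assumed for the illustration), an order placed in period 1 to cover the next T periods has average cost per period

C(T) = \frac{K + h \sum_{t=2}^{T} (t-1) d_t}{T},

and the heuristic keeps increasing T as long as C(T) decreases, places one order covering periods 1 through T, and then restarts the calculation from period T + 1.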
Mean-field game theory is the study of strategic decision making by small interacting agents in very large populations. It lies at the intersection of game theory with stochastic analysis and control theory. The use of the term "mean field" is inspired by mean-field theory in physics, which considers the behavior of systems of large numbers of particles where individual particles have negligible impact upon the system. In other words, each agent acts according to its own minimization or maximization problem, taking into account the other agents' decisions, and because the population is large it can be assumed that the number of agents goes to infinity and a representative agent exists.
Harvey Maurice Wagner was an American management scientist, consultant, and Professor of Operations Research and Innovation Management at the University of North Carolina, Chapel Hill, known for his books on Operations Research and his seminal work on the dynamic lot-size model with Thomson M. Whitin.
Thomson McLintock Whitin was an American management scientist, and Emeritus Professor of Economics and Social Sciences at Wesleyan University, known for his work on inventory control and inventory management.
Albert Peter Marie (Albert) Wagelmans is a Dutch economist and Professor of Management Science at the Erasmus School of Economics (ESE) of the Erasmus University Rotterdam, working on mathematical optimization methods for production, public transport and health care planning.
Anthonius Wilhelmus Johannes (Antoon) Kolen was a Dutch mathematician and Professor at Maastricht University in the Department of Quantitative Economics. He is known for his work on dynamic programming, interval scheduling and mathematical optimization.
Constantinus P. M. van Hoesel is a Dutch mathematician and Professor of Operations Research at Maastricht University, and head of its Quantitative Economics Group, known for his work on mathematical optimization.
Dynamic discrete choice (DDC) models, also known as discrete choice models of dynamic programming, model an agent's choices over discrete options that have future implications. Rather than assuming observed choices are the result of static utility maximization, observed choices in DDC models are assumed to result from an agent's maximization of the present value of utility, generalizing the utility theory upon which discrete choice models are based.
The (Q,r) model is a class of models in inventory theory. A general (Q,r) model can be obtained by extending both the EOQ model and the base stock model.