Gekko (optimization software)

GEKKO
Developer(s): Logan Beal and John Hedengren
Stable release: 1.0.7 / March 5, 2024
Operating system: Cross-platform
Type: Technical computing
License: MIT
Website: gekko.readthedocs.io/en/latest/

The GEKKO Python package [1] solves large-scale mixed-integer and differential algebraic equations with nonlinear programming solvers (IPOPT, APOPT, BPOPT, SNOPT, MINOS). Modes of operation include machine learning, data reconciliation, real-time optimization, dynamic simulation, and nonlinear model predictive control. In addition, the package solves linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed-integer programming (MIP), and mixed-integer linear programming (MILP) problems. GEKKO is available in Python and installed with pip from PyPI of the Python Software Foundation.


pip install gekko

GEKKO works on all platforms and with Python 2.7 and 3+. By default, the problem is sent to a public server where the solution is computed and returned to Python. There are Windows, macOS, Linux, and ARM (Raspberry Pi) processor options to solve without an Internet connection. GEKKO is an extension of the APMonitor Optimization Suite but has integrated the modeling and solution visualization directly within Python. A mathematical model is expressed in terms of variables and equations, such as the Hock & Schittkowski benchmark problem #71 [2] used to test the performance of nonlinear programming solvers. This particular optimization problem minimizes the objective function x1*x4*(x1+x2+x3) + x3, subject to the inequality constraint x1*x2*x3*x4 >= 25 and the equality constraint x1^2 + x2^2 + x3^2 + x4^2 = 40. The four variables must be between a lower bound of 1 and an upper bound of 5. The initial guess values are (x1, x2, x3, x4) = (1, 5, 5, 1). This optimization problem is solved with GEKKO as shown below.

from gekko import GEKKO

m = GEKKO()  # Initialize gekko
# Initialize variables
x1 = m.Var(value=1, lb=1, ub=5)
x2 = m.Var(value=5, lb=1, ub=5)
x3 = m.Var(value=5, lb=1, ub=5)
x4 = m.Var(value=1, lb=1, ub=5)
# Equations
m.Equation(x1*x2*x3*x4 >= 25)
m.Equation(x1**2 + x2**2 + x3**2 + x4**2 == 40)
m.Minimize(x1*x4*(x1+x2+x3) + x3)
m.solve(disp=False)  # Solve
print("x1: " + str(x1.value))
print("x2: " + str(x2.value))
print("x3: " + str(x3.value))
print("x4: " + str(x4.value))
print("Objective: " + str(m.options.objfcnval))
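The reported solution can be cross-checked with a general-purpose solver outside of GEKKO; a minimal sketch using SciPy's SLSQP method (the solver choice here is an illustrative assumption, not part of the GEKKO package):

```python
import numpy as np
from scipy.optimize import minimize

# Hock & Schittkowski benchmark problem #71
def objective(x):
    return x[0]*x[3]*(x[0] + x[1] + x[2]) + x[2]

cons = [
    {"type": "ineq", "fun": lambda x: x[0]*x[1]*x[2]*x[3] - 25},  # product >= 25
    {"type": "eq",   "fun": lambda x: np.sum(x**2) - 40},         # sum of squares == 40
]
x0 = np.array([1.0, 5.0, 5.0, 1.0])  # same initial guess as above
res = minimize(objective, x0, bounds=[(1, 5)]*4,
               constraints=cons, method="SLSQP")
print(res.x)    # approximately [1.000, 4.743, 3.821, 1.379]
print(res.fun)  # approximately 17.014
```

Both solvers should agree on the known optimum of about 17.014 for this benchmark.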

Applications of GEKKO

Applications include cogeneration (power and heat), [3] drilling automation, [4] severe slugging control, [5] solar thermal energy production, [6] solid oxide fuel cells, [7] [8] flow assurance, [9] enhanced oil recovery, [10] essential oil extraction, [11] and unmanned aerial vehicles (UAVs). [12] There are many other references to APMonitor and GEKKO as a sample of the types of applications that can be solved. GEKKO was developed under National Science Foundation (NSF) research grant #1547110 [13] [14] [15] [16] and is detailed in a special issue collection on combined scheduling and control. [17] Other notable mentions of GEKKO are its listing in the Decision Tree for Optimization Software, [18] added support for the APOPT and BPOPT solvers, [19] and project reports from international participants in the online Dynamic Optimization course. [20] GEKKO is a topic in online forums where users solve optimization and optimal control problems. [21] [22] GEKKO is used for advanced control in the Temperature Control Lab (TCLab) [23] for process control education at 20 universities. [24] [25] [26] [27]

Machine learning

[Figure] Artificial neural network

One application of machine learning is to perform regression from training data to build a correlation. In this example, deep learning generates a model from training data that is generated with the function y = 1 - cos(x). An artificial neural network with three layers is used for this example. The first layer is linear, the second layer has a hyperbolic tangent activation function, and the third layer is linear. The program produces parameter weights that minimize the sum of squared errors between the measured data points and the neural network predictions at those points. GEKKO uses gradient-based optimizers to determine the optimal weight values instead of standard methods such as backpropagation. The gradients are determined by automatic differentiation, similar to other popular packages. The problem is solved as a constrained optimization problem and is converged when the solver satisfies the Karush–Kuhn–Tucker conditions. Using a gradient-based optimizer allows additional constraints that may be imposed with domain knowledge of the data or system.

from gekko import brain
import numpy as np

b = brain.Brain()
b.input_layer(1)
b.layer(linear=3)
b.layer(tanh=3)
b.layer(linear=3)
b.output_layer(1)
x = np.linspace(-np.pi, 3*np.pi, 20)
y = 1 - np.cos(x)
b.learn(x, y)

The neural network model is tested across the range of training data as well as for extrapolation to demonstrate poor predictions outside of the training data. Predictions outside the training data set are improved with hybrid machine learning that uses fundamental principles (if available) to impose a structure that is valid over a wider range of conditions. In the example above, the hyperbolic tangent activation function (hidden layer 2) could be replaced with a sine or cosine function to improve extrapolation. The final part of the script displays the neural network model, the original function, and the sampled data points used for fitting.

import matplotlib.pyplot as plt

xp = np.linspace(-2*np.pi, 4*np.pi, 100)
yp = b.think(xp)
plt.figure()
plt.plot(x, y, "bo")
plt.plot(xp, yp[0], "r-")
plt.show()
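The hybrid idea can be illustrated without GEKKO: fitting the same training data with a cosine basis (matching the known structure y = 1 - cos(x)) by linear least squares extrapolates essentially perfectly. A minimal sketch, assuming only NumPy and that the functional form is known:

```python
import numpy as np

# training data, same as the example above
x = np.linspace(-np.pi, 3*np.pi, 20)
y = 1 - np.cos(x)

# model y ~ a + b*cos(x): basis chosen from domain knowledge of the system
A = np.column_stack([np.ones_like(x), np.cos(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # approximately [1., -1.]

# evaluate far outside the training interval
xp = np.linspace(-2*np.pi, 4*np.pi, 100)
yp = coef[0] + coef[1]*np.cos(xp)
err = np.max(np.abs(yp - (1 - np.cos(xp))))
print(err)  # near zero even when extrapolating
```

Because the data is generated exactly by the assumed basis, the fit recovers the true coefficients and extrapolation error is negligible; a tanh network fitted to the same points typically degrades quickly outside the training range.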

Optimal control

[Figure] Optimal control problem benchmark (Luus) with an integral objective, inequality, and differential constraint.

Optimal control is the use of mathematical optimization to obtain a policy that is constrained by differential, equality, or inequality equations and minimizes an objective/reward function. The benchmark below (Luus) minimizes the integral objective of 0.5*x1^2 over t in [0, 2], subject to dx1/dt = u with x1(0) = 1 and -1 <= u <= 1. It is solved with GEKKO by integrating the objective (an auxiliary state x2 with dx2/dt = 0.5*x1^2) and transcribing the differential equations into algebraic form with orthogonal collocation on finite elements.

from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()  # initialize gekko
nt = 101
m.time = np.linspace(0, 2, nt)
# Variables
x1 = m.Var(value=1)
x2 = m.Var(value=0)
u = m.Var(value=0, lb=-1, ub=1)
p = np.zeros(nt)  # mark final time point
p[-1] = 1.0
final = m.Param(value=p)
# Equations
m.Equation(x1.dt() == u)
m.Equation(x2.dt() == 0.5*x1**2)
m.Minimize(x2*final)
m.options.IMODE = 6  # optimal control mode
m.solve()  # solve
plt.figure(1)  # plot results
plt.plot(m.time, x1.value, "k-", label=r"$x_1$")
plt.plot(m.time, x2.value, "b-", label=r"$x_2$")
plt.plot(m.time, u.value, "r--", label=r"$u$")
plt.legend(loc="best")
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
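For this problem the continuous-time optimum is 1/6 (apply u = -1 until x1 reaches zero at t = 1, then u = 0). A rough cross-check is a hand-written direct transcription with explicit Euler integration solved by SciPy; this is a coarse sketch for illustration, not GEKKO's collocation method:

```python
import numpy as np
from scipy.optimize import minimize

n, tf = 40, 2.0
dt = tf / n

def objective(u):
    # integrate dx/dt = u with explicit Euler from x(0) = 1,
    # accumulating the integral of 0.5*x^2 with a left Riemann sum
    x, J = 1.0, 0.0
    for ui in u:
        J += 0.5 * x**2 * dt
        x += ui * dt
    return J

res = minimize(objective, np.zeros(n), bounds=[(-1, 1)]*n,
               method="SLSQP", options={"maxiter": 200})
print(res.fun)  # close to 1/6 (coarse discretization gives ~0.18)
```

Refining the grid (larger n) drives the discretized objective toward 1/6; GEKKO's orthogonal collocation achieves much higher accuracy per grid point.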

See also

Related Research Articles

In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints. It is named after the mathematician Joseph-Louis Lagrange.
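A small worked instance: to extremize f(x, y) = xy subject to x + y = 1, the Lagrange conditions are y = lambda, x = lambda, and x + y = 1. Since these stationarity conditions happen to be linear for this example, they can be solved directly; a sketch assuming only NumPy:

```python
import numpy as np

# Lagrange conditions for extremizing f(x, y) = x*y subject to x + y = 1:
#   dL/dx = y - lam = 0,  dL/dy = x - lam = 0,  g: x + y = 1
A = np.array([[0.0, 1.0, -1.0],   # y - lam = 0
              [1.0, 0.0, -1.0],   # x - lam = 0
              [1.0, 1.0,  0.0]])  # x + y = 1
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # approximately 0.5 0.5 0.5
```

The stationary point x = y = 1/2 maximizes xy on the constraint line, and the multiplier lambda = 1/2 gives the sensitivity of the optimal value to the constraint level.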

Quartic equation

In mathematics, a quartic equation is one which can be expressed as a quartic function equaling zero. The general form of a quartic equation is ax^4 + bx^3 + cx^2 + dx + e = 0, where a != 0.

Optimal control

Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

Branch and bound is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as mathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm.
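The bounding-and-pruning idea can be made concrete with a minimal branch-and-bound sketch for the 0/1 knapsack problem, using the fractional (LP-relaxation) value as the bounding function; this is an illustrative toy, not from any particular library, and assumes positive weights:

```python
# Minimal branch-and-bound sketch for the 0/1 knapsack problem.
def knapsack_bb(values, weights, capacity):
    # sort items by value/weight ratio so the fractional bound is greedy
    order = sorted(range(len(values)), key=lambda i: -values[i]/weights[i])
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n = len(v)
    best = 0

    def bound(i, cap, val):
        # fractional relaxation: take items greedily, last one partially
        while i < n and w[i] <= cap:
            cap -= w[i]; val += v[i]; i += 1
        if i < n:
            val += v[i] * cap / w[i]
        return val

    def branch(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == n or bound(i, cap, val) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if w[i] <= cap:
            branch(i + 1, cap - w[i], val + v[i])  # take item i
        branch(i + 1, cap, val)                    # skip item i

    branch(0, capacity, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220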

Secant method

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years.
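The iteration replaces the derivative in Newton's method with a finite difference through the two most recent iterates; a minimal sketch:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Find a root of f using the secant method."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-300:  # secant line is flat; avoid division by zero
            break
        # next iterate is the root of the secant line through (x0,f0), (x1,f1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # approximately 1.4142135623730951
```

Convergence is superlinear (order about 1.618, the golden ratio) when the iterates are close to a simple root.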

In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is one of calculation of the extrema of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.

Bellman equation

A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
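The recursion V(s) = max_a [r(s, a) + gamma * V(s')] can be iterated to a fixed point (value iteration); a sketch on a toy deterministic decision problem whose states, rewards, and transitions are hypothetical, chosen only for illustration:

```python
# Value iteration on a 3-state chain: V(s) = max_a [ r(s,a) + gamma * V(next(s,a)) ]
rewards = {(0, "stay"): 0, (0, "go"): 1,
           (1, "stay"): 0, (1, "go"): 2,
           (2, "stay"): 0, (2, "go"): 0}
nxt = {(0, "stay"): 0, (0, "go"): 1,
       (1, "stay"): 1, (1, "go"): 2,
       (2, "stay"): 2, (2, "go"): 2}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in range(3)}
for _ in range(200):  # iterate the Bellman operator to its fixed point
    V = {s: max(rewards[s, a] + gamma * V[nxt[s, a]] for a in ("stay", "go"))
         for s in range(3)}
print(V)  # {0: 2.8, 1: 2.0, 2: 0.0}
```

The fixed point satisfies the Bellman equation exactly: V(1) = 2 + 0.9*V(2) = 2 and V(0) = 1 + 0.9*V(1) = 2.8, each the best of the available initial choices plus the discounted value of what remains.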

Golden-section search

The golden-section search is a technique for finding an extremum of a function inside a specified interval. For a strictly unimodal function with an extremum inside the interval, it will find that extremum, while for an interval containing multiple extrema, it will converge to one of them. If the only extremum on the interval is on a boundary of the interval, it will converge to that boundary point. The method operates by successively narrowing the range of values on the specified interval, which makes it relatively slow, but very robust. The technique derives its name from the fact that the algorithm maintains the function values for four points whose three interval widths are in the ratio φ:1:φ, where φ is the golden ratio. These ratios are maintained for each iteration and are maximally efficient. Excepting boundary points, when searching for a minimum, the central point is always less than or equal to the outer points, assuring that a minimum is contained between the outer points. The converse is true when searching for a maximum. The algorithm is the limit of Fibonacci search for many function evaluations. Fibonacci search and golden-section search were discovered by Kiefer (1953).
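The interval-narrowing procedure is short enough to sketch directly (a simple version that re-evaluates the function each iteration; a production implementation would cache one evaluation per step):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c               # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2)**2, 0, 5))  # approximately 2.0
```

Each iteration shrinks the bracket by the constant factor 1/phi regardless of the function's values, which is the source of both the method's robustness and its linear convergence rate.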

Varignon frame

The Varignon frame, named after Pierre Varignon, is a mechanical device which can be used to determine an optimal location of a warehouse for the distribution of goods to a set of shops. Optimal means that the sum of the weighted distances of the shops to the warehouse should be minimal. The frame consists of a board with n holes corresponding to the n shops at locations x1, ..., xn. n strings are tied together in a knot at one end; the loose ends are passed, one each, through the holes and are attached to weights below the board. If the influence of friction and other real-world effects is neglected, the knot takes a position of equilibrium. It can be shown that this equilibrium point is the optimal location, which minimizes the weighted sum of distances to the shops.

Hopf bifurcation

In the mathematical theory of bifurcations, a Hopf bifurcation is a critical point where, as a parameter changes, a system's stability switches and a periodic solution arises. More accurately, it is a local bifurcation in which a fixed point of a dynamical system loses stability as a pair of complex conjugate eigenvalues of the linearization around the fixed point crosses the imaginary axis of the complex plane as a parameter crosses a threshold value. Under reasonably generic assumptions about the dynamical system, the fixed point becomes a small-amplitude limit cycle as the parameter changes.

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. The method was first mentioned in a private letter from Gauss to his student Gerling in 1823; it was not published until 1874, by Seidel.
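The defining feature of Gauss–Seidel, in contrast to Jacobi, is that each updated component is used immediately within the same sweep; a minimal sketch, with the example matrix chosen to be strictly diagonally dominant so convergence is guaranteed:

```python
import numpy as np

def gauss_seidel(A, b, iterations=50):
    """Solve Ax = b by Gauss-Seidel iteration."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(iterations):
        for i in range(n):
            # use the newest values of x[:i] as soon as they are available
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # strictly diagonally dominant
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))  # approaches the exact solution [0.1, 0.6]
```

Replacing `x[:i]` in the update with a copy of the previous iterate would turn this into the Jacobi method, which typically converges more slowly on the same system.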

In numerical analysis, Broyden's method is a quasi-Newton method for finding roots in k variables. It was originally described by C. G. Broyden in 1965.

Advanced process monitor (APMonitor) is a modeling language for differential algebraic (DAE) equations. It is a free web-service or local server for solving representations of physical systems in the form of implicit DAE models. APMonitor is suited for large-scale problems and solves linear programming, integer programming, nonlinear programming, nonlinear mixed integer programming, dynamic simulation, moving horizon estimation, and nonlinear model predictive control. APMonitor does not solve the problems directly, but calls nonlinear programming solvers such as APOPT, BPOPT, IPOPT, MINOS, and SNOPT. The APMonitor API provides exact first and second derivatives of continuous functions to the solvers through automatic differentiation and in sparse matrix form.

The PROPT MATLAB Optimal Control Software is a platform for solving applied optimal control and parameter estimation problems.

In mathematics and optimization, a pseudo-Boolean function is a function of the form f : {0, 1}^n -> R, mapping Boolean (0/1) inputs to a real value.

Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.

APOPT is a software package for solving large-scale optimization problems, including linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed-integer linear programming (MILP), and mixed-integer nonlinear programming (MINLP).

In numerical mathematics, interval propagation or interval constraint propagation is the problem of contracting the interval domains associated to real-valued variables without removing any value that is consistent with a set of constraints. It can be used to propagate uncertainties in the situation where errors are represented by intervals. Interval propagation considers an estimation problem as a constraint satisfaction problem.
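The contraction step can be illustrated with a toy contractor for the single constraint z = x + y (a hypothetical example, not from any particular interval library): each variable's interval is tightened using the intervals of the other two, and no consistent value is ever discarded.

```python
# Interval propagation sketch for the constraint z = x + y.
# Intervals are (low, high) pairs; each is contracted from the other two.
def contract_sum(x, y, z):
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))  # z in x + y
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))  # x in z - y
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))  # y in z - x
    return x, y, z

x, y, z = (0, 10), (0, 10), (4, 6)
print(contract_sum(x, y, z))  # ((0, 6), (0, 6), (4, 6))
```

Here knowing z <= 6 and y >= 0 forces x <= 6 (and symmetrically for y), while the interval for z is already as tight as the constraint allows; repeating the contraction reaches a fixed point.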

References

  1. Beal, L. (2018). "GEKKO Optimization Suite". Processes. 6 (8): 106. doi: 10.3390/pr6080106 .
  2. W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems, Vol. 187, Springer 1981.
  3. Mojica, J. (2017). "Optimal combined long-term facility design and short-term operational strategy for CHP capacity investments". Energy. 118: 97–115. doi:10.1016/j.energy.2016.12.009.
  4. Eaton, A. (2017). "Real time model identification using multi-fidelity models in managed pressure drilling". Computers & Chemical Engineering. 97: 76–84. doi:10.1016/j.compchemeng.2016.11.008.
  5. Eaton, A. (2015). "Post-installed fiber optic pressure sensors on subsea production risers for severe slugging control" (PDF). OMAE 2015 Proceedings, St. John's, Canada.
  6. Powell, K. (2014). "Dynamic Optimization of a Hybrid Solar Thermal and Fossil Fuel System". Solar Energy. 108: 210–218. Bibcode:2014SoEn..108..210P. doi:10.1016/j.solener.2014.07.004.
  7. Spivey, B. (2010). "Dynamic Modeling of Reliability Constraints in Solid Oxide Fuel Cells and Implications for Advanced Control" (PDF). AIChE Annual Meeting Proceedings, Salt Lake City, Utah.
  8. Spivey, B. (2012). "Dynamic modeling, simulation, and MIMO predictive control of a tubular solid oxide fuel cell". Journal of Process Control. 22 (8): 1502–1520. doi:10.1016/j.jprocont.2012.01.015.
  9. Hedengren, J. (2018). New flow assurance system with high speed subsea fiber optic monitoring of pressure and temperature. ASME 37th International Conference on Ocean, Offshore and Arctic Engineering, OMAE2018/78079, Madrid, Spain. pp. V005T04A034. doi:10.1115/OMAE2018-78079. ISBN   978-0-7918-5124-1.
  10. Udy, J. (2017). "Reduced order modeling for reservoir injection optimization and forecasting" (PDF). FOCAPO / CPC 2017, Tucson, AZ.
  11. Valderrama, F. (2018). "An optimal control approach to steam distillation of essential oils from aromatic plants". Computers & Chemical Engineering. 117: 25–31. doi:10.1016/j.compchemeng.2018.05.009.
  12. Sun, L. (2013). "Optimal Trajectory Generation using Model Predictive Control for Aerially Towed Cable Systems" (PDF). Journal of Guidance, Control, and Dynamics. 37 (2): 525–539. Bibcode:2014JGCD...37..525S. doi:10.2514/1.60820.
  13. Beal, L. (2018). "Integrated scheduling and control in discrete-time with dynamic parameters and constraints". Computers & Chemical Engineering. 115: 361–376. doi: 10.1016/j.compchemeng.2018.04.010 .
  14. Beal, L. (2017). "Combined model predictive control and scheduling with dominant time constant compensation". Computers & Chemical Engineering. 104: 271–282. doi:10.1016/j.compchemeng.2017.04.024.
  15. Beal, L. (2017). "Economic benefit from progressive integration of scheduling and control for continuous chemical processes" (PDF). Processes. 5 (4): 84. doi: 10.3390/pr5040084 .
  16. Petersen, D. (2017). "Combined noncyclic scheduling and advanced control for continuous chemical processes" (PDF). Processes. 5 (4): 83. doi: 10.3390/pr5040083 . S2CID   3354604.
  17. Hedengren, J. (2018). "Special issue: combined scheduling and control". Processes. 6 (3): 24. doi: 10.3390/pr6030024 .
  18. Mittelmann, Hans (1 May 2018). "Decision Tree for Optimization Software". Plato. Arizona State University. Retrieved 1 May 2018. Object-oriented python library for mixed-integer and differential-algebraic equations
  19. "Solver Solutions". Advanced Process Solutions, LLC. Retrieved 1 May 2018. GEKKO Python with APOPT or BPOPT Solvers
  20. Colling, Everton. "Dynamic Optimization Projects". Petrobras. Petrobras, Statoil, Facebook. Retrieved 1 May 2018. Example Presentation: Everton Colling of Petrobras shares his experience with GEKKO for modeling and nonlinear control of distillation
  21. "APMonitor Google Group: GEKKO". Google. Retrieved 1 May 2018.
  22. "Computational Science: Is there a high quality nonlinear programming solver for Python?". SciComp. Retrieved 1 May 2018.
  23. Kantor, Jeff (2 May 2018). "TCLab Documentation" (PDF). ReadTheDocs. University of Notre Dame. Retrieved 2 May 2018. pip install tclab
  24. Kantor, Jeff (2 May 2018). "Chemical Process Control". GitHub. University of Notre Dame. Retrieved 2 May 2018. Using the Temperature Control Lab (TCLab)
  25. Hedengren, John (2 May 2018). "Advanced Temperature Control Lab". Dynamic Optimization Course. Brigham Young University. Retrieved 2 May 2018. Hands-on applications of advanced temperature control
  26. Sandrock, Carl (2 May 2018). "Jupyter notebooks for Dynamics and Control". GitHub. University of Pretoria, South Africa. Retrieved 2 May 2018. CPN321 (Process Dynamics), and CPB421 (Process Control) at the Chemical Engineering department of the University of Pretoria
  27. "CACHE News (Winter 2018): Incorporating Dynamic Simulation into Chemical Engineering Curricula" (PDF). CACHE: Computer Aids for Chemical Engineering. University of Texas at Austin. 2 May 2018. Retrieved 2 May 2018. Short Course at the ASEE 2017 Summer School hosted at SCSU by Hedengren (BYU), Grover (Georgia Tech), and Badgwell (ExxonMobil)