| Developer(s) | Logan Beal and John D. Hedengren |
| --- | --- |
| Stable release | 1.2.1 / July 3, 2024 |
| Repository | |
| Operating system | Cross-platform |
| Type | Technical computing |
| License | MIT |
| Website | gekko |
The GEKKO Python package [1] solves large-scale mixed-integer and differential algebraic equations with nonlinear programming solvers (IPOPT, APOPT, BPOPT, SNOPT, MINOS). Modes of operation include machine learning, data reconciliation, real-time optimization, dynamic simulation, and nonlinear model predictive control. In addition, the package solves linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed-integer programming (MIP), and mixed-integer linear programming (MILP) problems. GEKKO is available in Python and is installed with pip from PyPI.
pip install gekko
GEKKO works on all platforms and with Python 2.7 and 3+. By default, the problem is sent to a public server where the solution is computed and returned to Python. There are Windows, macOS, Linux, and ARM (Raspberry Pi) processor options to solve without an Internet connection. GEKKO is an extension of the APMonitor Optimization Suite but has integrated the modeling and solution visualization directly within Python. A mathematical model is expressed in terms of variables and equations, such as the Hock & Schittkowski Benchmark Problem #71 [2] used to test the performance of nonlinear programming solvers. This particular optimization problem minimizes the objective function $x_1 x_4 (x_1 + x_2 + x_3) + x_3$, subject to the inequality constraint $x_1 x_2 x_3 x_4 \ge 25$ and the equality constraint $x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40$. The four variables must be between a lower bound of 1 and an upper bound of 5. The initial guess values are $(x_1, x_2, x_3, x_4) = (1, 5, 5, 1)$. This optimization problem is solved with GEKKO as shown below.
from gekko import GEKKO
m = GEKKO()  # Initialize gekko
# Initialize variables
x1 = m.Var(value=1, lb=1, ub=5)
x2 = m.Var(value=5, lb=1, ub=5)
x3 = m.Var(value=5, lb=1, ub=5)
x4 = m.Var(value=1, lb=1, ub=5)
# Equations
m.Equation(x1*x2*x3*x4 >= 25)
m.Equation(x1**2 + x2**2 + x3**2 + x4**2 == 40)
m.Minimize(x1*x4*(x1+x2+x3) + x3)
m.solve(disp=False)  # Solve
print("x1: " + str(x1.value))
print("x2: " + str(x2.value))
print("x3: " + str(x3.value))
print("x4: " + str(x4.value))
print("Objective: " + str(m.options.objfcnval))
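Although the benchmark above is a continuous NLP, the same model-building syntax extends to the mixed-integer problem classes listed earlier. The following is a minimal sketch (the objective, constraint, and bounds are illustrative and not from the original article) of a small MILP with integer decision variables solved by the APOPT solver.

from gekko import GEKKO
m = GEKKO()
# integer decision variables (hypothetical bounds for illustration)
x = m.Var(integer=True, lb=0, ub=10)
y = m.Var(integer=True, lb=0, ub=10)
m.Equation(2*x + 3*y <= 12)   # linear resource constraint
m.Maximize(3*x + 2*y)         # linear objective
m.options.SOLVER = 1          # APOPT supports integer variables
m.solve(disp=False)
print("x: " + str(x.value[0]) + ", y: " + str(y.value[0]))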
Applications include cogeneration (power and heat), [3] drilling automation, [4] severe slugging control, [5] solar thermal energy production, [6] solid oxide fuel cells, [7] [8] flow assurance, [9] enhanced oil recovery, [10] essential oil extraction, [11] and unmanned aerial vehicles (UAVs). [12] There are many other references to APMonitor and GEKKO as a sample of the types of applications that can be solved. GEKKO was developed with support from National Science Foundation (NSF) research grant #1547110 [13] [14] [15] [16] and is detailed in a Special Issue collection on combined scheduling and control. [17] Other notable mentions of GEKKO are the listing in the Decision Tree for Optimization Software, [18] added support for APOPT and BPOPT solvers, [19] and project reports from international participants in the online Dynamic Optimization course. [20] GEKKO is a topic in online forums where users are solving optimization and optimal control problems. [21] [22] GEKKO is used for advanced control in the Temperature Control Lab (TCLab) [23] for process control education at 20 universities. [24] [25] [26] [27]
One application of machine learning is to perform regression from training data to build a correlation. In this example, deep learning generates a model from training data that is generated with the function $y = 1 - \cos(x)$. An artificial neural network with three layers is used for this example. The first layer is linear, the second layer has a hyperbolic tangent activation function, and the third layer is linear. The program produces parameter weights that minimize the sum of squared errors between the measured data points and the neural network predictions at those points. GEKKO uses gradient-based optimizers to determine the optimal weight values instead of standard methods such as backpropagation. The gradients are determined by automatic differentiation, similar to other popular packages. The problem is solved as a constrained optimization problem and is converged when the solver satisfies Karush–Kuhn–Tucker conditions. Using a gradient-based optimizer allows additional constraints that may be imposed with domain knowledge of the data or system.
from gekko import brain
import numpy as np
b = brain.Brain()
b.input_layer(1)
b.layer(linear=3)
b.layer(tanh=3)
b.layer(linear=3)
b.output_layer(1)
x = np.linspace(-np.pi, 3*np.pi, 20)
y = 1 - np.cos(x)
b.learn(x, y)
The neural network model is tested across the range of training data as well as for extrapolation to demonstrate poor predictions outside of the training data. Predictions outside the training data set are improved with hybrid machine learning that uses fundamental principles (if available) to impose a structure that is valid over a wider range of conditions. In the example above, the hyperbolic tangent activation function (hidden layer 2) could be replaced with a sine or cosine function to improve extrapolation. The final part of the script displays the neural network model, the original function, and the sampled data points used for fitting.
import matplotlib.pyplot as plt
xp = np.linspace(-2*np.pi, 4*np.pi, 100)
yp = b.think(xp)
plt.figure()
plt.plot(x, y, "bo")
plt.plot(xp, yp[0], "r-")
plt.show()
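A minimal sketch of the hybrid approach mentioned above, assuming a periodic model structure $y \approx a - b\cos(cx)$ for the same training data; the parameter names a, b, c and the initial guesses are illustrative, and convergence to the underlying values depends on the starting point.

from gekko import GEKKO
import numpy as np

# same training data as in the neural network example
x = np.linspace(-np.pi, 3*np.pi, 20)
y = 1 - np.cos(x)

m = GEKKO()
a = m.FV(value=0.5); a.STATUS = 1            # adjustable coefficients
b = m.FV(value=0.5); b.STATUS = 1
c = m.FV(value=0.8, lb=0.1, ub=2); c.STATUS = 1
xd = m.Param(value=x)
yd = m.Param(value=y)
ym = m.Var()
m.Equation(ym == a - b*m.cos(c*xd))          # assumed periodic structure
m.Minimize((ym - yd)**2)                     # sum of squared errors over the data
m.options.IMODE = 2                          # regression mode
m.solve(disp=False)
print(a.value[0], b.value[0], c.value[0])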
Optimal control is the use of mathematical optimization to obtain a policy that is constrained by differential equations $\dot{x} = f(x,u)$, equality constraints $g(x,u) = 0$, or inequality constraints $h(x,u) \ge 0$ and minimizes an objective/reward function $J(x,u)$. The basic optimal control problem below minimizes $x_2(t_f)$ subject to $\dot{x}_1 = u$, $\dot{x}_2 = \tfrac{1}{2}x_1^2$, $x_1(0) = 1$, $x_2(0) = 0$, and $-1 \le u \le 1$ on $t \in [0, 2]$. It is solved with GEKKO by integrating the objective and transcribing the differential equations into algebraic form with orthogonal collocation on finite elements.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()  # initialize gekko
nt = 101
m.time = np.linspace(0, 2, nt)
# Variables
x1 = m.Var(value=1)
x2 = m.Var(value=0)
u = m.Var(value=0, lb=-1, ub=1)
p = np.zeros(nt)  # mark final time point
p[-1] = 1.0
final = m.Param(value=p)
# Equations
m.Equation(x1.dt() == u)
m.Equation(x2.dt() == 0.5*x1**2)
m.Minimize(x2*final)
m.options.IMODE = 6  # optimal control mode
m.solve()  # solve
plt.figure(1)  # plot results
plt.plot(m.time, x1.value, "k-", label=r"$x_1$")
plt.plot(m.time, x2.value, "b-", label=r"$x_2$")
plt.plot(m.time, u.value, "r--", label=r"$u$")
plt.legend(loc="best")
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
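As an illustration only (not from the original article), a small satisfiability check such as "a AND NOT b" can be posed as an integer feasibility problem in GEKKO with binary variables.

from gekko import GEKKO
m = GEKKO()
# binary variables for the propositions a and b
a = m.Var(lb=0, ub=1, integer=True)
b = m.Var(lb=0, ub=1, integer=True)
# "a AND NOT b" holds exactly when both literals are TRUE
m.Equation(a + (1 - b) == 2)
m.options.SOLVER = 1   # APOPT handles integer variables
m.solve(disp=False)
print("a =", int(a.value[0]), " b =", int(b.value[0]))  # expected: a=1, b=0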
In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$ is a better approximation of the root than $x_0$.
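A minimal Python sketch of the iteration; the test function, derivative, and starting point are illustrative.

def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    # Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# example: a root of f(x) = x**2 - 2 near x0 = 1 approximates sqrt(2)
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))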
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming.
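A minimal sketch of a linear program in GEKKO; the coefficients and constraints are illustrative and not from the original article.

from gekko import GEKKO
m = GEKKO()
x = m.Var(lb=0)                # continuous decision variables
y = m.Var(lb=0)
m.Equation(x + 2*y <= 14)      # linear constraints
m.Equation(3*x - y >= 0)
m.Equation(x - y <= 2)
m.Maximize(3*x + 4*y)          # linear objective
m.solve(disp=False)
print(x.value[0], y.value[0])  # expected optimum near x=6, y=4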
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years.
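A minimal Python sketch of the secant iteration; the starting points and test function are illustrative.

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    # secant iteration: x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            break
        x2 = x1 - f1*(x1 - x0)/(f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# example: a root of x**2 - 2 from starting points 1 and 2
print(secant(lambda x: x**2 - 2, 1.0, 2.0))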
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation of the extrema of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.
Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
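A minimal sketch of MPC in GEKKO (IMODE=6), assuming a simple first-order process model and illustrative tuning values; the variable names and numbers are not from the original article.

from gekko import GEKKO
import numpy as np

m = GEKKO()
m.time = np.linspace(0, 10, 41)

u = m.MV(value=0, lb=0, ub=100)   # manipulated variable
u.STATUS = 1                      # let the optimizer move u
u.DCOST = 0.1                     # penalize rapid moves

x = m.CV(value=0)                 # controlled variable
x.STATUS = 1
x.SP = 40                         # setpoint
x.TAU = 2                         # desired response time constant

m.Equation(5*x.dt() == -x + 0.8*u)  # assumed first-order process model

m.options.IMODE = 6               # model predictive control mode
m.options.CV_TYPE = 2             # squared-error objective
m.solve(disp=False)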
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
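A generic discrete-time form of the equation, with state $x$, feasible action set $\Gamma(x)$, payoff $F$, transition map $T$, and discount factor $\beta$ (notation chosen for illustration):

V(x) = \max_{a \in \Gamma(x)} \Big\{ F(x,a) + \beta \, V\big(T(x,a)\big) \Big\}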
Muller's method is a root-finding algorithm, a numerical method for solving equations of the form f(x) = 0. It was first presented by David E. Muller in 1956.
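A minimal Python sketch of Muller's method, which fits a parabola through three iterates and takes the nearer root of that parabola; the test function and starting points are illustrative, and cmath allows complex roots.

import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)        # quadratic coefficient
        b = a*h1 + d1
        c = f(x2)
        rad = cmath.sqrt(b*b - 4*a*c)
        denom = b + rad if abs(b + rad) > abs(b - rad) else b - rad
        x3 = x2 - 2*c/denom              # root of the fitted parabola
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# example: a real root of x**3 - x - 2 near 1.5
print(muller(lambda x: x**3 - x - 2, 1.0, 1.5, 2.0))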
In transportation engineering, traffic flow is the study of interactions between travellers and infrastructure, with the aim of understanding and developing an optimal transport network with efficient movement of traffic and minimal traffic congestion problems.
In physics and mathematics, the Ikeda map is a discrete-time dynamical system given by the complex map $z_{n+1} = A + B z_n e^{i(|z_n|^2 + C)}$.
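A minimal sketch that iterates the commonly used real-valued form of the map; the parameter value u and the starting point are illustrative.

import numpy as np

def ikeda_trajectory(u=0.918, n=2000, x0=0.1, y0=0.1):
    # iterate the real-valued form of the Ikeda map
    x, y = x0, y0
    pts = []
    for _ in range(n):
        t = 0.4 - 6.0/(1.0 + x*x + y*y)
        x, y = 1.0 + u*(x*np.cos(t) - y*np.sin(t)), u*(x*np.sin(t) + y*np.cos(t))
        pts.append((x, y))
    return np.array(pts)

print(ikeda_trajectory(n=5))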
The Lorenz system is a system of ordinary differential equations first studied by mathematician and meteorologist Edward Lorenz. It is notable for having chaotic solutions for certain parameter values and initial conditions. In particular, the Lorenz attractor is a set of chaotic solutions of the Lorenz system. The term "butterfly effect" in popular media may stem from the real-world implications of the Lorenz attractor, namely that tiny changes in initial conditions evolve to completely different trajectories. This underscores that chaotic systems can be completely deterministic and yet still be inherently impractical or even impossible to predict over longer periods of time. For example, even the small flap of a butterfly's wings could set the earth's atmosphere on a vastly different trajectory, in which, for example, a hurricane occurs where it otherwise would not have. The shape of the Lorenz attractor itself, when plotted in phase space, may also be seen to resemble a butterfly.
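Since GEKKO also performs dynamic simulation, the following is a minimal sketch of integrating the Lorenz equations with the classic parameter values σ = 10, ρ = 28, β = 8/3; the time horizon and initial conditions are chosen for illustration.

from gekko import GEKKO
import numpy as np

m = GEKKO()
m.time = np.linspace(0, 10, 501)
sigma, rho, beta = 10.0, 28.0, 8.0/3.0   # classic parameter values
x = m.Var(value=1.0)
y = m.Var(value=1.0)
z = m.Var(value=1.0)
m.Equation(x.dt() == sigma*(y - x))
m.Equation(y.dt() == x*(rho - z) - y)
m.Equation(z.dt() == x*y - beta*z)
m.options.IMODE = 7   # sequential dynamic simulation
m.solve(disp=False)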
Advanced process monitor (APMonitor) is a modeling language for differential algebraic (DAE) equations. It is a free web-service or local server for solving representations of physical systems in the form of implicit DAE models. APMonitor is suited for large-scale problems and solves linear programming, integer programming, nonlinear programming, nonlinear mixed integer programming, dynamic simulation, moving horizon estimation, and nonlinear model predictive control. APMonitor does not solve the problems directly, but calls nonlinear programming solvers such as APOPT, BPOPT, IPOPT, MINOS, and SNOPT. The APMonitor API provides exact first and second derivatives of continuous functions to the solvers through automatic differentiation and in sparse matrix form.
The PROPT MATLAB Optimal Control Software is a new generation platform for solving applied optimal control and parameters estimation problems.
JModelica.org is a commercial software platform based on the Modelica modeling language for modeling, simulating, optimizing and analyzing complex dynamic systems. The platform is maintained and developed by Modelon AB in collaboration with academic and industrial institutions, notably Lund University and the Lund Center for Control of Complex Systems (LCCC). The platform has been used in industrial projects with applications in robotics, vehicle systems, energy systems, CO2 separation and polyethylene production.
Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.
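A minimal sketch of MHE in GEKKO (IMODE=5), estimating a single rate constant k in dx/dt = -k x from noisy measurements; the synthetic data and tuning values are illustrative.

from gekko import GEKKO
import numpy as np

# synthetic measurements for illustration (true k = 0.7)
np.random.seed(0)
t = np.linspace(0, 5, 26)
x_meas = 2.0*np.exp(-0.7*t) + np.random.normal(0, 0.05, 26)

m = GEKKO()
m.time = t
k = m.FV(value=0.2, lb=0.01, ub=5)   # unknown parameter to estimate
k.STATUS = 1                         # allow the optimizer to adjust k
x = m.CV(value=x_meas)               # measured state
x.FSTATUS = 1                        # include measurements in the objective
m.Equation(x.dt() == -k*x)
m.options.IMODE = 5                  # moving horizon estimation
m.options.EV_TYPE = 2                # squared-error objective
m.solve(disp=False)
print("estimated k:", k.value[0])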
APOPT is a software package for solving large-scale optimization problems of any of these forms: linear programming (LP), quadratic programming (QP), quadratically constrained quadratic programming (QCQP), nonlinear programming (NLP), mixed-integer programming (MIP), mixed-integer linear programming (MILP), and mixed-integer nonlinear programming (MINLP).
Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximators that can embed the knowledge of any physical laws that govern a given data-set in the learning process, and can be described by partial differential equations (PDEs). They overcome the low data availability of some biological and engineering systems that makes most state-of-the-art machine learning techniques lack robustness, rendering them ineffective in these scenarios. The prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the correctness of the function approximation. This way, embedding this prior information into a neural network results in enhancing the information content of the available data, facilitating the learning algorithm to capture the right solution and to generalize well even with a low amount of training examples.
Object-oriented Python library for mixed-integer and differential-algebraic equations
GEKKO Python with APOPT or BPOPT Solvers
Example Presentation: Everton Colling of Petrobras shares his experience with GEKKO for modeling and nonlinear control of distillation
pip install tclab
Using the Temperature Control Lab (TCLab)
Hands-on applications of advanced temperature control
CPN321 (Process Dynamics), and CPB421 (Process Control) at the Chemical Engineering department of the University of Pretoria
Short Course at the ASEE 2017 Summer School hosted at SCSU by Hedengren (BYU), Grover (Georgia Tech), and Badgwell (ExxonMobil)