Machine learning control

Machine learning control (MLC) is a subfield of machine learning, intelligent control, and control theory that solves optimal control problems with methods of machine learning. Key applications are complex nonlinear systems for which linear control theory methods are not applicable.

Types of problems and tasks

Four types of problems are commonly encountered.

  1. Control parameter identification: the structure of the control law is given but its parameters are unknown, so MLC reduces to a parameter identification problem, for example a genetic algorithm tuning the coefficients of a PID controller.
  2. Control design as regression problem of the first kind: MLC approximates a general nonlinear mapping from sensor signals to actuation commands when the optimal command is known for the sampled states, for example a neural network reproducing a known full-state feedback law from sensor data.
  3. Control design as regression problem of the second kind: MLC identifies an arbitrary nonlinear control law that minimizes the cost function of the plant, without requiring a model, a control-law structure, or known optimal commands; genetic programming is commonly used here.
  4. Reinforcement learning control: the control law is continually updated from measured performance changes (rewards).

MLC comprises, for instance, neural network control, genetic algorithm based control, genetic programming control, and reinforcement learning control, and it has methodological overlaps with other data-driven control approaches, such as those in artificial intelligence and robot control.
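
The flavor of these approaches can be illustrated with a toy example. The Python sketch below is a minimal illustration, not a reference implementation: the first-order plant, quadratic cost, and all numeric settings are assumptions chosen for brevity. It evolves the gain of a proportional controller using only the measured cost of closed-loop runs, the kind of model-free, performance-driven tuning that genetic-algorithm-based MLC performs on far more complex plants.

```python
import random

def plant_cost(gain, a=1.1, b=0.5, noise=0.05, steps=100, seed=0):
    """Simulate x[t+1] = a*x[t] + b*u[t] + w[t] with u = -gain*x and
    return the accumulated quadratic cost (hypothetical toy plant)."""
    rng = random.Random(seed)
    x, cost = 1.0, 0.0
    for _ in range(steps):
        u = -gain * x
        cost += x * x + 0.1 * u * u
        x = a * x + b * u + rng.gauss(0.0, noise)
    return cost

def evolve_gain(pop_size=20, generations=30, sigma=0.3):
    """Evolve a population of candidate gains using only the measured cost."""
    rng = random.Random(1)
    population = [rng.uniform(0.0, 4.0) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=plant_cost)
        parents = ranked[: pop_size // 4]          # selection
        population = parents + [
            p + rng.gauss(0.0, sigma)              # mutation
            for p in rng.choices(parents, k=pop_size - len(parents))
        ]
    return min(population, key=plant_cost)

best = evolve_gain()
print(f"evolved gain: {best:.3f}, cost: {plant_cost(best):.2f}")
```

No model of the plant enters the optimization; the controller is judged purely by the cost it incurs in simulation, which is the defining feature of this family of methods.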

Applications

MLC has been successfully applied to many nonlinear control problems, exploring unknown and often unexpected actuation mechanisms. Example applications include attitude control of satellites, thermal control of buildings, feedback turbulence control and drag reduction, and remotely operated underwater vehicles; many further engineering applications are surveyed by Fleming & Purshouse (2002).

As with all general nonlinear methods, MLC comes with no guarantee of convergence, optimality or robustness over a range of operating conditions.

Related Research Articles

<span class="mw-page-title-main">Artificial neural network</span> Computational model used in machine learning, based on connected, hierarchical functions

Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets, are computing systems inspired by the biological neural networks that constitute animal brains.
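
As a concrete illustration of "connected, hierarchical functions", the following minimal sketch (toy data, network size, and all constants are arbitrary choices) trains a two-layer network on the XOR problem with plain gradient descent.

```python
import numpy as np

# Toy data: XOR, a classic task that a single linear unit cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer (8 units, arbitrary)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    # Forward pass: two layers of connected, nonlinear functions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, propagated layer by layer.
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ dp
    b2 -= lr * dp.sum(axis=0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(axis=0)

print(np.round(p.ravel(), 2))   # approaches [0, 1, 1, 0]
```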

<span class="mw-page-title-main">Genetic algorithm</span> Competitive algorithm for searching a problem space

In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, etc.
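
A minimal sketch of these operators, here applied to the toy OneMax problem (maximize the number of 1-bits in a string); the population size, mutation rate, and tournament selection scheme are illustrative choices.

```python
import random

rng = random.Random(42)
N_BITS, POP, GENS = 40, 60, 80

def fitness(bits):
    """OneMax: count of 1-bits; stands in for any problem-specific score."""
    return sum(bits)

def select(pop):
    """Tournament selection: the fitter of two randomly drawn individuals."""
    a, b = rng.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """Single-point crossover."""
    cut = rng.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=1.0 / N_BITS):
    """Flip each bit with a small probability."""
    return [1 - b if rng.random() < rate else b for b in bits]

population = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]
best = max(population, key=fitness)
print(fitness(best), "of", N_BITS)
```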

<span class="mw-page-title-main">Reinforcement learning</span> Field of machine learning

Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
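
A minimal sketch of one classic RL algorithm, tabular Q-learning, on a hypothetical six-state corridor where reward is only received at the goal; all constants are illustrative.

```python
import random

# A tiny corridor: states 0..5, the only reward is at the right end (state 5).
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                       # move left or move right
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
rng = random.Random(0)

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(qvals):
    """Argmax with random tie-breaking."""
    best = max(qvals)
    return rng.choice([i for i, q in enumerate(qvals) if q == best])

Q = [[0.0, 0.0] for _ in range(N_STATES)]
for episode in range(500):
    s = 0
    for _ in range(200):                 # cap episode length
        a = rng.randrange(2) if rng.random() < epsilon else greedy(Q[s])
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: move Q(s, a) toward reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

print([round(max(q), 2) for q in Q])     # state values increase toward the goal
```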

In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place after the repeated application of the above operators.
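
For example, a bare-bones (mu + lambda) evolution strategy minimizing a simple quadratic ("sphere") fitness function; the objective and all settings are placeholders.

```python
import random

def sphere(x):
    """Fitness to minimize: a hypothetical stand-in objective."""
    return sum(v * v for v in x)

rng = random.Random(0)
DIM, MU, LAMBDA, SIGMA = 5, 5, 25, 0.3

# (mu + lambda) selection: parents compete with their offspring for survival.
parents = [[rng.uniform(-5, 5) for _ in range(DIM)] for _ in range(MU)]
for gen in range(200):
    offspring = []
    for _ in range(LAMBDA):
        p1, p2 = rng.sample(parents, 2)
        # Recombination by averaging, then Gaussian mutation.
        child = [(a + b) / 2 + rng.gauss(0, SIGMA) for a, b in zip(p1, p2)]
        offspring.append(child)
    # Selection: keep the best mu individuals from parents and offspring.
    parents = sorted(parents + offspring, key=sphere)[:MU]

print(round(sphere(parents[0]), 6))
```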

<span class="mw-page-title-main">Evolutionary computation</span> Trial and error problem solvers with a metaheuristic or stochastic optimization character

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.

<span class="mw-page-title-main">Particle swarm optimization</span> Iterative simulation method

In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
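
A minimal sketch of these update rules on a hypothetical two-dimensional objective; the inertia and acceleration coefficients are typical textbook values, not tuned ones.

```python
import random

def objective(x, y):
    """Quality measure to minimize (hypothetical test function)."""
    return (x - 3) ** 2 + (y + 1) ** 2

rng = random.Random(0)
N, W, C1, C2 = 30, 0.7, 1.5, 1.5     # swarm size and standard PSO coefficients

pos = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]                       # each particle's best known position
gbest = min(pbest, key=lambda p: objective(*p))   # swarm's best known position

for _ in range(100):
    for i in range(N):
        for d in range(2):
            r1, r2 = rng.random(), rng.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward personal best
                         + C2 * r2 * (gbest[d] - pos[i][d]))     # pull toward global best
            pos[i][d] += vel[i][d]
        if objective(*pos[i]) < objective(*pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=lambda p: objective(*p))

print([round(v, 3) for v in gbest])   # converges near (3, -1)
```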

Intelligent control is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.

<span class="mw-page-title-main">Learning classifier system</span> Paradigm of rule-based machine learning methods

Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component with a learning component. Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions. This approach allows complex solution spaces to be broken up into smaller, simpler parts.

In computer science and operations research, a memetic algorithm (MA) is an extension of the traditional genetic algorithm that uses a local search technique to reduce the likelihood of premature convergence; it may provide a sufficiently good solution to an optimization problem.
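
A minimal sketch of the idea, assuming a toy one-dimensional objective: an evolutionary outer loop whose offspring are refined by a simple hill-climbing local search.

```python
import random

def f(x):
    """Objective to minimize (hypothetical)."""
    return (x - 2.0) ** 2 + 0.5 * abs(x)

def local_search(x, step=0.05, iters=20):
    """Simple hill climbing: the 'memetic' refinement applied to each individual."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
    return x

rng = random.Random(0)
pop = [rng.uniform(-10, 10) for _ in range(20)]
for _ in range(30):
    pop.sort(key=f)
    parents = pop[:5]                                               # selection
    children = [rng.choice(parents) + rng.gauss(0, 0.5) for _ in range(15)]
    pop = parents + [local_search(c) for c in children]             # GA step + local refinement

print(round(min(pop, key=f), 3))
```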

Multi-objective optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and pollutant emissions of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
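
The trade-off structure can be made concrete by extracting the non-dominated (Pareto-optimal) candidates from a set of alternatives. The sketch below uses invented (cost, fuel-consumption) pairs purely for illustration.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the set of best available trade-offs."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate cars as (cost, fuel consumption) pairs, both to be minimized.
candidates = [(20, 8.0), (25, 6.5), (30, 5.0), (22, 9.0), (35, 5.1), (28, 5.5)]
print(pareto_front(candidates))
# -> [(20, 8.0), (25, 6.5), (30, 5.0), (28, 5.5)]; (22, 9.0) and (35, 5.1) are dominated.
```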

In recent years, biologically inspired methods such as evolutionary algorithms have been increasingly employed to solve and analyze complex computational problems. BELBIC (brain emotional learning based intelligent controller) is one such controller. Proposed by Caro Lucas, Danial Shahmirzadi and Nima Sheikholeslami, it adopts the network model developed by Moren and Balkenius to mimic those parts of the brain that are known to produce emotion.

Design automation usually refers to electronic design automation, or to design automation in the sense of a product configurator. Extending computer-aided design (CAD), automated design and computer-automated design (CAutoD) are concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimization, and the invention of novel systems.

<span class="mw-page-title-main">Meta-optimization</span>

In numerical optimization, meta-optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as the late 1970s by Mercer and Sampson for finding optimal parameter settings of a genetic algorithm.
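
A minimal sketch of the idea, assuming a toy inner optimizer: an outer search over the step size of a (1+1) hill climber, scored by the average result the inner optimizer achieves.

```python
import random

def inner_optimizer(sigma, seed, iters=200):
    """A simple (1+1) hill climber whose performance depends on its step size sigma."""
    rng = random.Random(seed)
    f = lambda x: (x - 7.0) ** 2
    x = rng.uniform(-50, 50)
    for _ in range(iters):
        cand = x + rng.gauss(0, sigma)
        if f(cand) < f(x):
            x = cand
    return f(x)

def meta_optimize(sigmas, trials=20):
    """Outer loop: pick the sigma that gives the best average inner result."""
    scores = {s: sum(inner_optimizer(s, seed) for seed in range(trials)) / trials
              for s in sigmas}
    return min(scores, key=scores.get), scores

best_sigma, scores = meta_optimize([0.01, 0.1, 1.0, 10.0, 100.0])
print("best step size:", best_sigma)
```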

Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions.
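
A minimal sketch, assuming a cheap stand-in for the expensive black box: a Gaussian-process surrogate with an RBF kernel and an upper-confidence-bound rule chooses where to evaluate next. Practical Bayesian optimization also tunes the kernel hyperparameters and often uses acquisition functions such as expected improvement.

```python
import numpy as np

def expensive_f(x):
    """Black-box function we pretend is costly to evaluate (hypothetical)."""
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    """Gaussian-process posterior mean and standard deviation at query points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 3)           # a few initial evaluations
y = expensive_f(X)
grid = np.linspace(-2, 2, 200)

for _ in range(10):                 # sequential design loop
    mu, sigma = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * sigma          # upper confidence bound acquisition
    x_next = grid[np.argmax(ucb)]   # evaluate where the model is promising or uncertain
    X = np.append(X, x_next)
    y = np.append(y, expensive_f(x_next))

print(f"best x found: {X[np.argmax(y)]:.3f}, value: {y.max():.3f}")
```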

<span class="mw-page-title-main">Symbolic regression</span> Type of regression analysis

Symbolic regression (SR) is a type of regression analysis that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity.
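
A minimal sketch of the search space involved, assuming a toy dataset generated from y = x^2 + 2x: plain random search over small expression trees built from x, constants, and +, -, *. Practical SR systems explore the same space far more efficiently, typically with genetic programming.

```python
import random

rng = random.Random(0)
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_expr(depth=3):
    """Grow a random expression tree over x, small constants, and +, -, *."""
    if depth == 0 or rng.random() < 0.3:
        return 'x' if rng.random() < 0.5 else round(rng.uniform(-3, 3), 1)
    return (rng.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, float):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def to_str(expr):
    if not isinstance(expr, tuple):
        return str(expr)
    return f"({to_str(expr[1])} {expr[0]} {to_str(expr[2])})"

# Data from a hidden target, here y = x^2 + 2x (a stand-in for a measured dataset).
xs = [i / 2 for i in range(-6, 7)]
ys = [x * x + 2 * x for x in xs]

def error(expr):
    """Sum of squared errors: accuracy part of the SR objective."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in zip(xs, ys))

best = min((random_expr() for _ in range(30000)), key=error)
print(to_str(best), "error:", round(error(best), 3))
```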

<span class="mw-page-title-main">Outline of machine learning</span> Overview of and topical guide to machine learning

The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters are learned.
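
A minimal sketch of the distinction, assuming synthetic data: the ridge penalty lambda is a hyperparameter chosen by grid search on a validation set, while the regression weights are the learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (hypothetical stand-in for a real dataset).
X = rng.normal(size=(80, 5))
w_true = np.array([1.5, -2.0, 0.0, 0.7, 0.0])
y = X @ w_true + rng.normal(scale=0.5, size=80)
X_train, y_train, X_val, y_val = X[:60], y[:60], X[60:], y[60:]

def fit_ridge(X, y, lam):
    """Ridge regression: lam is the hyperparameter, the weights are learned parameters."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def val_error(lam):
    w = fit_ridge(X_train, y_train, lam)
    return float(np.mean((X_val @ w - y_val) ** 2))

# Grid search: the simplest hyperparameter-tuning strategy.
grid = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(grid, key=val_error)
print("best lambda:", best_lam, "validation MSE:", round(val_error(best_lam), 4))
```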

Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics.

Frank L. Lewis is an American electrical engineer, academic and researcher. He is a professor of electrical engineering, Moncrief-O’Donnell Endowed Chair, and head of Advanced Controls and Sensors Group at The University of Texas at Arlington (UTA). He is a member of UTA Academy of Distinguished Teachers and a charter member of UTA Academy of Distinguished Scholars.

References

  1. Thomas Bäck & Hans-Paul Schwefel (Spring 1993) "An overview of evolutionary algorithms for parameter optimization", Evolutionary Computation (MIT Press), vol. 1, no. 1, pp. 1-23.
  2. N. Benard, J. Pons-Prats, J. Periaux, G. Bugeda, J.-P. Bonnet & E. Moreau (2015) "Multi-Input Genetic Algorithm for Experimental Optimization of the Reattachment Downstream of a Backward-Facing Step with Surface Plasma Actuator", Paper AIAA 2015-2957, 46th AIAA Plasmadynamics and Lasers Conference, Dallas, TX, USA, pp. 1-23.
  3. Zbigniew Michalewicz, Cezary Z. Janikow & Jacek B. Krawczyk (July 1992) "A modified genetic algorithm for optimal control problems", Computers & Mathematics with Applications, vol. 23, no. 12, pp. 83-94.
  4. C. Lee, J. Kim, D. Babcock & R. Goodman (1997) "Application of neural networks to turbulence control for drag reduction", Physics of Fluids, vol. 9, no. 6, pp. 1740-1747.
  5. D. C. Dracopoulos & S. Kent (December 1997) "Genetic programming for prediction and control", Neural Computing & Applications (Springer), vol. 6, no. 4, pp. 214-228.
  6. Andrew G. Barto (December 1994) "Reinforcement learning control", Current Opinion in Neurobiology, vol. 4, no. 6, pp. 888-893.
  7. Dimitris C. Dracopoulos & Antonia J. Jones (1994) "Neuro-genetic adaptive attitude control", Neural Computing & Applications (Springer), vol. 2, no. 4, pp. 183-204.
  8. Jonathan A. Wright, Heather A. Loosemore & Raziyeh Farmani (2002) "Optimization of building thermal design and control by multi-criterion genetic algorithm", Energy and Buildings, vol. 34, no. 9, pp. 959-972.
  9. Steven J. Brunton & Bernd R. Noack (2015) "Closed-loop turbulence control: Progress and challenges", Applied Mechanics Reviews, vol. 67, no. 5, article 050801, pp. 1-48.
  10. J. Javadi-Moghaddam & A. Bagheri (2010) "An adaptive neuro-fuzzy sliding mode based genetic algorithm control system for underwater remotely operated vehicle", Expert Systems with Applications, vol. 37, no. 1, pp. 647-660.
  11. Peter J. Fleming & R. C. Purshouse (2002) "Evolutionary algorithms in control systems engineering: a survey", Control Engineering Practice, vol. 10, no. 11, pp. 1223-1241.
