LIONsolver

Developer(s): Reactive Search srl
Stable release: 2.0.198 / October 9, 2011
Operating system: Windows, Mac OS X, Unix
Available in: English
Type: Business intelligence software
License: Proprietary software, free for academic use
Website: lionoso.com

LIONsolver is an integrated software suite for data mining, business intelligence, analytics, and modeling, built around a reactive business intelligence approach. [1] A non-profit version is also available as LIONoso.

LIONsolver is used to build models, visualize them, and improve business and engineering processes.

It is a tool for decision making based on data and quantitative models, and it can be connected to most databases and external programs.

The software is fully integrated with the Grapheur business intelligence software and is intended for more advanced users.

Overview

LIONsolver originates from research principles in Reactive Search Optimization, [2] which advocates the use of self-tuning schemes that act while a software system is running. Learning and Intelligent OptimizatioN refers to the integration of online machine learning schemes into the optimization software, so that it becomes capable of learning from its previous runs and from human feedback. Related approaches are Programming by Optimization, [3] which provides a direct way of defining design spaces involving Reactive Search Optimization, and Autonomous Search, [4] which advocates problem-solving algorithms that adapt themselves during the search.
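The self-tuning idea can be illustrated with a toy sketch (illustrative only, not LIONsolver code): a tabu-search tenure parameter that adapts online, growing when the search revisits solutions and shrinking otherwise, in the spirit of the reactive tabu search. [2]

```python
# Toy sketch of a reactive self-tuning parameter (illustrative only, not
# LIONsolver code): a tabu tenure that adapts while the search runs.
def reactive_tenure(revisited, tenure, grow=1.2, shrink=0.95, min_tenure=1.0):
    """Grow the tenure when a recent solution is revisited (diversify),
    otherwise let it decay slowly (intensify)."""
    if revisited:
        return tenure * grow
    return max(min_tenure, tenure * shrink)

t = 5.0
t = reactive_tenure(True, t)    # a cycle was detected: tenure grows
t = reactive_tenure(False, t)   # no cycle: tenure decays
```

The point is that no fixed tenure value is chosen in advance; feedback from the running search adjusts the parameter.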

Version 2.0 of the software was released on Oct 1, 2011, extending support beyond Windows to the Unix and Mac OS X operating systems.

The modeling components include neural networks, polynomials, locally weighted Bayesian regression, k-means clustering, and self-organizing maps. A free academic license for non-commercial use and class use is available.
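LIONsolver's own implementations are proprietary, but one of the listed components, k-means clustering, can be sketched in a few lines (an illustrative toy, not the product's code):

```python
# Minimal k-means clustering sketch (illustrative only; LIONsolver's own
# implementation is proprietary and not shown here).
def kmeans(points, k, iters=100):
    # Deterministic initialization: use the first k points as centroids.
    centroids = [list(p) for p in points[:k]]
    assign = None
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        new_assign = [
            min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            for p in points
        ]
        if new_assign == assign:   # converged: assignments are stable
            break
        assign = new_assign
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
labels, centers = kmeans(data, k=2)   # the two tight groups separate into two clusters
```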

The software architecture of LIONsolver [5] permits interactive multi-objective optimization, with a user interface for visualizing the results and facilitating the solution analysis and decision making process. The architecture allows for problem-specific extensions, and it is applicable as a post-processing tool for all optimization schemes with a number of different potential solutions. When the architecture is tightly coupled to a specific problem-solving or optimization method, effective interactive schemes where the final decision maker is in the loop can be developed. [6]

On Apr 24, 2013 LIONsolver won first prize in the Michael J. Fox Foundation–Kaggle Parkinson's Data Challenge, a contest leveraging "the wisdom of the crowd" to benefit people with Parkinson's disease. [7]

Related Research Articles

Distributed Artificial Intelligence (DAI) also called Decentralized Artificial Intelligence is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial-and-error problem solvers with a metaheuristic or stochastic optimization character.

In computer science, local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions by applying local changes, until a solution deemed optimal is found or a time bound elapses.
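The move-by-local-changes idea above can be sketched as a simple hill climber on bit strings (an illustrative toy, not tied to any particular library):

```python
# Hill-climbing local search sketch on bit strings (illustrative only):
# repeatedly apply the best improving single-bit flip until no local
# change improves the score, i.e. a local optimum is reached.
def local_search(bits, score):
    current = list(bits)
    while True:
        best, best_score = None, score(current)
        for i in range(len(current)):
            neighbor = current.copy()
            neighbor[i] ^= 1          # local change: flip one bit
            if score(neighbor) > best_score:
                best, best_score = neighbor, score(neighbor)
        if best is None:              # no improving neighbor: stop
            return current
        current = best

# ONE-MAX toy problem: the score counts ones, so the optimum is all ones.
result = local_search([0, 1, 0, 0], score=sum)
```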

Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of a real-valued function f(x) is equivalent to the minimization of the function g(x) := −f(x).

Ant colony optimization algorithms

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants stand for multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.

Multi-agent system

A multi-agent system is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.

Weka (software)

Waikato Environment for Knowledge Analysis (Weka) is a collection of free machine learning and data analysis software licensed under the GNU General Public License. It was developed at the University of Waikato, New Zealand, and is the companion software to the book "Data Mining: Practical Machine Learning Tools and Techniques".

In computer science and operations research, a memetic algorithm (MA) is an extension of the traditional genetic algorithm (GA) or more general evolutionary algorithm (EA). It may provide a sufficiently good solution to an optimization problem. It uses a suitable heuristic or local search technique to improve the quality of solutions generated by the EA and to reduce the likelihood of premature convergence.

Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.

Multi-objective optimization or Pareto optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and emission of pollutants of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
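The trade-off structure of such problems is captured by the Pareto front, which can be computed by a simple dominance filter (an illustrative sketch; the car-design numbers below are made up):

```python
# Pareto-front filtering sketch (illustrative; assumes all objectives are
# minimized): a solution is kept when no other solution is at least as good
# in every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cost, fuel consumption) pairs for candidate car designs.
designs = [(3, 9), (5, 4), (7, 2), (6, 5), (9, 9)]
front = pareto_front(designs)   # dominated designs are dropped
```

Every solution on the resulting front represents a different trade-off, and choosing among them is the decision maker's task.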

A hyper-heuristic is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.

Roberto Battiti

Roberto Battiti is an Italian computer scientist, Professor of computer science at the University of Trento, director of the LIONlab, and deputy director of the DISI Department and delegate for technology transfer.

ELKI

ELKI is a data mining software framework developed for use in research and teaching. It was originally developed at the database systems research unit of Professor Hans-Peter Kriegel at the Ludwig Maximilian University of Munich, Germany, and is now continued at the Technical University of Dortmund, Germany. It aims at allowing the development and evaluation of advanced data mining algorithms and their interaction with database index structures.

Given a system transforming a set of inputs to output values, described by a mathematical function f, optimization refers to the generation and selection of the best solution from some set of available alternatives, by systematically choosing input values from within an allowed set, computing the value of the function, and recording the best value found during the process. Many real-world and theoretical problems may be modeled in this general framework. For example, the inputs can be design parameters of a motor while the output can be the power consumption. Other inputs can be business choices, with the output being the obtained profit, or the configuration of a physical system, with the output being its energy.

Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions.

Data mining, the process of discovering patterns in large data sets, has been used in many applications.

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process.
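A minimal sketch of one common tuning strategy, grid search, follows (illustrative only; the hyperparameter names and the toy error surface are made up, standing in for training and validating a real learning algorithm):

```python
# Grid-search sketch for hyperparameter tuning (illustrative only; the
# hyperparameter names and the toy error surface below are made up).
import itertools

def grid_search(grid, validation_error):
    names = list(grid.keys())
    # Evaluate every combination of candidate values; keep the best one.
    best = min(itertools.product(*grid.values()),
               key=lambda combo: validation_error(dict(zip(names, combo))))
    return dict(zip(names, best))

grid = {"learning_rate": [0.01, 0.1, 1.0], "depth": [2, 4]}
# Toy stand-in for train-and-validate: pretend lr=0.1, depth=4 is optimal.
err = lambda p: abs(p["learning_rate"] - 0.1) + abs(p["depth"] - 4)
best = grid_search(grid, err)
```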

Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics.

References

  1. Battiti, Roberto; Brunato, Mauro; Mascia, Franco (2008). Reactive Search and Intelligent Optimization. Springer Verlag. ISBN 978-0-387-09623-0.
  2. Battiti, Roberto; Tecchiolli, Gianpietro (1994). "The reactive tabu search" (PDF). ORSA Journal on Computing. 6 (2): 126–140. doi:10.1287/ijoc.6.2.126.
  3. Hoos, Holger (2012). "Programming by optimization". Communications of the ACM. 55 (2): 70–80. doi:10.1145/2076450.2076469.
  4. Hamadi, Youssef; Monfroy, E.; Saubion, F. (2012). Autonomous Search. New York: Springer Verlag. ISBN 978-3-642-21433-2.
  5. Battiti, Roberto; Brunato, Mauro (2010). Learning and Intelligent Optimization: Proceedings of LION 4, Jan 18–22, 2010, Venice, Italy (PDF). Lecture Notes in Computer Science. Vol. 6073. pp. 232–246. doi:10.1007/978-3-642-13800-3. ISBN 978-3-642-13799-0.
  6. Battiti, Roberto; Passerini, Andrea (2010). "Brain-Computer Evolutionary Multi-Objective Optimization (BC-EMO): a genetic algorithm adapting to the decision maker" (PDF). IEEE Transactions on Evolutionary Computation. 14 (15): 671–687. doi:10.1109/TEVC.2010.2058118.
  7. ""Machine Learning Approach" to Smartphone Data Garners $10,000 First Prize in The Michael J. Fox Foundation Parkinson's Data Challenge". MJFF. April 24, 2013.