Kalyanmoy Deb | |
---|---|
Born | Tripura, India |
Academic background | |
Alma mater | IIT Kharagpur, University of Alabama |
Thesis | Binary and Floating-Point Function Optimization using Messy Genetic Algorithms (1991) |
Doctoral advisor | David E. Goldberg |
Academic work | |
Discipline | Multiobjective optimization and evolutionary algorithms |
Institutions | Department of Electrical and Computer Engineering, Michigan State University |
Kalyanmoy Deb is an Indian computer scientist. Deb is the Herman E. & Ruth J. Koenig Endowed Chair Professor in the Department of Electrical and Computer Engineering at Michigan State University. [1] Deb is also a professor in the Department of Computer Science and Engineering and the Department of Mechanical Engineering at Michigan State University. [2]
Deb established the Kanpur Genetic Algorithms Laboratory at IIT Kanpur in 1997 and the Computational Optimization and Innovation (COIN) Laboratory at Michigan State in 2013. [3] [4] In 2001, Wiley published a textbook written by Deb titled Multi-Objective Optimization using Evolutionary Algorithms as part of its series "Systems and Optimization". [5] In an analysis of the network of authors in the academic field of evolutionary computation by Carlos Cotta and Juan-Julián Merelo, Deb was identified as one of the most central authors in the community and was designated a "sociometric superstar" of the field. [6] Deb has received several honors, including the Shanti Swarup Bhatnagar Prize in engineering sciences (2005), the Thomson Citation Laureate award for his highly cited research in computer science (1996–2005), and the 2008 MCDM Edgeworth-Pareto Award, given for a record of creativity to the extent that the field of multiple-criteria decision making would not exist in its current form. Deb has also been awarded the Infosys Prize in Engineering and Computer Science from Infosys Limited, Bangalore, India, for his contributions to evolutionary multi-objective optimization, which have led to "advances in non-linear constraints, decision uncertainty, programming and numerical methods, computational efficiency of large-scale problems, and optimization algorithms." [7] He is also a recipient of the 2012 TWAS Prize from the World Academy of Sciences. [8]
Deb received his B.Tech. in Mechanical Engineering (1985) from IIT Kharagpur and his MS (1989) and PhD (1991) in Engineering Mechanics from the University of Alabama. [9] His PhD advisor was David E. Goldberg, [10] and his PhD thesis was titled Binary and Floating-Point Function Optimization using Messy Genetic Algorithms. [11] From 1991 to 1992 he was a postdoctoral researcher at UIUC. In 1993, he became a professor of mechanical engineering at IIT Kanpur, where he went on to hold the Deva Raj Endowed Chair (2007–2010) and the Gurmukh and Veena Mehta Endowed Chair (2011–2013). He then moved to Michigan State University, where he has been the Herman E. & Ruth J. Koenig Endowed Chair since 2013.
Deb is a highly cited researcher, with 138,000+ Google Scholar citations and an h-index of 116. A large fraction of his citations comes from his work on nondominated-sorting [12] genetic algorithms for multiobjective optimization. In 1994, Deb and coauthor Nidamarthi Srinivas introduced one of [note 1] the first nondominated-sorting genetic algorithms, which they termed "NSGA". [13]
In 2002, Deb and coauthors Amrit Pratap, Sameer Agarwal, and T.A.M.T. Meyarivan introduced a notion of crowding distance for an individual, which "calculates a measure of how close an individual is to its neighbors." [14] They also introduced a faster [note 2] way to implement nondominated sorting, by keeping track, for every individual, of which other individuals it strictly dominates. By incorporating crowding distance, elitism, [note 3] and the faster implementation of nondominated sorting into the original NSGA, Deb and his coauthors made the algorithm faster and more reliable. [note 4] They termed this modification "NSGA-II". According to the Web of Science Core Collection database, this paper was the first paper solely by Indian authors to have more than 5,000 citations. [15] [16]
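These two ingredients can be illustrated with a short Python sketch. The function names, data layout, and minimization convention below are assumptions chosen for illustration, not the paper's reference implementation.

```python
# Illustrative sketch of the two NSGA-II ingredients described above:
# fast nondominated sorting and crowding distance (all objectives minimized).

def dominates(a, b):
    """True if objective vector a Pareto-dominates objective vector b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objs):
    """Return fronts as lists of indices; each individual tracks whom it dominates."""
    dominated_by = [[] for _ in objs]   # individuals that p strictly dominates
    counts = [0] * len(objs)            # how many individuals dominate p
    fronts = [[]]
    for p in range(len(objs)):
        for q in range(len(objs)):
            if dominates(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dominates(objs[q], objs[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]

def crowding_distance(front_objs):
    """Crowding distance: how close each member of a front is to its neighbors."""
    n, m = len(front_objs), len(front_objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front_objs[i][k])
        lo, hi = front_objs[order[0]][k], front_objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary solutions
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front_objs[order[j + 1]][k]
                               - front_objs[order[j - 1]][k]) / (hi - lo)
    return dist
```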
In 2013,Deb and coauthor Himanshu Jain proposed a modification of NSGA-II for solving many-objective optimization problems with 10+ objectives. [note 5] [17] They termed this modification "NSGA-III".
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover and selection. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, etc.
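As a hedged illustration of this loop, the sketch below applies selection, one-point crossover, and bit-flip mutation to the toy problem of maximizing the number of 1s in a bit string ("OneMax"); the population size, mutation rate, and other parameters are arbitrary choices.

```python
import random

# Minimal genetic algorithm sketch for the OneMax toy problem
# (maximize the number of 1s in a bit string). Parameters are illustrative.
LENGTH, POP, GENS, MUT = 32, 40, 60, 0.02

def fitness(bits):
    return sum(bits)

def tournament(pop):
    a, b = random.sample(pop, 2)          # binary tournament selection
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, LENGTH)     # one-point crossover
    return p1[:cut] + p2[cut:]

def mutate(bits):
    return [b ^ (random.random() < MUT) for b in bits]   # bit-flip mutation

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
best = max(pop, key=fitness)
print(fitness(best), best)
```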
In computational intelligence (CI), an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place after the repeated application of the above operators.
In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulas over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
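The update rule can be sketched as follows; the inertia and acceleration coefficients are conventional illustrative values, and the sphere function stands in for an arbitrary quality measure.

```python
import random

# Illustrative PSO sketch minimizing the sphere function f(x) = sum(x_i^2) in 2-D.
def f(x):
    return sum(v * v for v in x)

DIM, SWARM, ITERS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]               # each particle's best known position
gbest = min(pbest, key=f)                 # swarm's best known position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

print(gbest, f(gbest))
```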
A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in evolutionary algorithms (EA), such as genetic programming and genetic algorithms to guide simulations towards optimal design solutions.
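For example, a design problem with several aims is often collapsed into a single figure of merit; the weights in the sketch below are purely illustrative.

```python
# Illustrative fitness function collapsing two design aims (low weight,
# high strength) into one figure of merit; the weights are arbitrary.
def fitness(design):
    weight_kg, strength_kN = design
    return 0.7 * strength_kN - 0.3 * weight_kg   # higher is fitter

print(fitness((12.0, 85.0)))   # candidate designs can be compared by this score
```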
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems.
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making. Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
Selection is the stage of a genetic algorithm or more general evolutionary algorithm in which individual genomes are chosen from a population for later breeding.
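One common choice is fitness-proportionate ("roulette wheel") selection, sketched below with an illustrative bit-string fitness.

```python
import random

# Illustrative fitness-proportionate ("roulette wheel") selection: each
# genome's chance of being chosen for breeding is proportional to its fitness.
def roulette_select(population, fitness):
    total = sum(fitness(ind) for ind in population)
    pick = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= pick:
            return ind
    return population[-1]   # guard against floating-point round-off

# Example: select parents from bit strings scored by their number of 1s.
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
parents = [roulette_select(pop, sum) for _ in range(2)]
```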
In computer science and operations research, genetic fuzzy systems are fuzzy systems constructed by using genetic algorithms or genetic programming, which mimic the process of natural evolution, to identify their structure and parameters.
In multi-objective optimization, the Pareto front is the set of all Pareto efficient solutions. The concept is widely used in engineering. It allows the designer to restrict attention to the set of efficient choices, and to make tradeoffs within this set, rather than considering the full range of every parameter.
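For a finite set of candidate points, the Pareto front can be extracted directly from the dominance relation, as in the following illustrative sketch (all objectives assumed to be minimized).

```python
# Illustrative extraction of the Pareto front from a finite set of candidate
# points, assuming every objective is to be minimized.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: trade-off between cost and delivery time.
candidates = [(4, 9), (5, 7), (7, 7), (6, 4), (9, 3), (8, 8)]
print(pareto_front(candidates))   # [(4, 9), (5, 7), (6, 4), (9, 3)]
```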
David Edward Goldberg is an American computer scientist, civil engineer, and former professor. Until 2010, he was a professor in the department of Industrial and Enterprise Systems Engineering (IESE) at the University of Illinois at Urbana-Champaign and was noted for his work in the field of genetic algorithms. He was the director of the Illinois Genetic Algorithms Laboratory and the co-founder & chief scientist of Nextumi, which later changed its name to ShareThis. He is the author of Genetic Algorithms in Search, Optimization and Machine Learning, one of the most cited books in computer science.
Multi-objective optimization or Pareto optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
ParadisEO is a white-box object-oriented framework dedicated to the flexible design of metaheuristics. It uses EO, a template-based, ANSI-C++ compliant computation library. ParadisEO is portable across both Windows and Unix-like platforms. ParadisEO is distributed under the CeCILL license and can be used in several environments.
In applied mathematics, multimodal optimization deals with optimization tasks that involve finding all or most of the multiple solutions of a problem, as opposed to a single best solution. Evolutionary multimodal optimization is a branch of evolutionary computation, which is closely related to machine learning. Wong provides a short survey, while the chapter by Shir and the book by Preuss cover the topic in more detail.
Reward-based selection is a technique used in evolutionary algorithms for selecting potentially useful solutions for recombination. The probability of an individual being selected is proportional to the cumulative reward obtained by that individual. The cumulative reward can be computed as the sum of the individual's own reward and the reward inherited from its parents.
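A minimal sketch, assuming non-negative rewards and a record for each individual of its own reward plus the reward inherited from its parent:

```python
import random

# Illustrative reward-based selection: selection probability is proportional
# to an individual's cumulative reward, taken here (one possible choice) as
# its own reward plus the cumulative reward inherited from its parent.
def cumulative_reward(individual):
    return individual["reward"] + individual["parent_reward"]

def select(population):
    weights = [cumulative_reward(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]

pop = [{"reward": 3.0, "parent_reward": 1.0},
       {"reward": 1.5, "parent_reward": 4.0},
       {"reward": 0.5, "parent_reward": 0.2}]
chosen = select(pop)
```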
MCACEA is a general framework that uses a single evolutionary algorithm (EA) per agent, with the agents sharing their optimal solutions so as to coordinate the evolution of the EAs' populations through cooperation objectives. This framework can be used to optimize some characteristics of multiple cooperating agents in mathematical optimization problems. More specifically, because both individual and cooperation objectives are optimized, MCACEA is used in multi-objective optimization problems.
Peter John Fleming is a Professor of Industrial Systems and Control in the Department of Automatic Control and Systems Engineering at the University of Sheffield, and until June 2012 he was the director of the Rolls-Royce University Technology Centre for Control and Systems Engineering. He works in the field of control and systems engineering and is known for his work on evolutionary computation applied to systems engineering. Fleming is Editor-in-Chief of the International Journal of Systems Science.
In applied mathematics, test functions, known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as convergence rate, precision, robustness, and general performance.
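One classic example of such an artificial landscape is the Rastrigin function, a highly multimodal test function whose global minimum of 0 lies at the origin; the dimension and sample points below are purely illustrative.

```python
import math

# The Rastrigin test function: many local minima, global minimum of 0 at the origin.
def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(rastrigin([0.0, 0.0]))      # 0.0 at the global optimum
print(rastrigin([1.0, -2.0]))     # larger value away from the optimum
```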
The MOEA Framework is an open-source evolutionary computation library for Java that specializes in multi-objective optimization. It supports a variety of multiobjective evolutionary algorithms (MOEAs), including genetic algorithms, genetic programming, grammatical evolution, differential evolution, and particle swarm optimization. As a result, it has been used to conduct numerous comparative studies to assess the efficiency, reliability, and controllability of state-of-the-art MOEAs.
The Interactive Decision Maps technique of multi-objective optimization is based on approximating the Edgeworth-Pareto Hull (EPH) of the feasible objective set, that is, the feasible objective set broadened by the objective points dominated by it. Alternatively, this set is known as Free Disposal Hull. It is important that the EPH has the same Pareto front as the feasible objective set, but the bi-objective slices of the EPH look much simpler. The frontiers of bi-objective slices of the EPH contain the slices of the Pareto front. It is important that, in contrast to the Pareto front itself, the EPH is usually stable in respect to disturbances of data. The IDM technique applies fast on-line display of bi-objective slices of the EPH approximated in advance.