Swarm intelligence (SI) is the collective behavior of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems. [1] [2]
SI systems typically consist of a population of simple agents or boids interacting locally with one another and with their environment. [3] The inspiration often comes from nature, especially biological systems. [4] The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. [5] Examples of swarm intelligence in natural systems include ant colonies, bee colonies, bird flocking, hawks hunting, animal herding, bacterial growth, fish schooling and microbial intelligence.
The application of swarm principles to robots is called swarm robotics while swarm intelligence refers to the more general set of algorithms. Swarm prediction has been used in the context of forecasting problems. Similar approaches to those proposed for swarm robotics are considered for genetically modified organisms in synthetic collective intelligence. [6]
Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates flocking. It was published in 1987 in the proceedings of the ACM SIGGRAPH conference. [7] The name "boid" corresponds to a shortened version of "bird-oid object", which refers to a bird-like object. [8]
As with most artificial life simulations, Boids is an example of emergent behavior; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:
- separation: steer to avoid crowding local flockmates
- alignment: steer towards the average heading of local flockmates
- cohesion: steer to move toward the average position (centre of mass) of local flockmates
More complex rules can be added, such as obstacle avoidance and goal seeking.
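The following minimal sketch (in Python with NumPy) illustrates how the three basic rules can be combined to update a single boid's velocity. It is not Reynolds' original implementation; the neighbourhood radius and rule weights are illustrative values chosen for the example.

```python
import numpy as np

def update_boid(i, positions, velocities, radius=1.0,
                w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """Return an updated velocity for boid i using the three basic rules.
    Weights and neighbourhood radius are illustrative, not canonical."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    mask = (dists < radius) & (dists > 0)          # local flockmates
    v = velocities[i].copy()
    if mask.any():
        neighbours_p = positions[mask]
        neighbours_v = velocities[mask]
        # Separation: steer away from nearby flockmates
        v += w_sep * np.sum(positions[i] - neighbours_p, axis=0)
        # Alignment: steer towards the average heading of flockmates
        v += w_ali * (neighbours_v.mean(axis=0) - velocities[i])
        # Cohesion: steer towards the centre of mass of flockmates
        v += w_coh * (neighbours_p.mean(axis=0) - positions[i])
    return v

# Example: one update step for a random flock of 50 boids in 2-D
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(50, 2))
vel = rng.uniform(-1, 1, size=(50, 2))
vel = np.array([update_boid(i, pos, vel) for i in range(len(pos))])
pos += vel * 0.1                                   # advance positions
```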
The self-propelled particles (SPP) model, also referred to as the Vicsek model, was introduced in 1995 by Vicsek et al. [9] as a special case of the boids model introduced in 1986 by Reynolds. [7] A swarm is modelled in SPP by a collection of particles that move with a constant speed but respond to a random perturbation by adopting at each time increment the average direction of motion of the other particles in their local neighbourhood. [10] SPP models predict that swarming animals share certain properties at the group level, regardless of the type of animals in the swarm. [11] Swarming systems give rise to emergent behaviours which occur at many different scales, some of which are turning out to be both universal and robust. It has become a challenge in theoretical physics to find minimal statistical models that capture these behaviours. [12] [13] [14]
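As a rough illustration of the update rule described above, the following sketch implements a basic 2-D Vicsek-style step in Python, assuming unit-speed particles, a fixed interaction radius, periodic boundaries and a uniform noise term; the parameter values are illustrative rather than taken from the original paper.

```python
import numpy as np

def vicsek_step(pos, theta, speed=0.03, radius=1.0, eta=0.1, box=10.0):
    """One time increment of a simple 2-D Vicsek model with periodic boundaries."""
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = d < radius                            # includes the particle itself
        # average direction of the neighbours (circular mean of headings)
        mean_dir = np.arctan2(np.sin(theta[nbrs]).mean(),
                              np.cos(theta[nbrs]).mean())
        # random perturbation of strength eta
        new_theta[i] = mean_dir + eta * (np.random.rand() - 0.5)
    new_pos = pos + speed * np.column_stack((np.cos(new_theta),
                                             np.sin(new_theta)))
    return new_pos % box, new_theta                  # periodic boundary conditions

pos = np.random.rand(100, 2) * 10.0
theta = np.random.rand(100) * 2 * np.pi
for _ in range(100):
    pos, theta = vicsek_step(pos, theta)
```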
Evolutionary algorithms (EA), particle swarm optimization (PSO), differential evolution (DE), ant colony optimization (ACO) and their variants dominate the field of nature-inspired metaheuristics. [15] This list includes algorithms published up to circa the year 2000. A large number of more recent metaphor-inspired metaheuristics have started to attract criticism in the research community for hiding their lack of novelty behind an elaborate metaphor. For algorithms published since that time, see List of metaphor-based metaheuristics.
Metaheuristics provide no confidence guarantee on the quality of a solution. [16] With appropriate parameters and a sufficient convergence stage, they often find a solution that is optimal or close to the optimum; nevertheless, if the optimal solution is not known in advance, the quality of a found solution cannot be certified. [16] In spite of this drawback, it has been shown that these types of algorithms work well in practice, and they have been extensively researched and developed. [17] [18] [19] [20] [21] On the other hand, this drawback can be avoided by calculating the solution quality for a special case in which such a calculation is possible; after such a run it is known that every solution at least as good as the special case's solution also carries at least the solution confidence of that special case. One such instance is an ant-inspired Monte Carlo algorithm for the Minimum Feedback Arc Set problem, in which this has been achieved probabilistically by hybridizing a Monte Carlo algorithm with the ant colony optimization technique. [22]
Ant colony optimization (ACO), introduced by Dorigo in his doctoral dissertation, is a class of optimization algorithms modeled on the actions of an ant colony. ACO is a probabilistic technique useful in problems that deal with finding better paths through graphs. Artificial 'ants' (simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Natural ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. [23]
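A minimal sketch of this idea, assuming the classic travelling-salesman setting, is shown below in Python; the pheromone-update scheme and the parameter names (alpha, beta, rho, q) follow a common textbook Ant System formulation rather than any specific published implementation.

```python
import numpy as np

def ant_colony_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, rng=np.random.default_rng(0)):
    """Minimal Ant System for the travelling salesman problem.
    dist: symmetric matrix of city-to-city distances."""
    n = len(dist)
    pheromone = np.ones((n, n))
    heuristic = 1.0 / (dist + np.eye(n))        # avoid division by zero on diagonal
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                unvisited = [j for j in range(n) if j not in tour]
                # transition probabilities combine pheromone and heuristic desirability
                weights = np.array([pheromone[i, j] ** alpha *
                                    heuristic[i, j] ** beta for j in unvisited])
                tour.append(rng.choice(unvisited, p=weights / weights.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        pheromone *= (1 - rho)                   # evaporation
        for tour, length in tours:               # deposit proportional to tour quality
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                pheromone[a, b] += q / length
                pheromone[b, a] += q / length
    return best_tour, best_len

# Example: random symmetric distance matrix for 10 cities
pts = np.random.rand(10, 2)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(ant_colony_tsp(d))
```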
Particle swarm optimization (PSO) is a global optimization algorithm for dealing with problems in which a best solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. [24] [25] Particles then move through the solution space, and are evaluated according to some fitness criterion after each timestep. Over time, particles are accelerated towards those particles within their communication grouping which have better fitness values. The main advantage of such an approach over other global minimization strategies such as simulated annealing is that the large number of members that make up the particle swarm makes the technique impressively resilient to the problem of local minima.
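The following is a minimal global-best PSO sketch in Python for minimizing a simple test function; the inertia weight and acceleration coefficients are illustrative defaults, not prescribed values.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, n_iters=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    """Minimal global-best particle swarm optimization."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions (hypotheses)
    v = rng.uniform(-1, 1, size=(n_particles, dim))        # initial velocities
    pbest = x.copy()                                        # personal best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()                # group best position
    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        # accelerate towards personal and group best positions
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function in 5 dimensions
print(pso_minimize(lambda p: np.sum(p ** 2), dim=5))
```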
Karaboga introduced the artificial bee colony (ABC) metaheuristic in 2005 for optimizing numerical problems. Inspired by honey bee foraging behavior, Karaboga's model has three components: employed bees, onlooker bees, and scout bees. In practice, the artificial scout bee discovers food source positions (solutions), good or bad. The employed bee searches for the shortest route to each position and extracts the food amount (quality) of the source. If the food at a source is depleted, its employed bee becomes a scout and randomly searches for other food sources. Each abandoned source creates negative feedback, signalling that the answers found there were poor solutions. The onlooker bees wait for employed bees either to abandon a source or to report that a source holds a large quantity of food and is worth sending additional resources to. The more onlooker bees a source recruits, the stronger the positive feedback, indicating that the answer is likely a good solution.
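A simplified sketch of the employed/onlooker/scout cycle for continuous minimization is given below; it follows the commonly described ABC scheme, but the neighbourhood move, abandonment limit and other parameters are illustrative simplifications rather than Karaboga's exact formulation.

```python
import numpy as np

def abc_minimize(f, dim, n_sources=20, n_iters=200, limit=30,
                 bounds=(-5.0, 5.0), rng=np.random.default_rng(0)):
    """Simplified artificial bee colony for continuous minimization."""
    lo, hi = bounds
    sources = rng.uniform(lo, hi, size=(n_sources, dim))    # food source positions
    values = np.array([f(s) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbour(i):
        """Employed/onlooker move: perturb one dimension towards another source."""
        k, j = rng.integers(n_sources), rng.integers(dim)
        candidate = sources[i].copy()
        candidate[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        candidate = np.clip(candidate, lo, hi)
        val = f(candidate)
        if val < values[i]:                                  # greedy selection
            sources[i], values[i], trials[i] = candidate, val, 0
        else:
            trials[i] += 1

    for _ in range(n_iters):
        for i in range(n_sources):                           # employed bee phase
            try_neighbour(i)
        fitness = 1.0 / (1.0 + values - values.min())        # higher is better
        probs = fitness / fitness.sum()
        for _ in range(n_sources):                           # onlooker bee phase
            try_neighbour(rng.choice(n_sources, p=probs))
        for i in np.where(trials > limit)[0]:                # scout bee phase
            sources[i] = rng.uniform(lo, hi, size=dim)       # abandon and re-seed
            values[i], trials[i] = f(sources[i]), 0
    best = values.argmin()
    return sources[best], values[best]

# Example: minimize the sphere function in 5 dimensions
print(abc_minimize(lambda s: np.sum(s ** 2), dim=5))
```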
Artificial Swarm Intelligence (ASI) is a method of amplifying the collective intelligence of networked human groups using control algorithms modeled after natural swarms. Sometimes referred to as Human Swarming or Swarm AI, the technology connects groups of human participants into real-time systems that deliberate and converge on solutions as dynamic swarms when simultaneously presented with a question. [26] [27] [28] ASI has been used for a wide range of applications, from enabling business teams to generate highly accurate financial forecasts [29] to enabling sports fans to outperform Vegas betting markets. [30] ASI has also been used to enable groups of doctors to generate diagnoses with significantly higher accuracy than traditional methods. [31] [32] ASI has been used by the Food and Agriculture Organization (FAO) of the United Nations to help forecast famines in hotspots around the world. [33] [ better source needed ]
Swarm intelligence-based techniques can be used in a number of applications. The U.S. military is investigating swarm techniques for controlling unmanned vehicles. The European Space Agency is thinking about an orbital swarm for self-assembly and interferometry. NASA is investigating the use of swarm technology for planetary mapping. A 1992 paper by M. Anthony Lewis and George A. Bekey discusses the possibility of using swarm intelligence to control nanobots within the body for the purpose of killing cancer tumors. [34] Conversely, al-Rifaie and Aber have used stochastic diffusion search to help locate tumours. [35] [36] Swarm intelligence (SI) is increasingly applied in Internet of Things (IoT) [37] [38] systems, and by association to Intent-Based Networking (IBN), [39] due to its ability to handle complex, distributed tasks through decentralized, self-organizing algorithms. Swarm intelligence has also been applied to data mining [40] and cluster analysis. [41] Ant-based models are also a subject of modern management theory. [42]
The use of swarm intelligence in telecommunication networks has also been researched, in the form of ant-based routing. This was pioneered separately by Dorigo et al. and Hewlett-Packard in the mid-1990s, with a number of variants existing. Basically, this uses a probabilistic routing table in which the route successfully traversed by each "ant" (a small control packet) is rewarded or reinforced as the ants flood the network. Reinforcement of the route in the forward direction, in the reverse direction, and in both simultaneously has been researched: backwards reinforcement requires a symmetric network and couples the two directions together; forwards reinforcement rewards a route before the outcome is known (but then one would pay for the cinema before one knows how good the film is). As the system behaves stochastically and therefore lacks repeatability, there are large hurdles to commercial deployment. Mobile media and new technologies have the potential to change the threshold for collective action due to swarm intelligence (Rheingold: 2002, P175).
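The sketch below illustrates the general idea of a pheromone-reinforced probabilistic routing table with backward reinforcement on a toy four-node topology; the topology, evaporation and deposit parameters are hypothetical, and this is not the Dorigo or Hewlett-Packard implementation.

```python
import random

# Hypothetical topology: each node knows only its neighbours.
neighbours = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
# routing[node][destination] -> {next_hop: pheromone level}
routing = {n: {d: {m: 1.0 for m in neighbours[n]} for d in neighbours if d != n}
           for n in neighbours}

def send_ant(src, dst, evaporation=0.05, deposit=1.0):
    """Launch one forward ant from src to dst, then reinforce the reverse path."""
    node, path = src, [src]
    while node != dst and len(path) < 20:                     # forward ant walks
        table = routing[node][dst]
        r, acc = random.uniform(0, sum(table.values())), 0.0
        for nxt, ph in table.items():                         # roulette-wheel choice
            acc += ph
            if r <= acc:
                break
        node = nxt
        path.append(node)
    if node != dst:
        return                                                # ant got lost; no reward
    # Backward reinforcement: reward the traversed route, evaporate the alternatives
    for hop_from, hop_to in zip(path, path[1:]):
        table = routing[hop_from][dst]
        for m in table:
            table[m] *= (1 - evaporation)
        table[hop_to] += deposit / len(path)                  # shorter routes gain more

for _ in range(500):
    send_ant("A", "D")
print(routing["A"]["D"])   # pheromone now favours good next hops towards D
```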
The location of transmission infrastructure for wireless communication networks is an important engineering problem involving competing objectives. A minimal selection of locations (or sites) is required, subject to providing adequate area coverage for users. A very different, ant-inspired swarm intelligence algorithm, stochastic diffusion search (SDS), has been successfully used to provide a general model for this problem, related to circle packing and set covering. It has been shown that SDS can be applied to identify suitable solutions even for large problem instances. [43]
Airlines have also used ant-based routing in assigning aircraft arrivals to airport gates. At Southwest Airlines a software program uses swarm theory, or swarm intelligence: the idea that a colony of ants works better than one alone. Each pilot acts like an ant searching for the best airport gate. "The pilot learns from his experience what's the best for him, and it turns out that that's the best solution for the airline," Douglas A. Lawson explains. As a result, the "colony" of pilots always goes to gates they can arrive at and depart from quickly. The program can even alert a pilot of plane back-ups before they happen. "We can anticipate that it's going to happen, so we'll have a gate available," Lawson says. [44]
Artists are using swarm technology as a means of creating complex interactive systems or simulating crowds.[ citation needed ]
The Lord of the Rings film trilogy made use of similar technology, known as Massive, during battle scenes. Swarm technology is particularly attractive because it is cheap, robust, and simple.
Stanley and Stella in: Breaking the Ice was the first movie to make use of swarm technology for rendering, realistically depicting the movements of groups of fish and birds using the Boids system.[ citation needed ]
Tim Burton's Batman Returns also made use of swarm technology for showing the movements of a group of bats. [45]
Airlines have used swarm theory to simulate passengers boarding a plane. Southwest Airlines researcher Douglas A. Lawson used an ant-based computer simulation employing only six interaction rules to evaluate boarding times using various boarding methods.(Miller, 2010, xii-xviii). [46]
Networks of distributed users can be organized into "human swarms" through the implementation of real-time closed-loop control systems. [47] [48] Developed by Louis Rosenberg in 2015, human swarming, also called artificial swarm intelligence, allows the collective intelligence of interconnected groups of people online to be harnessed. [49] [50] The collective intelligence of the group often exceeds the abilities of any one member of the group. [51]
Stanford University School of Medicine published in 2018 a study showing that groups of human doctors, when connected together by real-time swarming algorithms, could diagnose medical conditions with substantially higher accuracy than individual doctors or groups of doctors working together using traditional crowd-sourcing methods. In one such study, swarms of human radiologists connected together were tasked with diagnosing chest x-rays and demonstrated a 33% reduction in diagnostic errors as compared to the traditional human methods, and a 22% improvement over traditional machine-learning. [31] [52] [53] [32]
The University of California San Francisco (UCSF) School of Medicine released a preprint in 2021 about the diagnosis of MRI images by small groups of collaborating doctors. The study showed a 23% increase in diagnostic accuracy when using Artificial Swarm Intelligence (ASI) technology compared to majority voting. [54] [55]
Swarm grammars are swarms of stochastic grammars that can be evolved to describe complex properties such as found in art and architecture. [56] These grammars interact as agents behaving according to rules of swarm intelligence. Such behavior can also suggest deep learning algorithms, in particular when mapping of such swarms to neural circuits is considered. [57]
In a series of works, al-Rifaie et al. [58] have successfully used two swarm intelligence algorithms—one mimicking the behaviour of one species of ants (Leptothorax acervorum) foraging (stochastic diffusion search, SDS) and the other algorithm mimicking the behaviour of birds flocking (particle swarm optimization, PSO)—to describe a novel integration strategy exploiting the local search properties of the PSO with global SDS behaviour. The resulting hybrid algorithm is used to sketch novel drawings of an input image, exploiting an artistic tension between the local behaviour of the 'birds flocking'—as they seek to follow the input sketch—and the global behaviour of the "ants foraging"—as they seek to encourage the flock to explore novel regions of the canvas. The "creativity" of this hybrid swarm system has been analysed under the philosophical light of the "rhizome" in the context of Deleuze's "Orchid and Wasp" metaphor. [59]
A more recent work of al-Rifaie et al., "Swarmic Sketches and Attention Mechanism", [60] introduces a novel approach deploying the mechanism of 'attention' by adapting SDS to selectively attend to detailed areas of a digital canvas. Once the attention of the swarm is drawn to a certain line within the canvas, the capability of PSO is used to produce a 'swarmic sketch' of the attended line. The swarms move throughout the digital canvas in an attempt to satisfy their dynamic roles (attention to areas with more details) associated with them via their fitness function. Having associated the rendering process with the concepts of attention, the performance of the participating swarms creates a unique, non-identical sketch each time the 'artist' swarms embark on interpreting the input line drawings. In other words, while PSO is responsible for the sketching process, SDS controls the attention of the swarm.
In a similar work, "Swarmic Paintings and Colour Attention", [61] non-photorealistic images are produced using the SDS algorithm which, in the context of this work, is responsible for colour attention.
The "computational creativity" of the above-mentioned systems are discussed in [58] [62] [63] through the two prerequisites of creativity (i.e. freedom and constraints) within the swarm intelligence's two infamous phases of exploration and exploitation.
Michael Theodore and Nikolaus Correll used a swarm-intelligent art installation to explore what it takes for engineered systems to appear lifelike. [64]
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation. Some examples of GA applications include optimizing decision trees for better performance, solving sudoku puzzles, hyperparameter optimization, and causal inference.
Evolutionary algorithms (EA) reproduce essential elements of biological evolution in a computer algorithm in order to solve "difficult" problems, at least approximately, for which no exact or satisfactory solution methods are known. They belong to the class of metaheuristics and are a subset of population-based bio-inspired algorithms and evolutionary computation, which themselves are part of the field of computational intelligence. The mechanisms of biological evolution that an EA mainly imitates are reproduction, mutation, recombination and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators.
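The following minimal sketch shows the selection, crossover and mutation loop described in the two paragraphs above, applied to a toy bit-string problem (maximizing the number of ones); the population size, rates and the tournament-selection scheme are illustrative choices rather than recommended settings.

```python
import random

def genetic_algorithm(fitness, length=30, pop_size=50, generations=100,
                      crossover_rate=0.9, mutation_rate=0.01):
    """Minimal generational GA over fixed-length bit strings."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament(k=3):
        """Selection: pick the fittest of k randomly chosen individuals."""
        return max(random.sample(pop, k), key=fitness)

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate:              # one-point crossover
                cut = random.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                          # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Example: "OneMax" problem, where fitness is simply the number of 1-bits
best = genetic_algorithm(fitness=sum)
print(sum(best), best)
```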
Swarm behaviour, or swarming, is a collective behaviour exhibited by entities, particularly animals, of similar size which aggregate together, perhaps milling about the same spot or perhaps moving en masse or migrating in some direction. It is a highly interdisciplinary topic.
Evolutionary computation from computer science is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
Boids is an artificial life program, developed by Craig Reynolds in 1986, which simulates the flocking behaviour of birds, and related group motion. His paper on this topic was published in 1987 in the proceedings of the ACM SIGGRAPH conference. The name "boid" corresponds to a shortened version of "bird-oid object", which refers to a bird-like object. Reynolds' boid model is one example of a larger general concept, for which many other variations have been developed since. The closely related work of Ichiro Aoki is noteworthy because it was published in 1982 — five years before Reynolds' boids paper.
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a preferred method for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.
In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, tune, or select a heuristic that may provide a sufficiently good solution to an optimization problem or a machine learning problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a subset of solutions which is otherwise too large to be completely enumerated or otherwise explored. Metaheuristics may make relatively few assumptions about the optimization problem being solved and so may be usable for a variety of problems. Their use is always of interest when exact or other (approximate) methods are not available or are not expedient, either because the calculation time is too long or because, for example, the solution provided is too imprecise.
Marco Dorigo is a research director for the Belgian Funds for Scientific Research and a co-director of IRIDIA, the artificial intelligence lab of the Université Libre de Bruxelles. He received a PhD in System and Information Engineering in 1992 from the Polytechnic University of Milan with a thesis titled Optimization, learning, and natural algorithms. He is the leading proponent of the ant colony optimization metaheuristic, and one of the founders of the swarm intelligence research field. Recently he got involved with research in swarm robotics: he is the coordinator of Swarm-bots: Swarms of self-assembling artifacts and of Swarmanoid: Towards humanoid robotic swarms, two swarm robotics projects funded by the Future and Emerging Technologies Program of the European Commission. He is also the founding editor and editor in chief of Swarm Intelligence, the principal peer-reviewed publication dedicated to reporting research and new developments in this multidisciplinary field.
The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.
Stochastic diffusion search (SDS) was first described in 1989 as a population-based, pattern-matching algorithm. It belongs to a family of swarm intelligence and naturally inspired search and optimisation algorithms which includes ant colony optimization, particle swarm optimization and genetic algorithms; as such SDS was the first Swarm Intelligence metaheuristic. Unlike stigmergetic communication employed in ant colony optimization, which is based on modification of the physical properties of a simulated environment, SDS uses a form of direct (one-to-one) communication between the agents similar to the tandem calling mechanism employed by one species of ants, Leptothorax acervorum.
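A minimal sketch of SDS's test and diffusion phases is shown below for the textbook string-search example, in which each agent holds a candidate position of a model string inside a larger search string; the population size and iteration count are illustrative.

```python
import random

def sds_string_search(search_space, model, n_agents=100, n_iters=50):
    """Minimal stochastic diffusion search locating `model` inside `search_space`."""
    max_start = len(search_space) - len(model)
    hypotheses = [random.randint(0, max_start) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iters):
        # Test phase: each agent checks one randomly chosen character of the model
        for i, h in enumerate(hypotheses):
            j = random.randrange(len(model))
            active[i] = (search_space[h + j] == model[j])
        # Diffusion phase: inactive agents poll a random agent (direct communication)
        for i in range(n_agents):
            if not active[i]:
                other = random.randrange(n_agents)
                if active[other]:
                    hypotheses[i] = hypotheses[other]              # copy the hypothesis
                else:
                    hypotheses[i] = random.randint(0, max_start)   # re-seed randomly
    # The largest cluster of agents indicates the best-supported position
    return max(set(hypotheses), key=hypotheses.count)

print(sds_string_search("xxxxxxswarm intelligencexxxxxx", "intelligence"))
```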
Ant robotics is a special case of swarm robotics. Swarm robots are simple robots with limited sensing and computational capabilities. This makes it feasible to deploy teams of swarm robots and take advantage of the resulting fault tolerance and parallelism. Swarm robots cannot use conventional planning methods due to their limited sensing and computational capabilities. Thus, their behavior is often driven by local interactions. Ant robots are swarm robots that can communicate via markings, similar to ants that lay and follow pheromone trails. Some ant robots use long-lasting trails. Others use short-lasting trails including heat and alcohol. Others even use virtual trails.
In mathematical optimization, the firefly algorithm is a metaheuristic proposed by Xin-She Yang and inspired by the flashing behavior of fireflies.
Meta-optimization from numerical optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as the late 1970s by Mercer and Sampson for finding optimal parameter settings of a genetic algorithm.
This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence (AI), its subdisciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.
The Fly Algorithm is a computational method within the field of evolutionary algorithms, designed for direct exploration of 3D spaces in applications such as computer stereo vision, robotics, and medical imaging. Unlike traditional image-based stereovision, which relies on matching features to construct 3D information, the Fly Algorithm operates by generating a 3D representation directly from random points, termed "flies." Each fly is a coordinate in 3D space, evaluated for its accuracy by comparing its projections in a scene. By iteratively refining the positions of flies based on fitness criteria, the algorithm can construct an optimized spatial representation. The Fly Algorithm has expanded into various fields, including applications in digital art, where it is used to generate complex visual patterns.
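As a heavily simplified, self-contained illustration of the evaluate-by-projection loop described above, the toy sketch below evolves random 3-D "flies" against two synthetic binary "camera" images; the projection model, image resolution and mutation scheme are hypothetical stand-ins and do not reproduce the actual stereo-vision implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(points, cam_shift):
    """Toy pinhole-style projection: x is shifted by a camera-dependent parallax term."""
    x = points[:, 0] + cam_shift * points[:, 2]      # parallax depends on depth z
    return np.stack([x, points[:, 1]], axis=1)

# Hypothetical synthetic scene: true surface points rendered into two camera images
true_points = rng.uniform(0, 1, size=(200, 3))
images = []
for shift in (-0.1, 0.1):                            # two cameras
    img = np.zeros((64, 64), dtype=bool)
    px = np.clip((project(true_points, shift) * 63).astype(int), 0, 63)
    img[px[:, 1], px[:, 0]] = True
    images.append(img)

def fitness(flies):
    """A fly scores well if its projections hit occupied pixels in both images."""
    score = np.zeros(len(flies))
    for shift, img in zip((-0.1, 0.1), images):
        px = np.clip((project(flies, shift) * 63).astype(int), 0, 63)
        score += img[px[:, 1], px[:, 0]]
    return score

# Evolutionary loop: mutate good flies and replace the worst ones with the offspring
flies = rng.uniform(0, 1, size=(500, 3))
for _ in range(100):
    order = np.argsort(fitness(flies))
    worst, best = order[:50], order[-50:]
    flies[worst] = np.clip(flies[rng.choice(best, size=50)] +
                           rng.normal(0, 0.02, size=(50, 3)), 0, 1)
print("mean fitness:", fitness(flies).mean())        # should increase over the run
```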
This is a chronological table of metaheuristic algorithms that only contains fundamental computational intelligence algorithms. Hybrid algorithms and multi-objective algorithms are not listed in the table below.
Maurice Clerc is a French mathematician.
Atulya K. Nagar is a mathematical physicist, academic and author. He holds the Foundation Chair as Professor of Mathematics and is the Pro-Vice-Chancellor for Research at Liverpool Hope University.