Dispersive flies optimisation

[Figure: Swarm behaviour in Dispersive Flies Optimisation]

Dispersive flies optimisation (DFO) is a bare-bones swarm intelligence algorithm inspired by the swarming behaviour of flies hovering over food sources. [1] DFO is a simple optimiser which works by iteratively trying to improve a candidate solution with regard to a numerical measure that is calculated by a fitness function. Each member of the population, a fly or an agent, holds a candidate solution whose suitability is evaluated by its fitness value. Optimisation problems are often formulated as either minimisation or maximisation problems.

DFO [2] was introduced with the intention of analysing a simplified swarm intelligence algorithm with the fewest tunable parameters and components. In the first work on DFO, the algorithm was compared against several other existing swarm intelligence techniques using error, efficiency and diversity measures. It was shown that, despite the simplicity of the algorithm, which only uses agents' position vectors at time t to generate the position vectors for time t + 1, it exhibits competitive performance. Since its inception, DFO has been used in a variety of applications, including medical imaging and image analysis as well as data mining and machine learning.

Algorithm

DFO bears many similarities with other existing continuous, population-based optimisers (e.g. particle swarm optimization and differential evolution): the swarming behaviour of the individuals consists of two tightly connected mechanisms, one being the formation of the swarm and the other its breaking or weakening. DFO works by facilitating the information exchange between the members of the population (the swarming flies). Each fly, x_i, represents a position in a D-dimensional search space, x_i = (x_i1, x_i2, ..., x_iD), and the fitness of each fly is calculated by the fitness function f(x_i), which takes into account the fly's D dimensions.

The pseudocode below represents one iteration of the algorithm:

for i = 1 : N flies
    evaluate fitness f(x_i)
end for i

s = arg min { f(x_i), i = 1 : N }

for i = 1 : N flies
    find x_n, the better of x_i's two ring-topology neighbours
    for d = 1 : D dimensions
        if U(0, 1) < Δ
            x_id^{t+1} = U(x_min,d, x_max,d)
        else
            x_id^{t+1} = x_nd^t + u (x_sd^t − x_id^t),  u ~ U(0, 1)
        end if
    end for d
end for i

In the algorithm above, x_id^t represents fly i at dimension d and time t; x_nd^t represents x_i's best neighbouring fly in ring topology (left or right, using flies' indexes), at dimension d and time t; and x_s^t is the swarm's best fly. Using this update equation, the swarm's population update depends on each fly's best neighbour x_n (which is used as the focus, μ), and the difference between the current fly and the best in swarm, x_sd^t − x_id^t, represents the spread of movement, σ.

Other than the population size N, the only tunable parameter is the disturbance threshold Δ, which controls the dimension-wise restart in each fly vector. This mechanism is proposed to control the diversity of the swarm.
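The iteration above can be sketched in Python. This is a minimal illustrative implementation, not reference code from the DFO authors: the function and parameter names (dfo, n_flies, delta) are placeholders, and out-of-bounds components are handled by the same uniform restart used for the disturbance mechanism, which is an assumption of this sketch.

```python
import random

def dfo(f, dim, n_flies=20, lower=-5.0, upper=5.0, delta=0.001,
        iterations=200, seed=42):
    """Minimise f over [lower, upper]^dim with dispersive flies optimisation."""
    rng = random.Random(seed)
    # Initialise the swarm uniformly at random in the search space.
    flies = [[rng.uniform(lower, upper) for _ in range(dim)]
             for _ in range(n_flies)]
    for _ in range(iterations):
        fitness = [f(x) for x in flies]
        s = min(range(n_flies), key=fitness.__getitem__)  # swarm's best fly
        updated = []
        for i in range(n_flies):
            if i == s:
                updated.append(flies[i][:])  # keep the best fly unchanged
                continue
            # Better of the two ring-topology neighbours (by index).
            left, right = (i - 1) % n_flies, (i + 1) % n_flies
            n = left if fitness[left] < fitness[right] else right
            x = []
            for d in range(dim):
                if rng.random() < delta:
                    # Dimension-wise restart, controlled by the
                    # disturbance threshold delta.
                    x.append(rng.uniform(lower, upper))
                else:
                    # x_id = x_nd + u * (x_sd - x_id), with u ~ U(0, 1).
                    u = rng.random()
                    v = flies[n][d] + u * (flies[s][d] - flies[i][d])
                    # Assumption: out-of-bounds components are restarted too.
                    if not lower <= v <= upper:
                        v = rng.uniform(lower, upper)
                    x.append(v)
            updated.append(x)
        flies = updated
    best = min(flies, key=f)
    return best, f(best)

# Example: minimise the 5-dimensional sphere function.
best, val = dfo(lambda x: sum(v * v for v in x), dim=5)
```

Note that only N (n_flies) and Δ (delta) shape the search itself; the bounds, dimensionality and iteration budget are properties of the problem, which is what makes the optimiser "bare-bones".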

Other notable minimalist swarm algorithms are bare bones particle swarms (BB-PSO), [3] which is based on particle swarm optimisation, and bare bones differential evolution (BBDE), [4] a hybrid of the bare bones particle swarm optimiser and differential evolution that aims to reduce the number of parameters. Alhakbani, in her PhD thesis, [5] covers many aspects of the algorithm, including several DFO applications in feature selection as well as parameter tuning.

Applications

Some of the recent applications of DFO are listed below:

- optimising support vector machines to classify imbalanced data [6]
- quantifying symmetrical complexity in aesthetics [7]
- swarmic autopoiesis and computational creativity [8]
- medical imaging [9]
- building non-identical organic structures with A* path-finding [10]
- training deep neural networks for false alarm detection in intensive care units [11]
- identifying animation key points from 2D-medialness maps [12]

References

  1. Downes, J. A. (January 1969). "The Swarming and Mating Flight of Diptera". Annual Review of Entomology. 14 (1): 271–298. doi:10.1146/annurev.en.14.010169.001415.
  2. al-Rifaie, Mohammad Majid (2014). "Dispersive Flies Optimisation". Proceedings of the 2014 Federated Conference on Computer Science and Information Systems. Vol. 2. pp. 529–538. doi:10.15439/2014f142. ISBN 978-83-60810-58-3. S2CID 3032155.
  3. Kennedy, J. (2003). "Bare bones particle swarms". Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706). pp. 80–87. doi:10.1109/SIS.2003.1202251. ISBN 978-0-7803-7914-5. S2CID 37185749.
  4. Omran, Mahamed G.H.; Engelbrecht, Andries P.; Salman, Ayed (July 2009). "Bare bones differential evolution" (PDF). European Journal of Operational Research. 196 (1): 128–139. doi:10.1016/j.ejor.2008.02.035. hdl:2263/8794.
  5. Alhakbani, Haya (2018). Handling Class Imbalance Using Swarm Intelligence Techniques, Hybrid Data and Algorithmic Level Solutions. PhD thesis. Goldsmiths, University of London, London, UK.
  6. Alhakbani, H. A.; al-Rifaie, M. M. (2017). "Optimising SVM to classify imbalanced data using dispersive flies optimisation". Proceedings of the 2017 Federated Conference on Computer Science and Information Systems. Vol. 11. pp. 399–402. doi:10.15439/2017F91. ISBN 978-83-946253-7-5. S2CID 22345522.
  7. al-Rifaie, Mohammad Majid; Ursyn, Anna; Zimmer, Robert; Javaheri Javid, Mohammad Ali (2017). "On Symmetry, Aesthetics and Quantifying Symmetrical Complexity". Computational Intelligence in Music, Sound, Art and Design. Lecture Notes in Computer Science. Vol. 10198. pp. 17–32. doi:10.1007/978-3-319-55750-2_2. ISBN 978-3-319-55749-6.
  8. al-Rifaie, Mohammad Majid; Fol Leymarie, Frédéric; Latham, William; Bishop, Mark (2017). "Swarmic autopoiesis and computational creativity" (PDF). Connection Science. 29 (4): 276–294. Bibcode:2017ConSc..29..276A. doi:10.1080/09540091.2016.1274960. S2CID 5591506.
  9. al-Rifaie, Mohammad Majid; Aber, Ahmed (2016). "Dispersive Flies Optimisation and Medical Imaging". Recent Advances in Computational Optimization (PDF). Studies in Computational Intelligence. Vol. 610. pp. 183–203. doi:10.1007/978-3-319-21133-6_11. ISBN 978-3-319-21132-9.
  10. King, Michael; al-Rifaie, Mohammad Majid (2017). "Building simple non-identical organic structures with dispersive flies optimisation and a* path-finding". AISB 2017: Games and AI: 336–340.
  11. Hooman, O. M. J.; al-Rifaie, M. M.; Nicolaou, M. A. (2018). "Deep Neuroevolution: Training Deep Neural Networks for False Alarm Detection in Intensive Care Units". 2018 26th European Signal Processing Conference (EUSIPCO) (PDF). pp. 1157–1161. doi:10.23919/EUSIPCO.2018.8552944. ISBN 978-9-0827-9701-5. S2CID 52825619.
  12. Aparajeya, Prashant; Leymarie, Frederic Fol; al-Rifaie, Mohammad Majid (2019). "Swarm-Based Identification of Animation Key Points from 2D-medialness Maps" (PDF). Computational Intelligence in Music, Sound, Art and Design. Lecture Notes in Computer Science. Vol. 11453. Springer International Publishing. pp. 69–83. doi:10.1007/978-3-030-16667-0_5. ISBN 978-3-030-16666-3. S2CID 106406853.