Fish School Search

Fish School Search (FSS), proposed by Bastos Filho and Lima Neto in 2008, is, in its basic version, a unimodal optimization algorithm inspired by the collective behavior of fish schools. [1] The mechanisms of feeding and coordinated movement were used as inspiration to create the search operators. The core idea is to make the fish "swim" toward the positive gradient in order to "eat" and "gain weight". Collectively, the heavier fish have more influence on the search process as a whole, which makes the barycenter of the fish school move toward better places in the search space over the iterations. [2]

The FSS uses the following principles: [3]

  1. Simple computations in all individuals (i.e. fish)
  2. Various means of storing information (i.e. weights of fish and school barycenter)
  3. Local computations (i.e. swimming is composed of distinct components)
  4. Low communications between neighboring individuals (i.e. fish are to think local but also be socially aware)
  5. Minimum centralized control (mainly for self-controlling of the school radius)
  6. Some distinct diversity mechanisms (this to avoid undesirable flocking behavior)
  7. Scalability (in terms of complexity of the optimization/search tasks)
  8. Autonomy (i.e. ability to self-control functioning)

Algorithm

FSS is a population-based search algorithm inspired by the behavior of swimming fish that expand and contract while looking for food. Each fish's n-dimensional location represents a possible solution for the optimization problem. The algorithm makes use of a weight for each fish, which represents a cumulative account of how successful the search has been for that fish. FSS is composed of the feeding operator and the movement operator, the latter being divided into three sub-components: [4]

Individual component of the movement

Every fish in the school performs a local search looking for promising regions in the search space, as represented below:

\vec{x}_i(t+1) = \vec{x}_i(t) + \mathrm{rand}(-1,1) \cdot step_{ind}

where \vec{x}_i(t) and \vec{x}_i(t+1) represent the position of fish i before and after the individual movement operator, respectively, \mathrm{rand}(-1,1) is a uniformly distributed random number varying from -1 up to 1, and step_{ind} is a parameter that defines the maximum displacement for this movement. The new position is only accepted if the fitness of the fish improves with the position change. If it does not, the fish remains in the same position and \Delta\vec{x}_i = \vec{0}.
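The acceptance rule can be sketched in Python with NumPy; the `sphere` fitness function and all names here are illustrative choices of this sketch, not part of the original formulation, and a minimization problem is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Illustrative fitness to be minimized; lower is better.
    return np.sum(x ** 2)

def individual_movement(x, step_ind, fitness=sphere):
    """Try a random displacement; keep it only if fitness improves."""
    candidate = x + rng.uniform(-1.0, 1.0, size=x.shape) * step_ind
    if fitness(candidate) < fitness(x):
        return candidate, candidate - x   # accepted: displacement is nonzero
    return x, np.zeros_like(x)            # rejected: fish stays, delta_x = 0
```

A rejected trial leaves the fish in place with a zero displacement vector, exactly as the acceptance rule requires.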

Collective-instinctive component of the movement

An average of the individual movements is calculated based on the following:

\vec{I}(t) = \frac{\sum_{i=1}^{N} \Delta\vec{x}_i \, \Delta f_i}{\sum_{i=1}^{N} \Delta f_i}

The vector \vec{I}(t) represents the weighted average of the displacements of each fish, which means that the fish that experienced a higher improvement attract the others toward their positions. After the vector is computed, every fish is encouraged to move according to:

\vec{x}_i(t+1) = \vec{x}_i(t) + \vec{I}(t)
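A minimal sketch of this improvement-weighted averaging, assuming the fitness improvements are stored as non-negative numbers (zero for fish whose individual trial was rejected); the function name is illustrative:

```python
import numpy as np

def instinctive_movement(positions, deltas, delta_f):
    """Move every fish by the improvement-weighted average displacement.

    positions: (N, d) current fish positions
    deltas:    (N, d) displacements from the individual movement
    delta_f:   (N,)   fitness improvements of each fish (>= 0)
    """
    total = delta_f.sum()
    if total == 0:                 # no fish improved this iteration
        return positions
    drift = (deltas * delta_f[:, None]).sum(axis=0) / total
    return positions + drift       # every fish shares the same drift vector
```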

Collective-volitive component of the movement

This operator is used to regulate the exploration/exploitation ability of the school during the search process. First, the barycenter of the school is calculated from the position and the weight of each fish:

\vec{B}(t) = \frac{\sum_{i=1}^{N} \vec{x}_i(t) \, W_i(t)}{\sum_{i=1}^{N} W_i(t)}

Then, if the total school weight has increased from the last to the current iteration, the fish are attracted to the barycenter according to equation A. If the total school weight has not improved, the fish are spread away from the barycenter according to equation B:

Eq. A: \vec{x}_i(t+1) = \vec{x}_i(t) - step_{vol} \, \mathrm{rand}(0,1) \, \frac{\vec{x}_i(t) - \vec{B}(t)}{d(\vec{x}_i(t), \vec{B}(t))}

Eq. B: \vec{x}_i(t+1) = \vec{x}_i(t) + step_{vol} \, \mathrm{rand}(0,1) \, \frac{\vec{x}_i(t) - \vec{B}(t)}{d(\vec{x}_i(t), \vec{B}(t))}

where step_{vol} defines the size of the maximum displacement performed with the use of this operator, d(\vec{x}_i(t), \vec{B}(t)) is the Euclidean distance between the position of fish i and the school barycenter, and \mathrm{rand}(0,1) is a uniformly distributed random number varying from 0 up to 1.
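Both equations can be sketched together, with the sign selected by whether the total school weight increased; this is an illustrative NumPy sketch (including the guard against a zero distance), not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def volitive_movement(positions, weights, step_vol, school_got_heavier):
    """Contract toward (Eq. A) or expand away from (Eq. B) the barycenter."""
    barycenter = (positions * weights[:, None]).sum(axis=0) / weights.sum()
    direction = positions - barycenter
    distance = np.linalg.norm(direction, axis=1, keepdims=True)
    distance[distance == 0] = 1.0     # a fish sitting on the barycenter stays put
    step = step_vol * rng.uniform(0.0, 1.0, size=(len(positions), 1))
    sign = -1.0 if school_got_heavier else 1.0
    return positions + sign * step * direction / distance
```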

Besides the movement operators, a feeding operator was also defined, which updates the weight of every fish according to:

W_i(t+1) = W_i(t) + \frac{\Delta f_i}{\max(|\Delta f|)}

where W_i(t) is the weight parameter for fish i, \Delta f_i is the fitness variation between the last and the new position, and \max(|\Delta f|) represents the maximum absolute value of the fitness variation among all the fish in the school. W_i is only allowed to vary from 1 up to W_{scale}, which is a user-defined attribute. The weights of all fish are initialized with the value W_{scale}/2.
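A short sketch of the weight update, assuming the clamping to [1, W_{scale}] is applied directly after each update; names are illustrative:

```python
import numpy as np

def feeding(weights, delta_f, w_scale):
    """Update fish weights by normalized fitness improvement."""
    max_abs = np.max(np.abs(delta_f))
    if max_abs == 0:
        return weights                # no fitness change, weights unchanged
    # Normalize by the largest absolute variation, then clamp to [1, w_scale].
    return np.clip(weights + delta_f / max_abs, 1.0, w_scale)
```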

The pseudo-code for FSS

  1. Initialize user parameters
  2. Initialize fishes positions randomly
  3. while Stopping condition is not met do
  4. Calculate fitness for each fish
  5. Run individual operator movement
  6. Calculate fitness for each fish
  7. Run feeding operator
  8. Run collective-instinctive movement operator
  9. Run collective-volitive movement operator
  10. end while
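The pseudo-code above can be turned into a compact, illustrative Python program. The sphere function, the parameter values, and the way the operators share the per-iteration displacements are choices of this sketch, not prescriptions from the original papers:

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Illustrative fitness to be minimized; lower is better."""
    return np.sum(x ** 2, axis=-1)

def fss(fitness=sphere, n_fish=30, dim=2, iters=200,
        step_ind0=0.1, step_vol0=0.01, w_scale=500.0, bound=5.0):
    x = rng.uniform(-bound, bound, size=(n_fish, dim))
    w = np.full(n_fish, w_scale / 2.0)            # weights start at W_scale / 2
    step_ind, step_vol = step_ind0, step_vol0
    for _ in range(iters):
        f_old = fitness(x)
        # 1) Individual movement: keep only improving trials.
        cand = x + rng.uniform(-1.0, 1.0, size=x.shape) * step_ind
        f_cand = fitness(cand)
        improved = f_cand < f_old
        delta_x = np.where(improved[:, None], cand - x, 0.0)
        delta_f = np.where(improved, f_old - f_cand, 0.0)   # improvement >= 0
        x = x + delta_x
        # 2) Feeding: normalized weight update, clamped to [1, w_scale].
        max_abs = np.abs(delta_f).max()
        w_new = np.clip(w + delta_f / max_abs, 1.0, w_scale) if max_abs > 0 else w
        # 3) Collective-instinctive movement.
        if delta_f.sum() > 0:
            x = x + (delta_x * delta_f[:, None]).sum(axis=0) / delta_f.sum()
        # 4) Collective-volitive movement: contract if the school got heavier.
        b = (x * w_new[:, None]).sum(axis=0) / w_new.sum()
        dist = np.linalg.norm(x - b, axis=1, keepdims=True)
        dist[dist == 0] = 1.0
        sign = -1.0 if w_new.sum() > w.sum() else 1.0
        x = x + sign * step_vol * rng.uniform(0.0, 1.0, (n_fish, 1)) * (x - b) / dist
        w = w_new
        # Linear step decay.
        step_ind = max(step_ind - step_ind0 / iters, 0.0)
        step_vol = max(step_vol - step_vol0 / iters, 0.0)
    best = x[np.argmin(fitness(x))]
    return best, float(fitness(best))
```

Note that step_{ind} is used here as an absolute displacement; some FSS descriptions express it instead as a percentage of the search-space range.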

The parameters step_{ind} and step_{vol} decay linearly according to:

step_{ind}(t+1) = step_{ind}(t) - \frac{step_{ind}(initial)}{iter_{max}}

and similarly:

step_{vol}(t+1) = step_{vol}(t) - \frac{step_{vol}(initial)}{iter_{max}}

where step_{ind}(initial) and step_{vol}(initial) are user-defined initial values for step_{ind} and step_{vol}, respectively, and iter_{max} is the maximum number of iterations allowed in the search process.
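The linear decay can be sketched as follows; by construction the schedule reaches zero exactly at iter_{max}:

```python
def decay(step_initial, iter_max):
    """Linear decay schedule: subtract step_initial / iter_max each iteration."""
    steps = [step_initial]
    for _ in range(iter_max):
        steps.append(steps[-1] - step_initial / iter_max)
    return steps
```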

Variations of FSS

dFSS (Density-based Fish School Search)

This version excels for multimodal hyper-dimensional functions. It includes modifications of the previous Feeding and Swimming operators, as well as two new operators: Memory and Partition. The latter two were introduced to account for the splitting of the main school into subgroups. Changes were also made to the stopping conditions, which now also have to consider subswarms. [5]

wFSS (Weight-based Fish School Search)

wFSS is a weight-based niching version of FSS intended to produce multiple solutions. The niching strategy is based on a new operator called the link formator, which defines leaders for the fish in order to form sub-schools. [6]

FSS-SAR (Stagnation Avoidance Routine Fish School Search)

In the original version of the algorithm, the individual movement component only moves a fish if the move improves its fitness. However, in a very smooth search space, there may be many movement trials with no success, and the algorithm can fail to converge. To address this, a parameter X with 0 <= X <= 1 was introduced into the individual component of the movement. X decays exponentially along the iterations and defines the probability that a worsening move is allowed for each fish: every time a fish tries to move to a position that does not improve its fitness, a random number is drawn, and if it is smaller than X the movement is allowed. [7]
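The stagnation-avoidance acceptance test can be sketched as below; the initial value and decay rate of the exponential schedule are illustrative choices of this sketch, treated as user parameters:

```python
import math
import random

random.seed(0)

def sar_probability(t, iter_max, x0=0.5, decay_rate=5.0):
    """Exponentially decaying worsening-allowance probability X(t)."""
    return x0 * math.exp(-decay_rate * t / iter_max)

def sar_accept(fitness_worsened, t, iter_max, x0=0.5, decay_rate=5.0):
    """FSS-SAR rule: improving moves always pass; worsening moves pass
    only when a uniform random draw falls below X(t)."""
    if not fitness_worsened:
        return True
    return random.random() < sar_probability(t, iter_max, x0, decay_rate)
```

Early in the run, worsening moves are accepted fairly often, which helps the school cross plateaus; late in the run X(t) is close to zero and the rule reduces to the original greedy acceptance.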

bFSS (Binary Fish School Search)

The bFSS is intended to cope with premature convergence, proposing a binary encoding scheme for the internal mechanisms of fish school search. It combined FSS with fuzzy modeling in a wrapper approach to feature selection. [8]

MOFSS (Multi-Objective Fish School Search)

In the MOFSS the operators are adapted to solve multi-objective problems. The algorithm deploys an external archive to store the best non-dominated solutions found during the search process, an approach that has been extensively used in bio-inspired multi-objective optimizers. [9] [10] Furthermore, the solutions within the external archive are used to guide the fish movements in the proposed version. [11]

See also

  - Particle swarm optimization
  - Fitness function
  - Ant colony optimization algorithms
  - Stochastic gradient descent
  - Backpropagation
  - Mehrotra's predictor–corrector method
  - Estimation of distribution algorithm
  - Quasi-Newton method
  - Stochastic approximation
  - Nonlinear conjugate gradient method
  - Multi-objective optimization
  - Cuckoo search
  - Moving horizon estimation
  - Vanishing gradient problem
  - Multiplicative weights update method
  - Quantum counting algorithm
  - Spiral optimization algorithm
  - Parallel single-source shortest path algorithm
  - Powell's dog leg method

References

  1. C. J. A. Bastos Filho, F. B. de Lima Neto, A. J. C. C. Lins, A. I. S. Nascimento, and M. P. Lima, "A novel search algorithm based on fish school behavior," in IEEE International Conference on Systems, Man and Cybernetics (SMC 2008), 2008, pp. 2646-2651.
  2. de Lima Neto, Fernando Buarque, and Marcelo Gomes Pereira de Lacerda. "Multimodal Fish School Search Algorithms Based on Local Information for School Splitting." 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence. IEEE, 2013
  3. "FBLN – Prof. Dr. Fernando Buarque de Lima Neto, B.Sc. M.Sc. DIC-Ph.D.(Imperial College/UK) Hab(BR) SM-IEEE(USA) Alexander von Humboldt-Fellow(DE) Academy of Science-Fellow(PE/BR)".
  4. J. B. Monteiro, I. M. C. Albuquerque, F. B. L. Neto, and F. V. S. Ferreira, “Optimizing multi-plateau functions with FSS-SAR (Stagnation Avoidance Routine),” Submitted to IEEE Symposium Series on Computational Intelligence, 2016.
  5. Madeiro, S. S., de Lima-Neto, F. B., Bastos-Filho, C. J. A., & do Nascimento Figueiredo, E. M. (2011, June). Density as the segregation mechanism in fish school search for multimodal optimization problems. In International Conference in Swarm Intelligence (pp. 563-572). Springer Berlin Heidelberg.
  6. F. Buarque De Lima Neto and M. Gomes Pereira de Lacerda, “Weight based fish school search,” in Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on. IEEE, 2014, pp. 270–277.
  7. J. B. Monteiro, I. M. C. Albuquerque, F. B. L. Neto, and F. V. S. Ferreira, “Optimizing multi-plateau functions with FSS-SAR (Stagnation Avoidance Routine),” Submitted to IEEE Symposium Series on Computational Intelligence, 2016.
  8. Sargo, João AG, et al. "Binary Fish School Search applied to feature selection: Application to ICU readmissions." 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2014.
  9. Deb, K., Thiele, L., Laumanns, M., & Zitzler, E.(2002) Scalable Multi-Objective Optimization Test Problems, In: IEEE Congress on Evolutionary Computation (pp. 825–830).
  10. Nebro, A. J., Durillo, J. J., García-Nieto, J., Coello Coello, C. A., Luna, F., & Alba, E. (2009) SMPSO: A new PSO-based metaheuristic for multi-objective optimization, In: IEEE Symposium on Computational Intelligence in Multicriteria Decision-Making (pp. 66–73). doi:10.1109/MCDM.2009.4938830
  11. Bastos-Filho, Carmelo JA, and Augusto CS Guimarães. "Multi-Objective Fish School Search." International Journal of Swarm Intelligence Research (IJSIR) 6.1 (2015): 23-40.