Biogeography-based optimization

Biogeography-based optimization (BBO) is an evolutionary algorithm (EA) that optimizes a function by stochastically and iteratively improving candidate solutions with regard to a given measure of quality, or fitness function. BBO belongs to the class of metaheuristics: it exists in many variations, makes no assumptions about the problem being optimized, and can therefore be applied to a wide class of problems.

BBO is typically used to optimize multidimensional real-valued functions, but it does not use the gradient of the function, which means that it does not require the function to be differentiable as required by classic optimization methods such as gradient descent and quasi-Newton methods. BBO can therefore be used on discontinuous functions.

BBO optimizes a problem by maintaining a population of candidate solutions, and creating new candidate solutions by combining existing ones according to a simple formula. In this way the objective function is treated as a black box that merely provides a measure of quality given a candidate solution, and the function's gradient is not needed.
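To make the black-box view concrete, the sketch below (in Python, for illustration) defines the Rosenbrock function, the same benchmark used in the MATLAB listing later in this article, purely as a callable that maps a candidate solution to a cost; BBO only ever calls it and reads back a quality score, never a gradient.

```python
def rosenbrock(x):
    """Rosenbrock cost of a candidate solution (a sequence of floats).

    BBO treats this function as a black box: it supplies a candidate
    solution and receives a single quality measure in return.
    """
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

# The global minimum is at x = (1, 1, ..., 1), where the cost is 0.
print(rosenbrock([1.0, 1.0, 1.0]))  # → 0.0
```

Any other cost function with the same calling convention could be substituted without changing the optimizer at all.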

Like many EAs, BBO was motivated by a natural process; in particular, BBO was motivated by biogeography, which is the study of the distribution of biological species through time and space. [1] BBO was originally introduced by Dan Simon in 2008. [2]

Underlying principles

Mathematical models of biogeography describe speciation (the evolution of new species), the migration of species (animals, fish, birds, or insects) between islands, and the extinction of species. [3] Islands that are friendly to life are said to have a high habitat suitability index (HSI). [4] Features that correlate with HSI include rainfall, vegetative diversity, topographic diversity, land area, temperature, and others. The features that determine HSI are called suitability index variables (SIVs). In terms of habitability, SIVs are the independent variables and HSI is the dependent variable.

Islands with a high HSI can support many species, and islands with a low HSI can support only a few species. Islands with a high HSI have many species that emigrate to nearby habitats because of the large populations and the large numbers of species that they host. Note that emigration from an island with a high HSI does not occur because species want to leave their home; after all, their home island is an attractive place to live. Emigration occurs because of the accumulation of random effects on a large number of species with large populations. Emigration occurs as animals ride flotsam, swim, fly, or ride the wind to neighboring islands. When a species emigrates from an island, it does not mean that the species completely disappears from its original island; only a few representatives emigrate, so an emigrating species remains present on its original island while at the same time migrating to a neighboring island. However, in BBO it is assumed that emigration from an island results in extinction from that island. This assumption is necessary in BBO because species represent the independent variables of a function, and each island represents a candidate solution to a function optimization problem.

Islands with a high HSI not only have a high emigration rate, but they also have a low immigration rate because they already support many species. Species that migrate to such islands will tend to die in spite of the island's high HSI, because there is too much competition for resources from other species.

Islands with a low HSI have a high immigration rate because of their low populations. Again, this is not because species want to immigrate to such islands; after all, these islands are undesirable places to live. The reason that immigration occurs to these islands is because there is a lot of room for additional species. Whether or not the immigrating species can survive in its new home, and for how long, is another question. However, species diversity is correlated with HSI, so when more species arrive at a low HSI island, the island's HSI will tend to increase. [4]

The figure illustrates an island migration model. [3] The immigration rate λ and the emigration rate μ are functions of the number of species S on the island. The maximum possible immigration rate I occurs when there are zero species on the island. As the number of species increases, the island becomes more crowded, fewer species are able to survive immigration, and the immigration rate decreases. The largest possible number of species that the habitat can support is S_max, at which point the immigration rate is zero. If there are no species on the island, then the emigration rate is zero. As the number of species on the island increases, it becomes more crowded, more species representatives are able to leave the island, and the emigration rate increases. When the island contains the largest possible number of species S_max, the emigration rate reaches its maximum possible value E.
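The model described above is often taken to be linear: λ(S) = I(1 − S/S_max) for immigration and μ(S) = E·S/S_max for emigration. A minimal Python sketch of these linear rates (the linear form is the standard textbook choice; other monotonic curves also appear in the literature):

```python
def migration_rates(S, S_max, I=1.0, E=1.0):
    """Linear island-migration model.

    S      -- current number of species on the island
    S_max  -- maximum number of species the island can support
    I, E   -- maximum immigration and emigration rates
    """
    lam = I * (1.0 - S / S_max)  # immigration falls as the island fills
    mu = E * (S / S_max)         # emigration rises as the island fills
    return lam, mu

print(migration_rates(0, 100))    # empty island: (1.0, 0.0)
print(migration_rates(100, 100))  # full island:  (0.0, 1.0)
```

The two lines cross at the equilibrium species count S_0, where immigration and emigration balance.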

Model of immigration λ and emigration μ probabilities. S_0 is the equilibrium species count, and S_max is the maximum number of species that the island can support. I and E are the maximum immigration and emigration rates, respectively. (Figure: Species Migration Model.png)

In BBO, λ_k is the probability that a given independent variable in the k-th candidate solution x_k will be replaced; that is, λ_k is the immigration probability of x_k. If an independent variable is to be replaced, then the emigrating candidate solution x_j is chosen with a probability that is proportional to its emigration probability μ_j. This is usually performed using roulette wheel selection:

P(x_j) = μ_j / (μ_1 + μ_2 + ⋯ + μ_N) for j = 1, ..., N, where N is the number of candidate solutions in the population.
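A minimal Python sketch of the roulette-wheel step, selecting an emigrating solution with probability proportional to its emigration rate (the function name is illustrative):

```python
import random

def roulette_select(mu, rng=random):
    """Return index j with probability mu[j] / sum(mu)."""
    threshold = rng.random() * sum(mu)
    running = 0.0
    for j, rate in enumerate(mu):
        running += rate
        if running >= threshold:
            return j
    return len(mu) - 1  # guard against floating-point round-off

mu = [0.5, 0.3, 0.2]  # emigration rates for a 3-solution population
counts = [0, 0, 0]
random.seed(0)
for _ in range(10000):
    counts[roulette_select(mu)] += 1
print(counts)  # roughly proportional to [5000, 3000, 2000]
```

This is the same selection mechanism used in the migration step of the MATLAB listing below.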

Algorithm

Like most other EAs, BBO includes mutation. A basic BBO algorithm with a population size of N for optimizing an n-dimensional function can be described as follows.

Initialize a population of N candidate solutions {x_k}
While not (termination criterion)
    For each x_k, set emigration probability μ_k proportional to the fitness of x_k, with μ_k ∈ [0, 1]
    For each x_k, set immigration probability λ_k = 1 − μ_k
    For each individual x_k (k = 1, ..., N)
        For each independent variable index s (s = 1, ..., n)
            Use λ_k to probabilistically decide whether to immigrate to x_k
            If immigrating then
                Use {μ_j} to probabilistically select the emigrating individual x_j
                x_k(s) ← x_j(s)
            End if
        Next independent variable index
        Probabilistically mutate x_k
    Next individual
Next generation
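The algorithm above can also be sketched compactly in Python. This is an illustrative minimal implementation (linear migration rates, random-reset mutation, single-elite survival, and a sphere cost function), not the reference code from [2]:

```python
import random

def bbo_minimize(cost, dim, pop_size=50, generations=100,
                 p_mutate=0.04, lo=-5.0, hi=5.0, rng=random):
    """Minimal BBO loop: sorted population with linear migration rates."""
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)  # best (lowest cost) first
        # Linear rates: the best solution emigrates most and immigrates least
        mu = [(pop_size - k) / (pop_size + 1) for k in range(pop_size)]
        lam = [1.0 - m for m in mu]
        new_pop = []
        for k in range(pop_size):
            island = pop[k][:]
            for s in range(dim):
                if rng.random() < lam[k]:  # immigrate into variable s?
                    # Roulette-wheel choice of the emigrating solution
                    t = rng.random() * sum(mu)
                    acc = 0.0
                    for j, m in enumerate(mu):
                        acc += m
                        if acc >= t:
                            break
                    island[s] = pop[j][s]
                if rng.random() < p_mutate:
                    island[s] = rng.uniform(lo, hi)  # random-reset mutation
            new_pop.append(island)
        # Elitism: carry the previous generation's best forward unchanged
        new_pop[-1] = min(pop, key=cost)[:]
        pop = new_pop
    return min(pop, key=cost)

random.seed(1)
best = bbo_minimize(lambda x: sum(v * v for v in x), dim=5)
print(best)  # near the origin, the minimum of the sphere function
```

The MATLAB program in the Software section below implements the same loop with vectorized cost evaluation and a larger elite set.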

Discussion of the BBO algorithm

Algorithmic variations

Many variations have been proposed to the basic BBO algorithm, among which are the following.

One example is blended migration, in which the immigrating variable is set to a weighted combination of the two solutions, x_k(s) ← α x_k(s) + (1 − α) x_j(s), where α ∈ [0, 1]; α = 0 corresponds to standard migration as shown in the algorithm above. Blended BBO is based on blended crossover in genetic algorithms, [6] and has been shown to outperform standard BBO. [7]
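Blended migration replaces the straight copy of an emigrating variable with a weighted combination of the immigrant's and emigrant's values. A hedged Python sketch (the function name is illustrative; α is the blending parameter):

```python
def blended_migrate(immigrant, emigrant, s, alpha=0.5):
    """Blended migration for variable index s.

    alpha = 0 reduces to standard BBO migration (a straight copy of
    the emigrant's variable); alpha in (0, 1] blends the two values.
    """
    immigrant[s] = alpha * immigrant[s] + (1.0 - alpha) * emigrant[s]
    return immigrant

x_k = [2.0, 2.0, 2.0]  # immigrating solution
x_j = [0.0, 0.0, 0.0]  # emigrating solution
print(blended_migrate(x_k, x_j, s=1, alpha=0.5))  # → [2.0, 1.0, 2.0]
```

With α = 0 the call reproduces the standard migration step exactly.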

Hybridization

Software

MATLAB

function BBO
% Biogeography-based optimization (BBO) to minimize a continuous function
% This program was tested with MATLAB R2012b

GenerationLimit = 50; % generation count limit
PopulationSize = 50; % population size
ProblemDimension = 20; % number of variables in each solution (i.e., problem dimension)
MutationProbability = 0.04; % mutation probability per solution per independent variable
NumberOfElites = 2; % how many of the best solutions to keep from one generation to the next
MinDomain = -2.048; % lower bound of each element of the function domain
MaxDomain = +2.048; % upper bound of each element of the function domain

% Initialize the population
rng(round(sum(100*clock))); % initialize the random number generator
x = zeros(PopulationSize, ProblemDimension); % allocate memory for the population
for index = 1 : PopulationSize % randomly initialize the population
    x(index, :) = MinDomain + (MaxDomain - MinDomain) * rand(1, ProblemDimension);
end
Cost = RosenbrockCost(x); % compute the cost of each individual
[x, Cost] = PopulationSort(x, Cost); % sort the population from best to worst
MinimumCost = zeros(GenerationLimit, 1); % allocate memory
MinimumCost(1) = Cost(1); % save the best cost at each generation in the MinimumCost array
disp(['Generation 0 min cost = ', num2str(MinimumCost(1))]);
z = zeros(PopulationSize, ProblemDimension); % allocate memory for the temporary population

% Compute migration rates, assuming the population is sorted from most fit to least fit
mu = (PopulationSize + 1 - (1 : PopulationSize)) / (PopulationSize + 1); % emigration rate
lambda = 1 - mu; % immigration rate

for Generation = 1 : GenerationLimit
    % Save the best solutions and costs in the elite arrays
    EliteSolutions = x(1 : NumberOfElites, :);
    EliteCosts = Cost(1 : NumberOfElites);
    % Use migration rates to decide how much information to share between solutions
    for k = 1 : PopulationSize
        % Probabilistic migration to the k-th solution
        for j = 1 : ProblemDimension
            if rand < lambda(k) % Should we immigrate?
                % Yes - Pick a solution from which to emigrate (roulette wheel selection)
                RandomNum = rand * sum(mu);
                Select = mu(1);
                SelectIndex = 1;
                while (RandomNum > Select) && (SelectIndex < PopulationSize)
                    SelectIndex = SelectIndex + 1;
                    Select = Select + mu(SelectIndex);
                end
                z(k, j) = x(SelectIndex, j); % this is the migration step
            else
                z(k, j) = x(k, j); % no migration for this independent variable
            end
        end
    end
    % Mutation
    for k = 1 : PopulationSize
        for ParameterIndex = 1 : ProblemDimension
            if rand < MutationProbability
                z(k, ParameterIndex) = MinDomain + (MaxDomain - MinDomain) * rand;
            end
        end
    end
    x = z; % replace the solutions with their new migrated and mutated versions
    Cost = RosenbrockCost(x); % calculate cost
    [x, Cost] = PopulationSort(x, Cost); % sort the population and costs from best to worst
    % Replace the worst individuals with the previous generation's elites
    for k = 1 : NumberOfElites
        x(PopulationSize - k + 1, :) = EliteSolutions(k, :);
        Cost(PopulationSize - k + 1) = EliteCosts(k);
    end
    [x, Cost] = PopulationSort(x, Cost); % sort the population and costs from best to worst
    MinimumCost(Generation + 1) = Cost(1);
    disp(['Generation ', num2str(Generation), ' min cost = ', num2str(MinimumCost(Generation + 1))])
end

% Wrap it up by displaying the best solution and by plotting the results
disp(['Best solution found = ', num2str(x(1, :))])
close all
plot(0 : GenerationLimit, MinimumCost);
xlabel('Generation')
ylabel('Minimum Cost')
return

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [x, Cost] = PopulationSort(x, Cost)
% Sort the population and costs from best to worst
[Cost, indices] = sort(Cost, 'ascend');
x = x(indices, :);
return

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [Cost] = RosenbrockCost(x)
% Compute the Rosenbrock function value of each element in x
NumberOfDimensions = size(x, 2);
Cost = zeros(size(x, 1), 1); % allocate memory for the Cost array
for PopulationIndex = 1 : length(x)
    Cost(PopulationIndex) = 0;
    for i = 1 : NumberOfDimensions - 1
        Temp1 = x(PopulationIndex, i);
        Temp2 = x(PopulationIndex, i + 1);
        Cost(PopulationIndex) = Cost(PopulationIndex) + 100 * (Temp2 - Temp1^2)^2 + (Temp1 - 1)^2;
    end
end
return

R

Extensions

BBO has been extended to noisy functions (that is, functions whose fitness evaluation is corrupted by noise); [21] constrained functions; [22] combinatorial functions; [23] and multi-objective functions. [24] [25] Moreover, a micro biogeography-inspired multi-objective optimization algorithm (μBiMO) was implemented: it is suitable for solving multi-objective optimisations in the field of industrial design because it is based on a small number of islands (hence the name μBiMO), i.e. few objective function calls are required. [26]

Mathematical analyses

BBO has been mathematically analyzed using Markov models [27] and dynamic system models. [28]

Applications

BBO has been applied to a variety of academic and industrial problems, where it has often performed competitively with state-of-the-art global optimization methods.

For example, Wang et al. showed that BBO performed comparably to fitness-scaled chaotic artificial bee colony (FSCABC), but with simpler code. [29]

Yang et al. reported that BBO outperformed genetic algorithms (GA), particle swarm optimization (PSO), and artificial bee colony (ABC) in their study. [30]

References

  1. Quammen, D. (1997). The Song of the Dodo: Island Biogeography in an Age of Extinction. Scribner.
  2. Simon, D. (2008). "Biogeography-based optimization" (PDF). IEEE Transactions on Evolutionary Computation. 12 (6): 702–713. doi:10.1109/tevc.2008.919004. S2CID   8319014.
  3. MacArthur, R.; Wilson, E. (1967). The Theory of Island Biogeography. Princeton University Press.
  4. Wesche, T.; Goertler, G.; Hubert, W. (1987). "Modified habitat suitability index model for brown trout in southeastern Wyoming". North American Journal of Fisheries Management. 7 (2): 232–237. doi:10.1577/1548-8659(1987)7<232:mhsimf>2.0.co;2.
  5. De Jong, K. (1975). An Analysis of the Behaviour of a Class of Genetic Adaptive Systems (Ph.D.). University of Michigan.
  6. Muhlenbein, H.; Schlierkamp-Voosen, D. (1993). "Predictive models for the breeder genetic algorithm: I. Continuous parameter optimization". Evolutionary Computation. 1 (1): 25–49. doi:10.1162/evco.1993.1.1.25. S2CID   16085506.
  7. Ma, H.; Simon, D. (2011). "Blended biogeography-based optimization for constrained optimization" (PDF). Engineering Applications of Artificial Intelligence. 24 (3): 517–525. doi:10.1016/j.engappai.2010.08.005.
  8. Simon, D. (2013). Evolutionary Optimization Algorithms. Wiley.
  9. Kundra, H.; Sood, M. (2010). "Cross-Country Path Finding using Hybrid approach of PSO and BBO" (PDF). International Journal of Computer Applications. 7 (6): 15–19. doi:10.5120/1167-1370.
  10. Ma, H. (2010). "An analysis of the equilibrium of migration models for biogeography-based optimization" (PDF). Information Sciences. 180 (18): 3444–3464. doi:10.1016/j.ins.2010.05.035.
  11. Zhang, Y. (2015). "Pathological Brain Detection in Magnetic Resonance Imaging Scanning by Wavelet Entropy and Hybridization of Biogeography-based Optimization and Particle Swarm Optimization" (PDF). Progress in Electromagnetics Research. 152: 41–58. doi: 10.2528/pier15040602 .
  12. Bhattacharya, A.; Chattopadhyay, P. (2010). "Hybrid differential evolution with biogeography-based optimization for solution of economic load dispatch". IEEE Transactions on Power Systems. 25 (4): 1955–1964. Bibcode:2010ITPSy..25.1955B. doi:10.1109/tpwrs.2010.2043270. S2CID   30052218.
  13. Du, D.; Simon, D.; Ergezer, M. (2009). "Biogeography-based optimization combined with evolutionary strategy and immigration refusal" (PDF). IEEE Conference on Systems, Man, and Cybernetics. San Antonio, Texas. pp. 1023–1028.
  14. Ergezer, M.; Simon, D.; Du, D. (2009). "Oppositional biogeography-based optimization" (PDF). IEEE Conference on Systems, Man, and Cybernetics. San Antonio, Texas. pp. 1035–1040.
  15. Kundra, H.; Kaur, A.; Panchal, V. (2009). "An integrated approach to biogeography based optimization with case-based reasoning for exploring groundwater possibility" (PDF). The Delving: Journal of Technology and Engineering Sciences. 1 (1): 32–38.
  16. Lohokare, M.; Pattnaik, S.; Devi, S.; Panigrahi, B.; Das, S.; Bakwad, K. (2009). "Intelligent biogeography-based optimization for discrete variables". World Congress on Nature and Biologically Inspired Computing. Coimbatore, India. pp. 1088–1093. doi:10.1109/NABIC.2009.5393808.
  17. Wang, G.; Guo, L.; Duan, H.; Wang, H.; Liu, L.; Shao, M. (2013). "Hybridizing harmony search with biogeography based optimization for global numerical optimization". Journal of Computational and Theoretical Nanoscience. 10 (10): 2312–2322. Bibcode:2013JCTN...10.2312W. doi:10.1166/jctn.2013.3207.
  18. Wang, L.; Xu, Y. (2011). "An effective hybrid biogeography-based optimization algorithm for parameter estimation of chaotic systems". Expert Systems with Applications. 38 (12): 15103–15109. doi:10.1016/j.eswa.2011.05.011.
  19. Simon, D.; Omran, M.; Clerc, M. "Linearized Biogeography-Based Optimization with Re-initialization and Local Search" . Retrieved 6 September 2013.
  20. "Bbo: Biogeography-Based Optimization". 2014-09-18.
  21. Ma, H.; Fei, M.; Simon, D.; Yu, M. "Biogeography-Based Optimization for Noisy Fitness Functions" . Retrieved 7 September 2013.
  22. Roy, P.; Ghoshal, S.; Thakur, S. (2010). "Biogeography based optimization for multi-constraint optimal power flow with emission and non-smooth cost function". Expert Systems with Applications. 37 (12): 8221–8228. doi:10.1016/j.eswa.2010.05.064.
  23. Song, Y.; Liu, M.; Wang, Z. (2010). "Biogeography-based optimization for the traveling salesman problems". International Joint Conference on Computational Science and Optimization. Huangshan, Anhui, China. pp. 295–299.
  24. Roy, P.; Ghoshal, S.; Thakur, S. (2010). "Multi-objective optimal power flow using biogeography-based optimization". Electric Power Components and Systems. 38 (12): 1406–1426. doi:10.1080/15325001003735176. S2CID   109069222.
  25. Di Barba, P.; Dughiero, F.; Mognaschi, M.E.; Savini, A.; Wiak, S. (2016). "Biogeography-Inspired Multiobjective Optimization and MEMS Design". IEEE Transactions on Magnetics. 52 (3): 1–4. Bibcode:2016ITM....5288982D. doi:10.1109/TMAG.2015.2488982. S2CID   17355264.
  26. Mognaschi, M.E. (2017). "Micro biogeography-inspired multi-objective optimisation for industrial electromagnetic design". Electronics Letters. 53 (22): 1458–1460. doi:10.1049/el.2017.3072.
  27. Simon, D.; Ergezer, M.; Du, D.; Rarick, R. (2011). "Markov models for biogeography-based optimization" (PDF). IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics. 41 (1): 299–306. doi:10.1109/tsmcb.2010.2051149. PMID   20595090. S2CID   11852624.
  28. Simon, D. (2011). "A dynamic system model of biogeography-based optimization" (PDF). Applied Soft Computing. 1 (8): 5652–5661. doi:10.1016/j.asoc.2011.03.028.
  29. Wang, S. (2015). "Fruit Classification by Wavelet-Entropy and Feedforward Neural Network trained by Fitness-scaled Chaotic ABC and Biogeography-based Optimization". Entropy. 17 (8): 5711–5728. Bibcode:2015Entrp..17.5711W. doi: 10.3390/e17085711 .
  30. Yang, G.; Yang, J. (2015). "Automated classification of brain images using wavelet-energy and biogeography-based optimization". Multimedia Tools and Applications. 75 (23): 15601–15617. doi:10.1007/s11042-015-2649-7. S2CID   254825916.