| Developed by | Binu D [1] |
| --- | --- |
| Category | Metaheuristics [2] |
| Year of development | 2019 [1] |
| Publisher | IEEE [1] |
| Language | MATLAB [3] |
| Citation count | 49 [4] |
The rider optimization algorithm (ROA) [1] [5] [6] is based on a so-called fictional computing method: rather than imitating a natural or physical process, it solves optimization problems through an imaginary scenario built from fictional notions. ROA models groups of riders who travel toward a common target and compete to become the winner. The algorithm uses four groups, each containing an equal number of riders.
The four groups used in ROA are the bypass rider, follower, overtaker, and attacker, and each follows its own strategy to reach the target. The bypass rider tries to reach the target by bypassing the leader's path. The follower updates its position along selected coordinates by following the leading rider; this multidirectional search around the leader is useful because it improves the convergence rate. The overtaker updates its own position based on locations near the leader; it promotes faster convergence over a large global neighbourhood, and its position depends on the leader's position, the success rate, and a direction indicator. The attacker adopts the leader's position update at maximum speed and initiates a fast multidirectional search that accelerates the overall search.
Although each rider follows a specific strategy, the major factors for reaching the target are correct riding of the vehicle and proper management of the accelerator, steering, brake, and gear. At each time instant, the riders alter their positions toward the target by regulating these factors, guided by the current success rate. The leader is defined as the rider with the highest success rate at the current instant. The process is repeated until the riders reach the off time, the maximum time allotted to the riders to attain the intended location. At the off time, the rider in the leading position is termed the winner.
ROA [1] [5] [6] is motivated by riders who compete to reach an anticipated location. The steps of the algorithm are defined below:
The first step is the initialization of the algorithm, in which the riders of the four groups are represented together as a population $Y_t$ whose positions are set arbitrarily. The initialization of the group is given by

$$Y_t = \{\,Y_t(i,j)\,\}, \qquad 1 \le i \le R,\; 1 \le j \le Q \qquad (1)$$

where $R$ signifies the count of riders, $Y_t(i,j)$ signifies the position of the $i$-th rider in the $j$-th coordinate at time instant $t$, and $Q$ is the number of coordinates.
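The following is a minimal NumPy sketch of this random initialization. The function name `initialize_riders` and the use of uniform sampling within given bounds are illustrative assumptions; the formulation only requires arbitrary starting positions.

```python
import numpy as np

def initialize_riders(R, Q, lower, upper, seed=None):
    """Randomly place R riders in a Q-dimensional search space (equation (1)).

    `lower` and `upper` are length-Q arrays giving the search bounds;
    uniform sampling is an assumption of this sketch.
    """
    rng = np.random.default_rng(seed)
    return rng.uniform(lower, upper, size=(R, Q))

# Example: 20 riders in a 5-dimensional space bounded by [-10, 10].
Y = initialize_riders(R=20, Q=5, lower=-10 * np.ones(5), upper=10 * np.ones(5), seed=0)
```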
The total count of riders is the sum of the riders in each group and is expressed as

$$R = B + F + O + A \qquad (2)$$

where $B$ signifies the count of bypass riders, $F$ represents the followers, $O$ signifies the overtakers, and $A$ represents the attackers. Since the groups contain equal numbers of riders, the relation amongst these attributes is represented as

$$B = F = O = A = \frac{R}{4} \qquad (3)$$
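As a small illustration of equations (2) and (3), the population can be partitioned into four equal groups; the group labels below are illustrative.

```python
import numpy as np

R = 20                      # total number of riders; assumed divisible by 4
B = F = O = A = R // 4      # equal group sizes, equation (3)
assert B + F + O + A == R   # equation (2)

# One simple labelling: the first quarter are bypass riders, then followers, etc.
groups = np.repeat(["bypass", "follower", "overtaker", "attacker"], R // 4)
```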
After the rider groups and parameters are initialized, the success rate of each rider is evaluated. The success rate is computed from the distance between the rider's location and the target and is formulated as

$$s_t(i) = \frac{1}{\lVert Y_t(i) - L_T \rVert} \qquad (4)$$

where $Y_t(i)$ symbolizes the position of the $i$-th rider and $L_T$ indicates the target position. To raise the success rate the distance must be minimized; hence the reciprocal of the distance gives the success rate of a rider.
The success rate plays a significant part in discovering the leader: the rider residing nearest to the target location has the highest success rate.
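A minimal NumPy sketch of the success-rate computation in equation (4) and the resulting leader selection follows. The epsilon guard against division by zero and the function names are illustrative assumptions.

```python
import numpy as np

def success_rate(Y, target, eps=1e-12):
    """Reciprocal of the distance between each rider and the target (equation (4))."""
    dist = np.linalg.norm(Y - target, axis=1)
    return 1.0 / (dist + eps)   # eps avoids division by zero if a rider reaches the target

def leader_index(Y, target):
    """The leader is the rider with the highest success rate."""
    return int(np.argmax(success_rate(Y, target)))
```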
The position of the rider in each group is updated in order to discover the rider at the leading position, who will become the winner. Each rider updates its position according to the characteristic behaviour of its group. The position update of each group is explained below:
The follower tends to update its position based on the location of the leading rider so as to reach the target quickly, and its update is expressed as

$$Y^{F}_{t+1}(i,k) = Y^{L}(L,k) + \cos\!\big(S_{i,k}(t)\big)\, Y^{L}(L,k)\, d_i(t) \qquad (5)$$

where $k$ signifies the coordinate selector, $Y^{L}(L,k)$ represents the position of the leading rider in coordinate $k$, $L$ indicates the leader's index, $S_{i,k}(t)$ signifies the steering angle of the $i$-th rider in the $k$-th coordinate, and $d_i(t)$ represents the distance to be covered by the $i$-th rider.
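A minimal NumPy sketch of the follower update in equation (5); the function name and arguments are illustrative, and only the selected coordinate $k$ of each follower is modified.

```python
import numpy as np

def follower_update(Y, leader, k, steering, distance):
    """Follower update (equation (5)): move toward the leader along coordinate k.

    Y        : (R, Q) rider positions
    leader   : index of the leading rider
    k        : selected coordinate
    steering : (R, Q) steering angles in radians
    distance : (R,) distance each rider has to cover
    """
    Y_next = Y.copy()
    Y_next[:, k] = Y[leader, k] + np.cos(steering[:, k]) * Y[leader, k] * distance
    return Y_next
```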
The overtaker updates its own position relative to the leader in order to raise its success rate, and the update is represented as

$$Y^{O}_{t+1}(i,k) = Y_t(i,k) + D^{*}_t(i)\, Y^{L}(L,k) \qquad (6)$$

where $D^{*}_t(i)$ signifies the direction indicator of the $i$-th rider at time $t$.
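A minimal NumPy sketch of equation (6); the direction indicator is taken as a given per-rider array, since its own update rule is not reproduced here, and the names are illustrative.

```python
import numpy as np

def overtaker_update(Y, leader, k, direction):
    """Overtaker update (equation (6)): shift along coordinate k in proportion
    to the direction indicator and the leader's position.

    direction : (R,) direction indicator D*_t(i) for each rider
    """
    Y_next = Y.copy()
    Y_next[:, k] = Y[:, k] + direction * Y[leader, k]
    return Y_next
```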
The attacker tends to seize the leader's position by following the leader's update process at full speed, and its update is expressed as

$$Y^{A}_{t+1}(i,k) = Y^{L}(L,k) + \cos\!\big(S_{i,k}(t)\big)\, Y^{L}(L,k) + d_i(t) \qquad (7)$$
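A minimal NumPy sketch of equation (7). Applying the update over all coordinates at once, reflecting the attacker's multidirectional search, is an assumption of this sketch; the names are illustrative.

```python
import numpy as np

def attacker_update(Y, leader, steering, distance):
    """Attacker update (equation (7)) applied over every coordinate.

    steering : (R, Q) steering angles; distance : (R,) distance terms.
    Returns the new (R, Q) positions for riders treated as attackers.
    """
    return Y[leader] + np.cos(steering) * Y[leader] + distance[:, None]
```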
Finally, the update rule of the bypass rider is given; the standard bypass rider is expressed as

$$Y^{B}_{t+1}(i,k) = \delta\left[\,Y_t(\eta,k)\,\beta(k) + Y_t(\xi,k)\,\big(1-\beta(k)\big)\,\right] \qquad (8)$$

where $\delta$ signifies a random number, $\eta$ symbolizes a random integer between 1 and $R$, $\xi$ indicates a random integer ranging between 1 and $R$, and $\beta$ represents a random number between 0 and 1.
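A minimal NumPy sketch of equation (8), producing the new position of a single bypass rider; drawing the random quantities uniformly is an illustrative assumption, and each bypass rider would draw its own random values.

```python
import numpy as np

def bypass_update(Y, rng=None):
    """Bypass-rider update (equation (8)): blend the positions of two randomly
    chosen riders with random weights, ignoring the leader's path."""
    if rng is None:
        rng = np.random.default_rng()
    R, Q = Y.shape
    delta = rng.random()                  # random scalar delta
    eta, xi = rng.integers(0, R, size=2)  # two random rider indices
    beta = rng.random(Q)                  # per-coordinate mixing weights beta(k)
    return delta * (Y[eta] * beta + Y[xi] * (1.0 - beta))
```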
After executing the update process, the success rate of each rider is recomputed.

Updating the rider parameters is important for discovering an effective solution: the steering angle and gear are updated by means of an activity counter, while the remaining parameters are updated using the success rate.

The procedure is repeated until the off time is reached, at which point the leader is identified. After the race is complete, the rider in the leading position is considered the winner.
algorithm rider-optimization is
    input: arbitrary rider positions Y_t, iteration t, maximum iteration T_off
    output: leading rider Y^L
    Initialize the solution set Y_t
    Initialize the other rider parameters
    Find the rate of success using equation (4)
    while t < T_off do
        for each rider do
            Update position of follower using equation (5)
            Update position of overtaker using equation (6)
            Update position of attacker using equation (7)
            Update position of bypass rider using equation (8)
        Rank the riders based on success rate using equation (4)
        Select the rider with the highest success rate as leader
        Update rider parameters
        t ← t + 1
    return Y^L
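The pseudocode above can be assembled into a short end-to-end sketch. The version below is a self-contained, simplified NumPy implementation under several stated assumptions: uniform random initialization, a coordinate selector that cycles through dimensions, a distance term and direction indicator derived heuristically from the iteration count and success rate, and omission of the gear/accelerator/brake bookkeeping. It illustrates the loop structure rather than reproducing the published algorithm exactly.

```python
import numpy as np

def rider_optimization(target, R=20, Q=2, T_off=100, bounds=(-10.0, 10.0), seed=0):
    """Simplified Rider Optimization sketch: riders race toward a known target point."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    Y = rng.uniform(lo, hi, size=(R, Q))          # equation (1): arbitrary positions
    steering = rng.uniform(0, 2 * np.pi, size=(R, Q))
    groups = np.repeat([0, 1, 2, 3], R // 4)      # 0=bypass, 1=follower, 2=overtaker, 3=attacker

    def success(Y):
        return 1.0 / (np.linalg.norm(Y - target, axis=1) + 1e-12)   # equation (4)

    s = success(Y)
    for t in range(T_off):
        leader = int(np.argmax(s))
        k = t % Q                              # coordinate selector (assumption: cycle through coordinates)
        d = 1.0 - t / T_off                    # distance term shrinking over time (assumption)
        D = 2.0 / (1.0 + s / s.max()) - 1.0    # crude stand-in for the direction indicator (assumption)
        Y_new = Y.copy()
        for i in range(R):
            if groups[i] == 0:                 # bypass rider, equation (8)
                delta, beta = rng.random(), rng.random(Q)
                eta, xi = rng.integers(0, R, size=2)
                Y_new[i] = delta * (Y[eta] * beta + Y[xi] * (1 - beta))
            elif groups[i] == 1:               # follower, equation (5)
                Y_new[i, k] = Y[leader, k] + np.cos(steering[i, k]) * Y[leader, k] * d
            elif groups[i] == 2:               # overtaker, equation (6)
                Y_new[i, k] = Y[i, k] + D[i] * Y[leader, k]
            else:                              # attacker, equation (7)
                Y_new[i] = Y[leader] + np.cos(steering[i]) * Y[leader] + d
        Y = np.clip(Y_new, lo, hi)
        s = success(Y)
        steering += rng.normal(scale=0.1, size=steering.shape)   # stand-in for the parameter update step
    return Y[int(np.argmax(s))]

# Example: race toward the point (3, -2) in 2-D.
best = rider_optimization(target=np.array([3.0, -2.0]))
```

Because the per-iteration heuristics above are stand-ins, results will differ from implementations that follow the published parameter-update schedules.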
Applications of ROA have been reported in several domains, including engineering design optimization, [7] diabetic retinopathy detection, [8] document clustering, [9] plant disease detection, [10] attack detection, [11] enhanced video super-resolution, [12] clustering, [13] webpage re-ranking, [14] task scheduling, [15] medical image compression, [16] resource allocation, [17] and multihop routing. [18]