Rider optimization algorithm

Rider Optimization Algorithm (ROA)
Developed by: Binu D [1]
Category: Metaheuristics [2]
Year of development: 2019 [1]
Publisher: IEEE [1]
Language: Matlab [3]
Citation count: 49 [4]

The rider optimization algorithm (ROA) [1] [5] [6] is devised based on a novel computing method, namely fictional computing, which carries out a series of processes to solve optimization problems using imaginary facts and notions. ROA relies on groups of riders that travel toward a common target location, competing to reach it first and become the winner. In ROA, the count of groups is four, and an equal number of riders is placed in each group.


The four groups adopted in ROA are the attacker, overtaker, follower, and bypass rider. Each group follows its own strategy to reach the target. The bypass rider aims to reach the target by bypassing the leader's path. The follower tries to follow the leader's position along each coordinate axis; because the follower performs a multidirectional search around the leading rider, it improves the algorithm's convergence rate. The overtaker updates its own position toward the target by considering locations near the leader; its benefit is faster convergence with a large global neighbourhood. In ROA, global optimal convergence is a function of the overtaker, whose position depends on the leader's position, the success rate, and a direction indicator. The attacker adopts the leader's position and moves toward the destination at its utmost speed; it is also responsible for initializing the multidirectional search, using a fast search to accelerate the overall search.

Although each rider group follows its own strategy, the main factors for reaching the target are correct riding of the vehicle and proper management of the accelerator, steering, brake, and gear. At each time instant, a rider moves toward the target by regulating these factors according to its group's strategy and the current success rate. The leader is the rider with the highest success rate at the current instant. The process is repeated until the riders reach the off time, the maximum time allotted for reaching the intended location. At off time, the rider in the leading position is termed the winner.

Algorithm

ROA [1] [5] [6] is motivated by riders who compete to reach an anticipated location. The steps of the algorithm are defined below:

Initialization of Rider and other algorithmic parameters

The foremost step is initialization: the four groups of riders are represented together as a solution set, and their positions are set in an arbitrary manner. The initialization of the group is given by

$H^{t} = \{H^{t}(i,j)\}, \quad 1 \le i \le R,\; 1 \le j \le Q$    (1)

where $R$ signifies the count of riders, and $H^{t}(i,j)$ signifies the position of the $i$-th rider in the $j$-th of $Q$ dimensions at time instant $t$.
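The initialization step above can be sketched in Python. This is an illustrative sketch only; the function and parameter names are not from the original paper, and the bounds are hypothetical:

```python
import numpy as np

def init_riders(R, Q, lower, upper, seed=0):
    """Arbitrarily initialize R rider positions in a Q-dimensional
    search space bounded by [lower, upper]."""
    rng = np.random.default_rng(seed)
    # H[i, j] is the position of rider i in coordinate j at t = 0.
    return rng.uniform(lower, upper, size=(R, Q))

H = init_riders(R=20, Q=5, lower=-10.0, upper=10.0)
```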

The total count of riders is the sum of the counts of riders in each group and is expressed as

$R = B + F + O + A$    (2)

where $B$ signifies the count of bypass riders, $F$ the count of followers, $O$ the count of overtakers, and $A$ the count of attackers. Hence, since the groups are of equal size, the relation among these counts is

$B = F = O = A = \frac{R}{4}$    (3)

Finding rate of success

After the rider group parameters are initialized, the success rate of each rider is evaluated. The success rate is computed from the distance between the rider's location and the target:

$s_{i} = \frac{1}{\lVert X_{i} - T \rVert}$    (4)

where $X_{i}$ symbolizes the position of the $i$-th rider and $T$ indicates the target position. To raise the success rate, the distance must be minimized; hence the reciprocal of the distance gives the rider's success rate.
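The distance-reciprocal success rate described above can be computed in a few lines. This is a sketch; the epsilon guard against division by zero is an addition of ours, not part of the source:

```python
import numpy as np

def success_rate(H, target):
    """Success rate of each rider: the reciprocal of its Euclidean
    distance to the target. A tiny epsilon guards against division
    by zero when a rider sits exactly on the target."""
    d = np.linalg.norm(H - target, axis=1)
    return 1.0 / (d + 1e-12)

H = np.array([[0.0, 0.0], [3.0, 4.0]])
target = np.zeros(2)
s = success_rate(H, target)
# Rider 1 is 5 units from the target, so its rate is about 0.2;
# rider 0 sits on the target and gets the highest rate.
```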

Determination of leading rider

The success rate plays a significant part in discovering the leader: the rider residing nearest the target location is considered to have the highest success rate and is taken as the leader.

Evaluating the riders' updated positions

The position of every rider in each group is updated to discover the rider in the leading position, i.e., the winner. Each rider updates its position using the characteristic behaviour of its group, as explained below:

The follower tends to update its position based on the location of the leading rider so as to reach the target quickly:

$X_{F}^{t+1}(i,k) = X_{L}(L,k) + \cos(S_{i,k}^{t}) \cdot X_{L}(L,k) \cdot d_{i}^{t}$    (5)

where $k$ signifies the coordinate selector, $X_{L}(L,k)$ represents the leading rider's position, $L$ indicates the leader's index, $S_{i,k}^{t}$ signifies the steering angle of the $i$-th rider in the $k$-th coordinate, and $d_{i}^{t}$ represents the distance.
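A minimal sketch of the follower update described above, assuming the leader-plus-steered-offset form reported in the ROA literature [1]; the function and parameter names (`steering`, `dist`, `coords`) are hypothetical, and for simplicity the rule is applied to every rider passed in:

```python
import numpy as np

def follower_update(H, leader_idx, steering, dist, coords):
    """Follower update: move relative to the leader's position along
    the selected coordinates, scaled by the rider's steering angle
    (via its cosine) and the distance it covers."""
    X = H.copy()
    leader = H[leader_idx]
    for i in range(H.shape[0]):
        for k in coords:
            X[i, k] = leader[k] + np.cos(steering[i, k]) * leader[k] * dist[i]
    return X

H = np.array([[1.0, 2.0], [3.0, 4.0]])
steer = np.zeros_like(H)          # cos(0) = 1, so the offset is leader * dist
dist = np.array([0.5, 0.5])
Xf = follower_update(H, leader_idx=0, steering=steer, dist=dist, coords=[0, 1])
```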

The overtaker's position update is used to raise the success rate by determining the overtaker's position relative to the leader:

$X_{O}^{t+1}(i,k) = X^{t}(i,k) + D_{I}^{t}(i) \cdot X_{L}(L,k)$    (6)

where $D_{I}^{t}(i)$ signifies the direction indicator of the $i$-th rider.
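The overtaker update described above, i.e. shifting a rider's own position by the leader's position scaled by a direction indicator, can be sketched as follows; the names `direction` and `overtaker_update` are our own, not from the paper:

```python
import numpy as np

def overtaker_update(H, leader_idx, direction):
    """Overtaker update: each rider keeps its own position and adds
    the leader's position scaled by its direction indicator."""
    X = H.copy()
    leader = H[leader_idx]
    for i in range(H.shape[0]):
        X[i] = H[i] + direction[i] * leader
    return X

H = np.array([[1.0, 1.0], [2.0, 2.0]])
# Direction indicator 0 leaves rider 0 in place; 1 shifts rider 1 by the leader.
Xo = overtaker_update(H, leader_idx=0, direction=np.array([0.0, 1.0]))
```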

The attacker tends to seize the leader's position by following the leader's update process and is expressed as

$X_{A}^{t+1}(i,k) = X_{L}(L,k) + \cos(S_{i,k}^{t}) \cdot X_{L}(L,k) + d_{i}^{t}$    (7)

Here, the update rule of the bypass riders is exhibited, wherein the standard bypass rider is expressed as

$X^{t+1}(i,j) = \delta \left[ X^{t}(\eta, j)\,\beta(j) + X^{t}(\xi, j)\,(1 - \beta(j)) \right]$    (8)

where $\delta$ signifies a random number, $\eta$ symbolizes a random number between 1 and $R$, $\xi$ indicates a random number ranging between 1 and $R$, and $\beta(j)$ represents a random number between 0 and 1.
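The bypass update described above mixes the coordinates of two randomly chosen riders; a sketch, reconstructed from the textual description (function and variable names are ours, and indices are 0-based rather than the paper's 1-based range):

```python
import numpy as np

def bypass_update(H, rng):
    """Standard bypass rider update: a random per-coordinate blend of
    two randomly chosen riders' positions, scaled by a random delta."""
    R, Q = H.shape
    delta = rng.random()          # overall random scale
    eta = rng.integers(0, R)      # first randomly chosen rider
    xi = rng.integers(0, R)       # second randomly chosen rider
    beta = rng.random(Q)          # per-coordinate mixing weights in [0, 1]
    return delta * (H[eta] * beta + H[xi] * (1.0 - beta))

rng = np.random.default_rng(0)
H = np.array([[0.0, 0.0], [1.0, 1.0]])
# With both rows in [0, 1], the blended result also stays in [0, 1].
Xb = bypass_update(H, rng)
```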

Finding success rate

After executing the update process, the success rate of each rider is computed again.

Update of Rider parameter

Updating the rider parameters is important for discovering an effective solution. The steering angle and gear are updated using an activity counter, which in turn is updated from the success rate.

Off time of rider

The procedure is iterated until the off time is reached, at which point the leader is determined. After the race completes, the leading rider is considered the winner.

algorithm rider-optimization is
    input: arbitrary rider positions H,
           iteration t,
           maximum iteration (off time) T_off
    output: leading rider X_L

    Initialize solution set H
    Initialize other rider parameters
    Find rate of success using equation (4)
    while t < T_off do
        for each rider do
            Update position of follower using equation (5)
            Update position of overtaker using equation (6)
            Update position of attacker using equation (7)
            Update position of bypass rider using equation (8)
        Rank the riders based on success rate using equation (4)
        Select the rider with the highest success rate as leader
        Update rider parameters
        t ← t + 1
    return X_L
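The overall loop can be illustrated with a highly simplified, self-contained sketch. Note the simplifications: a single pull-toward-the-leader rule with decaying noise stands in for the four group-specific updates, success rate is evaluated directly as the objective value, and all names (`roa_sketch`, `T_off`, bounds) are ours. This is not the published method, only the shape of its iteration:

```python
import numpy as np

def roa_sketch(f, Q=2, R=20, T_off=200, lower=-5.0, upper=5.0, seed=0):
    """Simplified ROA-style loop: riders repeatedly move toward the
    current leader (the rider with the best objective value), with a
    random step size and decaying exploration noise."""
    rng = np.random.default_rng(seed)
    H = rng.uniform(lower, upper, (R, Q))
    for t in range(T_off):
        scores = np.apply_along_axis(f, 1, H)
        leader = H[np.argmin(scores)].copy()
        step = rng.random((R, 1))                  # random per-rider step
        sigma = 0.1 * (1.0 - t / T_off)            # noise decays toward off time
        H = H + step * (leader - H) + sigma * rng.standard_normal((R, Q))
        H = np.clip(H, lower, upper)
    scores = np.apply_along_axis(f, 1, H)
    return H[np.argmin(scores)]                    # leading rider at off time

# Minimize the sphere function; the leading rider should end up near the origin.
best = roa_sketch(lambda x: float(np.sum(x * x)))
```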

Applications

Applications of ROA are found in several domains, including: engineering design optimization problems, [7] diabetic retinopathy detection, [8] document clustering, [9] plant disease detection, [10] attack detection, [11] enhanced video super resolution, [12] clustering, [13] webpage re-ranking, [14] task scheduling, [15] medical image compression, [16] resource allocation, [17] and multihop routing. [18]


References

  1. Binu D and Kariyappa BS (2019). "RideNN: A new rider optimization algorithm based neural network for fault diagnosis of analog circuits". IEEE Transactions on Instrumentation & Measurement. 68 (1): 2–26. Bibcode:2019ITIM...68....2B. doi:10.1109/TIM.2018.2836058. S2CID 54459927.
  2. "Metaheuristic". Wikipedia.
  3. Binu, D (24 March 2019). "Rider Optimization Algorithm". MathWorks.
  4. Binu, D. "GoogleScholar".
  5. Binu D and Kariyappa BS (2020). "Multi-Rider Optimization-based Neural Network for Fault Isolation in Analog Circuits". Journal of Circuits, Systems and Computers. 30 (3). doi:10.1142/S0218126621500481. S2CID 219914332.
  6. Binu D and Kariyappa BS (2020). "Rider Deep LSTM Network for Hybrid Distance Score-based Fault Prediction in Analog Circuits". IEEE Transactions on Industrial Electronics. 68 (10): 1. doi:10.1109/TIE.2020.3028796. S2CID 226439786.
  7. Wang G., Yuan Y. and Guo W (2019). "An Improved Rider Optimization Algorithm for solving Engineering Optimization Problems". IEEE Access. 7: 80570–80576. doi: 10.1109/ACCESS.2019.2923468 . S2CID   195775696.
  8. Jadhav AS., Patil PB. and Biradar S (2020). "Optimal feature selection-based diabetic retinopathy detection using improved rider optimization algorithm enabled with deep learning". Evolutionary Intelligence: 1–18.
  9. Yarlagadda M., Rao KG. and Srikrishna A (2019). "Frequent itemset-based feature selection and Rider Moth Search Algorithm for document clustering". Journal of King Saud University-Computer and Information Sciences. 34 (4): 1098–1109. doi: 10.1016/j.jksuci.2019.09.002 .
  10. Cristin R., Kumar BS., Priya C and Karthick K (2020). "Deep neural network based Rider-Cuckoo Search Algorithm for plant disease detection". Artificial Intelligence Review: 1–26.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  11. Sarma, S.K (2020). "Rider Optimization based Optimized Deep-CNN towards Attack Detection in IoT". In Proceedings of 4th International Conference on Intelligent Computing and Control Systems (ICICCS): 163–169.
  12. Jagdale RH and Shah SK (2020). "Modified Rider Optimization-based V Channel Magnification for Enhanced Video Super Resolution". International Journal of Image and Graphics . 21. doi:10.1142/S0219467821500030. S2CID   225249612.
  13. Poluru RK and Ramasamy LK (2020). "Optimal cluster head selection using modified rider assisted clustering for IoT". IET Communications. 14 (13): 2189–2201. doi: 10.1049/iet-com.2020.0236 . S2CID   219455360.
  14. Sankpal LJ and Patil SH (2020). "Rider-Rank Algorithm-Based Feature Extraction for Re-ranking the Webpages in the Search Engine". The Computer Journal. 63 (10): 1479–1489. doi:10.1093/comjnl/bxaa032.
  15. Alameen A and Gupta A (2020). "Fitness rate-based rider optimization enabled for optimal task scheduling in cloud". Information Security Journal: A Global Perspective. 29 (6): 1–17. doi:10.1080/19393555.2020.1769780. S2CID   220846722.
  16. Sreenivasulu P and Varadharajan S (2020). "Algorithmic Analysis on Medical Image Compression Using Improved Rider Optimization Algorithm". Innovations in Computer Science and Engineering. Lecture Notes in Networks and Systems. Vol. 103. Springer. pp. 267–274. doi:10.1007/978-981-15-2043-3_32. ISBN   978-981-15-2042-6. S2CID   215911629.
  17. Vhatkar KN and Bhole GP (2020). "Improved rider optimization for optimal container resource allocation in cloud with security assurance". International Journal of Pervasive Computing and Communications. 16 (3): 235–258. doi:10.1108/IJPCC-12-2019-0094. S2CID   220687409.
  18. Augustine S and Ananth JP (2020). "A modified rider optimization algorithm for multihop routing in WSN". International Journal of Numerical Modelling: Electronic Networks, Devices and Fields: 2764.