In computer science, an anytime algorithm is an algorithm that can return a valid solution to a problem even if it is interrupted before it ends. The algorithm is expected to find better and better solutions the longer it keeps running.
Most algorithms run to completion: they provide a single answer after performing some fixed amount of computation, and they yield no useful solution information if they are interrupted. In some cases, however, the user may wish to terminate the algorithm prior to completion, for example because the amount of computation required is substantial and computational resources need to be reallocated. Anytime algorithms, in contrast, are able to return a partial answer whose quality depends on the amount of computation they were able to perform. The answer generated by an anytime algorithm is an approximation of the correct answer.
An anytime algorithm may also be called an "interruptible algorithm". Anytime algorithms differ from contract algorithms, which must be given their computation time in advance; an anytime algorithm can simply announce at any point that it is terminating. [1]
The goal of anytime algorithms is to give intelligent systems the ability to produce results of better quality in return for longer turn-around time. [2] They are also supposed to be flexible in time and resources. [3] They are important because artificial intelligence (AI) algorithms can take a long time to produce results, and anytime algorithms are designed to deliver a usable result in a shorter amount of time. [3] They also reflect the understanding that a system is dependent on, and restricted by, its agents and how cooperatively those agents work. [3] An example is the Newton–Raphson iteration applied to finding the square root of a number. [4] Another setting that uses anytime algorithms is trajectory problems in which an object is aimed at a target: the object moves through space while the algorithm runs, so even an approximate answer, if delivered early, can significantly improve accuracy. [3]
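To make the Newton–Raphson example concrete, here is a minimal Python sketch (an illustration assuming a positive input, not code taken from the cited sources): each iterate is a valid approximation of the square root, and every additional iteration improves it, so the computation can be cut off at any point and still return a usable answer.

```python
import math

def anytime_sqrt(n, max_iters):
    """Newton-Raphson iteration for sqrt(n), n > 0, as an anytime
    computation: every yielded iterate is a valid approximation,
    and later iterates are better ones."""
    x = n if n > 1 else 1.0  # any positive initial guess works
    for _ in range(max_iters):
        x = 0.5 * (x + n / x)  # Newton step for f(x) = x**2 - n
        yield x  # current best estimate, available at any interruption

# Each successive estimate shrinks the error:
for estimate in anytime_sqrt(2.0, 5):
    print(estimate, "error:", abs(estimate - math.sqrt(2.0)))
```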
What makes anytime algorithms unique is their ability to return many possible outcomes for any given input. [2] An anytime algorithm uses many well-defined quality measures to monitor progress in problem solving and in the use of distributed computing resources. [2] It keeps searching for the best possible answer within the amount of time it is given. [5] It may not run to completion, and it may improve its answer if it is allowed to run longer. [6] This is often used for problems with large decision sets. [7] A conventional algorithm, by contrast, would generally not provide useful information unless it is allowed to finish. [8] While this may sound similar to dynamic programming, the difference is that an anytime algorithm is fine-tuned through random adjustments rather than sequential ones.
Anytime algorithms are designed so that they can be told to stop at any time and will return the best result found so far. [3] This is why they are called interruptible algorithms. Certain anytime algorithms also maintain the last result, so that if they are given more time, they can continue from where they left off and obtain an even better result. [3]
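The interruptible-and-resumable behavior described above can be sketched as follows; this is a toy illustration with hypothetical names (ResumableAnytime, run), not an implementation from the cited sources. The object retains the best result found so far, so a later call with a fresh budget continues from where the previous one left off.

```python
import itertools

class ResumableAnytime:
    """Interruptible search that keeps its best-so-far result and can
    resume later to improve it."""

    def __init__(self, candidates, score):
        self._candidates = iter(candidates)  # possibly enormous search space
        self._score = score
        self.best = None  # best result found so far

    def run(self, budget):
        """Examine up to `budget` more candidates, then stop."""
        for c in itertools.islice(self._candidates, budget):
            if self.best is None or self._score(c) > self._score(self.best):
                self.best = c
        return self.best  # a valid answer whenever we are stopped

# Toy problem: find a large divisor of 2**20 + 1 by scanning candidates.
search = ResumableAnytime(
    range(2, 10**6),
    score=lambda c: c if (2**20 + 1) % c == 0 else 0,
)
print(search.run(1000))    # small budget: finds the divisor 17
print(search.run(100000))  # resumes and improves: finds 61681
```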
When the decider has to act, there must be some ambiguity, as well as some idea about how to resolve that ambiguity; that idea must be translatable into a state-to-action diagram. [7]
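One way such a state-to-action mapping can arise under an anytime regime is value iteration on a Markov decision process, sketched below on an illustrative toy example (the MDP and all names are assumptions, not from the cited sources): stopping after any number of sweeps still yields a usable policy, and more sweeps yield a better one.

```python
# Toy MDP: transitions[state][action] = (next_state, reward).
# Action "a" pays off immediately; action "b" leads to a state with
# a better long-run reward.
transitions = {
    "s0": {"a": ("s1", 1.0), "b": ("s2", 0.0)},
    "s1": {"stay": ("s1", 0.0)},
    "s2": {"stay": ("s2", 1.0)},
}
GAMMA = 0.9  # discount factor

def anytime_policy(sweeps):
    """Run `sweeps` rounds of value iteration, then return the greedy
    state-to-action mapping; valid after any number of sweeps."""
    V = {s: 0.0 for s in transitions}
    for _ in range(sweeps):  # may be interrupted after any sweep
        V = {
            s: max(r + GAMMA * V[nxt] for nxt, r in acts.values())
            for s, acts in transitions.items()
        }
    return {
        s: max(acts, key=lambda a: acts[a][1] + GAMMA * V[acts[a][0]])
        for s, acts in transitions.items()
    }

print(anytime_policy(1))   # myopic: picks "a" in s0
print(anytime_policy(50))  # converged: picks the far-sighted "b"
```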
The performance profile estimates the quality of the results based on the input and on the amount of time allotted to the algorithm. [3] The better the estimate, the sooner the result can be found. [3] Some systems maintain a larger database that gives the probability that the output is the expected output. [3] It is important to note that one algorithm can have several performance profiles. [9] Most of the time, performance profiles are constructed using mathematical statistics over representative cases. For example, in the traveling salesman problem, the performance profile was generated using a user-defined special program to generate the necessary statistics. [1] In this example, the performance profile is a mapping from time to the expected results. [1] (A sketch of how such a profile can be estimated empirically appears after the list below.) This quality can be measured in several ways:
Initial behavior: while some algorithms start with immediate guesses, others take a more calculated approach and have a start-up period before making any guesses. [9]
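As promised above, here is a minimal Python sketch of constructing a performance profile from representative cases; the solver (random search) and all helper names are illustrative assumptions, not taken from the cited sources. The profile maps an iteration budget, standing in for time, to the expected solution quality averaged over sample inputs.

```python
import random
import statistics

def performance_profile(anytime_algo, inputs, budgets):
    """Map each time budget to the expected solution quality,
    averaged over a set of representative inputs."""
    return {
        b: statistics.mean(anytime_algo(x, b) for x in inputs)
        for b in budgets
    }

# Illustrative anytime solver: random search for the maximum of a list.
# Quality = best value found so far, normalized by the true maximum.
def random_search_quality(values, budget):
    best = max(random.choice(values) for _ in range(budget))
    return best / max(values)

# Representative cases: twenty random instances of the problem.
inputs = [[random.random() for _ in range(1000)] for _ in range(20)]
print(performance_profile(random_search_quality, inputs, [1, 10, 100, 1000]))
```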
There are many famous anytime algorithms in the literature. We list here some of them.
Anytime A*: a family of variants of the A* search algorithm. Like other anytime algorithms, it can return a valid solution to a pathfinding or graph traversal problem even if it is interrupted before it ends, by generating a fast, non-optimal solution and then progressively optimizing it.
Genetic algorithms: metaheuristics inspired by the process of natural selection, belonging to the larger class of evolutionary algorithms, which generate high-quality solutions to optimization and search problems via biologically inspired operators such as selection, crossover, and mutation.
Ant colony optimization: a probabilistic technique, and a form of swarm intelligence, for solving computational problems that can be reduced to finding good paths through graphs, using artificial ants whose pheromone-based communication is modeled on the behavior of real ants.
Metaheuristics in general: higher-level procedures that sample a subset of a solution space too large to be completely enumerated, making relatively few assumptions about the optimization problem being solved; they are of interest when exact methods are too slow or too imprecise.
Related ideas include approximate computing, a design paradigm that returns possibly inaccurate results for applications where an approximate answer is sufficient, and constraint satisfaction, where heuristics are combined with combinatorial search to solve problems of high complexity in reasonable time. Among the researchers associated with anytime algorithms are Thomas L. Dean, who introduced the idea of the anytime algorithm, and Shlomo Zilberstein, known for his contributions to anytime algorithms, multi-agent systems, and automated planning.