Timeline of algorithms

The following timeline of algorithms outlines the development of algorithms (mainly "mathematical recipes") since their inception.

Contents

Antiquity

Medieval Period

Before 1940

1940s

1950s

1960s

1970s

1980s

1990s

2000s

2010s

Related Research Articles

<span class="mw-page-title-main">Algorithm</span> Sequence of operations for a task

In mathematics and computer science, an algorithm is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes and deduce valid inferences, eventually achieving automation. The metaphorical use of human characteristics to describe machines was practiced as early as Alan Turing, with terms such as "memory", "search" and "stimulus".

<span class="mw-page-title-main">Donald Knuth</span> American computer scientist and mathematician (born 1938)

Donald Ervin Knuth is an American computer scientist and mathematician. He is a professor emeritus at Stanford University. He is the 1974 recipient of the ACM Turing Award, informally considered the Nobel Prize of computer science. Knuth has been called the "father of the analysis of algorithms".

In mathematics, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors greater than 1, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors, by the fundamental theorem of arithmetic.
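
As an illustration of the process described above, here is a minimal Python sketch of prime factorization by trial division, the simplest (though far from fastest) factorization method:

```python
def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n (n > 1) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains must itself be prime
        factors.append(n)
    return factors

print(prime_factors(60))   # [2, 2, 3, 5]
print(prime_factors(7))    # [7]
```

Trial division takes time proportional to the square root of n, which is why factoring very large integers in practice requires far more sophisticated algorithms.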

<span class="mw-page-title-main">Search algorithm</span> Any algorithm which solves the search problem

In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.
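
As one concrete example (the article does not single out any particular method), binary search locates a value in a sorted array by repeatedly halving the discrete search space:

```python
def binary_search(items: list, target) -> int:
    """Return the index of target in a sorted list, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:   # target can only be in the upper half
            lo = mid + 1
        else:                       # target can only be in the lower half
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13], 7))   # 3
```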

The Art of Computer Programming: Books about algorithms by Donald Knuth

The Art of Computer Programming (TAOCP) is a comprehensive monograph written by the computer scientist Donald Knuth presenting programming algorithms and their analysis. Volumes 1–5 are intended to represent the central core of computer programming for sequential machines.

In computer science, the Cocke–Younger–Kasami algorithm is a parsing algorithm for context-free grammars published by Itiroo Sakai in 1961. The algorithm is named after some of its rediscoverers: John Cocke, Daniel Younger, Tadao Kasami, and Jacob T. Schwartz. It employs bottom-up parsing and dynamic programming.
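
A minimal Python sketch of a CYK recognizer, assuming the grammar is already in Chomsky normal form; the table entry for each substring records every nonterminal that can derive it:

```python
def cyk(word: str, grammar: dict, start: str = "S") -> bool:
    """CYK recognizer for a context-free grammar in Chomsky normal form.

    grammar maps each nonterminal to a list of productions; a production
    is either a terminal (a 1-character string) or a pair of nonterminals.
    """
    n = len(word)
    if n == 0:
        return False  # a CNF grammar as encoded here derives no empty string
    # table[length][i] = set of nonterminals deriving word[i:i+length]
    table = [[set() for _ in range(n)] for _ in range(n + 1)]
    for i, ch in enumerate(word):                  # length-1 substrings
        for lhs, prods in grammar.items():
            if ch in prods:
                table[1][i].add(lhs)
    for length in range(2, n + 1):                 # longer substrings, bottom up
        for i in range(n - length + 1):
            for split in range(1, length):         # every split point
                left, right = table[split][i], table[length - split][i + split]
                for lhs, prods in grammar.items():
                    for prod in prods:
                        if isinstance(prod, tuple) and prod[0] in left and prod[1] in right:
                            table[length][i].add(lhs)
    return start in table[n][0]

# Toy grammar recognizing exactly the string "ab"
grammar = {"S": [("A", "B")], "A": ["a"], "B": ["b"]}
print(cyk("ab", grammar))   # True
print(cyk("ba", grammar))   # False
```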

<span class="mw-page-title-main">Richard M. Karp</span> American mathematician

Richard Manning Karp is an American computer scientist and computational theorist at the University of California, Berkeley. He is most notable for his research in the theory of algorithms, for which he received the Turing Award in 1985, the Benjamin Franklin Medal in Computer and Cognitive Science in 2004, and the Kyoto Prize in 2008.

<span class="mw-page-title-main">Robert W. Floyd</span> American computer scientist (1936–2001)

Robert W. Floyd was an American computer scientist. His contributions include the design of the Floyd–Warshall algorithm, which efficiently finds all shortest paths in a graph, and his work on parsing; Floyd's cycle-finding algorithm for detecting cycles in a sequence is also attributed to him. In one isolated paper he introduced the important concept of error diffusion for rendering images, also called Floyd–Steinberg dithering. He pioneered the field of program verification using logical assertions with the 1967 paper Assigning Meanings to Programs, a contribution to what later became Hoare logic. Floyd received the Turing Award in 1978.
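
A short Python sketch of the first of these contributions, the Floyd–Warshall algorithm; it relaxes every pair of vertices through each possible intermediate vertex in O(n³) time (a minimal illustration, not Floyd's original presentation):

```python
from math import inf

def floyd_warshall(dist: list) -> list:
    """All-pairs shortest paths (Floyd–Warshall).

    dist is an n x n matrix: dist[i][j] is the weight of edge i -> j,
    inf if there is no edge, and 0 on the diagonal. Updated in place.
    """
    n = len(dist)
    for k in range(n):                 # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = [[0,   3,   inf],
     [inf, 0,   1],
     [4,   inf, 0]]
print(floyd_warshall(d))   # d[0][2] becomes 4, routed through vertex 1
```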

Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures.

Lempel–Ziv–Oberhumer (LZO) is a lossless data compression algorithm that is focused on decompression speed.

<span class="mw-page-title-main">Vaughan Pratt</span>

Vaughan Pratt is a professor emeritus at Stanford University and an early pioneer in the field of computer science. Since 1969, Pratt has made several contributions to foundational areas such as search algorithms, sorting algorithms, and primality testing. More recently, his research has focused on formal modeling of concurrent systems and Chu spaces.

A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error-correcting codes, and related subjects.

<span class="mw-page-title-main">Programming language theory</span> Branch of computer science

Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of formal languages known as programming languages. Programming language theory is closely related to other fields including mathematics, software engineering, and linguistics. There are a number of academic conferences and journals in the area.

<span class="mw-page-title-main">Ravindran Kannan</span>

Ravindran Kannan is a Principal Researcher at Microsoft Research India, where he leads the algorithms research group. He is also the first adjunct faculty member of the Computer Science and Automation Department of the Indian Institute of Science.

In numerical analysis, Gauss–Legendre quadrature is a form of Gaussian quadrature for approximating the definite integral of a function. For integrating over the interval [−1, 1], the rule takes the form

$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i\, f(x_i),$$

where the nodes $x_i$ are the roots of the $n$th Legendre polynomial $P_n(x)$ and the weights $w_i$ are chosen so that the rule is exact for every polynomial of degree at most $2n - 1$.
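
A brief numerical check of the rule in Python, using NumPy's built-in generator of Gauss–Legendre nodes and weights (numpy.polynomial.legendre.leggauss):

```python
import numpy as np

# n-point Gauss-Legendre nodes and weights on [-1, 1]
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

# Approximate the integral of exp(x) over [-1, 1]; exact value is e - 1/e
approx = np.sum(weights * np.exp(nodes))
print(approx, np.e - 1.0 / np.e)   # the two agree to roughly 12 digits
```

Even with only five nodes the rule is exact for polynomials up to degree 9, which is why the smooth integrand above is captured almost to machine precision.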

<span class="mw-page-title-main">History of compiler construction</span>

In computing, a compiler is a computer program that transforms source code written in a programming language or computer language into another computer language. The most common reason for transforming source code is to create an executable program.

<span class="mw-page-title-main">Computer algebra</span> Scientific area at the interface between computer science and mathematics

In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, the two are generally considered distinct fields: scientific computing is usually based on numerical computation with approximate floating-point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols.
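
The distinction can be seen in a few lines of Python, here using the SymPy computer algebra system (chosen only for illustration): symbolic results stay exact, while numeric evaluation produces approximate floats:

```python
import sympy as sp

x = sp.symbols('x')

# Symbolic: exact manipulation of an expression with a free variable
expr = sp.integrate(sp.sin(x) ** 2, x)                   # x/2 - sin(x)*cos(x)/2
print(sp.simplify(sp.diff(expr, x) - sp.sin(x) ** 2))    # 0, exactly

# The same integral over [0, pi] stays exact as the symbol pi/2 ...
exact = sp.integrate(sp.sin(x) ** 2, (x, 0, sp.pi))
print(exact)                                             # pi/2

# ... until it is deliberately converted to an approximate float
print(float(exact))                                      # 1.5707963267948966
```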

<span class="mw-page-title-main">Tandy Warnow</span> American computer scientist

Tandy Warnow is an American computer scientist and Grainger Distinguished Chair in Engineering at the University of Illinois at Urbana–Champaign. She is known for her work on the reconstruction of evolutionary trees, both in biology and in historical linguistics, and also for multiple sequence alignment methods.

In computer science and information theory, Tunstall coding is a form of entropy coding used for lossless data compression.
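
A minimal Python sketch of the heart of Tunstall coding, assuming a memoryless source with known symbol probabilities: the dictionary of variable-length source words is grown by repeatedly splitting the most probable word, and each finished word is then mapped to one fixed-length codeword.

```python
import heapq

def tunstall_dictionary(probs: dict, codeword_bits: int) -> list:
    """Build a Tunstall dictionary of source words, each of which will be
    assigned one fixed codeword of `codeword_bits` bits.

    probs maps each source symbol to its probability.
    """
    max_words = 2 ** codeword_bits
    # max-heap of (negative probability, word)
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    # each split removes one word and adds |alphabet| longer ones
    while len(heap) + len(probs) - 1 <= max_words:
        neg_p, word = heapq.heappop(heap)     # most probable word so far
        for s, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + s))
    return sorted(word for _, word in heap)

# 3-bit codewords for a skewed binary source: frequent runs of "a"
# get long dictionary words, so each 3-bit codeword covers more input
print(tunstall_dictionary({"a": 0.7, "b": 0.3}, 3))
```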
