Timeline of algorithms

The following timeline of algorithms outlines the development of algorithms (mainly "mathematical recipes") since their inception.

Contents

Medieval Period

Before 1940

1940s

1950s

1960s

1970s

1980s

1990s

2000s

2010s

Related Research Articles

Algorithm: Sequence of operations for a task

In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a specific class of problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes and deduce valid inferences, eventually achieving automation. Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus".

In number theory, integer factorization is the decomposition of a positive integer into a product of smaller integers. Every positive integer greater than 1 is either the product of two or more integer factors, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the fundamental theorem of arithmetic. A prime factorization algorithm typically involves testing whether each factor is prime after each step.
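
As a concrete illustration, the sketch below factors an integer by trial division; this is a minimal elementary method, shown only to make the process above concrete, not a specific algorithm from the timeline.

```python
def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n (n > 1) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains must itself be prime
        factors.append(n)
    return factors

print(prime_factors(60))  # [2, 2, 3, 5], matching 60 = 2 * 2 * 3 * 5
```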

Search algorithm: Any algorithm which solves the search problem

In computer science, a search algorithm is an algorithm designed to solve a search problem. Search algorithms work to retrieve information stored within a particular data structure, or calculated in the search space of a problem domain, with either discrete or continuous values.
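
For example, binary search locates a value in a sorted array by repeatedly halving the search space; the sketch below is a standard textbook version, included here only as an illustration of a search algorithm over discrete values.

```python
def binary_search(items: list[int], target: int) -> int:
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```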

In computer science, the Cocke–Younger–Kasami algorithm is a parsing algorithm for context-free grammars published by Itiroo Sakai in 1961. The algorithm is named after some of its rediscoverers: John Cocke, Daniel Younger, Tadao Kasami, and Jacob T. Schwartz. It employs bottom-up parsing and dynamic programming.
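
The sketch below shows the dynamic-programming table at the heart of CYK, used as a recognizer for a grammar in Chomsky normal form; the toy grammar and sentence are illustrative assumptions, not taken from the original papers.

```python
# CYK recognizer for a context-free grammar in Chomsky normal form.
binary_rules = {                 # rules of the form A -> B C
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
unary_rules = {                  # rules of the form A -> terminal
    "the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"},
}

def cyk(words: list[str], start: str = "S") -> bool:
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][0] = set(unary_rules.get(w, ()))
    for length in range(2, n + 1):            # span length, bottom up
        for i in range(n - length + 1):       # span start
            for split in range(1, length):    # split into two subspans
                for b in table[i][split - 1]:
                    for c in table[i + split][length - split - 1]:
                        table[i][length - 1] |= binary_rules.get((b, c), set())
    return start in table[0][n - 1]

print(cyk("the dog chased the cat".split()))  # True
```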

LZ77 and LZ78 are two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978, respectively. They are also known as LZ1 and LZ2. These two algorithms form the basis for many variations including LZW, LZSS, LZMA and others. Besides their academic influence, they formed the basis of several ubiquitous compression schemes, including GIF and the DEFLATE algorithm used in PNG and ZIP.
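
A minimal LZ78-style encoder is sketched below to show the dictionary-of-phrases idea; it is a simplified illustration, not the exact formulation from the 1978 paper.

```python
def lz78_compress(text: str) -> list[tuple[int, str]]:
    """LZ78 sketch: emit (dictionary index, next char) pairs; 0 is the empty phrase."""
    dictionary = {"": 0}
    result, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:          # keep growing the current phrase
            phrase += ch
        else:                                  # emit and register a new phrase
            result.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                                 # flush any leftover phrase
        result.append((dictionary[phrase[:-1]], phrase[-1]))
    return result

print(lz78_compress("abababab"))  # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (0, 'b')]
```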

Clique problem: Task of computing complete subgraphs

In computer science, the clique problem is the computational problem of finding cliques in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found. Common formulations of the clique problem include finding a maximum clique, finding a maximum weight clique in a weighted graph, listing all maximal cliques, and solving the decision problem of testing whether a graph contains a clique larger than a given size.
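
Because the decision version is NP-complete, exact algorithms take exponential time in the worst case. The sketch below finds a maximum clique by brute force, testing vertex subsets from largest to smallest; it is only practical for tiny graphs and is shown purely as an illustration.

```python
from itertools import combinations

def max_clique(vertices: list[int], edges: list[tuple[int, int]]) -> tuple:
    """Brute-force maximum clique: try every subset, largest first."""
    adjacent = set(edges) | {(v, u) for u, v in edges}
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if all((u, v) in adjacent for u, v in combinations(subset, 2)):
                return subset    # first complete subgraph found is maximum
    return ()

# Triangle {1, 2, 3} plus a pendant vertex 4.
print(max_clique([1, 2, 3, 4], [(1, 2), (2, 3), (1, 3), (3, 4)]))  # (1, 2, 3)
```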

Richard M. Karp: American mathematician

Richard Manning Karp is an American computer scientist and computational theorist at the University of California, Berkeley. He is most notable for his research in the theory of algorithms, for which he received the Turing Award in 1985, the Benjamin Franklin Medal in Computer and Cognitive Science in 2004, and the Kyoto Prize in 2008.

Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures.

In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
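
A textbook sketch of the unshifted iteration using NumPy follows; production implementations first reduce the matrix to Hessenberg form and add shifts for speed and reliability, so this is an illustration of the basic idea only.

```python
import numpy as np

def qr_eigenvalues(a: np.ndarray, iterations: int = 500) -> np.ndarray:
    """Unshifted QR iteration: factor A = QR, form RQ, repeat.

    RQ = Q^T A Q is similar to A, so the eigenvalues are preserved at every
    step; for many matrices the iterates approach triangular form with the
    eigenvalues on the diagonal.
    """
    for _ in range(iterations):
        q, r = np.linalg.qr(a)
        a = r @ q
    return np.sort(np.diag(a))

a = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric, so eigenvalues are real
print(qr_eigenvalues(a))                # approx. [1.38196601 3.61803399]
print(np.sort(np.linalg.eigvals(a)))    # reference values for comparison
```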

Vaughan Pratt

Vaughan Pratt is a Professor Emeritus at Stanford University, who was an early pioneer in the field of computer science. Since 1969, Pratt has made several contributions to foundational areas such as search algorithms, sorting algorithms, and primality testing. More recently, his research has focused on formal modeling of concurrent systems and Chu spaces.

A timeline of events related to information theory, quantum information theory and statistical physics, data compression, error-correcting codes and related subjects.

Programming language theory: Branch of computer science

Programming language theory (PLT) is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of formal languages known as programming languages. Programming language theory is closely related to other fields including mathematics, software engineering, and linguistics. There are a number of academic conferences and journals in the area.

Grammar induction

Grammar induction is the process in machine learning of learning a formal grammar from a set of observations, thus constructing a model which accounts for the characteristics of the observed objects. More generally, grammatical inference is that branch of machine learning where the instance space consists of discrete combinatorial objects such as strings, trees and graphs.

This is a timeline of pure and applied mathematics history. It is divided here into three stages, corresponding to stages in the development of mathematical notation: a "rhetorical" stage in which calculations are described purely by words, a "syncopated" stage in which quantities and common algebraic operations are beginning to be represented by symbolic abbreviations, and finally a "symbolic" stage, in which comprehensive notational systems for formulas are the norm.

In numerical analysis, Gauss–Legendre quadrature is a form of Gaussian quadrature for approximating the definite integral of a function. For integrating over the interval [−1, 1], the rule takes the form:

$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i\, f(x_i),$$

where n is the number of sample points used, the w_i are quadrature weights, and the x_i are the roots of the nth Legendre polynomial; this choice makes the rule exact for polynomials of degree 2n − 1 or less.
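
NumPy exposes the nodes and weights directly through numpy.polynomial.legendre.leggauss; the example below uses them to integrate e^x over [−1, 1].

```python
import numpy as np

# Nodes x_i (roots of the nth Legendre polynomial) and weights w_i.
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

# Approximate the integral of e^x over [-1, 1]; the exact value is e - 1/e.
approx = np.sum(weights * np.exp(nodes))
exact = np.exp(1) - np.exp(-1)
print(approx, exact)  # the two values agree to high precision
```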

History of compiler construction

In computing, a compiler is a computer program that transforms source code written in a programming language or computer language (the source language) into another computer language (the target language). The most common reason for transforming source code is to create an executable program.

In computer science and information theory, Tunstall coding is a form of entropy coding used for lossless data compression.
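
Tunstall coding maps variable-length source phrases to fixed-length codewords by repeatedly splitting the most probable phrase into its one-symbol extensions. The sketch below builds such a codebook; the function name and the toy source are illustrative assumptions, and it shows only the dictionary construction, not a full codec.

```python
import heapq

def tunstall_dictionary(probs: dict[str, float], code_bits: int) -> list[str]:
    """Build a Tunstall codebook of at most 2**code_bits phrases."""
    # Max-heap of (negated probability, phrase), seeded with single symbols.
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    size = len(probs)
    # Each expansion replaces one phrase with len(probs) children.
    while size + len(probs) - 1 <= 2 ** code_bits:
        neg_p, phrase = heapq.heappop(heap)        # most probable phrase
        for symbol, p in probs.items():
            heapq.heappush(heap, (neg_p * p, phrase + symbol))
        size += len(probs) - 1
    return sorted(phrase for _, phrase in heap)

# 2-bit codewords over a skewed binary source.
print(tunstall_dictionary({"a": 0.7, "b": 0.3}, 2))  # ['aaa', 'aab', 'ab', 'b']
```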

The Lempel–Ziv complexity is a measure that was first presented in the article On the Complexity of Finite Sequences, by two Israeli computer scientists, Abraham Lempel and Jacob Ziv. This complexity measure is related to Kolmogorov complexity, but the only function it uses is the recursive copy.
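
One common way to compute the measure is to count the phrases of an exhaustive parsing in which each new phrase is the shortest substring that cannot be copied from the text already seen; the sketch below follows that scheme and should be read as one reasonable implementation, not the paper's exact procedure.

```python
def lz_complexity(s: str) -> int:
    """Count phrases: each is the shortest substring not copyable from the prefix."""
    i, count, n = 0, 0, len(s)
    while i < n:
        length = 1
        # Grow the phrase while it can still be copied from the text before it.
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        count += 1
        i += length
    return count

print(lz_complexity("aaaaaaaa"))  # 2, phrases: a | aaaaaaa
print(lz_complexity("abababab"))  # 3, phrases: a | b | ababab
```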
