This is a list of important publications in theoretical computer science, organized by field.
Some reasons why a particular publication might be regarded as important include creating a new topic, representing a significant breakthrough, or strongly influencing subsequent research and teaching.
Description: This paper set the limits of computer science. It defined the Turing machine, a model for all computation, and proved the undecidability of the halting problem and of the Entscheidungsproblem, thereby establishing the limits of possible computation.
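The heart of the undecidability proof is a diagonal argument. The following Python sketch is purely illustrative (the function names are hypothetical, and no such halts procedure can actually be written); it shows how a universal halting decider would contradict itself:

```python
# Illustrative sketch (not from the paper): the classic diagonal argument
# that no total procedure can decide the halting problem.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: returns True iff the program halts on the input."""
    raise NotImplementedError("Turing's paper shows no such procedure can exist.")

def diagonal(program_source: str) -> None:
    # Ask the hypothetical decider about a program run on its own source.
    if halts(program_source, program_source):
        while True:          # loop forever if the decider says "halts"
            pass
    # otherwise halt immediately

# Feeding diagonal its own source would make halts() wrong either way,
# which is the contradiction at the heart of the undecidability proof.
```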
The first textbook on the theory of recursive functions. The book went through many editions and earned Péter the Kossuth Prize from the Hungarian government. [1] Reviews by Raphael M. Robinson and Stephen Kleene praised the book for providing an effective elementary introduction for students. [2]
Description: This paper introduced finite automata, regular expressions, and regular languages, and established their connection.
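As a rough illustration of that correspondence (the automaton and expression below are made-up examples, not from the paper), a deterministic finite automaton and the regular expression (ab)* accept exactly the same words:

```python
import re

# A minimal sketch: a deterministic finite automaton for the regular
# language denoted by the regular expression (ab)*.
dfa = {
    "start": "q0",
    "accept": {"q0"},
    "delta": {
        ("q0", "a"): "q1",
        ("q1", "b"): "q0",
    },
}

def dfa_accepts(machine, word: str) -> bool:
    state = machine["start"]
    for symbol in word:
        # Missing transitions go to an implicit rejecting "dead" state.
        state = machine["delta"].get((state, symbol), "dead")
    return state in machine["accept"]

for w in ["", "ab", "abab", "aba", "ba"]:
    # Kleene's result says the automaton and the expression agree on every word.
    assert dfa_accepts(dfa, w) == bool(re.fullmatch(r"(ab)*", w))
```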
Description: Mathematical treatment of automata, proofs of core properties, and the definition of the nondeterministic finite automaton.
Description: This article introduced what is now known as the Chomsky hierarchy, a containment hierarchy of classes of formal grammars that generate formal languages.
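To illustrate the kind of separation the hierarchy captures (this example is not from the paper), the language { a^n b^n : n >= 0 } is generated by the context-free grammar S -> a S b | ε, yet by no regular grammar. A sketch of a recognizer that follows the grammar rule directly:

```python
# Illustrative only: recognize a^n b^n by matching the CFG rule S -> a S b | ε.
def in_anbn(word: str) -> bool:
    if word == "":
        return True
    return word[:1] == "a" and word[-1:] == "b" and in_anbn(word[1:-1])

assert in_anbn("aaabbb") and not in_anbn("aab")
```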
Description: The paper presented the tree automaton, an extension of finite automata to trees. Tree automata have had numerous applications to proofs of correctness of programs.
The review of this early text by Carl Smith of Purdue University (in the Society for Industrial and Applied Mathematics Reviews), [3] reports that this is a text with an "appropriate blend of intuition and rigor… in the exposition of proofs" that presents "the fundamental results of classical recursion theory [RT]... in a style... accessible to undergraduates with minimal mathematical background". While he states that it "would make an excellent introductory text for an introductory course in [RT] for mathematics students", he suggests that an "instructor must be prepared to substantially augment the material… " when it is used with computer science students (given a dearth of material on RT applications to this area). [3]
Description: A popular textbook.
Description: Gödel discusses the idea of an efficient universal theorem prover.
Description: This technical report was the first publication to discuss what was later named computational complexity. [4]
Description: First definition of the complexity class P. One of the founding papers of complexity theory.
Description: This paper gave computational complexity its name and planted the seed of the field.
Description: The Blum axioms.
Description: Constructed the "Klee–Minty cube" in dimension D, whose 2^D corners are each visited by Dantzig's simplex algorithm for linear optimization.
Description: This paper introduced the concept of NP-Completeness and proved that the Boolean satisfiability problem (SAT) is NP-Complete. Similar ideas were developed independently, slightly later, by Leonid Levin in "Universal Search Problems", Problemy Peredachi Informatsii 9(3): 265–266, 1973.
Description: This paper showed that 21 different problems are NP-Complete, demonstrating the importance of the concept.
Description: The main importance of this book lies in its extensive list of more than 300 NP-Complete problems, which became a common reference and definition. It is notable that such an extensive list could be compiled only a few years after the concept was defined.
Description: This paper showed that the existence of one-way functions leads to computational randomness.
Description: This paper introduced the concept of zero knowledge. [5]
Description: IP is a complexity class whose characterization (based on interactive proof systems) is quite different from the usual time/space bounded computational classes. In this paper, Shamir extended the technique of the previous paper by Lund, et al., to show that PSPACE is contained in IP, and hence IP = PSPACE, so that each problem in one complexity class is solvable in the other.
Besides being issued by estimable presses, these recent texts are very positively reviewed in ACM's SIGACT News by Daniel Apon of the University of Arkansas, [6] who identifies them as "textbooks for a course in complexity theory, aimed at early graduate… or... advanced undergraduate students… [with] numerous, unique strengths and very few weaknesses," and states that both are:
"excellent texts that thoroughly cover both the breadth and depth of computational complexity theory… [by] authors... each [who] are giants in theory of computing [where each will be] ...an exceptional reference text for experts in the field… [and that] ...theorists, researchers and instructors of any school of thought will find either book useful."
The reviewer notes that there is "a definite attempt in [Arora and Barak] to include very up-to-date material, while Goldreich focuses more on developing a contextual and historical foundation for each concept presented," and that he "applaud[s] all… authors for their outstanding contributions." [6]
Description: Gave a polynomial time algorithm for finding a maximum matching in a graph that is not bipartite, and marked another step toward the idea of computational complexity.
Description: This paper created a theoretical framework for trapdoor functions and described some of their applications, such as in cryptography. Note that the concept of trapdoor functions was introduced in "New directions in cryptography" six years earlier (see section V, "Problem Interrelationships and Trap Doors").
Description: An introduction to computational complexity theory; the book explains its author's characterization of PSPACE and other results.
Description: These three papers established the surprising fact that certain problems in NP remain hard even when only an approximative solution is required. See PCP theorem.
Description: The DPLL algorithm. The basic algorithm for SAT and other NP-Complete problems.
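A minimal sketch of the DPLL procedure, unit propagation plus branching (the data representation and variable names here are illustrative, not the original 1962 formulation):

```python
# A formula is a list of clauses; each clause is a set of integers, where a
# positive integer is a variable and a negative one its negation.

def assign(clauses, lit):
    """Simplify the clause set under the assumption that lit is true."""
    out = []
    for clause in clauses:
        if lit in clause:
            continue                      # clause satisfied, drop it
        out.append(clause - {-lit})       # remove the falsified literal
    return out

def dpll(clauses, assignment=None):
    if assignment is None:
        assignment = {}
    # Unit propagation: repeatedly satisfy clauses containing a single literal.
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                clauses = assign(clauses, lit)
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if not clauses:
        return assignment                 # all clauses satisfied (free vars are don't-care)
    if any(len(c) == 0 for c in clauses):
        return None                       # empty clause: conflict
    # Branch on some unassigned variable.
    var = abs(next(iter(clauses[0])))
    for lit in (var, -var):
        result = dpll(assign(clauses, lit), {**assignment, var: lit > 0})
        if result is not None:
            return result
    return None

# Example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(dpll([{1, -2}, {2, 3}, {-1, -3}]))
```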
Description: First description of resolution and unification used in automated theorem proving; used in Prolog and logic programming.
Description: The use of an algorithm for minimum spanning tree as an approximation algorithm for the NP-Complete travelling salesman problem. Approximation algorithms became a common method for coping with NP-Complete problems.
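The best-known example of this approach is the MST-based 2-approximation for the metric travelling salesman problem. The sketch below (illustrative, assuming Euclidean points) builds a minimum spanning tree with Prim's algorithm and shortcuts a preorder walk of the tree into a tour:

```python
def mst_tsp_tour(points):
    """Return a tour visiting all points, at most twice the optimum
    when distances satisfy the triangle inequality."""
    n = len(points)
    dist = lambda i, j: ((points[i][0] - points[j][0]) ** 2 +
                         (points[i][1] - points[j][1]) ** 2) ** 0.5
    # Prim's algorithm for a minimum spanning tree.
    in_tree, parent = {0}, {}
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        parent[j] = i
        in_tree.add(j)
    # Preorder walk of the tree; shortcutting repeated vertices gives the tour.
    children = {i: [] for i in range(n)}
    for child, par in parent.items():
        children[par].append(child)
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour

print(mst_tsp_tour([(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]))
```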
Description: For a long time there was no provably polynomial time algorithm for the linear programming problem. Khachiyan was the first to provide an algorithm that was polynomial (and not merely fast enough most of the time, as previous algorithms were). Later, Narendra Karmarkar presented a faster algorithm in: Narendra Karmarkar, "A new polynomial time algorithm for linear programming", Combinatorica, vol 4, no. 4, p. 373–395, 1984.
Description: The paper presented the Miller–Rabin primality test and outlined the program of randomized algorithms.
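A compact sketch of the Miller–Rabin test as it is usually presented today (the small-prime pre-check and the round count are arbitrary choices, not taken from the paper):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a witnesses that n is composite
    return True            # probably prime (error probability <= 4**-rounds)

print([p for p in range(2, 60) if is_probable_prime(p)])
```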
Description: This article described simulated annealing, which is now a very common heuristic for NP-Complete problems.
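A bare-bones simulated-annealing loop (the cooling schedule and the toy objective below are arbitrary illustrative choices):

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.995, steps=10_000):
    state, best = start, start
    t = t0
    for _ in range(steps):
        candidate = neighbor(state)
        delta = cost(candidate) - cost(state)
        # Always accept improvements; accept worse moves with probability e^(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = candidate
        if cost(state) < cost(best):
            best = state
        t *= cooling   # gradually lower the temperature
    return best

# Toy example: find an integer close to 42 by random ±1 moves.
print(simulated_annealing(cost=lambda x: abs(x - 42),
                          neighbor=lambda x: x + random.choice((-1, 1)),
                          start=0))
```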
Description: This monograph has four volumes covering popular algorithms. The algorithms are written in both English and MIX assembly language (or MMIX assembly language in more recent fascicles). This makes algorithms both understandable and precise. However, the use of a low-level programming language frustrates some programmers more familiar with modern structured programming languages.
Description: An early, influential book on algorithms and data structures, with implementations in Pascal.
Description: One of the standard texts on algorithms for the period of approximately 1975–1985.
Description: Explains the whys of algorithms and data structures: the creative process, the line of reasoning, and the design factors behind innovative solutions.
Description: A very popular text on algorithms in the late 1980s. It was more accessible and readable (but more elementary) than Aho, Hopcroft, and Ullman. There are more recent editions.
Description: This textbook has become so popular that it is almost the de facto standard for teaching basic algorithms. The 1st edition (with first three authors) was published in 1990, the 2nd edition in 2001, and the 3rd in 2009.
Description: Proposed a computational and combinatorial approach to probability.
Description: This was the beginning of algorithmic information theory and Kolmogorov complexity. Note that although Kolmogorov complexity is named after Andrey Kolmogorov, he said that the seeds of the idea are due to Ray Solomonoff. Kolmogorov contributed much to this area, but in later articles.
Description: An introduction to algorithmic information theory by one of the important people in the area.
Description: This paper created the field of information theory.
Description: In this paper, Hamming introduced the idea of error-correcting code. He created the Hamming code and the Hamming distance and developed methods for code optimality proofs.
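A small illustration of the idea using the (7,4) Hamming code; the encoding layout follows the now-standard textbook presentation rather than Hamming's own notation:

```python
# Four data bits are protected by three parity bits, and any single bit flip
# can be located and corrected.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(c):
    c = list(c)
    # The syndrome is the binary position of the flipped bit (0 means no error).
    s = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            s ^= pos
    if s:
        c[s - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = word.copy()
corrupted[4] ^= 1                      # flip one bit in transit
assert hamming74_correct(corrupted) == word
```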
Description: Introduced Huffman coding, an algorithm for constructing optimal prefix codes from symbol frequencies.
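The construction repeatedly merges the two least frequent subtrees until one prefix-code tree remains; a brief illustrative sketch (representation choices are not the paper's):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Each heap entry carries a tie-breaking index so dicts are never compared.
    heap = [(freq, i, {symbol: ""})
            for i, (symbol, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2] if heap else {}

print(huffman_codes("abracadabra"))   # more frequent symbols get shorter codes
```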
Description: The LZ77 compression algorithm.
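A simplified LZ77-style coder (the window handling and triple format below are illustrative, not the exact 1977 scheme): matches are emitted as (offset, length, next character) references into the already-seen text.

```python
def lz77_compress(text, window=255):
    i, out = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for off in range(1, i - start + 1):
            length = 0
            while (i + length < len(text)
                   and text[i + length - off] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, off
        nxt = text[i + best_len] if i + best_len < len(text) else ""
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    text = ""
    for off, length, nxt in triples:
        for _ in range(length):
            text += text[-off]   # copy from the window (overlaps allowed)
        text += nxt
    return text

data = "abracadabra abracadabra"
assert lz77_decompress(lz77_compress(data)) == data
```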
Description: A popular introduction to information theory.
Description: Floyd's landmark paper introduces the method of inductive assertions and describes how a program (presented as a flow chart) annotated with first-order assertions may be shown to satisfy a pre- and post-condition specification. The paper also introduces the concepts of loop invariant and verification condition.
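An informal illustration (not Floyd's flow-chart notation): runtime assertions stand in for the first-order annotations, with the precondition, loop invariant, and postcondition of a small summation loop made explicit.

```python
def sum_of_first(n: int) -> int:
    # Precondition
    assert n >= 0
    total, i = 0, 0
    while i < n:
        # Loop invariant: total == 0 + 1 + ... + (i - 1), and 0 <= i <= n
        assert total == i * (i - 1) // 2 and 0 <= i <= n
        total += i
        i += 1
    # Postcondition (the verification condition: invariant + exit test imply it)
    assert total == n * (n - 1) // 2
    return total

print(sum_of_first(10))   # 45
```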
Description: Hoare's paper gives a set of logical rules for reasoning rigorously about the correctness of computer programs. It defines an Algol-like programming language using what are now called Hoare triples. It has been used as the basis for most later work in the field, rather than Floyd's paper "Assigning meaning to programs", which did the same in terms of flow charts.
Description: Dijkstra's paper proposes (for the first time) that, instead of formally verifying a program after it has been written (i.e. post facto), programs and their formal proofs should be developed hand-in-hand (using predicate transformers to progressively refine weakest pre-conditions), a method known as program (or formal) refinement (or derivation), or sometimes "correctness-by-construction".
Description: The paper introduced invariance proofs of concurrent programs. It did not receive much attention because it discussed programs in terms of flow charts and was not written clearly enough.
Description: This paper, along with the same authors' paper "Verifying properties of parallel programs: an axiomatic approach" (CACM 19 (5): 279–285, May 1976, doi:10.1145/360051.360224, S2CID 9099351), extends the axiomatic approach to parallel-program verification with the notion of interference freedom.
Description: Dijkstra's classic postgraduate-level textbook extends his earlier paper Guarded commands, nondeterminacy and formal derivation of programs and firmly establishes the principle of formally deriving programs and their proofs hand-in-hand from their specifications.
Description: Pnueli suggests temporal logic for formal verification in which the basic concept is the time dependence of events.
Description: Hoare's paper introduces the idea of concurrent processes (i.e. programs) that do not share variables but instead cooperate solely by exchanging synchronous messages. This has spurred a great deal of research and advances, which are detailed in the article on communicating sequential processes.
Emerson, E. Allen; Clarke, E.M. (1980). "Characterizing correctness properties of parallel programs using fixpoints". In J. de Bakker; J. van Leewen (eds.). Automata, Languages and Programming. ICALP 1980. Vol. 85. Springer Verlag, Berlin, Heidelberg. doi:10.1007/3-540-10003-2_69. S2CID 33145816.
Description: Introduces model checking as a procedure to check correctness of concurrent programs.
Milner, Robin (1980). A Calculus of Communicating Systems. Lecture Notes in Computer Science. Vol. 92. Springer Verlag, New York, NY. doi:10.1007/3-540-10235-3. ISBN 978-3-540-10235-9. S2CID 27047741.
Description: Milner's book describes a process algebra, called CCS, that permits systems of concurrent processes to be reasoned about formally, something that had not been possible for earlier models of concurrency (semaphores, critical sections, original CSP).
Jones, Cliff (1980). Software Development: A Rigorous Approach (PDF). Prentice Hall International Upper Saddle River, NJ. S2CID 61112116.
Description: Jones' book is the first full-length exposition of the Vienna Development Method (VDM), which had evolved (principally) at IBM's Vienna research lab over the previous decade and which combines the idea of program refinement as per Dijkstra with that of data refinement (or reification) whereby algebraically defined abstract data types are formally transformed into progressively more "concrete" representations.
Description: This textbook describes Dijkstra's weakest precondition method of formal program derivation, except in a very much more accessible manner than Dijkstra's earlier A Discipline of Programming. It has jokingly been called "Dijkstra for the masses". The emphasis is on developing program and proof hand-in-hand, with the proof ideas leading the way. The examples in the book are small-scale, emphasizing basic algorithms, such as sorting, merging, and string manipulation. Subroutines (functions) are included, but object-oriented and functional programming environments are not covered.
Description: This book is the first postgraduate-level book-length exposition of the mathematical (or functional) approach to the formal semantics of programming languages (in contrast to the operational and algebraic approaches).
Hoare, Tony (1985). Communicating Sequential Processes . Prentice Hall International Series in Computer Science. ISBN 978-0131532717 (hardback) or ISBN 978-0131532892 (paperback). (PDF available at http://www.usingcsp.com/)
Description: Hoare's book (currently the third most cited computer science reference of all time) presents an updated CSP model in which cooperating processes do not even have program variables and which, like CCS, permits systems of processes to be reasoned about formally.
Description: Girard's linear logic was a breakthrough in designing typing systems for sequential and concurrent computation, especially for resource conscious typing systems.
Milner, Robin; Joachim Parrow; David Walker (1989). "A calculus of mobile processes I" (PDF). Information and Computation. 100 (1): 1–40. doi:10.1016/0890-5401(92)90008-4. S2CID 17166120.
Description: This paper introduces the Pi-Calculus, a generalisation of CCS that allows process mobility. This extremely simple calculus has become the dominant paradigm in the theoretical study of programming languages, typing systems, and program logics. A second paper completes the work.
Milner, Robin (1989). Communication and Concurrency. Prentice Hall International Series in Computer Science. ISBN 0-13-115007-3. S2CID 36788371.
Description: Milner's text is a more accessible, although still technically advanced, exposition of his earlier CCS work.
Description: Spivey's classic text summarises the formal specification language Z notation, which, although originated by Jean-Raymond Abrial, had evolved (principally) at Oxford University over the previous decade.
Hehner, Eric (1993). A Practical Theory of Programming. Monographs in Computer Science. New York: Springer Verlag. doi:10.1007/978-1-4419-8596-5. ISBN 978-1-4612-6444-6. S2CID 5914094.
Description: The up-to-date version of predicative programming and the basis for C.A.R. Hoare's UTP; presented as among the simplest and most comprehensive of formal methods.