Leslie Valiant | |
---|---|
Born | Leslie Gabriel Valiant, 28 March 1949 |
Nationality | British |
Alma mater | King's College, Cambridge; Imperial College London; University of Warwick |
Known for | PAC learning; #P-completeness; holographic algorithms; bulk synchronous parallel model |
Awards | Nevanlinna Prize (1986); Knuth Prize (1997); EATCS Award (2008); Turing Award (2010) |
Scientific career | |
Fields | Mathematics; theoretical computer science; computational learning theory; theoretical neuroscience |
Institutions | Harvard University; Carnegie Mellon University; University of Leeds; University of Edinburgh |
Thesis | Decision Procedures for Families of Deterministic Pushdown Automata (1974) |
Doctoral advisor | Mike Paterson [3] |
Doctoral students | |
Website | people |
Leslie Gabriel Valiant FRS [4] [5] (born 28 March 1949) is a British-American [6] computer scientist and computational theorist. [7] [8] He was born to a chemical engineer father and a translator mother. [9] He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. [10] [11] [12] [13] Valiant was awarded the Turing Award in 2010; the ACM described him as a heroic figure in theoretical computer science and a role model for his courage and creativity in addressing some of the deepest unsolved problems in science, citing in particular his "striking combination of depth and breadth". [6]
Valiant was educated at King's College, Cambridge, [14] [6] Imperial College London, [14] [6] and the University of Warwick, where he received a PhD in computer science in 1974. [15] [3]
Valiant is world-renowned for his work in theoretical computer science. Among his many contributions to complexity theory, he introduced the notion of #P-completeness ("sharp-P completeness") to explain why enumeration and reliability problems are intractable. He created the probably approximately correct (PAC) model of learning, which founded the field of computational learning theory and became a theoretical basis for the development of machine learning. He also introduced the concept of holographic algorithms, inspired by the quantum computation model. In computer systems, he is best known for introducing the bulk synchronous parallel (BSP) processing model. Analogous to the von Neumann model for a single computer architecture, BSP has been an influential model for parallel and distributed computing architectures; recent examples are Google adopting it for computation at large scale via MapReduce, MillWheel, [16] Pregel [17] and Dataflow, and Facebook building a graph analytics system capable of processing over one trillion edges. [18] [19] There have also been active open-source projects offering explicit BSP programming, as well as other high-performance parallel programming models derived from BSP; popular examples include Hadoop, Spark, Giraph, Hama, Beam and Dask. His earlier work in automata theory includes an algorithm for context-free parsing, which is still the asymptotically fastest known. He also works in computational neuroscience, focusing on understanding memory and learning.
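The PAC model admits a simple worked illustration. For a finite hypothesis class H, a standard textbook bound (not drawn from any specific paper of Valiant's) states that m ≥ (1/ε)(ln|H| + ln(1/δ)) labeled examples suffice for any learner that outputs a hypothesis consistent with the sample to be "probably approximately correct": with probability at least 1 − δ, the learned hypothesis has error at most ε. A minimal Python sketch, with illustrative function names and boolean conjunctions (a classic PAC-learnable class, with at most 3^n conjunctions over n variables) as the assumed hypothesis class:

```python
from math import ceil, log

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Number of samples sufficient for a consistent learner over a finite
    hypothesis class to be probably approximately correct: with probability
    at least 1 - delta, the output hypothesis has error at most epsilon.

    Standard bound: m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    """
    return ceil((log(hypothesis_count) + log(1.0 / delta)) / epsilon)

# Example (hypothetical numbers): boolean conjunctions over n = 10 variables
# (each variable appears positive, negated, or not at all, so |H| <= 3**10),
# learned to 5% error with 99% confidence.
if __name__ == "__main__":
    m = pac_sample_bound(hypothesis_count=3**10, epsilon=0.05, delta=0.01)
    print(m)  # 312: a few hundred labeled examples suffice
```

The striking feature of the bound is that m grows only logarithmically in |H|, which is what makes learning over exponentially large hypothesis classes computationally feasible in the PAC sense.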
Valiant's 2013 book is Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World. [20] In it he argues, among other things, that evolutionary biology does not explain the rate at which evolution occurs, writing, for example, "The evidence for Darwin's general schema for evolution being essentially correct is convincing to the great majority of biologists. This author has been to enough natural history museums to be convinced himself. All this, however, does not mean the current theory of evolution is adequately explanatory. At present the theory of evolution can offer no account of the rate at which evolution progresses to develop complex mechanisms or to maintain them in changing environments."
Valiant started teaching at Harvard University in 1982 and is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the Harvard School of Engineering and Applied Sciences. Prior to 1982 he taught at Carnegie Mellon University, the University of Leeds, and the University of Edinburgh.
Valiant received the Nevanlinna Prize in 1986, the Knuth Prize in 1997, the EATCS Award in 2008, [21] and the Turing Award in 2010. [22] [23] He was elected a Fellow of the Royal Society (FRS) in 1991, [4] a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) in 1992, [24] and a member of the United States National Academy of Sciences in 2001. [25] Valiant's nomination for the Royal Society reads:
Leslie Valiant has contributed in a decisive way to the growth of theoretical computer science. His work is concerned mainly with quantifying mathematically the resource costs of solving problems on a computer. In early work (1975), he found the asymptotically fastest algorithm known for recognising context-free languages. At the same time, he pioneered the use of communication properties of graphs for analysing computations. In 1977, he defined the notion of ‘sharp-P’ (#P)-completeness and established its utility in classifying counting or enumeration problems according to computational tractability. The first application was to counting matchings (the matrix permanent function). In 1984, Leslie introduced a definition of inductive learning that, for the first time, reconciles computational feasibility with the applicability to nontrivial classes of logical rules to be learned. This notion, later called ‘probably approximately correct learning’, became a theoretical basis for the development of machine learning. In 1989, he formulated the concept of bulk synchronous computation as a unifying principle for parallel computation. Leslie received the Nevanlinna Prize in 1986, and the Turing Award in 2010. [26]
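The counting problem named in the nomination, the matrix permanent, is easy to state concretely. Below is a minimal brute-force Python sketch (illustrative only, and exponential time, as one would expect for a #P-complete problem): the permanent of the 0/1 biadjacency matrix of a bipartite graph counts its perfect matchings, a quantity Valiant proved #P-complete to compute in 1979.

```python
from itertools import permutations
from math import prod

def permanent(matrix):
    """Permanent of an n x n matrix: the sum over all permutations of the
    product of the selected entries (a determinant without the signs).
    This brute force runs in O(n! * n) time; Valiant (1979) showed that
    computing the permanent is #P-complete even for 0/1 matrices.
    """
    n = len(matrix)
    return sum(prod(matrix[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# For a bipartite graph, the permanent of its 0/1 biadjacency matrix counts
# perfect matchings. Hypothetical example: workers {0,1,2}, jobs {0,1,2},
# entry [i][j] = 1 if worker i can do job j.
biadjacency = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
print(permanent(biadjacency))  # 2 perfect matchings
```

The contrast with the determinant, which has the same formula up to signs yet is computable in polynomial time, is exactly what makes the permanent a natural emblem of #P-completeness.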
The citation for his A.M. Turing Award reads:
For transformative contributions to the theory of computation, including the theory of probably approximately correct (PAC) learning, the complexity of enumeration and of algebraic computation, and the theory of parallel and distributed computing. [6]
His two sons, Gregory Valiant [27] and Paul Valiant, [28] are also theoretical computer scientists. [8]