Richard Lipton

Born
Richard Jay Lipton

September 6, 1946 (age 78)
Alma mater Carnegie Mellon
Known for Karp–Lipton theorem and planar separator theorem
Awards Knuth Prize (2014)
Scientific career
Fields computer science
Institutions Yale
Berkeley
Princeton
Georgia Tech
Thesis On Synchronization Primitive Systems  (1973)
Doctoral advisor David Parnas [1]

Richard Jay Lipton (born September 6, 1946) is an American computer scientist who is Associate Dean of Research, Professor, and the Frederick G. Storey Chair in Computing in the College of Computing at the Georgia Institute of Technology. He has worked in computer science theory, cryptography, and DNA computing.

Career

In 1968, Lipton received his undergraduate degree in mathematics from Case Western Reserve University. In 1973, he received his Ph.D. from Carnegie Mellon University; his dissertation, supervised by David Parnas, is entitled On Synchronization Primitive Systems. After graduating, Lipton taught at Yale 1973–1978, at Berkeley 1978–1980, and then at Princeton 1980–2000. Since 2000, Lipton has been at Georgia Tech. While at Princeton, Lipton worked in the field of DNA computing. Since 1996, Lipton has been the chief consulting scientist at Telcordia. In 1999, Lipton was elected a member of the National Academy of Engineering for the application of computer science theory to practice.

Karp–Lipton theorem

In 1980, along with Richard M. Karp, Lipton proved that if SAT can be solved by Boolean circuits with a polynomial number of logic gates, then the polynomial hierarchy collapses to its second level.
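In the usual complexity-class notation, with P/poly denoting languages decidable by polynomial-size circuit families, the statement above can be written as the following sketch (a restatement, not an additional claim):

```latex
% Karp-Lipton theorem, restated with standard complexity-class symbols.
\[
  \mathrm{SAT} \in \mathrm{P/poly}
  \ \Longleftrightarrow\ \mathrm{NP} \subseteq \mathrm{P/poly}
  \ \Longrightarrow\ \mathrm{PH} = \Sigma_2^{\mathrm{p}} .
\]
```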

Parallel algorithms

Proving that a program P has some property is straightforward if the actions inside the program are uninterruptible. However, when actions can be interrupted, Lipton showed that a reduction can be applied: sequences of interruptible operations are treated as a single, larger uninterruptible action, and the reduced program has a given property if and only if the original program has the property. [2] Even under these relaxed conditions, properties can therefore be proven for a program P, so correctness proofs of a parallel system can often be greatly simplified.
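As an informal illustration of the idea (a minimal Python sketch, not Lipton's formal reduction system; the deposit example and the names are chosen for illustration), a lock-protected sequence of interruptible steps can be reasoned about as if it were one atomic action, since the lock acquire behaves as a "right mover" and the release as a "left mover":

```python
import threading

# Sketch only: the acquire, the interruptible read-modify-write, and the
# release together behave like one uninterruptible action, so a proof may
# treat the whole block as atomic.
counter = 0
lock = threading.Lock()

def deposit(amount: int) -> None:
    global counter
    with lock:                  # acquire: right mover
        tmp = counter           # interruptible steps, made safe by the lock
        counter = tmp + amount  # release at the end of "with": left mover

def run() -> None:
    threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Reasoning as if each deposit were a single atomic step predicts 100.
    assert counter == 100

if __name__ == "__main__":
    run()
    print(counter)
```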

Database security

Lipton studied and created database security models concerning how and when to restrict the queries made by users of a database so that private or secret information will not be leaked. [3] For example, querying a database of campaign donations could allow the user to discover the individual donations to political candidates or organizations. If given access to averages of data and unrestricted query access, a user could exploit the properties of those averages to gain illicit information. Such queries are considered to have a large "overlap", which creates the insecurity. By bounding the "overlap" and the number of queries, a secure database can be achieved.
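A minimal sketch of the overlap idea, assuming a toy auditor that records the set of rows each answered aggregate query touched and refuses any new query whose overlap with an earlier query exceeds a fixed bound (the class name, threshold, and row ids are hypothetical, not taken from Lipton's model):

```python
# Toy overlap-bounded auditing: each aggregate query is modeled as the set of
# record ids it touches; a query is refused if it shares too many records
# with a previously answered query.
MAX_OVERLAP = 2  # illustrative bound, not a constant from the paper

class QueryAuditor:
    def __init__(self) -> None:
        self.answered: list[set[int]] = []

    def allow(self, query: set[int]) -> bool:
        if any(len(query & previous) > MAX_OVERLAP for previous in self.answered):
            return False          # too much overlap: refuse the query
        self.answered.append(query)
        return True               # safe to answer

auditor = QueryAuditor()
print(auditor.allow({1, 2, 3, 4}))  # True: the first query is always allowed
print(auditor.allow({1, 2, 3, 5}))  # False: overlap of 3 exceeds the bound
```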

Online scheduling

Richard Lipton, together with Andrew Tomkins, introduced a randomized online interval scheduling algorithm whose 2-size version is strongly competitive and whose k-size version achieves a logarithmic competitive ratio; they also demonstrated a logarithmic theoretical lower bound. [4] The algorithm uses a private coin for randomization and a "virtual" choice to fool a medium adversary.

When presented with an interval, the algorithm must immediately decide whether or not to include it in the schedule. The 2-size virtual algorithm is described by how it reacts to 1-intervals and k-intervals presented by the adversary.

This 2-size algorithm is shown to be strongly competitive. The generalized k-size algorithm, which is similar to the 2-size algorithm, is then shown to achieve a logarithmic competitive ratio.
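For context only, the following Python sketch shows the bare online setting: intervals arrive one at a time and must be accepted or rejected on the spot, with a coin flip standing in for the randomized choice. It is not the Lipton–Tomkins 2-size virtual algorithm, and the acceptance probability p is an arbitrary illustrative parameter.

```python
import random

def online_schedule(intervals, p=0.5, seed=0):
    """Accept each non-conflicting interval with probability p (sketch only)."""
    rng = random.Random(seed)
    accepted = []
    for start, end in intervals:
        conflict = any(start < e and s < end for s, e in accepted)
        if not conflict and rng.random() < p:
            accepted.append((start, end))   # irrevocable online decision
    return accepted

print(online_schedule([(0, 1), (0, 2), (1, 2), (3, 4)]))
```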

Program checking

Lipton showed that randomized testing can be provably useful, provided the problem satisfies certain properties. [5] Proving the correctness of a program is one of the most important problems in computer science. Typically in randomized testing, attaining a 1/1000 chance of error requires running 1000 tests. However, Lipton showed that if a problem has "easy" sub-parts, repeated black-box testing can attain an error rate of c^r, with c a constant less than 1 and r the number of tests. Therefore, the probability of error goes to zero exponentially fast as r grows.

This technique is useful to check the correctness of many types of problems.
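A standard concrete instance of this style of checking is a Freivalds-style test of a matrix-multiplication program: each random trial catches a wrong answer with probability at least 1/2, so r independent trials drive the error below (1/2)^r. The sketch below illustrates repeated randomized black-box testing in general, not necessarily the exact construction in Lipton's paper.

```python
import random

def freivalds(A, B, C, r=20, seed=0):
    """Check whether C == A*B using r random trials; error <= (1/2)**r."""
    rng = random.Random(seed)
    n = len(A)
    for _ in range(r):
        x = [rng.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False   # definitely C != A*B
    return True            # C == A*B with probability >= 1 - (1/2)**r

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))  # True: correct product
print(freivalds(A, B, [[19, 22], [43, 51]]))  # False (with high probability)
```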

Games with simple strategies

In the area of game theory, more specifically of non-cooperative games, Lipton together with E. Markakis and A. Mehta proved [6] the existence of epsilon-equilibrium strategies with support logarithmic in the number of pure strategies. Furthermore, the payoff of such strategies can epsilon-approximate the payoffs of exact Nash equilibria. The limited (logarithmic) size of the support provides a natural quasi-polynomial algorithm to compute epsilon-equilibria.
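Written out for a two-player game with n pure strategies per player and payoffs scaled to [0, 1], the result can be summarized as follows (a sketch of the bound with the hidden constant omitted):

```latex
% Assumes \usepackage{amsmath}.
\[
  \text{there is an } \epsilon\text{-equilibrium } (x, y) \text{ with }
  |\operatorname{supp}(x)|,\ |\operatorname{supp}(y)|
  = O\!\left(\frac{\log n}{\epsilon^{2}}\right),
\]
\[
  \text{so enumerating all supports of that size yields an algorithm running in }
  n^{O(\log n / \epsilon^{2})} \text{ time, i.e. quasi-polynomial in } n .
\]
```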

Query size estimation

Lipton and J. Naughton presented an adaptive random sampling algorithm for database querying [7] [8] that is applicable to any query whose answers can be partitioned into disjoint subsets. Unlike most sampling estimation algorithms, which statically determine the number of samples needed, their algorithm decides the number of samples based on the sizes of the samples obtained, and tends to keep the running time constant (as opposed to linear in the number of samples).
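A hedged Python sketch of the adaptive idea, assuming the answer is split into partitions whose sizes can be sampled uniformly at random; the stopping threshold and the names are illustrative and not the constants of the Lipton–Naughton analysis:

```python
import random

def estimate_query_size(partition_sizes, threshold, seed=0):
    """Sample random partitions until the accumulated answer size crosses a
    threshold, so the number of samples adapts to the data rather than being
    fixed in advance; then scale the sample mean up to all partitions."""
    rng = random.Random(seed)
    n = len(partition_sizes)
    total, samples = 0, 0
    while total < threshold:
        total += rng.choice(partition_sizes)   # size of one sampled partition
        samples += 1
    return n * total / samples

sizes = [3, 0, 7, 1, 4, 0, 2, 5]                 # per-partition answer sizes
print(estimate_query_size(sizes, threshold=20))  # estimate of sum(sizes) = 22
```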

Formal verification of programs

DeMillo, Lipton and Perlis [9] criticized the idea of formal verification of programs, arguing that formal proofs of program correctness lack the social processes that give mathematical proofs their credibility, and are therefore unlikely to play the central role in software engineering that proofs play in mathematics.

Multi-party protocols

Chandra, Furst and Lipton [10] generalized the notion of two-party communication protocols to multi-party communication protocols. They proposed a model in which a collection of processes P_1, ..., P_k have access to a set of integers a_1, ..., a_k, except that each process P_i is denied access to a_i. These processes are allowed to communicate in order to arrive at a consensus on a predicate. They studied this model's communication complexity, defined as the number of bits broadcast among all the processes. As an example, they studied the complexity of a k-party protocol for Exactly-N (do all the a_i sum up to N?), and obtained a lower bound using the tiling method. They further applied this model to study general branching programs and obtained a time lower bound for constant-space branching programs that compute Exactly-N.
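In symbols, the setup (later known as the number-on-the-forehead model) can be summarized as follows; the display restates the description above rather than adding new results:

```latex
% Assumes \usepackage{amsmath}.
\[
  \text{parties } P_1, \dots, P_k \text{ hold } a_1, \dots, a_k, \qquad
  P_i \text{ sees } \{\, a_j : j \neq i \,\},
\]
\[
  \text{cost} = \text{total number of bits broadcast}, \qquad
  \text{Exactly-}N = 1 \iff a_1 + a_2 + \cdots + a_k = N .
\]
```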

Time/space SAT tradeoff

We have no way to prove that the Boolean satisfiability problem (often abbreviated as SAT), which is NP-complete, requires exponential (or at least super-polynomial) time (this is the famous P versus NP problem), or linear (or at least super-logarithmic) space to solve. However, in the context of space–time tradeoffs, one can prove that SAT cannot be computed if we apply constraints to both time and space. L. Fortnow, Lipton, D. van Melkebeek, and A. Viglas [11] proved that SAT cannot be computed by a Turing machine that takes at most O(n^1.1) steps and at most O(n^0.1) cells of its read–write tapes.
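In the standard DTISP(t, s) notation for simultaneous time and space bounds, the stated result reads:

```latex
% DTISP(t, s): problems decidable in time O(t) and space O(s) simultaneously.
\[
  \mathrm{SAT} \notin \mathrm{DTISP}\bigl(n^{1.1},\, n^{0.1}\bigr).
\]
```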

Awards and honors

Lipton was elected a member of the National Academy of Engineering in 1999 for the application of computer science theory to practice. [12] He received the Knuth Prize in 2014. [13]

Notes

  1. Richard Lipton at the Mathematics Genealogy Project
  2. Lipton, R. (1975). "Reduction: a method of proving properties of parallel programs". Communications of the ACM 18(12).
  3. Lipton, R. (1979). "Secure databases: protection against user influence". ACM Transactions on Database Systems 4(1). Archived 2010-06-17 at the Wayback Machine.
  4. Lipton, R. (1994). "Online interval scheduling". Symposium on Discrete Algorithms. pp. 302–311. CiteSeerX 10.1.1.44.4548.
  5. Lipton, R. (1991). "New Directions in Testing". DIMACS Distributed Computing and Cryptography, Vol. 2, p. 191.
  6. Lipton, Richard; Markakis, Evangelos; Mehta, Aranyak (2007). "Playing Games with Simple Strategies". EC '03: Proceedings of the 4th ACM Conference on Electronic Commerce. ACM.
  7. Lipton, Richard J.; Naughton, Jeffrey F. (1990). "Query Size Estimation By Adaptive Sampling". PODS '90: Proceedings of the Ninth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems.
  8. Lipton, Richard J.; Naughton, Jeffrey F.; Schneider, Donovan A. (1990). SIGMOD '90: Proceedings of the 1990 ACM SIGMOD International Conference on Management of Data.
  9. DeMillo, Richard A.; Lipton, Richard J.; Perlis, Alan J. (1979). "Social processes and proofs of theorems and programs". Communications of the ACM 22(5).
  10. Chandra, A. K.; Furst, M. L.; Lipton, R. J. (1983). "Multi-Party Protocols". STOC '83. pp. 94–99. ACM.
  11. Fortnow, L.; Lipton, R.; van Melkebeek, D.; Viglas, A. (2005). "Time-space lower bounds for satisfiability". Journal of the ACM 52: 835–865. Preliminary version in CCC 2000.
  12. "Dr. Richard J. Lipton". NAE Website. Retrieved 2021-09-18.
  13. "ACM Awards Knuth Prize to Pioneer for Advances in Algorithms and Complexity Theory". Association for Computing Machinery. September 15, 2014. Archived from the original on September 20, 2014.

Related Research Articles

In computational complexity theory, the complexity class #P (pronounced "sharp P" or, sometimes "number P" or "hash P") is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.

Interactive proof system

In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover is assumed to possess unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.

In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant.

Time complexity

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

In theoretical computer science, the subgraph isomorphism problem is a computational task in which two graphs G and H are given as input, and one must determine whether G contains a subgraph that is isomorphic to H. Subgraph isomorphism is a generalization of both the maximum clique problem and the problem of testing whether a graph contains a Hamiltonian cycle, and is therefore NP-complete. However certain other cases of subgraph isomorphism may be solved in polynomial time.

A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of the random bits; thus either the running time, or the output, or both are random variables.

Complexity class

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof, as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way.

In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.

In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.

The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic.

In complexity theory, the Karp–Lipton theorem states that if the Boolean satisfiability problem (SAT) can be solved by Boolean circuits with a polynomial number of logic gates, then the polynomial hierarchy collapses to its second level.

In computational complexity theory, the unique games conjecture is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time, but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.

MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), which is a decision problem considered in complexity theory. It is defined as: given a Boolean formula in conjunctive normal form with at most three literals per clause, find an assignment that satisfies the largest number of clauses.

In mathematics, polynomial identity testing (PIT) is the problem of efficiently determining whether two multivariate polynomials are identical. More formally, a PIT algorithm is given an arithmetic circuit that computes a polynomial p in a field, and decides whether p is the zero polynomial. Determining the computational complexity required for polynomial identity testing, in particular finding deterministic algorithms for PIT, is one of the most important open problems in algebraic computing complexity.

Decision tree model

In computational complexity theory, the decision tree model is the model of computation in which an algorithm can be considered to be a decision tree, i.e. a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.

NP-completeness

In computational complexity theory, a problem is NP-complete when:

  1. It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".
  2. When the answer is "yes", this can be demonstrated through the existence of a short solution.
  3. The correctness of each solution can be verified quickly and a brute-force search algorithm can find a solution by trying all possible solutions.
  4. The problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified.

A locally decodable code (LDC) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by only examining a small number of bits of a possibly corrupted codeword. This property could be useful, say, in a context where information is being transmitted over a noisy channel, and only a small subset of the data is required at a particular time and there is no need to decode the entire message at once. Locally decodable codes are not a subset of locally testable codes, though there is some overlap between the two.

Planted clique

In computational complexity theory, a planted clique or hidden clique in an undirected graph is a clique formed from another graph by selecting a subset of vertices and adding edges between each pair of vertices in the subset. The planted clique problem is the algorithmic problem of distinguishing random graphs from graphs that have a planted clique. This is a variation of the clique problem; it may be solved in quasi-polynomial time but is conjectured not to be solvable in polynomial time for intermediate values of the clique size. The conjecture that no polynomial time solution exists is called the planted clique conjecture; it has been used as a computational hardness assumption.

The welfare maximization problem is an optimization problem studied in economics and computer science. Its goal is to partition a set of items among agents with different utility functions, such that the welfare – defined as the sum of the agents' utilities – is as high as possible. In other words, the goal is to find an item allocation satisfying the utilitarian rule.