Ryan O'Donnell (computer scientist)

Ryan O'Donnell
Alma mater University of Toronto (B.Sc.), Massachusetts Institute of Technology (Ph.D.)
Known for Analysis of Boolean functions [pub 1]
Scientific career
Fields Analysis of Boolean functions, Complexity theory, Computational learning theory, Hardness of approximation, Quantum computing
Thesis Computational applications of noise sensitivity  (2003)
Doctoral advisor Madhu Sudan
Website https://www.cs.cmu.edu/~odonnell/

Ryan O'Donnell is a Canadian theoretical computer scientist and a professor at Carnegie Mellon University. He is known for his work on the analysis of Boolean functions and for authoring the standard textbook on the subject. [pub 1] He is also known for his work on computational learning theory, hardness of approximation, property testing, quantum computation, and quantum information.


Education

O'Donnell completed his B.Sc. in Mathematics and Computer Science at the University of Toronto. [1] He then earned his Ph.D. at the Massachusetts Institute of Technology (MIT) in 2003, advised by Madhu Sudan. [2]

Research

O'Donnell proved that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture. The proof comes from two papers: one in 2004 with Subhash Khot, Guy Kindler, and Elchanan Mossel, which reduced the statement to the Majority Is Stablest conjecture in the analysis of Boolean functions, [pub 2] and one in 2005 with Elchanan Mossel and Krzysztof Oleszkiewicz, which proved that conjecture. [pub 3] He later wrote an influential textbook on the analysis of Boolean functions. [pub 1]
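For context, the Goemans–Williamson algorithm solves a semidefinite relaxation of MAX-CUT and then partitions the vertices by cutting the resulting unit vectors with a random hyperplane; its approximation ratio is the constant α ≈ 0.878 whose optimality (under the unique games conjecture) the work above established. The following is a minimal sketch, not the full algorithm: it estimates α numerically from its defining minimization and shows only the rounding step, with the SDP solution vectors assumed to be given (all names are illustrative).

```python
import math
import random

# alpha_GW is the minimum over theta in (0, pi] of the ratio between the
# expected cut contribution of an edge at angle theta, theta/pi, and its
# SDP contribution, (1 - cos theta)/2.  Estimate it on a fine grid.
alpha = min(
    (theta / math.pi) / ((1 - math.cos(theta)) / 2)
    for theta in (k * math.pi / 100000 for k in range(1, 100001))
)
print(round(alpha, 4))  # about 0.8786

def hyperplane_round(vectors, rng):
    """Round an SDP solution: assign each vertex to a side according to
    the sign of its projection onto a random Gaussian direction.

    `vectors` maps each vertex to a unit vector (assumed to come from a
    solved MAX-CUT semidefinite relaxation, which is not shown here).
    """
    dim = len(next(iter(vectors.values())))
    g = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # random hyperplane normal
    return {v: sum(gi * vi for gi, vi in zip(g, vec)) >= 0.0
            for v, vec in vectors.items()}
```

In expectation, an edge whose endpoint vectors meet at angle θ is cut with probability θ/π, which is where the ratio minimized above comes from.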

O'Donnell's other notable contributions include participation in the first Polymath project, Polymath1, which developed a combinatorial proof of the density Hales–Jewett theorem, [3] [pub 4] improved algorithms for problems in computational learning theory, [pub 5] and improved algorithms for quantum state tomography. [pub 6]

Recognition

He received the National Science Foundation CAREER Award in 2008 and a Sloan Research Fellowship in 2009. He gave an invited lecture at the International Congress of Mathematicians in 2014.

Service

O'Donnell served as the editor-in-chief for the journal ACM Transactions on Computation Theory from 2019 to 2023 and was an editor of the SIAM Journal on Discrete Mathematics from 2012 to 2017. He is on the scientific advisory board of the Simons Institute for the Theory of Computing [4] and on the scientific board of the Electronic Colloquium on Computational Complexity. [5]

O'Donnell operates a YouTube channel with more than 10,200 subscribers and 680,000 views as of December 2023. [6] There, he posts mathematics and computer science lectures on topics such as complexity theory, spectral graph theory, and the analysis of Boolean functions, as well as recordings of his courses at Carnegie Mellon University. He has produced several lecture series, including "CS Theory Toolkit", which surveys mathematical tools used in theoretical computer science.

Selected publications

  1. O'Donnell, Ryan (2014). Analysis of Boolean Functions. New York: Cambridge University Press. arXiv:2105.10386. ISBN 978-1-107-03832-5.
  2. Khot, Subhash; Kindler, Guy; Mossel, Elchanan; O'Donnell, Ryan (2007). "Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs?". SIAM Journal on Computing. 37 (1): 319–357. doi:10.1137/S0097539705447372. ISSN 0097-5397. S2CID 2090495.
  3. Mossel, Elchanan; O'Donnell, Ryan; Oleszkiewicz, Krzysztof (2010). "Noise stability of functions with low influences: Invariance and optimality". Annals of Mathematics. 171 (1): 295–341. arXiv:math/0503503. doi:10.4007/annals.2010.171.295. ISSN 0003-486X.
  4. Polymath, D.H.J. (2012). "A new proof of the density Hales–Jewett theorem". Annals of Mathematics. 175 (3): 1283–1327. doi:10.4007/annals.2012.175.3.6. ISSN 0003-486X.
  5. Klivans, A.R.; O'Donnell, R.; Servedio, R.A. (2002). "Learning intersections and thresholds of halfspaces". Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2002). IEEE. pp. 177–186. doi:10.1109/SFCS.2002.1181894. ISBN 978-0-7695-1822-0. S2CID 1664758.
  6. O'Donnell, Ryan; Wright, John (2017). "Efficient quantum tomography II". Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017). ACM. pp. 962–974. doi:10.1145/3055399.3055454. ISBN 978-1-4503-4528-6.

References