Ryan O'Donnell (computer scientist)

Ryan O'Donnell
Alma mater University of Toronto (BSc), Massachusetts Institute of Technology (PhD)
Known for Analysis of Boolean functions [pub 1]
Scientific career
Fields Analysis of Boolean functions, Complexity theory, Computational learning theory, Hardness of approximation, Quantum computing
Thesis Computational applications of noise sensitivity (2003)
Doctoral advisor Madhu Sudan
Website https://www.cs.cmu.edu/~odonnell/

Ryan O'Donnell is a Canadian theoretical computer scientist and a professor at Carnegie Mellon University. He is known for his work on the analysis of Boolean functions and for authoring the textbook on this subject. [pub 1] He is also known for his work on computational learning theory, hardness of approximation, property testing, quantum computation and quantum information.

Education

O'Donnell completed his B.Sc. in Mathematics and Computer Science at the University of Toronto. [1] He then completed his Ph.D. at the Massachusetts Institute of Technology (MIT) in 2003, advised by Madhu Sudan. [2]

Research

O'Donnell proved that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture. The proof follows from two papers: one from 2004, with Subhash Khot, Guy Kindler, and Elchanan Mossel, which reduced the statement to the Majority Is Stablest conjecture in the analysis of Boolean functions, [pub 2] and one from 2005, with Elchanan Mossel and Krzysztof Oleszkiewicz, which proved that conjecture. [pub 3] He later wrote an influential textbook on the analysis of Boolean functions. [pub 1]
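
The Goemans–Williamson algorithm solves a semidefinite relaxation of MAX-CUT and rounds the resulting unit vectors with a random hyperplane, cutting roughly 0.878 of the optimal weight in expectation; O'Donnell's result says that, assuming the unique games conjecture, achieving a better approximation ratio is NP-hard. The following Python sketch is only an illustration of that algorithm, assuming the numpy and cvxpy libraries (the function name and example graph are hypothetical):

    # Illustrative sketch of the Goemans-Williamson algorithm for MAX-CUT.
    import numpy as np
    import cvxpy as cp

    def goemans_williamson_maxcut(W, seed=0):
        """W: symmetric nonnegative weight matrix of an undirected graph (zero diagonal)."""
        n = W.shape[0]
        # SDP relaxation: maximize (1/4) * sum_ij W_ij * (1 - X_ij), with X PSD, diag(X) = 1.
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]
        cp.Problem(cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X))), constraints).solve()

        # Factor X = V V^T to recover unit vectors v_1, ..., v_n (one per vertex).
        eigvals, eigvecs = np.linalg.eigh(X.value)
        vecs = eigvecs * np.sqrt(np.clip(eigvals, 0, None))

        # Random hyperplane rounding: the sign of <v_i, r> decides each vertex's side.
        r = np.random.default_rng(seed).standard_normal(n)
        side = np.sign(vecs @ r)
        cut_value = 0.25 * np.sum(W * (1 - np.outer(side, side)))
        return side, cut_value

    # Example: a 5-cycle with unit edge weights (the optimal cut has value 4).
    W = np.zeros((5, 5))
    for i in range(5):
        W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
    print(goemans_williamson_maxcut(W))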

O'Donnell's other notable contributions include participation in the first Polymath project, Polymath1, which developed a combinatorial proof of the density Hales–Jewett theorem, [3] [pub 4] improved algorithms for problems in computational learning theory, [pub 5] and improved algorithms for the tomography of quantum states. [pub 6]

Recognition

He received the National Science Foundation CAREER Award in 2008 and a Sloan Research Fellowship in 2009. He gave an invited lecture at the International Congress of Mathematicians in 2014.

Service

O'Donnell served as the editor-in-chief for the journal ACM Transactions on Computation Theory from 2019 to 2023 and was an editor of the SIAM Journal on Discrete Mathematics from 2012 to 2017. He is on the scientific advisory board of the Simons Institute for the Theory of Computing [4] and on the scientific board of the Electronic Colloquium on Computational Complexity. [5]

O'Donnell runs a YouTube channel, which had more than 10,200 subscribers and 680,000 views as of December 2023. [6] There he posts mathematics and computer science lectures on topics such as complexity theory, spectral graph theory, and the analysis of Boolean functions, and he uploads recorded lectures from his courses at Carnegie Mellon University. He has also created several dedicated lecture series, such as "CS Theory Toolkit", which surveys mathematical tools useful in theoretical computer science.

Selected publications

  1. O'Donnell, Ryan (2014). Analysis of Boolean Functions. New York: Cambridge University Press. arXiv:2105.10386. ISBN 978-1-107-03832-5.
  2. Khot, Subhash; Kindler, Guy; Mossel, Elchanan; O'Donnell, Ryan (2007). "Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs?". SIAM Journal on Computing. 37 (1): 319–357. doi:10.1137/S0097539705447372. ISSN 0097-5397. S2CID 2090495.
  3. Mossel, Elchanan; O'Donnell, Ryan; Oleszkiewicz, Krzysztof (2010). "Noise stability of functions with low influences: Invariance and optimality". Annals of Mathematics. 171 (1): 295–341. doi:10.4007/annals.2010.171.295. ISSN 0003-486X.
  4. Polymath, D.H.J. (2012). "A new proof of the density Hales–Jewett theorem". Annals of Mathematics. 175 (3): 1283–1327. doi:10.4007/annals.2012.175.3.6. ISSN 0003-486X.
  5. Klivans, A. R.; O'Donnell, R.; Servedio, R. A. (2002). "Learning intersections and thresholds of halfspaces". Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2002). IEEE. pp. 177–186. doi:10.1109/SFCS.2002.1181894. ISBN 978-0-7695-1822-0. S2CID 1664758.
  6. O'Donnell, Ryan; Wright, John (2017). "Efficient quantum tomography II". Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2017). ACM. pp. 962–974. doi:10.1145/3055399.3055454. ISBN 978-1-4503-4528-6.

Related Research Articles

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that allow solving the problem.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.


Stephen Arthur Cook is an American-Canadian computer scientist and mathematician who has made significant contributions to the fields of complexity theory and proof complexity. He is a university professor emeritus at the University of Toronto, Department of Computer Science and Department of Mathematics.

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.


Juris Hartmanis was a Latvian-born American computer scientist and computational theorist who, with Richard E. Stearns, received the 1993 ACM Turing Award "in recognition of their seminal paper which established the foundations for the field of computational complexity theory".


Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, formal language theory, the lambda calculus and type theory.

The Gödel Prize is an annual prize for outstanding papers in the area of theoretical computer science, given jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery Special Interest Group on Algorithms and Computational Theory. The award is named in honor of Kurt Gödel. Gödel's connection to theoretical computer science is that he was the first to mention the "P versus NP" question, in a 1956 letter to John von Neumann in which Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time.

In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.

In computational complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.

In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions.
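
As a concrete illustration (added here as a sketch, not part of the article; the function name and example graph are hypothetical), the cut-set determined by a partition can be read off an edge list directly:

    # Illustrative sketch: the cut-set determined by a vertex partition (S, V \ S)
    # is the set of edges with exactly one endpoint in S.
    def cut_set(edges, S):
        S = set(S)
        return [(u, v) for (u, v) in edges if (u in S) != (v in S)]

    # Example: the 4-cycle 0-1-2-3-0, partitioned into {0, 1} and {2, 3}.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(cut_set(edges, {0, 1}))   # prints [(1, 2), (3, 0)], the two crossing edges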

In computational complexity theory, the unique games conjecture is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time, but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.

In computational complexity theory, the PCP theorem states that every decision problem in the NP complexity class has probabilistically checkable proofs of constant query complexity and logarithmic randomness complexity.


The following tables list the computational complexity of various algorithms for common mathematical operations.

Daniel Alan Spielman has been a professor of applied mathematics and computer science at Yale University since 2006. As of 2018, he is the Sterling Professor of Computer Science at Yale. He is also the Co-Director of the Yale Institute for Network Science, since its founding, and chair of the newly established Department of Statistics and Data Science.

In theoretical computer science, the Aanderaa–Karp–Rosenberg conjecture is a group of related conjectures about the number of questions of the form "Is there an edge between vertex u and vertex v?" that have to be answered to determine whether or not an undirected graph has a particular property such as planarity or bipartiteness. They are named after Stål Aanderaa, Richard M. Karp, and Arnold L. Rosenberg. According to the conjecture, for a wide class of properties, no algorithm can guarantee that it will be able to skip any questions: any algorithm for determining whether the graph has the property, no matter how clever, might need to examine every pair of vertices before it can give its answer. A property satisfying this conjecture is called evasive.

In computational complexity the decision tree model is the model of computation in which an algorithm is considered to be basically a decision tree, i.e., a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.
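
A minimal sketch of this query model (an illustration added here, not drawn from the article; the function and example are hypothetical) counts how many edge probes are needed to decide whether a graph has any edge at all, a property that is evasive in the worst case:

    # Illustrative sketch of the edge-query (decision tree) model: an algorithm may
    # only ask "is there an edge between u and v?", and its cost is the number of
    # such questions it asks.
    from itertools import combinations

    def has_an_edge(n, query):
        """query(u, v) answers one edge probe; returns (answer, number of probes used)."""
        probes = 0
        for u, v in combinations(range(n), 2):
            probes += 1
            if query(u, v):
                return True, probes
        return False, probes

    # Worst case: the empty graph on 4 vertices forces all n*(n-1)/2 = 6 probes.
    print(has_an_edge(4, lambda u, v: False))   # prints (False, 6)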


In computational complexity theory, a problem is NP-complete when:

  1. It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".
  2. When the answer is "yes", this can be demonstrated through the existence of a short solution.
  3. The correctness of each solution can be verified quickly (a minimal verifier for Boolean satisfiability is sketched after this list), and a brute-force search algorithm can find a solution by trying all possible solutions.
  4. The problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems to which solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified.
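
For instance, a certificate for Boolean satisfiability, namely a satisfying assignment, can be checked in time linear in the formula size. The sketch below is only an illustration of such a verifier (the function name and example formula are hypothetical):

    # Illustrative sketch: polynomial-time verification of a "yes" certificate for
    # Boolean satisfiability (SAT). The formula is given in conjunctive normal form
    # as a list of clauses; each clause is a list of literals, where a positive
    # integer k means variable x_k and a negative integer -k means "not x_k".
    def verify_sat_certificate(clauses, assignment):
        """Return True iff the assignment (dict: variable -> bool) satisfies every clause."""
        def literal_true(lit):
            value = assignment[abs(lit)]
            return value if lit > 0 else not value
        return all(any(literal_true(lit) for lit in clause) for clause in clauses)

    # Example: (x1 or not x2) and (x2 or x3), checked against x1=True, x2=True, x3=False.
    clauses = [[1, -2], [2, 3]]
    print(verify_sat_certificate(clauses, {1: True, 2: True, 3: False}))   # prints True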

In mathematics and theoretical computer science, analysis of Boolean functions is the study of real-valued functions on {0,1}^n or {-1,1}^n from a spectral perspective. The functions studied are often, but not always, Boolean-valued, making them Boolean functions. The area has found many applications in combinatorics, social choice theory, random graphs, and theoretical computer science, especially in hardness of approximation, property testing, and PAC learning.
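
As a small illustration of the spectral perspective (added here as a sketch, not taken from the article), the Fourier coefficient of f on a set S of coordinates is the correlation of f with the parity of those coordinates; the hypothetical code below computes these coefficients for the 3-bit majority function by direct averaging:

    # Illustrative sketch: Fourier coefficients of a Boolean function f: {-1,1}^n -> {-1,1},
    # computed by direct averaging: hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ].
    from itertools import product, combinations
    from math import prod

    def fourier_coefficients(f, n):
        points = list(product([-1, 1], repeat=n))
        return {
            S: sum(f(x) * prod(x[i] for i in S) for x in points) / len(points)
            for k in range(n + 1)
            for S in combinations(range(n), k)
        }

    # Example: the 3-bit majority function. Its Fourier weight sits on the three
    # singleton sets (coefficient 1/2 each) and on {0, 1, 2} (coefficient -1/2).
    maj3 = lambda x: 1 if sum(x) > 0 else -1
    print(fourier_coefficients(maj3, 3))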

Elchanan Mossel is a professor of mathematics at the Massachusetts Institute of Technology. His primary research fields are probability theory, combinatorics, and statistical inference.

Hao Huang is a mathematician known for solving the sensitivity conjecture. Huang is currently an associate professor in the mathematics department at National University of Singapore.

References