Christopher Umans | |
---|---|
Nationality | American |
Alma mater | Williams College, University of California, Berkeley |
Known for | Computational complexity, algorithms, hardness of approximation, matrix multiplication |
Scientific career | |
Fields | Computer Science |
Institutions | California Institute of Technology |
Doctoral advisor | Christos Papadimitriou |
Christopher Umans is a professor of Computer Science in the Computing and Mathematical Sciences Department at the California Institute of Technology. He is known for work on algorithms, computational complexity, algebraic complexity, and hardness of approximation.
Umans studied at Williams College, where he completed a BA in Mathematics and Computer Science in 1996. He then received a PhD in Computer Science from the University of California, Berkeley in 2000 under Christos Papadimitriou. Following his PhD, he was a postdoctoral researcher at Microsoft Research until joining Caltech in 2002.
Umans' research centers on algorithms and computational complexity. He has made notable contributions to varied areas within this space, including pseudorandomness, expander graphs, and algorithms for matrix multiplication. A notable example is the group-theoretic approach to matrix multiplication that he developed with Henry Cohn. [1] [2] [3] [4] [5] [6] [7]
In 2008, Umans and his student Dave Buchfuhrer settled a 1979 conjecture on the complexity of unbounded Boolean formula minimization; the result won a best paper award at ICALP. [8]
Umans received an NSF CAREER award in 2004 and an Alfred P. Sloan Fellowship in 2005. [9] Additionally, his work has received Best Paper awards at the International Colloquium on Automata, Languages, and Programming (ICALP) and the IEEE Conference on Computational Complexity (CCC).
Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation.
The Gödel Prize is an annual prize for outstanding papers in the area of theoretical computer science, given jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory (ACM SIGACT). The award is named in honor of Kurt Gödel. Gödel's connection to theoretical computer science is that he was the first to mention the "P versus NP" question, in a 1956 letter to John von Neumann in which he asked whether a certain NP-complete problem could be solved in quadratic or linear time.
ICALP, the International Colloquium on Automata, Languages, and Programming, is an academic conference organized annually by the European Association for Theoretical Computer Science and held in different locations around Europe. Like most theoretical computer science conferences, its contributions are strongly peer-reviewed. Its proceedings were long published by Springer in the Lecture Notes in Computer Science series; since 2016 they have instead appeared in the Leibniz International Proceedings in Informatics (LIPIcs).
Maximum cardinality matching is a fundamental problem in graph theory. We are given a graph G, and the goal is to find a matching containing as many edges as possible; that is, a maximum cardinality subset of the edges such that each vertex is incident to at most one edge of the subset. As each edge covers exactly two vertices, this problem is equivalent to finding a matching that covers as many vertices as possible.
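For the bipartite special case, a maximum cardinality matching can be found by repeated augmenting-path searches (Kuhn's algorithm); the general case needs the more involved blossom algorithm. Below is a minimal self-contained Python sketch of the bipartite version; the function name and graph encoding are illustrative choices, not a fixed API.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum cardinality matching in a bipartite graph.

    adj[u] lists the right-side vertices adjacent to left vertex u.
    Each unmatched left vertex triggers a search for an augmenting
    path; every success grows the matching by one edge. O(V * E) time.
    """
    match_right = [-1] * n_right  # right vertex -> its matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # Use v if it is free, or if its current partner can move.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size, match_right

# Left vertices {0,1,2}, right vertices {0,1}, edges 0-0, 0-1, 1-0, 2-1.
print(max_bipartite_matching([[0, 1], [0], [1]], 3, 2))  # matching of size 2
```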
The computational complexity of common mathematical operations (arithmetic, polynomial and matrix operations, number-theoretic computations) is conventionally summarized in tables listing, for each operation, the known algorithms and their running times.
Logic optimization is the process of finding an equivalent representation of a specified logic circuit under one or more specified constraints. It is part of logic synthesis, which is applied in digital electronics and integrated circuit design.
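As a toy illustration of the simplest case, two-level minimization of a Boolean expression, here is a short sketch assuming the SymPy library is available:

```python
from sympy.abc import x, y, z
from sympy.logic import simplify_logic

# (x AND y) OR (x AND NOT y) collapses to x: the y literal is redundant.
print(simplify_logic((x & y) | (x & ~y)))   # -> x

# A three-minterm function reduced to a minimal sum-of-products form.
expr = (x & y & ~z) | (x & y & z) | (x & ~y & ~z)
print(simplify_logic(expr, form='dnf'))     # -> (x & y) | (x & ~z)
```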
In abstract algebra, the triple product property is a condition satisfied by certain triples of subsets of a group, shown below; it is the foundation of the Cohn–Umans group-theoretic approach to matrix multiplication.
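Concretely, in the formulation used by Cohn and Umans (sketched here from the standard definition; the quotient convention varies between presentations), subsets S, T, U of a group G satisfy the triple product property when quotients drawn from the three sets can multiply to the identity only in the trivial way:

```latex
% For all s, s' in S, t, t' in T, and u, u' in U:
\[
  s'^{-1} s \; t'^{-1} t \; u'^{-1} u = 1
  \quad \Longrightarrow \quad
  s = s', \; t = t', \; u = u'.
\]
```

Such a triple allows matrix multiplication to be embedded into the group algebra of G, which is the starting point of the group-theoretic approach.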
Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors.
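The path-counting connection is direct: entry (i, j) of the k-th power of a graph's adjacency matrix counts the walks of length k from i to j, so faster matrix multiplication immediately speeds up walk counting. A small sketch assuming NumPy:

```python
import numpy as np

# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

# (A^k)[i, j] counts walks of length k from vertex i to vertex j.
A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 1])   # -> 4 walks of length 3 from vertex 0 to vertex 1
```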
In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the fastest algorithm for matrix multiplication is of major practical relevance.
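To make the idea concrete, here is a compact Python sketch of Strassen's algorithm, the first method to beat the cubic bound; it assumes square matrices whose dimension is a power of two and uses NumPy only for the block arithmetic:

```python
import numpy as np

def strassen(A, B):
    """Strassen's scheme: 7 recursive products instead of 8,
    giving O(n^log2(7)) = O(n^2.807...) arithmetic operations."""
    n = A.shape[0]
    if n <= 64:                  # small blocks: plain multiplication wins
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:m, :m] = M1 + M4 - M5 + M7
    C[:m, m:] = M3 + M5
    C[m:, :m] = M2 + M4
    C[m:, m:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(128, 128), np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```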
In mathematics, the minimum k-cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph into at least k connected components. These edges are referred to as a k-cut. The goal is to find the minimum-weight k-cut. This partitioning has applications in VLSI design, data mining, finite element analysis, and communication in parallel computing.
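For k = 2 this is the classical global minimum cut, solvable in polynomial time, for example by the Stoer–Wagner algorithm. A short sketch assuming the NetworkX library:

```python
import networkx as nx

# Two triangles joined by a single light edge: the minimum 2-cut
# should separate them.
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 3), (1, 2, 3), (0, 2, 3),   # triangle {0, 1, 2}
    (3, 4, 3), (4, 5, 3), (3, 5, 3),   # triangle {3, 4, 5}
    (2, 3, 1),                         # the only edge across the bottleneck
])

cut_value, (side_a, side_b) = nx.stoer_wagner(G)
print(cut_value)                         # -> 1, the weight of edge (2, 3)
print(sorted(side_a), sorted(side_b))    # the two triangles
```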
In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of n players, each facing s strategies, requires listing n·sⁿ utility values. Even trivial algorithms are capable of finding a Nash equilibrium in time polynomial in the length of such a large input. A succinct game is of polynomial type if, in a game represented by a string of length n, the number of players, as well as the number of strategies of each player, is bounded by a polynomial in n.
Richard Ryan Williams, known as Ryan Williams, is an American theoretical computer scientist working in computational complexity theory and algorithms.
In mathematical logic, computational complexity theory, and computer science, the existential theory of the reals is the set of all true sentences of the form ∃X₁ ⋯ ∃Xₙ F(X₁, …, Xₙ), where the variables X₁, …, Xₙ are interpreted as having real number values, and where F(X₁, …, Xₙ) is a quantifier-free formula involving equalities and inequalities of real polynomials. A sentence of this form is true if it is possible to find values for all of the variables that, when substituted into the formula F, make it become true. For instance, ∃X (X·X − 2 = 0) is a true sentence, since X = √2 satisfies it.
Henry Cohn is an American mathematician. He is a principal researcher at Microsoft Research and an adjunct professor at MIT. In collaboration with Abhinav Kumar, Stephen D. Miller, Danylo Radchenko, and Maryna Viazovska, he solved the sphere packing problem in 24 dimensions. In 2003, with Chris Umans he initiated a group-theoretic approach to matrix multiplication, and is a core contributor to its continued development with various coauthors.
Virginia Vassilevska Williams is a theoretical computer scientist and mathematician known for her research in computational complexity theory and algorithms. She is currently the Steven and Renee Finn Career Development Associate Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. She is notable for her breakthrough results in fast matrix multiplication, for her work on dynamic algorithms, and for helping to develop the field of fine-grained complexity.
In economics, a budget-additive valuation is a kind of utility function. It corresponds to a person who, when given a set of items, values the set at the sum of the values of the individual items, but never at more than a fixed budget B; that is, the value assigned to a set S of items is the smaller of B and the sum of the individual item values v(x) over x in S.
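A one-line Python transcription of that rule, with hypothetical names:

```python
def budget_additive(item_values, budget):
    """Bundle value: additive in the item values, capped at the budget."""
    return min(budget, sum(item_values))

print(budget_additive([4, 5, 2], budget=8))   # additive sum is 11, capped to 8
```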
Péter Gács, professionally also known as Peter Gacs, is a Hungarian-American mathematician and computer scientist, professor, and an external member of the Hungarian Academy of Sciences. He is well known for his work in reliable computation, randomness in computing, algorithmic complexity, algorithmic probability, and information theory.
The twin-width of an undirected graph is a natural number associated with the graph, used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging together twins, vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors.