Stefan Szeider | |
---|---|
Nationality | Austrian |
Alma mater | University of Vienna |
Scientific career | |
Fields | Algorithms, complexity, theoretical computer science, Boolean satisfiability, constraint satisfaction, parameterized complexity |
Institutions | TU Wien, University of Durham, University of Toronto, Austrian Academy of Sciences |
Doctoral advisors | Herbert Fleischner, Georg Gottlob |
Stefan Szeider is an Austrian computer scientist who works in the areas of algorithms, computational complexity, and theoretical computer science, and more specifically on propositional satisfiability, constraint satisfaction problems, and parameterized complexity. He is a full professor at the Faculty of Informatics [1] of the Vienna University of Technology (TU Wien), head of the Algorithms and Complexity Group, and co-chair of the Vienna Center for Logic and Algorithms (VCLA) of TU Wien. [2] [3]
Szeider received his doctorate in Mathematics from the University of Vienna in 2001 under the supervision of Professors Herbert Fleischner and Georg Gottlob while working as a mathematician at the Austrian Academy of Sciences. [4] [5]
Szeider is a full professor at the Faculty of Informatics at TU Wien. [1] Previously, he was first a Lecturer and then a Reader at the University of Durham, UK (2004–2009), and a postdoctoral researcher in Professor Stephen Cook's group at the University of Toronto (2002–2004). [5] [6] He is a co-chair of the Vienna Center for Logic and Algorithms, which he founded together with Helmut Veith in 2012. [7] [8] He serves on the editorial boards of the Journal of Computer and System Sciences, the Journal of Discrete Algorithms, the Journal of Artificial Intelligence Research, and Fundamenta Informaticae. [5]
Szeider has published more than 140 refereed publications in the areas of theoretical computer science, algorithms, computational complexity, artificial intelligence, propositional satisfiability, and constraint satisfaction. [9] [10]
Szeider is best known for popularizing the notion of backdoor sets for SAT and other problems [11] [12] and for introducing dependency schemes for quantified Boolean formulas. [13]
Szeider also worked on width measures for graphs such as treewidth and clique-width. He showed with coauthors that it is NP-hard to determine whether the clique-width of a given graph is smaller than a given bound. [14] He established complexity results for detecting minimally unsatisfiable formulas. [15] [16]
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
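As a concrete illustration, the following minimal Python sketch decides satisfiability by exhaustive search over all assignments; the clause encoding and the function name `is_satisfiable` are illustrative choices for this example, not a standard API.

```python
# A minimal brute-force satisfiability check (illustration only).
# A formula is a list of clauses; each clause is a list of nonzero ints,
# where k means variable k and -k means its negation (DIMACS style).
from itertools import product

def is_satisfiable(clauses, num_vars):
    """Try all 2^n assignments; return a satisfying one or None."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assignment
    return None

# "a AND NOT b" as the two unit clauses (a) and (NOT b):
print(is_satisfiable([[1], [-2]], 2))   # {1: True, 2: False}: satisfiable
print(is_satisfiable([[1], [-1]], 1))   # None: "a AND NOT a" is unsatisfiable
```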
Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. CSPs are the subject of research in both artificial intelligence and operations research, since the regularity in their formulation provides a common basis to analyze and solve problems of many seemingly unrelated families. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time. Constraint programming (CP) is the field of research that specifically focuses on tackling these kinds of problems. Additionally, Boolean satisfiability problem (SAT), the satisfiability modulo theories (SMT), mixed integer programming (MIP) and answer set programming (ASP) are all fields of research focusing on the resolution of particular forms of the constraint satisfaction problem.
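For illustration, a minimal backtracking solver for binary CSPs might look as follows; the encoding of domains and constraints is an assumption made for this sketch, not a standard interface.

```python
# A minimal CSP solver sketch via backtracking.
def solve_csp(domains, constraints, assignment=None):
    """domains: {var: iterable of values};
    constraints: list of (var1, var2, predicate) triples."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        ok = all(pred(assignment[a], assignment[b])
                 for a, b, pred in constraints
                 if a in assignment and b in assignment)
        if ok:
            result = solve_csp(domains, constraints, assignment)
            if result:
                return result
        del assignment[var]          # backtrack
    return None

# Color a triangle with 3 colors so that adjacent vertices differ:
domains = {"x": "RGB", "y": "RGB", "z": "RGB"}
ne = lambda u, v: u != v
print(solve_csp(domains, [("x", "y", ne), ("y", "z", ne), ("x", "z", ne)]))
```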
In computer science, the clique problem is the computational problem of finding cliques in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found. Common formulations of the clique problem include finding a maximum clique, finding a maximum weight clique in a weighted graph, listing all maximal cliques, and solving the decision problem of testing whether a graph contains a clique larger than a given size.
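A brute-force sketch of the maximum-clique formulation (exponential time, for illustration only; the graph encoding is an assumption of this example):

```python
# Find one largest clique by checking subsets from largest to smallest.
from itertools import combinations

def max_clique(vertices, edges):
    """edges: set of frozensets {u, v}; returns one largest clique."""
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if all(frozenset((u, v)) in edges
                   for u, v in combinations(subset, 2)):
                return set(subset)
    return set()

edges = {frozenset(e) for e in [(1, 2), (2, 3), (1, 3), (3, 4)]}
print(max_clique([1, 2, 3, 4], edges))   # {1, 2, 3} is a maximum clique
```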
In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time.
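The polynomial-time algorithm can be sketched via the standard implication-graph construction: a clause (a OR b) yields the implications (NOT a) implies b and (NOT b) implies a, and the formula is satisfiable exactly when no variable shares a strongly connected component with its negation. A hedged Python sketch, using Kosaraju's SCC algorithm and an integer-literal encoding chosen for this example:

```python
# 2-SAT via implication graph and strongly connected components.
# Literal k is variable k, literal -k its negation (k = 1..n).
from collections import defaultdict

def two_sat(clauses, n):
    """Satisfiable iff no variable lies in the same SCC as its negation."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:                 # (a OR b) gives -a -> b, -b -> a
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)
    lits = [l for v in range(1, n + 1) for l in (v, -v)]
    order, seen = [], set()
    def dfs1(u):                         # first pass: finish order
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for l in lits:
        if l not in seen:
            dfs1(l)
    comp = {}
    def dfs2(u, c):                      # second pass on reversed graph
        comp[u] = c
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, c)
    for l in reversed(order):
        if l not in comp:
            dfs2(l, l)
    return all(comp[v] != comp[-v] for v in range(1, n + 1))

print(two_sat([(1, 2), (-1, 2), (-2, 1)], 2))   # True: x1 = x2 = TRUE works
print(two_sat([(1, 1), (-1, -1)], 1))           # False: forces x1 and NOT x1
```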
In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given set of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn.
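Horn formulas can be decided by unit-style propagation; the following sketch assumes an illustrative (body, head) encoding of Horn clauses rather than any standard input format.

```python
# HORNSAT by propagation. A Horn clause (NOT b1 OR ... OR NOT bk OR h)
# is encoded as (body, head) = ({b1, ..., bk}, h); head is None for a
# goal clause with no positive literal.
def horn_sat(clauses):
    """Return the set of variables forced TRUE, or None if unsatisfiable."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_vars:          # body already all TRUE
                if head is None:           # goal clause violated
                    return None
                if head not in true_vars:
                    true_vars.add(head)
                    changed = True
    return true_vars                       # all other variables stay FALSE

print(horn_sat([(set(), "p"), ({"p"}, "q")]))    # {'p', 'q'}: satisfiable
print(horn_sat([(set(), "p"), ({"p"}, None)]))   # None: unsatisfiable
```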
In logic and computer science, the Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a complete, backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e. for solving the CNF-SAT problem.
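A compact, hedged sketch of the two core DPLL rules, unit propagation and splitting, without the pure-literal rule or any modern heuristics; clauses use the same DIMACS-style integer literals as in the earlier sketches.

```python
# A compact DPLL sketch for CNF formulas.
def dpll(clauses):
    """Return True iff the CNF formula is satisfiable."""
    # Unit propagation: repeatedly assign the literal of any unit clause.
    while any(len(c) == 1 for c in clauses):
        unit = next(c[0] for c in clauses if len(c) == 1)
        new = []
        for c in clauses:
            if unit in c:
                continue                   # clause satisfied, drop it
            reduced = [l for l in c if l != -unit]
            if not reduced:
                return False               # empty clause: conflict
            new.append(reduced)
        clauses = new
    if not clauses:
        return True                        # all clauses satisfied
    # Splitting rule: branch on the first literal of the first clause.
    lit = clauses[0][0]
    return dpll(clauses + [[lit]]) or dpll(clauses + [[-lit]])

print(dpll([[1, 2], [-1, 2], [-2]]))   # False: x2=F forces both x1 and NOT x1
print(dpll([[1, 2], [-1]]))            # True: x1=F, x2=T
```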
In computational complexity theory, the PCP theorem states that every decision problem in the NP complexity class has probabilistically checkable proofs of constant query complexity and logarithmic randomness complexity.
The Karloff–Zwick algorithm, in computational complexity theory, is a randomised approximation algorithm taking an instance of MAX-3SAT Boolean satisfiability problem as input. If the instance is satisfiable, then the expected weight of the assignment found is at least 7/8 of optimal. There is strong evidence that the algorithm achieves 7/8 of optimal even on unsatisfiable MAX-3SAT instances. Howard Karloff and Uri Zwick presented the algorithm in 1997.
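The sketch below is not the Karloff–Zwick algorithm itself (which rounds a semidefinite-programming relaxation); it only illustrates the baseline fact behind the 7/8 figure, namely that a uniformly random assignment satisfies each clause with three distinct literals with probability 1 - (1/2)^3 = 7/8, and hence 7/8 of all clauses in expectation.

```python
# Empirically estimate the expected fraction of satisfied MAX-3SAT
# clauses under uniformly random assignments (illustration only).
import random

def random_assignment_weight(clauses, num_vars, trials=10000):
    """Average fraction of clauses satisfied by uniform random assignments."""
    total = 0
    for _ in range(trials):
        a = {v: random.random() < 0.5 for v in range(1, num_vars + 1)}
        total += sum(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)
    return total / (trials * len(clauses))

clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, -3]]
print(random_assignment_weight(clauses, 3))   # close to 7/8 = 0.875
```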
In computational complexity, problems that are in the complexity class NP but are neither in the class P nor NP-complete are called NP-intermediate, and the class of such problems is called NPI. Ladner's theorem, shown in 1975 by Richard E. Ladner, is a result asserting that, if P ≠ NP, then NPI is not empty; that is, NP contains problems that are neither in P nor NP-complete. Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty.
In mathematical logic, given an unsatisfiable Boolean propositional formula in conjunctive normal form, a subset of clauses whose conjunction is still unsatisfiable is called an unsatisfiable core of the original formula.
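A hedged sketch of the standard deletion-based way of shrinking an unsatisfiable formula to a minimal unsatisfiable core, reusing the brute-force `is_satisfiable` sketch above; real solvers extract cores far more efficiently from proof traces.

```python
# Deletion-based minimal unsatisfiable core (illustration only).
def minimal_core(clauses, num_vars):
    """Greedily drop clauses whose removal keeps the formula unsatisfiable."""
    core = list(clauses)
    for c in list(core):
        trial = [d for d in core if d is not c]
        if is_satisfiable(trial, num_vars) is None:   # still unsatisfiable
            core = trial
    return core

# (x) AND (NOT x) AND (y): the clause (y) is not needed for unsatisfiability.
print(minimal_core([[1], [-1], [2]], 2))   # [[1], [-1]]
```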
In computer science and formal methods, a SAT solver is a computer program which aims to solve the Boolean satisfiability problem. On input a formula over Boolean variables, such as "(x or y) and (x or not y)", a SAT solver outputs whether the formula is satisfiable, meaning that there are possible values of x and y which make the formula true, or unsatisfiable, meaning that there are no such values of x and y. In this case, the formula is satisfiable when x is true, so the solver should return "satisfiable". Since the introduction of algorithms for SAT in the 1960s, modern SAT solvers have grown into complex software artifacts involving a large number of heuristics and program optimizations to work efficiently.
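As a usage illustration, assuming the third-party PySAT package (installable as `python-sat`), a solver can be queried roughly as follows; variable 1 stands for x and variable 2 for y.

```python
# Querying an off-the-shelf SAT solver via PySAT (assumes the
# third-party python-sat package is installed).
from pysat.solvers import Glucose3

with Glucose3() as solver:
    solver.add_clause([1, 2])     # (x or y)
    solver.add_clause([1, -2])    # (x or not y)
    if solver.solve():
        print("satisfiable:", solver.get_model())   # e.g. [1, -2]
    else:
        print("unsatisfiable")
```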
In graph theory, the clique-width of a graph G is a parameter that describes the structural complexity of the graph; it is closely related to treewidth, but unlike treewidth it can be bounded even for dense graphs. It is defined as the minimum number of labels needed to construct G by means of the following four operations: creating a new vertex v with label i; taking the disjoint union of two labeled graphs; adding an edge between every vertex labeled i and every vertex labeled j, for i ≠ j; and renaming label i to label j.
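The four operations can be made concrete in a short sketch; the labeled-graph encoding below is an illustrative assumption. Since complete graphs can be built with two labels, cliques have clique-width at most 2.

```python
# The four clique-width operations on labeled graphs, where a graph is
# a pair (labels, edges) with labels a dict vertex -> label.
def vertex(v, i):                 # operation 1: new vertex v with label i
    return ({v: i}, set())

def union(g, h):                  # operation 2: disjoint union
    return ({**g[0], **h[0]}, g[1] | h[1])

def join(g, i, j):                # operation 3: edge from every i to every j
    labels, edges = g
    new = {frozenset((u, v)) for u in labels for v in labels
           if labels[u] == i and labels[v] == j}
    return (labels, edges | new)

def relabel(g, i, j):             # operation 4: rename label i to j
    labels, edges = g
    return ({v: (j if l == i else l) for v, l in labels.items()}, edges)

# Build the triangle K3 using only the labels 1 and 2:
g = vertex("a", 1)
for v in ["b", "c"]:
    g = relabel(join(union(g, vertex(v, 2)), 1, 2), 2, 1)
print(sorted(tuple(sorted(e)) for e in g[1]))   # [('a','b'), ('a','c'), ('b','c')]
```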
In mathematical logic, a formula is satisfiable if it is true under some assignment of values to its variables. For example, the formula x + 3 = y is satisfiable because it is true when x = 3 and y = 6, while the formula x + 1 = x is not satisfiable over the integers. The dual concept to satisfiability is validity; a formula is valid if every assignment of values to its variables makes the formula true. For example, x + y = y + x is valid over the integers, but x + y = y is not.
In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption that was formulated by Impagliazzo & Paturi (1999). It states that satisfiability of 3-CNF Boolean formulas cannot be solved more quickly than exponential time in the worst case. The exponential time hypothesis, if true, would imply that P ≠ NP, but it is a stronger statement. It implies that many computational problems are equivalent in complexity, in the sense that if one of them has a subexponential time algorithm then they all do, and that many known algorithms for these problems have optimal or near-optimal time complexity.
Toniann Pitassi is a Canadian and American mathematician and computer scientist specializing in computational complexity theory. She is currently the Jeffrey L. and Brenda Bleustein Professor of Engineering at Columbia University and was previously Bell Research Chair at the University of Toronto.
Derek Gordon Corneil is a Canadian mathematician and computer scientist, a professor emeritus of computer science at the University of Toronto, and an expert in graph algorithms and graph theory.
In the study of graph algorithms, Courcelle's theorem is the statement that every graph property definable in the monadic second-order logic of graphs can be decided in linear time on graphs of bounded treewidth. The result was first proved by Bruno Courcelle in 1990 and independently rediscovered by Borie, Parker & Tovey (1992). It is considered the archetype of algorithmic meta-theorems.
Uwe Schöning is a retired German computer scientist, known for his research in computational complexity theory.
Helmut Veith was an Austrian computer scientist who worked on the areas of computer-aided verification, software engineering, computer security, and logic in computer science. He was a Professor of Informatics at the Vienna University of Technology, Austria.
Agata Ciabattoni is an Italian mathematical logician specializing in non-classical logic. She is a full professor at the Institute of Logic and Computation of the Faculty of Informatics at the Vienna University of Technology, and a co-chair of the Vienna Center for Logic and Algorithms of TU Wien (VCLA).