The hidden transformation reformulates a constraint satisfaction problem in such a way that all constraints have at most two variables. The new problem is satisfiable if and only if the original problem is, and solutions can be converted easily between the two problems.
Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. CSPs are the subject of intense research in both artificial intelligence and operations research, since the regularity in their formulation provides a common basis to analyze and solve problems of many seemingly unrelated families. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time. The Boolean satisfiability problem (SAT), satisfiability modulo theories (SMT), and answer set programming (ASP) can be roughly thought of as certain forms of the constraint satisfaction problem.
There are a number of algorithms for constraint satisfaction that work only on constraints that have at most two variables. If a problem has constraints with a larger arity (number of variables), conversion into a problem made of binary constraints allows for execution of these solving algorithms. Constraints with one, two, or more variables are called unary, binary, or higher-order constraints. The number of variables in a constraint is called its arity.
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks.
The hidden transformation converts an arbitrary constraint satisfaction problem into a binary one. The transformation is similar to that generating the dual problem. New variables are added to the problem, one for each constraint of the original problem. The domain of each such variable is the set of satisfying tuples of the corresponding constraint. The constraints of the new problem enforce the values of the original variables to be consistent with the values of the new variables. For example, if the new variable c, corresponding to the old constraint C(x, y), can assume the values (1, 2) and (2, 0), two new constraints are added: the first one enforces x to take value 1 if c = (1, 2) and value 2 if c = (2, 0), and vice versa; the second enforces a similar condition for variable y.
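As an illustrative sketch (not taken from any particular solver), the transformation can be written in a few lines of Python; the function name hidden_transformation, the hidden-variable names "h0", "h1", …, and the (scope, relation) representation of constraints are assumptions made for this example.

```python
from itertools import product

def hidden_transformation(domains, constraints):
    """Convert a CSP with n-ary constraints into an equivalent binary CSP.

    domains:     dict mapping each original variable to its set of values
    constraints: list of (scope, relation) pairs, where scope is a tuple of
                 variables and relation is a set of allowed value tuples

    Returns (new_domains, binary_constraints); every binary constraint links
    one hidden variable ("h0", "h1", ...) to one original variable and allows
    exactly the pairs (tuple, value) in which the tuple assigns that value
    to the variable.
    """
    new_domains = dict(domains)
    binary_constraints = []
    for i, (scope, relation) in enumerate(constraints):
        hidden = f"h{i}"                      # one hidden variable per constraint
        new_domains[hidden] = set(relation)   # its domain: the satisfying tuples
        for pos, var in enumerate(scope):
            # Allowed pairs: the hidden variable takes tuple t, and the
            # original variable takes the value t assigns to it.
            allowed = {(t, t[pos]) for t in relation}
            binary_constraints.append(((hidden, var), allowed))
    return new_domains, binary_constraints

# Example: a single ternary constraint x + y = z over small domains.
domains = {"x": {0, 1}, "y": {0, 1}, "z": {0, 1, 2}}
relation = {(x, y, z)
            for x, y, z in product(domains["x"], domains["y"], domains["z"])
            if x + y == z}
new_domains, bin_cs = hidden_transformation(domains, [(("x", "y", "z"), relation)])
print(sorted(new_domains["h0"]))   # the satisfying tuples of x + y = z
print(bin_cs[0][0])                # ('h0', 'x'): hidden variable linked to x
```

Note that each allowed set is functional from the value of the hidden variable to the value of the original one, which is the property discussed further below.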
The dual problem is a reformulation of a constraint satisfaction problem expressing each constraint of the original problem as a variable. Dual problems only contain binary constraints, and are therefore solvable by algorithms tailored for such problems. The join graphs and join trees of a constraint satisfaction problem are graphs representing its dual problem or a problem obtained from the dual problem removing some redundant constraints.
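For comparison, here is a similarly hedged sketch of the dual construction under the same assumed (scope, relation) representation; the name dual_problem is again purely illustrative.

```python
def dual_problem(constraints):
    """Build the dual problem of a CSP given as (scope, relation) pairs:
    one dual variable per original constraint, whose domain is that
    constraint's set of satisfying tuples; two dual variables are linked
    whenever their original constraints share variables, and the binary
    constraint forces their tuples to agree on the shared variables."""
    dual_domains = {i: set(rel) for i, (_, rel) in enumerate(constraints)}
    dual_constraints = []
    for i, (scope_i, _) in enumerate(constraints):
        for j in range(i + 1, len(constraints)):
            scope_j = constraints[j][0]
            shared = [v for v in scope_i if v in scope_j]
            if shared:
                def agree(ti, tj, si=scope_i, sj=scope_j, sh=tuple(shared)):
                    # Tuples agree on every variable the two scopes share.
                    return all(ti[si.index(v)] == tj[sj.index(v)] for v in sh)
                dual_constraints.append(((i, j), agree))
    return dual_domains, dual_constraints

# Two constraints sharing variable y: x + y = 2 and y = w.
c1 = (("x", "y"), {(0, 2), (1, 1), (2, 0)})
c2 = (("y", "w"), {(0, 0), (1, 1), (2, 2)})
doms, cs = dual_problem([c1, c2])
pair, agree = cs[0]
print(pair, agree((1, 1), (1, 1)), agree((1, 1), (2, 2)))   # (0, 1) True False
```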
The graph representing the result of this transformation is bipartite, as all constraints are between a new and an old variable. Moreover, the constraints are functional: for any given value of a new variable, only one value of the old variable may satisfy the constraint.
In the mathematical field of graph theory, a bipartite graph is a graph whose vertices can be divided into two disjoint and independent sets U and V such that every edge connects a vertex in U to one in V. Vertex sets U and V are usually called the parts of the graph. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles.
In logic, mathematics, and computer science, the arity of a function or operation is the number of arguments or operands that the function takes. The arity of a relation is the dimension of the domain in the corresponding Cartesian product. The term springs from words like unary, binary, ternary, etc. Unary functions or predicates may be also called "monadic"; similarly, binary functions may be called "dyadic".
Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming.
In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming.
An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints are linear.
The AC-3 algorithm is one of a series of algorithms used for the solution of constraint satisfaction problems. It was developed by Alan Mackworth in 1977. The earlier AC algorithms are often considered too inefficient, and many of the later ones are difficult to implement, and so AC-3 is the one most often taught and used in very simple constraint solvers.
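The following Python sketch shows arc consistency enforcement in the spirit of AC-3; it is not Mackworth's original pseudocode, and the representation of binary constraints as directed predicates and the helper name ac3 are assumptions made for this example.

```python
from collections import deque

def ac3(domains, constraints):
    """Arc consistency in the style of AC-3 for binary CSPs.

    domains:     dict mapping each variable to a set of candidate values
    constraints: dict mapping ordered pairs (x, y) to a predicate over
                 (value_of_x, value_of_y)

    Prunes domains in place; returns False if some domain becomes empty
    (the problem is unsatisfiable), True otherwise.
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x that have no support in the domain of y.
        revised = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False
            # Re-examine arcs pointing at x, whose supports may be gone.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Example: x < y and y < z over domains {1, 2, 3}.
domains = {v: {1, 2, 3} for v in "xyz"}
constraints = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: b < a,
               ("y", "z"): lambda a, b: a < b, ("z", "y"): lambda a, b: b < a}
print(ac3(domains, constraints), domains)   # True, with x -> {1}, y -> {2}, z -> {3}
```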
In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.
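A brute-force illustration of what MAX-SAT asks is sketched below; the DIMACS-style integer encoding of literals and the helper name max_sat are assumptions of this example, and the enumeration is exponential, so it is only meant to make the definition concrete.

```python
from itertools import product

def max_sat(num_vars, clauses):
    """Brute-force MAX-SAT: the largest number of clauses satisfiable by any
    truth assignment. A clause is a list of nonzero integers; literal k means
    variable k is true, -k means variable k is false."""
    best = 0
    for assignment in product([False, True], repeat=num_vars):
        satisfied = sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses)
        best = max(best, satisfied)
    return best

# (x1 or x2), (not x1), (not x2): at most 2 of the 3 clauses can hold at once.
print(max_sat(2, [[1, 2], [-1], [-2]]))   # 2
```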
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem. However, in general the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.
In constraint satisfaction, local consistency conditions are properties of constraint satisfaction problems related to the consistency of subsets of variables or constraints. They can be used to reduce the search space and make the problem easier to solve. Various kinds of local consistency conditions are leveraged, including node consistency, arc consistency, and path consistency.
In constraint satisfaction, local search is an incomplete method for finding a solution to a problem. It is based on iteratively improving an assignment of the variables until all constraints are satisfied. In particular, local search algorithms typically modify the value of a variable in an assignment at each step. The new assignment is close to the previous one in the space of assignments, hence the name local search.
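One well-known instance of this scheme is the min-conflicts heuristic; the Python sketch below (the function name, the constraint representation, and the use of the 4-queens problem are assumptions of this example) repeatedly reassigns a conflicted variable to the value violating the fewest constraints.

```python
import random

def min_conflicts(domains, constraints, max_steps=10_000, seed=0):
    """Min-conflicts local search for binary CSPs. constraints maps ordered
    pairs (x, y) to a predicate over their values. Starts from a random
    assignment and repeatedly moves a conflicted variable to its least
    conflicting value. Incomplete: may return None even if a solution exists."""
    rng = random.Random(seed)
    assignment = {v: rng.choice(sorted(d)) for v, d in domains.items()}

    def conflicts(var, value):
        a = dict(assignment)
        a[var] = value
        return sum(not pred(a[x], a[y])
                   for (x, y), pred in constraints.items() if var in (x, y))

    for _ in range(max_steps):
        conflicted = [v for v in domains if conflicts(v, assignment[v]) > 0]
        if not conflicted:
            return assignment                       # all constraints satisfied
        var = rng.choice(conflicted)
        assignment[var] = min(domains[var], key=lambda val: conflicts(var, val))
    return None

# Example: 4-queens, one queen per column; rows differ and diagonals don't clash.
n = 4
domains = {c: set(range(n)) for c in range(n)}
constraints = {(c1, c2): (lambda c1, c2: lambda r1, r2:
                          r1 != r2 and abs(r1 - r2) != abs(c1 - c2))(c1, c2)
               for c1 in range(n) for c2 in range(n) if c1 != c2}
print(min_conflicts(domains, constraints))   # e.g. {0: 1, 1: 3, 2: 0, 3: 2}; may be None
```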
In mathematical optimization, constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.
Constraint logic programming is a form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is A(X,Y) :- X+Y>0, B(X), C(Y). In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming. This clause states one condition under which the statement A(X,Y) holds: X+Y is greater than zero and both B(X) and C(Y) are true.
In constraint satisfaction, constraint inference is a relationship between constraints and their consequences. A set of constraints C entails a constraint c if every solution to C is also a solution to c. In other words, if s is a valuation of the variables in the scopes of the constraints in C and all constraints in C are satisfied by s, then s also satisfies the constraint c.
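Over finite domains, entailment can be checked (very inefficiently) by enumerating total valuations, as in the following sketch; the helper name entails and the (scope, predicate) representation are assumptions of this example.

```python
from itertools import product

def entails(domains, constraint_set, candidate):
    """Brute-force entailment check over finite domains: constraint_set entails
    candidate if every total valuation satisfying all constraints in the set
    also satisfies the candidate. Constraints are (scope, predicate) pairs,
    the predicate taking the values of its scope variables in order."""
    variables = sorted(domains)

    def holds(constraint, valuation):
        scope, pred = constraint
        return pred(*(valuation[v] for v in scope))

    for values in product(*(sorted(domains[v]) for v in variables)):
        valuation = dict(zip(variables, values))
        if all(holds(c, valuation) for c in constraint_set) and not holds(candidate, valuation):
            return False
    return True

# {x < y, y < z} entails x < z over the domain {0, 1, 2}, but not x > z.
domains = {"x": {0, 1, 2}, "y": {0, 1, 2}, "z": {0, 1, 2}}
cs = [(("x", "y"), lambda a, b: a < b), (("y", "z"), lambda a, b: a < b)]
print(entails(domains, cs, (("x", "z"), lambda a, b: a < b)))   # True
print(entails(domains, cs, (("x", "z"), lambda a, b: a > b)))   # False
```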
The complexity of constraint satisfaction is the application of computational complexity theory on constraint satisfaction. It has mainly been studied for discriminating between tractable and intractable classes of constraint satisfaction problems on finite domains.
In constraint satisfaction, a decomposition method translates a constraint satisfaction problem into another constraint satisfaction problem that is binary and acyclic. Decomposition methods work by grouping variables into sets, and solving a subproblem for each set. These translations are done because solving binary acyclic problems is a tractable problem.
In mathematical optimization, a feasible region, feasible set, search space, or solution space is the set of all possible points of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.
In operations research, the Big M method is a method of solving linear programming problems using the simplex algorithm. The Big M method extends the power of the simplex algorithm to problems that contain "greater-than" constraints. It does so by adding artificial variables to such constraints and penalizing them in the objective function with a very large constant M, so that they cannot take nonzero values in any optimal solution, if one exists.
In artificial intelligence and operations research, a Weighted Constraint Satisfaction Problem (WCSP) is a generalization of a constraint satisfaction problem (CSP) where some of the constraints can be violated and in which preferences among solutions can be expressed. This generalization makes it possible to represent more real-world problems, in particular those that are over-constrained, or those where we want to find a minimal-cost solution among multiple possible solutions.
In computer science, an interchangeability algorithm is a technique used to more efficiently solve constraint satisfaction problems (CSP). A CSP is a mathematical problem in which objects, represented by variables, are subject to constraints on the values of those variables; the goal in a CSP is to assign values to the variables that are consistent with the constraints. If two variables A and B in a CSP may be swapped for each other without changing the nature of the problem or its solutions, then A and B are interchangeable variables. Interchangeable variables represent a symmetry of the CSP and by exploiting that symmetry, the search space for solutions to a CSP problem may be reduced. For example, if solutions with A=1 and B=2 have been tried, then by interchange symmetry, solutions with B=1 and A=2 need not be investigated.
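The symmetry described above can be checked by brute force on tiny instances, as in the sketch below; the helper names, the (scope, predicate) constraint representation, and the full enumeration of solutions are assumptions made purely for illustration.

```python
from itertools import product

def solutions(domains, constraints):
    """Enumerate all solutions of a CSP given as (scope, predicate) constraints."""
    variables = sorted(domains)
    for values in product(*(sorted(domains[v]) for v in variables)):
        valuation = dict(zip(variables, values))
        if all(pred(*(valuation[v] for v in scope)) for scope, pred in constraints):
            yield valuation

def variables_interchangeable(domains, constraints, a, b):
    """Check, by full enumeration, that swapping the values of variables a and b
    maps the solution set onto itself, i.e. the swap is a symmetry of the
    problem's solutions."""
    original = {tuple(sorted(s.items())) for s in solutions(domains, constraints)}
    swapped = set()
    for s in solutions(domains, constraints):
        t = dict(s)
        t[a], t[b] = t[b], t[a]
        swapped.add(tuple(sorted(t.items())))
    return original == swapped

# x and y play symmetric roles in x + y = z, while x and z do not.
domains = {"x": {0, 1}, "y": {0, 1}, "z": {0, 1, 2}}
constraints = [(("x", "y", "z"), lambda x, y, z: x + y == z)]
print(variables_interchangeable(domains, constraints, "x", "y"))   # True
print(variables_interchangeable(domains, constraints, "x", "z"))   # False
```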
In computing, a Digital Object Identifier or DOI is a persistent identifier or handle used to identify objects uniquely, standardized by the International Organization for Standardization (ISO). An implementation of the Handle System, DOIs are in wide use mainly to identify academic, professional, and government information, such as journal articles, research reports, data sets, and official publications, though they have also been used to identify other types of information resources, such as commercial videos.