Weighted constraint satisfaction problem

In artificial intelligence and operations research, a Weighted Constraint Satisfaction Problem (WCSP) is a generalization of a constraint satisfaction problem (CSP) where some of the constraints can be violated (according to a violation degree) and in which preferences among solutions can be expressed. This generalization makes it possible to represent more real-world problems, in particular those that are over-constrained (no solution can be found without violating at least one constraint), or those where we want to find a minimal-cost solution (according to a cost function) among multiple possible solutions.




Formal definition

A Weighted Constraint Network (WCN) is a triplet $(X, C, k)$ where $X$ is a finite set of variables (each with a finite domain), $C$ is a finite set of soft constraints and $k > 0$ is either a natural integer or $\infty$.

Each soft constraint $c_S \in C$ involves an ordered set $S$ of variables, called its scope, and is defined as a cost function from $l(S)$ to $\{0, \ldots, k\}$, where $l(S)$ is the set of possible instantiations of $S$. When an instantiation $t \in l(S)$ is given the cost $k$, i.e., $c_S(t) = k$, it is said to be forbidden. Otherwise it is permitted with the corresponding cost (0 being completely satisfactory).

In WCSP, a specific subclass of Valued CSP (VCSP), costs are combined with the specific operator $\oplus$ defined as $a \oplus b = \min(k, a + b)$. The partial inverse $\ominus$ of $\oplus$ is defined by: $a \ominus b = a - b$ if $0 \le b \le a < k$, and $k \ominus b = k$ if $0 \le b < k$.
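
For concreteness, here is a minimal Python sketch of these two operators (the function names and the sample value of $k$ are illustrative assumptions, not notation from the literature):

```python
K = 4  # the forbidden cost k; it may also be infinite

def oplus(a, b, k=K):
    """Bounded addition: combined costs saturate at the forbidden cost k."""
    return min(k, a + b)

def ominus(a, b, k=K):
    """Partial inverse of oplus, defined for 0 <= b <= a <= k."""
    if a == k:       # the forbidden cost absorbs any cost added to it
        return k
    return a - b

assert oplus(3, 2) == 4   # 3 + 2 saturates at k = 4
assert ominus(4, 2) == 4  # cost k stays k: addition is not invertible there
assert ominus(3, 2) == 1
```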

Without any loss of generality, the existence of a nullary constraint $c_\emptyset$ (a constant cost) as well as the presence of a unary constraint $c_x$ for every variable $x$ is assumed.

The total cost of an instantiation $I$ on a soft constraint $c_S \in C$ includes the cost of $I$ on $c_S$, as well as the nullary cost $c_\emptyset$ and the unary costs $c_x(I)$ for the variables $x$ in $S$.

Given a WCN, the usual (NP-hard) task of WCSP is to find a complete instantiation with minimal cost.
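
As an illustration of these definitions, the following self-contained sketch encodes a tiny WCN with an ad hoc dictionary representation (not a standard API) and finds a minimal-cost complete instantiation by brute force:

```python
from itertools import product

K = 4                                            # forbidden cost k
domains = {"x": [0, 1], "y": [0, 1]}

c0 = 0                                           # nullary constraint
unary = {"x": {0: 1, 1: 0}, "y": {0: 0, 1: 2}}   # unary constraints c_x, c_y
binary = {("x", "y"): {(0, 0): 2, (0, 1): 0,
                       (1, 0): 0, (1, 1): K}}    # (1, 1) is forbidden

def total_cost(assign):
    """Combine nullary, unary and binary costs with bounded addition."""
    cost = c0
    for var, val in assign.items():
        cost = min(K, cost + unary[var][val])
    for (u, v), table in binary.items():
        cost = min(K, cost + table[(assign[u], assign[v])])
    return cost

best = min((dict(zip(domains, vals)) for vals in product(*domains.values())),
           key=total_cost)
print(best, total_cost(best))                    # {'x': 1, 'y': 0} with cost 0
```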

Resolution of binary/ternary WCSPs

Approach with cost transfer operations

Node consistency (NC) and arc consistency (AC), introduced for the constraint satisfaction problem (CSP), were later studied in the context of WCSP. Furthermore, several stronger forms of arc consistency have been proposed: Full Directional Arc Consistency (FDAC), [1] Existential Directional Arc Consistency (EDAC), [2] Virtual Arc Consistency (VAC) [3] and Optimal Soft Arc Consistency (OSAC). [4]

Algorithms enforcing such properties are based on Equivalence Preserving Transformations (EPTs) that allow safe moves of costs among constraints. The three basic cost transfer operations are:

  1. Project: move an amount of cost from a constraint of arity at least two to the unary constraint of a variable in its scope.
  2. UnaryProject: move an amount of cost from a unary constraint to the nullary constraint $c_\emptyset$.
  3. Extend: move an amount of cost from a unary constraint to a constraint of larger arity involving that variable.

[Figure: basic Equivalence Preserving Transformations.]

The goal of Equivalence Preserving Transformations is to concentrate costs on the nullary constraint $c_\emptyset$ and to efficiently remove instantiations and values whose cost, added to $c_\emptyset$, is greater than or equal to the forbidden cost $k$ or to the cost of the best solution found so far.
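
As a sketch of the first operation under assumed data structures (a dict-based cost table; nothing below comes from the cited papers), Project can be written with the $\oplus$/$\ominus$ semantics described earlier:

```python
def project(table, scope, unary, var, val, alpha, k):
    """Equivalence Preserving Transformation: move alpha units of cost
    from a binary cost table onto the unary constraint of (var, val).

    table: dict mapping (val_u, val_v) -> cost, for scope (u, v)
    alpha: must not exceed the minimum cost of the tuples where var = val,
           so that no table entry becomes negative
    """
    pos = scope.index(var)
    for t in table:
        if t[pos] == val and table[t] != k:   # k "ominus" alpha stays k
            table[t] -= alpha                 # subtract from permitted tuples
    unary[val] = min(k, unary[val] + alpha)   # "oplus" on the unary cost
```

Choosing $\alpha$ as the minimum cost over the tuples where the variable takes this value concentrates as much cost as possible at the unary level without creating negative costs.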

Approach without cost transfer operations

An alternative to cost transfer algorithms is the algorithm PFC-MRDAC, [5] a classical branch and bound algorithm that computes at each node of the search tree a lower bound $lb$ corresponding to an under-estimation of the cost of any solution that can be obtained from this node. The cost of the best solution found so far is an upper bound $ub$. When $lb \ge ub$, the search tree below this node is pruned.
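
The sketch below illustrates the pruning scheme only: its lower bound is the naive cost of the already-assigned part, not the stronger per-variable aggregation of minimum costs used by PFC-MRDAC, and the problem encoding is an assumption:

```python
def solve(domains, constraints, k):
    """Depth-first branch and bound for a WCSP.

    domains: dict mapping each variable to a list of values
    constraints: list of (scope, cost_table) pairs, where cost_table maps
        a tuple of values (ordered as scope) to a cost in {0, ..., k}
    Returns (best_cost, best_assignment).
    """
    order = list(domains)
    best = [k, None]                   # upper bound ub starts at k

    def lower_bound(assign):
        """Cost of the constraints whose scope is fully assigned: a valid
        under-estimation, since all costs are non-negative."""
        total = 0
        for scope, table in constraints:
            if all(v in assign for v in scope):
                total = min(k, total + table[tuple(assign[v] for v in scope)])
        return total

    def recurse(i, assign):
        lb = lower_bound(assign)
        if lb >= best[0]:              # lb >= ub: prune this subtree
            return
        if i == len(order):
            best[0], best[1] = lb, dict(assign)
            return
        var = order[i]
        for val in domains[var]:
            assign[var] = val
            recurse(i + 1, assign)
            del assign[var]

    recurse(0, {})
    return best[0], best[1]

# Same tiny network as in the formal-definition example.
domains = {"x": [0, 1], "y": [0, 1]}
constraints = [(("x",), {(0,): 1, (1,): 0}),
               (("y",), {(0,): 0, (1,): 2}),
               (("x", "y"), {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 4})]
print(solve(domains, constraints, k=4))  # (0, {'x': 1, 'y': 0})
```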

Resolution of n-ary WCSPs

Cost transfer algorithms have been shown to be particularly efficient for solving real-world problems when soft constraints are binary or ternary (i.e., the maximal arity of constraints in the problem is 2 or 3). For soft constraints of large arity, cost transfer becomes a serious issue because the risk of combinatorial explosion has to be controlled.


An algorithm called GACw-WSTR [6] has been proposed to enforce a weak version of the property Generalized Arc Consistency (GAC) on soft constraints defined extensionally by listing tuples and their costs. This algorithm combines two techniques, namely Simple Tabular Reduction (STR) [7] and cost transfer. Values that are no longer consistent with respect to GAC are identified, and minimum costs of values are computed; this is particularly useful for efficiently performing the projection operations required to establish GAC.
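
The following sketch illustrates the idea only (it is not the actual algorithm of [6]): one pass over the tuple list both computes the minimum cost of every value and filters out values that no longer have a sufficiently cheap support:

```python
def min_costs_and_filter(scope, tuples, domains, c0, k):
    """STR-style pass over a soft table constraint.

    scope: tuple of variable names
    tuples: list of (value_tuple, cost) pairs defining the constraint
    Updates domains in place and returns the minimum cost of each value.
    """
    min_cost = {(v, a): k for v in scope for a in domains[v]}
    for value_tuple, cost in tuples:
        # skip tuples mentioning a value already removed from a domain
        if any(a not in domains[v] for v, a in zip(scope, value_tuple)):
            continue
        for v, a in zip(scope, value_tuple):
            min_cost[(v, a)] = min(min_cost[(v, a)], cost)
    for v in scope:
        # a value whose best cost reaches k once combined with the
        # nullary cost c0 has no support left and is removed
        domains[v] = [a for a in domains[v]
                      if min(k, c0 + min_cost[(v, a)]) < k]
    return min_cost
```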

Benchmarks

Many real-world WCSP benchmarks are available at http://costfunction.org/en/benchmark [8] and at http://www.cril.univ-artois.fr/~lecoutre/benchmarks.html.


References

  1. M. Cooper. Reduction operations in fuzzy or valued constraint satisfaction. Fuzzy Sets and Systems, 134(3):311–342, 2003.
  2. S. de Givry, F. Heras, M. Zytnicki, and J. Larrosa. Existential arc consistency: Getting closer to full arc consistency in weighted CSPs. In Proceedings of IJCAI’05, pages 84–89, 2005.
  3. M. Cooper, S. de Givry, M. Sanchez, T. Schiex, and M. Zytnicki. Virtual arc consistency for weighted CSP. In Proceedings of AAAI’08, pages 253–258, 2008.
  4. M. Cooper, S. de Givry, M. Sanchez, T. Schiex, M. Zytnicki, and T. Werner. Soft arc consistency revisited. Artificial Intelligence, 174(7-8):449–478, 2010.
  5. E.C. Freuder and R.J. Wallace. Partial constraint satisfaction. Artificial Intelligence, 58(1–3):21–70, 1992.
  6. C. Lecoutre, N. Paris, O. Roussel, and S. Tabary. Propagating soft table constraints. In Proceedings of CP’12, pages 390–405, 2012.
  7. C. Lecoutre. STR2: Optimized simple tabular reduction for table constraints. Constraints, 16(4):341–371, 2011.
  8. The aim of this website is to promote cost function networks by providing benchmarks, teaching material, solver demos, and links to articles about cost functions used in the context of constraint programming.