Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized.
Distributed Constraint Satisfaction is a framework for describing a problem in terms of constraints that are known and enforced by distinct participants (agents). The constraints are defined over variables with predefined domains, and the agents must agree on an assignment of values to these variables that satisfies all constraints.
Problems defined with this framework can be solved by any of the algorithms that are designed for it.
The framework was used under different names in the 1980s. The first known usage with the current name was in 1990.
The main ingredients of a DCOP problem are agents and variables. Importantly, each variable is owned by an agent; this is what makes the problem distributed. Formally, a DCOP is a tuple $\langle A, V, \mathfrak{D}, f, \alpha, \eta \rangle$, where:

- $A$ is the set of agents;
- $V$ is the set of variables, $\{v_1, v_2, \ldots, v_{|V|}\}$;
- $\mathfrak{D}$ is the set of domains, $\{D_1, D_2, \ldots, D_{|V|}\}$, where each $D_j \in \mathfrak{D}$ is a finite set containing the possible values of its associated variable $v_j$;
- $f$ is the cost function, which assigns a cost to each possible assignment of values to variables;
- $\alpha : V \to A$ is the ownership function, mapping each variable to its associated agent;
- $\eta \in \{\min, \max\}$ specifies whether the goal is to minimize or maximize $f$.
The objective of a DCOP is to have each agent assign values to its associated variables in order to either minimize or maximize $f$ for a given assignment of the variables.
A value assignment is a pair $\langle v_j, d_j \rangle$, where $d_j$ is an element of the domain $D_j$.
A partial assignment is a set of value assignments where each $v_j$ appears at most once. It is also called a context. This can be thought of as a function mapping variables in the DCOP to their current values:

$$t : V \to \left( \bigcup_{D \in \mathfrak{D}} D \right) \cup \{\emptyset\}.$$
Note that a context is essentially a partial solution and need not contain values for every variable in the problem; therefore, $t(v_j) = \emptyset$ implies that the agent $\alpha(v_j)$ has not yet assigned a value to variable $v_j$. Given this representation, the "domain" (that is, the set of input values) of the function $f$ can be thought of as the set of all possible contexts for the DCOP. Therefore, in the remainder of this article we may use the notion of a context (i.e., the function $t$) as an input to the function $f$.
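As an illustrative sketch (Python, with hypothetical names; not taken from any DCOP library), a context can be modeled as a dictionary from variables to values, with an absent key standing in for $\emptyset$:

```python
# A context (partial assignment): v3 is not yet assigned a value.
context = {"v1": "red", "v2": "blue"}

def value_of(context, var):
    # Return the assigned value, or None (standing in for the empty
    # value) if the owning agent has not yet assigned one.
    return context.get(var)

assert value_of(context, "v2") == "blue"
assert value_of(context, "v3") is None  # v3 is unassigned in this context
```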
A full assignment is an assignment in which each $v_j$ appears exactly once, that is, all variables are assigned. It is also called a solution to the DCOP.
An optimal solution is a full assignment in which the objective function is optimized (i.e., maximized or minimized, depending on the type of problem).
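To make these definitions concrete, the following sketch (Python, hypothetical names) enumerates every full assignment of a toy DCOP and returns an optimal one. Note that this is a centralized illustration only; an actual DCOP algorithm distributes this search among the agents:

```python
from itertools import product

def solve_bruteforce(variables, domains, cost, minimize=True):
    """Enumerate every full assignment and return an optimal one.

    variables: list of variable names
    domains:   dict mapping each variable to its finite domain
    cost:      function taking a full context (dict variable -> value)
               and returning its total cost
    """
    best_context, best_cost = None, None
    for values in product(*(domains[v] for v in variables)):
        context = dict(zip(variables, values))  # one full assignment
        c = cost(context)
        better = best_cost is None or (c < best_cost if minimize else c > best_cost)
        if better:
            best_context, best_cost = context, c
    return best_context, best_cost
```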
Various problems from different domains can be presented as DCOPs.
The graph coloring problem is as follows: given a graph $G = \langle N, E \rangle$ and a set of colors $C$, assign each vertex $n \in N$ a color $c \in C$ such that the number of adjacent vertices with the same color is minimized.
As a DCOP, there is one agent per vertex that is assigned to decide the associated color. Each agent has a single variable whose associated domain is of cardinality $|C|$ (there is one domain value for each possible color). For each vertex $n_i \in N$, there is a variable $v_i \in V$ with domain $D_i = C$. For each pair of adjacent vertices $\langle n_i, n_j \rangle \in E$, there is a constraint of cost 1 if both of the associated variables are assigned the same color:

$$(\forall v_i, v_j \in V)(\forall c \in C) \quad f(\langle v_i, c \rangle, \langle v_j, c \rangle) = 1.$$

The objective, then, is to minimize $f$.
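Under this encoding, the cost of a full assignment is the number of monochromatic edges; a minimal sketch (Python, hypothetical names):

```python
def coloring_cost(edges, context):
    """Cost of a full assignment in the graph-coloring DCOP:
    one unit of cost per edge whose two endpoints share a color."""
    return sum(1 for (ni, nj) in edges if context[ni] == context[nj])
```

Combined with the `solve_bruteforce` sketch above, `solve_bruteforce(list(vertices), {v: list(colors) for v in vertices}, lambda t: coloring_cost(edges, t))` would return a minimum-conflict coloring, again centrally and purely for illustration.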
The distributed multiple-knapsack variant of the knapsack problem is as follows: given a set of items of varying volume and a set of knapsacks of varying capacity, assign each item to a knapsack such that the amount of overflow is minimized. Let $I$ be the set of items, $K$ be the set of knapsacks, $s : I \to \mathbb{N}$ be a function mapping items to their volume, and $c : K \to \mathbb{N}$ be a function mapping knapsacks to their capacities.
To encode this problem as a DCOP, for each item $i \in I$ create one variable $v_i \in V$ with associated domain $D_i = K$. Then, for all possible contexts $t$:

$$f(t) = \sum_{k \in K} \begin{cases} 0 & \text{if } r(t,k) \le c(k), \\ r(t,k) - c(k) & \text{otherwise,} \end{cases}$$

where $r(t,k)$ represents the total volume assigned by context $t$ to knapsack $k$:

$$r(t,k) = \sum_{i \in I \,:\, t(v_i) = k} s(i).$$
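The following sketch (Python, hypothetical names) computes this overflow objective for a full context that maps each item to a knapsack:

```python
def overflow_cost(items, knapsacks, volume, capacity, context):
    """Total overflow across knapsacks in the multiple-knapsack DCOP.

    context maps each item to the knapsack it was assigned to;
    volume and capacity are dicts giving s(i) and c(k)."""
    cost = 0
    for k in knapsacks:
        # r(t, k): total volume assigned by the context to knapsack k.
        load = sum(volume[i] for i in items if context[i] == k)
        cost += max(0, load - capacity[k])  # overflow of knapsack k, if any
    return cost
```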
The item allocation problem is as follows. There are several items that have to be divided among several agents. Each agent has a different valuation for the items. The goal is to optimize some global goal, such as maximizing the sum of utilities or minimizing the envy. The item allocation problem can also be formulated as a DCOP.[2]
DCOP has also been applied to other problems, such as:

- coordinating mobile sensors;
- meeting and task scheduling.
DCOP algorithms can be classified in several ways:[3]

- Completeness: complete search algorithms that are guaranteed to find an optimal solution, versus incomplete (local search) algorithms that may return a local optimum;
- Search strategy: best-first search or depth-first branch-and-bound;
- Synchronization among agents: synchronous or asynchronous;
- Communication among agents: point-to-point with neighbors in the constraint graph, or broadcast;
- Communication topology: chain or tree.
ADOPT, for example, uses best-first search, asynchronous synchronization, point-to-point communication between neighboring agents in the constraint graph, and a constraint tree as its main communication topology.
| Algorithm | Year introduced | Memory complexity | Number of messages | Correctness/completeness | Implementations |
|---|---|---|---|---|---|
| ABT (Asynchronous Backtracking) | 1992 | | | Static ordering, complete | |
| AWC (Asynchronous Weak-Commitment) | 1994 | | | Reordering, fast, complete (only with exponential space) | |
| DBA (Distributed Breakout Algorithm) | 1995 | | | Incomplete but fast | FRODO version 1 |
| SyncBB (Synchronous Branch and Bound) [4] | 1997 | | | Complete but slow | |
| IDB (Iterative Distributed Breakout) | 1997 | | | Incomplete but fast | |
| AAS (Asynchronous Aggregation Search) | 2000 | | | Aggregation of values in ABT | |
| DFC (Distributed Forward Chaining) | 2000 | | | Low, comparable to ABT | |
| ABTR (Asynchronous Backtracking with Reordering) | 2001 | | | Reordering in ABT with bounded nogoods | |
| DMAC (Maintaining Asynchronously Consistencies) | 2001 | | | The fastest algorithm | |
| Secure Computation with Semi-Trusted Servers | 2002 | | | Security increases with the number of trustworthy servers | |
| Secure Multiparty Computation for Solving DisCSPs (MPC-DisCSP1 to MPC-DisCSP4) | 2003 | | | Secure if half of the participants are trustworthy | |
| Adopt (Asynchronous Backtracking) [5] | 2003 | Polynomial (or any-space [6]) | Exponential | Proven | Reference implementation: Adopt (archived) |
| OptAPO (Asynchronous Partial Overlay) [7] | 2004 | Polynomial | Exponential | Proven, but proof of completeness has been challenged [8] | Reference implementation: OptAPO, SRI International Artificial Intelligence Center (archived) |
| DPOP (Distributed Pseudotree Optimization Procedure) [9] | 2005 | Exponential | Linear | Proven | Reference implementation: FRODO (AGPL) |
| NCBB (No-Commitment Branch and Bound) [10] | 2006 | Polynomial (or any-space [11]) | Exponential | Proven | Not publicly released |
| CFL (Communication-Free Learning) [12] | 2013 | Linear | None (no messages are sent; assumes knowledge of local constraint satisfaction) | Incomplete | |
Hybrids of these DCOP algorithms also exist. BnB-Adopt,[3] for example, changes the search strategy of Adopt from best-first search to depth-first branch-and-bound search.
An asymmetric DCOP (ADCOP) is an extension of DCOP in which the cost of each constraint may be different for different agents. Some example applications are described in the literature.[13]
One way to represent an ADCOP is to represent the constraints as functions:

$$f : D_1 \times \cdots \times D_k \to \mathbb{R}^k.$$
Here, for each constraint there is not a single cost but a vector of costs, one for each agent involved in the constraint. The vector of costs is of length $k$ if each variable belongs to a different agent; if two or more variables belong to the same agent, then the vector of costs is shorter: there is a single cost for each involved agent, not for each variable.
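For instance, a hypothetical asymmetric binary constraint might look as follows (a Python sketch; the scenario and all names are invented for illustration). The same joint choice costs the two involved agents differently:

```python
def meeting_constraint(time_a, time_b):
    """Hypothetical asymmetric binary constraint between agents A and B.

    Returns a vector of costs, one per involved agent: disagreement on
    the meeting time is bad for both, but agent B additionally dislikes
    meetings before 9 o'clock, so the same joint choice costs the two
    agents differently."""
    disagreement = 10 if time_a != time_b else 0
    cost_a = disagreement
    cost_b = disagreement + (5 if time_b < 9 else 0)
    return (cost_a, cost_b)
```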
A simple way for solving an ADCOP is to replace each constraint $f : D_1 \times \cdots \times D_k \to \mathbb{R}^k$ with a constraint $f' : D_1 \times \cdots \times D_k \to \mathbb{R}$ that equals the sum of the components of $f$. However, this solution requires the agents to reveal their cost functions. Often, this is not desired due to privacy considerations.[14][15][16]
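A minimal sketch of this reduction (Python, hypothetical names): the symmetric replacement sums the per-agent cost functions, which is exactly why each agent must expose its private costs:

```python
def symmetrize(per_agent_costs):
    """Replace an asymmetric constraint, given as one cost function per
    involved agent, by a single symmetric constraint equal to their sum.
    Each f_i maps a tuple of variable values to that agent's cost."""
    def f_sum(values):
        # Evaluating the sum requires every agent's private cost function.
        return sum(f_i(values) for f_i in per_agent_costs)
    return f_sum
```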
Another approach is called Private Events as Variables (PEAV).[17] In this approach, each agent owns, in addition to its own variables, also "mirror variables" of all the variables owned by its neighbors in the constraint network. There are additional constraints (with a cost of infinity) that guarantee that the mirror variables equal the original variables. The disadvantage of this method is that the number of variables and constraints is much larger than in the original problem, which leads to a higher run-time.
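A minimal sketch of the PEAV equality constraint (Python, hypothetical names), where any disagreement between a mirror variable and its original carries infinite cost:

```python
INF = float("inf")

def peav_equality_cost(original_value, mirror_value):
    """Hard constraint tying an agent's mirror variable to a neighbor's
    original variable: any disagreement costs infinity, so no optimal
    solution can leave them unequal."""
    return 0 if original_value == mirror_value else INF
```

Each agent then holds one mirror variable per neighboring variable, which is where the growth in problem size comes from.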
A third approach is to adapt existing algorithms, developed for DCOPs, to the ADCOP framework. This has been done for both complete-search algorithms and local-search algorithms. [13]
The structure of an ADCOP problem is similar to the game-theoretic concept of a simultaneous game. In both cases, there are agents who control variables (in game theory, the variables are the agents' possible actions or strategies). In both cases, each choice of variables by the different agents results in a different payoff to each agent. However, there is a fundamental difference:[13] in a simultaneous game the agents are selfish, each aiming to maximize its own utility, whereas in an ADCOP the agents are assumed to be cooperative: they act according to the agreed protocol even if it decreases their own utility.
There are some intermediate models in which the agents are partially cooperative: they are willing to decrease their utility to help the global goal, but only if their own cost is not too high. An example of partially cooperative agents is employees in a firm. On one hand, each employee wants to maximize their own utility; on the other hand, they also want to contribute to the success of the firm. Therefore, they are willing to help others or do other time-consuming tasks that help the firm, as long as it is not too burdensome on them. Some models for partially cooperative agents have been proposed.[18]
Solving such partial-cooperation ADCOPs requires adaptations of ADCOP algorithms.[18]