Weapon target assignment problem

The weapon target assignment problem (WTA) is a class of combinatorial optimization problems present in the fields of optimization and operations research. It consists of finding an optimal assignment of a set of weapons of various types to a set of targets in order to maximize the total expected damage done to the opponent.

The basic problem is as follows:

There are a number of weapons and a number of targets. The weapons are of type $i = 1, \ldots, m$. There are $W_i$ available weapons of type $i$. Similarly, there are $j = 1, \ldots, n$ targets, each with a value of $V_j$. Any of the weapons can be assigned to any target. Each weapon type has a certain probability of destroying each target, given by $p_{ij}$.
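
For concreteness, such an instance can be held in a few plain arrays. The sketch below is illustrative only; the variable names are our own, and the numbers are taken from the Example section later in this article.

```python
# A small WTA instance with m = 3 weapon types and n = 3 targets.
# W[i]    = number of available weapons of type i
# V[j]    = value of target j
# p[i][j] = probability that one weapon of type i destroys target j

W = [5, 2, 1]            # tanks, aircraft, sea vessels
V = [5, 10, 20]          # target values
p = [
    [0.3, 0.2, 0.5],     # tank vs. targets 1, 2, 3
    [0.1, 0.6, 0.5],     # aircraft vs. targets 1, 2, 3
    [0.4, 0.5, 0.4],     # sea vessel vs. targets 1, 2, 3
]
```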

Notice that, as opposed to the classic assignment problem or the generalized assignment problem, more than one agent (i.e., weapon) can be assigned to each task (i.e., target), and not all targets are required to have weapons assigned. Thus, the WTA problem allows one to formulate optimal assignment problems in which tasks require cooperation among agents. Additionally, it can model probabilistic completion of tasks in addition to costs.

Both static and dynamic versions of WTA can be considered. In the static case, the weapons are assigned to targets once. The dynamic case involves many rounds of assignment where the state of the system after each exchange of fire (round) is considered in the next round. While the majority of work has been done on the static WTA problem, recently the dynamic WTA problem has received more attention.

In spite of the name, the WTA problem also has nonmilitary applications. The main one is searching for a lost object or person with heterogeneous assets such as dogs, aircraft, and walkers. The problem is to assign the assets to a partition of the space in which the object is located so as to minimize the probability of not finding it. The "value" of each element of the partition is the probability that the object is located there.

Formal mathematical definition

The weapon target assignment problem is often formulated as the following nonlinear integer programming problem:

$$\min \sum_{j=1}^{n} V_j \prod_{i=1}^{m} q_{ij}^{x_{ij}}$$

subject to the constraints

$$\sum_{j=1}^{n} x_{ij} \leq W_i, \qquad i = 1, \ldots, m,$$

$$x_{ij} \geq 0 \text{ and integer}, \qquad i = 1, \ldots, m, \; j = 1, \ldots, n.$$

Here the variable $x_{ij}$ represents the number of weapons of type $i$ assigned to target $j$, and $q_{ij} = 1 - p_{ij}$ is the probability that target $j$ survives an engagement by a single weapon of type $i$. The first constraint requires that the number of weapons of each type assigned does not exceed the number available. The second constraint is the integrality constraint.

Notice that minimizing the expected survival value is the same as maximizing the expected damage.
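
To illustrate, the objective can be evaluated directly for any candidate assignment. The following sketch (the function name and argument layout are our own) computes the expected surviving target value for the arrays defined above.

```python
def expected_survival(x, V, p):
    """Expected surviving value: sum_j V[j] * prod_i (1 - p[i][j]) ** x[i][j].

    x[i][j] is the number of weapons of type i assigned to target j.
    """
    total = 0.0
    for j, value in enumerate(V):
        survive = 1.0                      # survival probability of target j
        for i in range(len(p)):
            survive *= (1.0 - p[i][j]) ** x[i][j]
        total += value * survive
    return total
```

Minimizing this quantity over feasible integer matrices $x$ is exactly the program above; maximizing the expected damage $\sum_j V_j (1 - \prod_i q_{ij}^{x_{ij}})$ differs from it only by the constant $\sum_j V_j$.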

Algorithms and generalizations

An exact solution can be found using branch-and-bound techniques that utilize relaxation (approximation).[1] Many heuristic algorithms have been proposed that provide near-optimal solutions in polynomial time.[2]
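
One simple heuristic of this kind is a greedy "maximum marginal return" rule: repeatedly commit the single weapon whose assignment most reduces the expected surviving value. The sketch below is a minimal illustration of that idea, not the specific algorithms of the cited papers.

```python
def greedy_wta(W, V, p):
    """Greedy heuristic: assign one weapon at a time, always picking the
    (weapon type, target) pair with the largest marginal damage."""
    m, n = len(W), len(V)
    remaining = list(W)
    x = [[0] * n for _ in range(m)]
    surv = [float(v) for v in V]   # current expected surviving value per target
    while any(remaining):
        # Adding a type-i weapon to target j lowers the objective by surv[j] * p[i][j].
        i, j = max(
            ((i, j) for i in range(m) if remaining[i] for j in range(n)),
            key=lambda ij: surv[ij[1]] * p[ij[0]][ij[1]],
        )
        if surv[j] * p[i][j] <= 0.0:
            break                  # no remaining assignment does any good
        x[i][j] += 1
        remaining[i] -= 1
        surv[j] *= 1.0 - p[i][j]
    return x
```

Like other greedy heuristics, this runs quickly but carries no general optimality guarantee.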

Example

A commander has 5 tanks, 2 aircraft, and 1 sea vessel and is told to engage 3 targets with values 5, 10, and 20. Each weapon type has the following success probabilities against each target:

Weapon type    Target 1   Target 2   Target 3
Tank              0.3        0.2        0.5
Aircraft          0.1        0.6        0.5
Sea vessel        0.4        0.5        0.4

One feasible solution is to assign the sea vessel and one aircraft to the highest valued target (#3). This results in an expected survival value of $20(0.5)(0.6) = 6$. One could then assign the remaining aircraft and 2 tanks to target #2, resulting in an expected survival value of $10(0.4)(0.8)^2 = 2.56$. Finally, the remaining 3 tanks are assigned to target #1, which has an expected survival value of $5(0.7)^3 = 1.715$. Thus, we have a total expected survival value of $6 + 2.56 + 1.715 = 10.275$. Note that a better solution can be achieved by assigning 3 tanks to target #1, 2 tanks and the sea vessel to target #2, and 2 aircraft to target #3, giving an expected survival value of $5(0.7)^3 + 10(0.8)^2(0.5) + 20(0.5)^2 = 9.915$.
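
Both figures are easy to verify numerically with the expected_survival sketch given earlier; the assignment matrices below use the same x[i][j] convention.

```python
# Rows: tank, aircraft, sea vessel.  Columns: targets 1, 2, 3.
feasible = [[3, 2, 0],
            [0, 1, 1],
            [0, 0, 1]]
better = [[3, 2, 0],
          [0, 0, 2],
          [0, 1, 0]]

print(expected_survival(feasible, V, p))  # approx. 10.275
print(expected_survival(better, V, p))    # approx. 9.915
```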


References

  1. Andersen, A.C.; Pavlikov, K.; Toffolo, T.A.M. (2022). "Weapon-Target Assignment Problem: Exact and Approximate Solution Algorithms" (PDF). Annals of Operations Research. 312 (2): 581–606. doi:10.1007/s10479-022-04525-6.
  2. Ahuja, Ravindra K.; Kumar, Arvind; Jha, Krishna C.; Orlin, James B. (2007). "Exact and Heuristic Algorithms for the Weapon-Target Assignment Problem". Operations Research. 55 (6): 1136–1146. doi:10.1287/opre.1070.0440.
