A **Colonel Blotto game** is a type of two-person constant-sum game in which the players (officers) are tasked to simultaneously distribute limited resources over several objects (battlefields).

In the classic version of the game, the player devoting the most resources to a battlefield wins that battlefield, and the gain (or payoff) is equal to the total number of battlefields won.

Consider two players (Colonel Blotto and Enemy) and two battlefields of equal value. Both players know each other's total level of resources prior to allocation, and must then make simultaneous allocation decisions. It is often assumed that Colonel Blotto is the better-resourced officer (his resource level can be normalized to 1) and that Enemy's resources are some fraction less than 1. The Nash equilibrium allocation strategies and payoffs depend on the relationship between the two resource levels.

The Colonel Blotto game was first proposed by Émile Borel^{ [1] } in 1921. The game was studied after the Second World War by scholars in operations research and became a classic in game theory.^{ [2] } Gross and Wagner's 1950 paper,^{ [3] } from which the fictitious Colonel Blotto and Enemy get their names, provides some example Nash equilibria. Macdonell and Mastronardi (2015) provide the first complete characterization of all Nash equilibria of the canonical simplest version of the Colonel Blotto game. Their solution, which includes a graphical algorithm for characterizing all Nash equilibrium strategies, identifies previously unknown equilibrium strategies and helps pinpoint which behaviors should never be expected of rational players. Nash equilibrium strategies in this version of the game are a set of bivariate probability distributions: distributions over a set of possible resource allocations for each player, often referred to as mixed Nash equilibria (as found, in much simpler examples, in Rock-Paper-Scissors or Matching Pennies).

The Macdonell and Mastronardi (2015) solution, proof, and graphical algorithm for identifying Nash equilibrium strategies also apply to generalized versions of the game, such as when the players have differing valuations of the battlefields, or when their resources have differing effectiveness on the two battlefields (e.g., one battlefield involves a water landing and Colonel Blotto's resources are Marines rather than Soldiers), and provide insights into versions of the game with three or more battlefields.

In addition to military strategy applications, the Colonel Blotto game has applications to political strategy (resource allocation across political battlefields), network defense, R&D patent races, and strategic hiring decisions. Consider two sports teams with budget caps that must be fully spent (or two economics departments with use-it-or-lose-it grants) pursuing the same set of candidates: each must decide between making many modest offers or aggressively pursuing a subset of candidates.

As an example Blotto game, consider one in which two players each write down three positive integers in non-decreasing order that add up to a pre-specified number S. The two players then reveal their choices to each other and compare corresponding numbers. The player whose numbers are higher in two of the three comparisons wins the game.

For S = 6 only three choices of numbers are possible: (2, 2, 2), (1, 2, 3) and (1, 1, 4). It is easy to see that:

- Any triplet against itself is a draw
- (1, 1, 4) against (1, 2, 3) is a draw
- (1, 2, 3) against (2, 2, 2) is a draw
- (2, 2, 2) beats (1, 1, 4)

It follows that the optimum strategy is (2, 2, 2), as it does no worse than breaking even against any other strategy while beating one of them. There are, however, several Nash equilibria. If both players choose (2, 2, 2) or (1, 2, 3), then neither can beat the other by changing strategies unilaterally, so every such strategy pair is a Nash equilibrium.
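These pairwise outcomes are easy to verify by brute force. A minimal Python sketch (the helper names `partitions` and `outcome` are illustrative, not from the text):

```python
# Brute-force check of the S = 6 example: enumerate all valid triples and
# compare every pair. A player wins a matchup by winning at least two of
# the three battlefield comparisons.
def partitions(s):
    """All triples of positive integers in non-decreasing order summing to s."""
    return [(a, b, s - a - b)
            for a in range(1, s) for b in range(a, s)
            if s - a - b >= b]

def outcome(p, q):
    """+1 if p beats q, -1 if q beats p, 0 for a draw."""
    p_wins = sum(x > y for x, y in zip(p, q))
    q_wins = sum(y > x for x, y in zip(p, q))
    return (p_wins >= 2) - (q_wins >= 2)

triples = partitions(6)
print(triples)  # [(1, 1, 4), (1, 2, 3), (2, 2, 2)]
for p in triples:
    print(p, [outcome(p, q) for q in triples])
```

Running this reproduces the list above: (2, 2, 2) beats (1, 1, 4) and draws everything else, while every other matchup is a draw.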

For larger S the game becomes progressively more difficult to analyze. For S = 12, it can be shown that (2, 4, 6) represents the optimal strategy, while for S > 12, deterministic strategies fail to be optimal. For S = 13, choosing (3, 5, 5), (3, 3, 7) and (1, 5, 7) with probability 1/3 each can be shown to be the optimal probabilistic strategy.
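The S = 13 claim can also be checked numerically. The sketch below (helper names are illustrative) verifies that the uniform mixture over (3, 5, 5), (3, 3, 7) and (1, 5, 7) never loses in expectation against any pure strategy:

```python
# Verify that mixing (3,5,5), (3,3,7), (1,5,7) with probability 1/3 each
# has non-negative expected payoff against every pure triple summing to 13.
def partitions(s):
    """All triples of positive integers in non-decreasing order summing to s."""
    return [(a, b, s - a - b)
            for a in range(1, s) for b in range(a, s)
            if s - a - b >= b]

def outcome(p, q):
    """+1 if p beats q, -1 if q beats p, 0 for a draw."""
    p_wins = sum(x > y for x, y in zip(p, q))
    q_wins = sum(y > x for x, y in zip(p, q))
    return (p_wins >= 2) - (q_wins >= 2)

mix = [(3, 5, 5), (3, 3, 7), (1, 5, 7)]
# Total payoff of the three mixed triples against each pure strategy;
# non-negative totals mean the (1/3, 1/3, 1/3) mixture at least draws.
worst = min(sum(outcome(m, q) for m in mix) for q in partitions(13))
print(worst)  # 0
```

The worst case is exactly zero, so the mixture guarantees at least a draw in expectation against every deterministic play.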

Borel's game is similar to the above example for very large S, but the players are not limited to round integers. They thus have an infinite number of available pure strategies, indeed a continuum.

This concept also appears in a story about Sun Bin, who was watching a chariot race with three different races running concurrently. In the races each party had the option to enter one chariot team in each race, and each chose a deployment of 1, 2, 3 (with 3 being the fastest chariot and 1 the slowest) across the three races, creating close wins in each race and few sure outcomes. When asked how to win, Sun Bin advised the chariot owner to change his deployment to 2, 3, 1. Though he would be sure to lose the race against the fastest chariots (the 3 chariots), he would win each of the other races, his 3 chariot easily beating the opponent's 2 chariot and his 2 chariot beating the opponent's 1 chariot.

This game is commonly used as a metaphor for electoral competition, with two political parties devoting money or resources to attract the support of a fixed number of voters.^{ [4] }^{ [5] } Each voter is a "battlefield" that can be won by one or the other party. The same game also finds application in auction theory where bidders must make simultaneous bids.^{ [6] }

Several variations on the original game have been solved by Jean-François Laslier,^{ [7] } Brian Roberson,^{ [8] } Dmitriy Kvasov.^{ [9] }

**Game theory** is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

The **game of chicken**, also known as the **hawk–dove game** or **snowdrift game**, is a model of conflict for two players in game theory. The principle of the game is that while it is ideal for one player to yield, the individuals try to avoid yielding out of pride, not wanting to look like a 'chicken'. So each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game is for the most part over.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, **coordination games** are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, Even keeps both pennies, so wins one from Odd. If the pennies do not match, Odd keeps both pennies, so receives one from Even.
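The game's only Nash equilibrium is mixed: each player randomizes 50/50, which makes the opponent exactly indifferent between heads and tails. A minimal sketch (the payoff matrix follows the rules above; the function name is illustrative):

```python
# Matching Pennies from Even's point of view: +1 on a match, -1 otherwise.
# Rows are Even's choice, columns are Odd's choice (0 = heads, 1 = tails).
H, T = 0, 1
even_payoff = [[+1, -1],
               [-1, +1]]

def expected_even_payoff(p_even_heads, q_odd_heads):
    """Expected payoff to Even when Even plays heads with prob p, Odd with prob q."""
    total = 0.0
    for e, pe in ((H, p_even_heads), (T, 1 - p_even_heads)):
        for o, po in ((H, q_odd_heads), (T, 1 - q_odd_heads)):
            total += pe * po * even_payoff[e][o]
    return total

# At the equilibrium mix (1/2), Even's payoff is 0 whatever Odd does,
# so Odd has no profitable deviation -- and symmetrically for Even.
print(expected_even_payoff(0.5, 0.0))  # 0.0
print(expected_even_payoff(0.5, 1.0))  # 0.0
```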

In game theory, a player's **strategy** is any of the options he or she can choose in a setting where the outcome depends *not only* on their own actions *but also* on the actions of others. A player's strategy determines the action the player will take at any stage of the game.

In game theory, a **Perfect Bayesian Equilibrium** (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components, *strategies* and *beliefs*: each player's strategy must be optimal given his or her beliefs, and beliefs must be updated by Bayes' rule wherever possible.

In game theory, a **Bayesian game** is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

**Backward induction** is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913, to prove that chess has pure optimal strategies.
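The procedure described above can be sketched as a recursion on a game tree. In this minimal example the tree shape and payoffs are illustrative assumptions, not from the text:

```python
# Backward induction on a tiny extensive-form game: solve the last decisions
# first, then let earlier movers choose given those solved continuations.
# Internal nodes record whose turn it is; leaves hold (p1, p2) payoffs.

def solve(node):
    """Return the (p1_payoff, p2_payoff) reached under backward induction."""
    if "payoffs" in node:                 # leaf: nothing left to decide
        return node["payoffs"]
    mover = node["player"]                # 0 = player 1, 1 = player 2
    results = [solve(child) for child in node["children"]]
    return max(results, key=lambda payoffs: payoffs[mover])

# Player 1 moves first; player 2 responds at each resulting node.
game = {"player": 0, "children": [
    {"player": 1, "children": [{"payoffs": (3, 1)}, {"payoffs": (0, 2)}]},
    {"player": 1, "children": [{"payoffs": (2, 1)}, {"payoffs": (1, 0)}]},
]}
print(solve(game))  # (2, 1)
```

Player 1 forgoes the branch containing the (3, 1) payoff because, reasoning backwards, player 2 would respond there with the move yielding (0, 2).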

The **El Farol bar problem** is a problem in game theory. Every Thursday night, a fixed population want to go have fun at the El Farol Bar, unless it's too crowded.

In game theory, a **repeated game** is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. *Single stage game* or *single shot game* are names for non-repeated games.

In game theory, a **subgame perfect equilibrium** is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

In game theory, an **epsilon-equilibrium**, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, the **price of stability (PoS)** of a game is the ratio between the best objective function value of one of its equilibria and that of an optimal outcome. The PoS is relevant for games in which there is some objective authority that can influence the players a bit, and maybe help them converge to a good Nash equilibrium. When measuring how efficient a Nash equilibrium is in a specific game, we often also consider the price of anarchy (PoA).

**Congestion games** are a class of games in game theory first proposed by American economist Robert W. Rosenthal in 1973. In a congestion game the payoff of each player depends on the resources it chooses and the number of players choosing the same resource. Congestion games are a special case of potential games. Rosenthal proved that any congestion game is a potential game and Monderer and Shapley (1996) proved the converse: for any potential game, there is a congestion game with the same potential function.
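Because every congestion game admits a potential function, best-response dynamics cannot cycle: each unilateral improvement strictly decreases Rosenthal's potential, so the process reaches a pure Nash equilibrium. A minimal sketch with an assumed two-player, two-road example (the delays and names are illustrative):

```python
# Two players each pick one of two roads; the delay on a road depends only
# on how many players use it. Iterating best responses must terminate at a
# pure Nash equilibrium because each switch lowers Rosenthal's potential.
delay = {"A": [1, 5], "B": [2, 3]}  # delay[r][k-1] = cost when k players use r

def cost(player, choices):
    road = choices[player]
    users = sum(1 for c in choices if c == road)
    return delay[road][users - 1]

def best_response_dynamics(choices):
    improved = True
    while improved:
        improved = False
        for p in range(len(choices)):
            for road in delay:
                trial = list(choices)
                trial[p] = road
                if cost(p, trial) < cost(p, choices):
                    choices = trial          # strictly improving switch
                    improved = True
    return choices

# Starting with both players jammed on road A, the dynamics split them up.
print(best_response_dynamics(["A", "A"]))
```

From the congested start, one player switches to road B (cost 2 beats 5), after which neither can improve: the players end up on different roads.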

In algorithmic game theory, a **succinct game** or a **succinctly representable game** is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of *n* players, each facing *m* strategies, requires listing *n*·*m*^{*n*} utility values. Even trivial algorithms are capable of finding a Nash equilibrium in time polynomial in the length of such a large input. A succinct game is of *polynomial type* if in a game represented by a string of length *n* the number of players, as well as the number of strategies of each player, is bounded by a polynomial in *n*.

**Mertens stability** is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

The Price of Anarchy (**PoA**) is a concept in game theory and mechanism design that measures how the social welfare of a system degrades due to selfish behavior of its agents. It has been studied extensively in various contexts, particularly in **auctions**.

- ↑ Borel, É. (1921). "La théorie du jeu et les équations intégrales à noyau symétrique gauche"; English translation: "The Theory of Play and Integral Equations with Skew Symmetric Kernels" (1953).
- ↑ Owen, Guillermo. *Game Theory*. Academic Press (1968).
- ↑ Gross, O.; Wagner, R. (1950). "A Continuous Colonel Blotto Game".
- ↑ Myerson, R. (1993). "Incentives to cultivate favored minorities under alternative electoral systems". *American Political Science Review* **87**(4): 856–869.
- ↑ Laslier, J.-F.; Picard, N. (2002). "Distributive politics and electoral competition". *Journal of Economic Theory* **103**: 106–130. doi:10.1006/jeth.2000.2775.
- ↑ Szentes, B.; Rosenthal, R. (2003). "Three-object, Two-Bidder Simultaneous Auctions: Chopsticks and Tetrahedra". *Games and Economic Behavior* **44**: 114–133. doi:10.1016/s0899-8256(02)00530-4.
- ↑ Laslier, J.-F. (2005). "Party objectives in the 'divide a dollar' electoral competition". In *Social Choice and Strategic Decisions: Essays in Honor of Jeff Banks*, edited by D. Austen-Smith and J. Duggan. Springer, pp. 113–130.
- ↑ Roberson, B. "The Colonel Blotto game".
- ↑ Kvasov, D. (2007). "Contests with Limited Resources". *Journal of Economic Theory* **136**: 738–748. doi:10.1016/j.jet.2006.06.007.

- Colonel Blotto's Top Secret Files: Multi-Dimensional Iterative Reasoning in Action, by Ayala Arad and Ariel Rubinstein
- Jonathan Partington's Colonel Blotto page

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
