Linear production game

A linear production game (LP game) is an N-person game in which the value of a coalition is obtained by solving a linear programming problem. It is widely used in the context of resource allocation and payoff distribution. Mathematically, there are m types of resources, from which n products can be produced. Producing one unit of product j requires $a_{kj}$ units of the kth resource. The products can be sold at given market prices $c_1, \dots, c_n$, while the resources themselves cannot. Each player $i \in \{1, \dots, N\}$ is endowed with a vector of resources $b^i = (b^i_1, \dots, b^i_m)$. The value of a coalition S is the maximum profit it can achieve with the pooled resources of its members. It can be obtained by solving the corresponding linear programming problem

$$v(S) \;=\; \max\Bigl\{\, \sum_{j=1}^{n} c_j x_j \;\Bigm|\; \sum_{j=1}^{n} a_{kj} x_j \le \sum_{i \in S} b^i_k \ \text{ for } k = 1, \dots, m,\quad x_j \ge 0 \Bigr\}.$$
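As a concrete illustration, the coalition values can be computed with an off-the-shelf LP solver. The following is a minimal sketch using scipy.optimize.linprog; the resource matrix A, prices c, and endowments b below are illustrative assumptions, not data from the article.

```python
# Sketch: coalition values v(S) in a two-player linear production game.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0],      # a_kj: units of resource k per unit of product j
              [2.0, 1.0]])
c = np.array([3.0, 4.0])       # market prices of the two products
b = {1: np.array([4.0, 2.0]),  # endowment vector b^i of each player
     2: np.array([2.0, 4.0])}

def coalition_value(S):
    """v(S): maximize c.x subject to A x <= pooled endowment of S, x >= 0."""
    pooled = sum(b[i] for i in S)
    # linprog minimizes, so negate c to maximize profit.
    res = linprog(-c, A_ub=A, b_ub=pooled, bounds=[(0, None)] * len(c))
    return -res.fun

for S in [(1,), (2,), (1, 2)]:
    print(S, coalition_value(S))
```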

Core

Every LP game v is a totally balanced game, so every subgame of v has a non-empty core. One imputation in the core can be computed by solving the dual of the linear program for the grand coalition N. Let $y^* = (y^*_1, \dots, y^*_m)$ be an optimal solution of this dual problem. The payoff to player i is $u_i = \sum_{k=1}^{m} y^*_k b^i_k$. It can be proved by the duality theorems that the vector $u = (u_1, \dots, u_N)$ is in the core of v.
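Continuing the illustrative data above, a sketch of this dual-based imputation follows. It assumes a SciPy version whose HiGHS-based linprog exposes the dual values of the inequality constraints as res.ineqlin.marginals (their signs are flipped because linprog minimizes).

```python
# Sketch: the dual-based core imputation for the grand coalition.
grand = sum(b.values())
res = linprog(-c, A_ub=A, b_ub=grand, bounds=[(0, None)] * len(c))
y = -res.ineqlin.marginals            # optimal dual prices y*_k, one per resource

payoff = {i: float(y @ b[i]) for i in b}   # u_i = sum_k y*_k b^i_k
print(payoff)

# Core check: the payoffs are efficient (strong duality) and no
# coalition can improve (weak duality on each subproblem).
assert abs(sum(payoff.values()) - (-res.fun)) < 1e-6
for S in [(1,), (2,)]:
    assert sum(payoff[i] for i in S) >= coalition_value(S) - 1e-6
```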

An important interpretation of this imputation is that, under the current market, the value of each unit of resource k is exactly its shadow price $y^*_k$, even though the resources themselves cannot be sold. So the payoff player i should receive is the total value of the resources he possesses.

However, not all imputations in the core can be obtained from optimal dual solutions, and this gap has been discussed extensively. One of the most widely used approaches is to consider the r-fold replication of the original game. It can be shown that if an imputation u is in the core of the r-fold replicated game for all r, then u can be obtained from an optimal dual solution.
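One way to state the replication construction formally, in the notation assumed above (a sketch, not the article's own statement):

```latex
% The r-fold replicated game: each player i is replaced by r identical
% copies, all endowed with the same resource vector b^i.
\[
  v^{r}(S) \;=\; \max\Bigl\{\, c^{\top}x \;\Bigm|\;
      Ax \le \sum_{(i,t)\in S} b^{i},\; x \ge 0 \Bigr\},
  \qquad S \subseteq N \times \{1,\dots,r\}.
\]
% The cited result: if u lies in the core of v^r for every r >= 1, then
% u arises from an optimal dual solution of the grand-coalition LP.
```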

Related Research Articles

Linear programming Method to achieve the best outcome in a mathematical model

Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming.

In machine learning, support-vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vapnik and colleagues, SVMs are among the most robust prediction methods, being based on the statistical learning framework or VC theory proposed by Vapnik and Chervonenkis (1974) and later developed by Vapnik. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on the side of the gap on which they fall.
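A tiny sketch with scikit-learn, on made-up 2-D data, shows the gap-based prediction described above:

```python
# Sketch: fit a linear maximum-margin classifier, then classify new points.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [1, 0], [3, 3], [4, 4], [4, 3]]  # toy training data
y = [-1, -1, -1, 1, 1, 1]                              # two categories

clf = SVC(kernel="linear")
clf.fit(X, y)
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))  # one point from each side of the gap
```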

Least squares Method in statistics

The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems (more equations than unknowns) by minimizing the sum of the squares of the residuals of the individual equations.
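A minimal least-squares sketch with NumPy, fitting a line to made-up data:

```python
# Sketch: fit y ~ a*x + b by minimizing the sum of squared residuals.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

# Design matrix of the overdetermined system [x, 1] @ [a, b] ~ y.
X = np.column_stack([x, np.ones_like(x)])
coef, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print("slope, intercept:", coef)
```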

In mathematics, a recurrence relation is an equation that recursively defines a sequence or multidimensional array of values, once one or more initial terms are given; each further term of the sequence or array is defined as a function of the preceding terms.
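An illustrative recurrence in code: the Fibonacci sequence, defined by F(n) = F(n-1) + F(n-2) with initial terms F(0) = 0 and F(1) = 1.

```python
# Sketch: each further term is a function of the two preceding terms.
def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```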

In multilinear algebra, a tensor contraction is an operation on a tensor that arises from the natural pairing of a finite-dimensional vector space and its dual. In components, it is expressed as a sum of products of scalar components of the tensor(s) caused by applying the summation convention to a pair of dummy indices that are bound to each other in an expression. The contraction of a single mixed tensor occurs when a pair of literal indices of the tensor are set equal to each other and summed over. In the Einstein notation this summation is built into the notation. The result is another tensor with order reduced by 2.
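A small NumPy sketch of contraction, using einsum to sum over a pair of dummy indices; the tensors are illustrative:

```python
# Sketch: setting a pair of indices equal and summing lowers the order by 2.
import numpy as np

A = np.arange(9.0).reshape(3, 3)       # an order-2 mixed tensor T^i_j
trace = np.einsum('ii->', A)           # contract i with j: order 2 -> order 0
print(trace, np.trace(A))              # the same scalar

T = np.arange(27.0).reshape(3, 3, 3)   # an order-3 tensor
v = np.einsum('iik->k', T)             # contract the first index pair: order 3 -> 1
print(v.shape)                         # (3,)
```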

In linear algebra, a linear form is a linear map from a vector space to its field of scalars. In $\mathbb{R}^n$, if vectors are represented as column vectors, then linear functionals are represented as row vectors, and their action on vectors is given by the matrix product with the row vector on the left and the column vector on the right. In general, if V is a vector space over a field k, then a linear functional f is a function from V to k that is linear: $f(v + w) = f(v) + f(w)$ and $f(av) = a f(v)$ for all $v, w \in V$ and all scalars $a$.
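The row-vector representation in a sketch, with made-up numbers:

```python
# Sketch: a linear functional on R^3 as a row vector acting on a column vector.
import numpy as np

f = np.array([[2.0, -1.0, 3.0]])     # row vector representing the functional f
v = np.array([[1.0], [4.0], [0.0]])  # column vector in R^3
print(f @ v)                         # f(v) = 2*1 - 1*4 + 3*0 = -2
```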

In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.

In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real world problems almost invariably include some unknown parameters. When the parameters are known only within certain bounds, one approach to tackling such problems is called robust optimization. Here the goal is to find a solution which is feasible for all such data and optimal in some sense. Stochastic programming models are similar in style but take advantage of the fact that probability distributions governing the data are known or can be estimated. The goal here is to find some policy that is feasible for all the possible data instances and maximizes the expectation of some function of the decisions and the random variables. More generally, such models are formulated, solved analytically or numerically, and analyzed in order to provide useful information to a decision-maker.

In game theory, a cooperative game is a game with competition between groups of players ("coalitions") due to the possibility of external enforcement of cooperative behavior. Those are opposed to non-cooperative games in which there is either no possibility to forge alliances or all agreements need to be self-enforcing.

Gauss–Newton algorithm

The Gauss–Newton algorithm is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.
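A bare-bones sketch of the idea, under illustrative data: linearize the residuals r(p) via their Jacobian J and solve the normal equations $J^\top J\,\Delta p = -J^\top r$, so no second derivatives are needed.

```python
# Sketch: Gauss-Newton iteration for non-linear least squares.
import numpy as np

def gauss_newton(residual, jacobian, p, iters=20):
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        dp = np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton step
        p = p + dp
    return p

# Example: fit y = a * exp(b * x) to made-up data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 2.7, 3.6, 4.9])
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
print(gauss_newton(res, jac, np.array([1.0, 0.5])))
```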

In mathematics, a hyperbolic partial differential equation of order n is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first n − 1 derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is

$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}.$$

Mehrotra's predictor–corrector method in optimization is a specific interior point method for linear programming. It was proposed in 1989 by Sanjay Mehrotra.

Mathematics of general relativity Mathematical structures and techniques used in the theory of general relativity.

The mathematics of general relativity refers to various mathematical structures and techniques that are used in studying and formulating Albert Einstein's theory of general relativity. The main tools used in this geometrical theory of gravitation are tensor fields defined on a Lorentzian manifold representing spacetime. This article is a general description of the mathematics of general relativity.

In game theory, the core is the set of feasible allocations that cannot be improved upon by a subset of the economy's agents. A coalition is said to improve upon or block a feasible allocation if the members of that coalition are better off under another feasible allocation that is identical to the first except that every member of the coalition has a different consumption bundle that is part of an aggregate consumption bundle that can be constructed from publicly available technology and the initial endowments of each consumer in the coalition.

In numerical analysis, the Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations. In other words, the method can be used to solve numerically the equation $f(x) = 0$, where f is a given polynomial.
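A compact sketch of the iteration on an illustrative monic polynomial, simultaneously refining all root estimates:

```python
# Sketch: Durand-Kerner simultaneous root-finding for a monic polynomial.
import numpy as np

def durand_kerner(coeffs, iters=100):
    """coeffs: monic polynomial coefficients, highest degree first."""
    f = lambda x: np.polyval(coeffs, x)
    n = len(coeffs) - 1
    # Standard starting guesses: powers of a complex number that is
    # neither real nor a root of unity.
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(iters):
        for i in range(n):
            others = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z[i] -= f(z[i]) / others
    return z

print(np.sort_complex(durand_kerner([1.0, -6.0, 11.0, -6.0])))  # roots 1, 2, 3
```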

Linear Programming Boosting (LPBoost) is a supervised classifier from the boosting family of classifiers. LPBoost maximizes a margin between training samples of different classes and hence also belongs to the class of margin-maximizing supervised classification algorithms. Consider a classification function $f : \mathcal{X} \to \{-1, 1\}$, which classifies samples from a space $\mathcal{X}$ into one of two classes.

Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). It was invented by John Platt in 1998 at Microsoft Research. SMO is widely used for training support vector machines and is implemented by the popular LIBSVM tool. The publication of the SMO algorithm in 1998 has generated a lot of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers.

In machine learning, a Ranking SVM is a variant of the support vector machine algorithm, which is used to solve certain ranking problems. The ranking SVM algorithm was published by Thorsten Joachims in 2002. The original purpose of the algorithm was to improve the performance of an internet search engine. However, it was found that Ranking SVM also can be used to solve other problems such as Rank SIFT.

In the mathematical theory of probability, the drift-plus-penalty method is used for optimization of queueing networks and other stochastic systems.

Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Usually, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. The power of quantum computing may allow solving problems which are not practically feasible on classical computers, or suggest a considerable speed up with respect to the best known classical algorithm. Among other quantum algorithms, there are quantum optimization algorithms which might suggest improvement in solving optimization problems.
