In mathematics and economics, transportation theory or transport theory is a name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781. [1]
In the 1920s A.N. Tolstoi was one of the first to study the transportation problem mathematically. In 1930, in the collection Transportation Planning Volume I for the National Commissariat of Transportation of the Soviet Union, he published a paper "Methods of Finding the Minimal Kilometrage in Cargo-transportation in space". [2] [3]
Major advances were made in the field during World War II by the Soviet mathematician and economist Leonid Kantorovich. [4] Consequently, the problem as it is stated is sometimes known as the Monge–Kantorovich transportation problem. [5] The linear programming formulation of the transportation problem is also known as the Hitchcock–Koopmans transportation problem. [6]
Suppose that we have a collection of mines mining iron ore, and a collection of factories which use the iron ore that the mines produce. Suppose for the sake of argument that these mines and factories form two disjoint subsets $M$ and $F$ of the Euclidean plane $\mathbb{R}^2$. Suppose also that we have a cost function $c : \mathbb{R}^2 \times \mathbb{R}^2 \to [0, \infty)$, so that $c(x, y)$ is the cost of transporting one shipment of iron from $x$ to $y$. For simplicity, we ignore the time taken to do the transporting. We also assume that each mine can supply only one factory (no splitting of shipments) and that each factory requires precisely one shipment to be in operation (factories cannot work at half- or double-capacity). Having made the above assumptions, a transport plan is a bijection $T : M \to F$. In other words, each mine $m \in M$ supplies precisely one target factory $T(m) \in F$ and each factory is supplied by precisely one mine. We wish to find the optimal transport plan, the plan whose total cost

$$c(T) := \sum_{m \in M} c(m, T(m))$$

is the least of all possible transport plans from $M$ to $F$. This motivating special case of the transportation problem is an instance of the assignment problem. More specifically, it is equivalent to finding a minimum-weight matching in a bipartite graph.
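The assignment-problem view can be made concrete as a minimum-weight bipartite matching. Below is a minimal sketch using SciPy's Hungarian-algorithm solver; the mine and factory coordinates are made-up illustrative values, not from the article.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical coordinates for three mines and three factories in the plane.
mines = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0]])
factories = np.array([[4.0, 0.0], [4.0, 1.0], [4.0, 2.0]])

# Cost of one shipment: Euclidean distance from each mine to each factory.
cost = np.linalg.norm(mines[:, None, :] - factories[None, :, :], axis=-1)

# Minimum-weight bipartite matching (Hungarian algorithm) gives the optimal bijection.
row, col = linear_sum_assignment(cost)
total_cost = cost[row, col].sum()
```

Here each mine ships straight across to the factory directly opposite it, for a total cost of 12.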
The following simple example illustrates the importance of the cost function in determining the optimal transport plan. Suppose that we have $n$ books of equal width on a shelf (the real line), arranged in a single contiguous block. We wish to rearrange them into another contiguous block, but shifted one book-width to the right. Two obvious candidates for the optimal transport plan present themselves:
1. move all $n$ books one book-width to the right ("many small moves");
2. move the left-most book $n$ book-widths to the right and leave all other books fixed ("one big move").
If the cost function is proportional to Euclidean distance ($c(x, y) = \alpha |x - y|$ for some $\alpha > 0$) then these two candidates are both optimal. If, on the other hand, we choose the strictly convex cost function proportional to the square of the Euclidean distance ($c(x, y) = \alpha |x - y|^2$ for some $\alpha > 0$), then the "many small moves" option becomes the unique minimizer.
Note that the above cost functions consider only the horizontal distance traveled by the books, not the horizontal distance traveled by a device used to pick each book up and move the book into position. If the latter is considered instead, then, of the two transport plans, the second is always optimal for the Euclidean distance, while, provided there are at least 3 books, the first transport plan is optimal for the squared Euclidean distance.
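The effect of the cost function on the two candidate plans can be checked with a tiny computation (the book count is an arbitrary choice, and book width is taken as the unit):

```python
# n books of unit width; each entry below is the distance one book moves.
n = 5
plan_many_small = [1.0] * n                  # shift every book one width right
plan_one_big = [float(n)] + [0.0] * (n - 1)  # move only the left-most book

linear_cost = lambda moves: sum(abs(d) for d in moves)     # c ~ |x - y|
quadratic_cost = lambda moves: sum(d ** 2 for d in moves)  # c ~ |x - y|^2
```

Under the linear cost both plans total $n$ book-widths and tie; under the strictly convex quadratic cost the "many small moves" plan is strictly cheaper.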
The following transportation problem formulation is credited to F. L. Hitchcock: [7] suppose there are $m$ sources $x_1, \ldots, x_m$ for a commodity, with $a(x_i)$ units of supply at $x_i$, and $n$ sinks $y_1, \ldots, y_n$ for the commodity, with demand $b(y_j)$ at $y_j$. If $a(x_i, y_j)$ is the unit cost of shipment from $x_i$ to $y_j$, find a flow that satisfies demand from supplies and minimizes the flow cost.
Tjalling Koopmans is also credited with formulations of transport economics and allocation of resources.
The transportation problem as it is stated in modern or more technical literature looks somewhat different because of the development of Riemannian geometry and measure theory. The mines-factories example, simple as it is, is a useful reference point when thinking of the abstract case. In this setting, we allow the possibility that we may not wish to keep all mines and factories open for business, and allow mines to supply more than one factory, and factories to accept iron from more than one mine.
Let $X$ and $Y$ be two separable metric spaces such that any probability measure on $X$ (or $Y$) is a Radon measure (i.e. they are Radon spaces). Let $c : X \times Y \to [0, \infty]$ be a Borel-measurable function. Given probability measures $\mu$ on $X$ and $\nu$ on $Y$, Monge's formulation of the optimal transportation problem is to find a transport map $T : X \to Y$ that realizes the infimum

$$\inf \left\{ \int_X c(x, T(x)) \, \mathrm{d}\mu(x) \;\middle|\; T_*(\mu) = \nu \right\},$$

where $T_*(\mu)$ denotes the push forward of $\mu$ by $T$. A map $T$ that attains this infimum (i.e. makes it a minimum instead of an infimum) is called an "optimal transport map".
Monge's formulation of the optimal transportation problem can be ill-posed, because sometimes there is no $T$ satisfying $T_*(\mu) = \nu$: this happens, for example, when $\mu$ is a Dirac measure but $\nu$ is not.
We can improve on this by adopting Kantorovich's formulation of the optimal transportation problem, which is to find a probability measure $\gamma$ on $X \times Y$ that attains the infimum

$$\inf \left\{ \int_{X \times Y} c(x, y) \, \mathrm{d}\gamma(x, y) \;\middle|\; \gamma \in \Gamma(\mu, \nu) \right\},$$

where $\Gamma(\mu, \nu)$ denotes the collection of all probability measures on $X \times Y$ with marginals $\mu$ on $X$ and $\nu$ on $Y$. It can be shown [10] that a minimizer for this problem always exists when the cost function $c$ is lower semi-continuous and $\Gamma(\mu, \nu)$ is a tight collection of measures (which is guaranteed for Radon spaces $X$ and $Y$). (Compare this formulation with the definition of the Wasserstein metric on the space of probability measures.) A gradient descent formulation for the solution of the Monge–Kantorovich problem was given by Sigurd Angenent, Steven Haker, and Allen Tannenbaum. [11]
The minimum of the Kantorovich problem is equal to

$$\sup \left( \int_X \varphi(x) \, \mathrm{d}\mu(x) + \int_Y \psi(y) \, \mathrm{d}\nu(y) \right),$$

where the supremum runs over all pairs of bounded and continuous functions $\varphi : X \to \mathbb{R}$ and $\psi : Y \to \mathbb{R}$ such that

$$\varphi(x) + \psi(y) \le c(x, y).$$
The economic interpretation is clearer if signs are flipped. Let $x \in X$ stand for the vector of characteristics of a worker, $y \in Y$ for the vector of characteristics of a firm, and $\Phi(x, y) = -c(x, y)$ for the economic output generated by worker $x$ matched with firm $y$. Setting $u(x) = -\varphi(x)$ and $v(y) = -\psi(y)$, the Monge–Kantorovich problem rewrites:

$$\sup \left\{ \int_{X \times Y} \Phi(x, y) \, \mathrm{d}\gamma(x, y) \;\middle|\; \gamma \in \Gamma(\mu, \nu) \right\},$$

which has dual:

$$\inf \left\{ \int_X u(x) \, \mathrm{d}\mu(x) + \int_Y v(y) \, \mathrm{d}\nu(y) \right\},$$

where the infimum runs over bounded and continuous functions $u : X \to \mathbb{R}$ and $v : Y \to \mathbb{R}$ such that $u(x) + v(y) \ge \Phi(x, y)$. If the dual problem has a solution, one can see that:

$$v(y) = \sup_x \left( \Phi(x, y) - u(x) \right),$$

so that $u(x)$ interprets as the equilibrium wage of a worker of type $x$, and $v(y)$ interprets as the equilibrium profit of a firm of type $y$. [12]
For $1 \le p < \infty$, let $\mathcal{P}_p(\mathbb{R})$ denote the collection of probability measures on $\mathbb{R}$ that have finite $p$-th moment. Let $\mu, \nu \in \mathcal{P}_p(\mathbb{R})$ and let $c(x, y) = h(x - y)$, where $h : \mathbb{R} \to [0, \infty)$ is a convex function. If $\mu$ has no atom, i.e. if the cumulative distribution function $F_\mu$ of $\mu$ is continuous, then an optimal transport map is given by the monotone rearrangement $T(x) = F_\nu^{-1}(F_\mu(x))$, where $F_\nu^{-1}$ denotes the quantile function of $\nu$. The proof of this solution appears in Rachev & Rüschendorf (1998). [13]
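The one-dimensional solution has a simple empirical version: with equal-size samples, the monotone rearrangement $T = F_\nu^{-1} \circ F_\mu$ amounts to matching the $k$-th smallest source sample to the $k$-th smallest target sample. A minimal sketch (the two sample distributions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)  # samples from mu
y = rng.normal(3.0, 2.0, size=1000)  # samples from nu

# Empirical monotone rearrangement: pair sorted samples with sorted samples.
x_sorted, y_sorted = np.sort(x), np.sort(y)
monotone_cost = np.mean((x_sorted - y_sorted) ** 2)

# Any other pairing is at least as expensive for a convex cost; compare
# against a random permutation of the same targets.
shuffled_cost = np.mean((x_sorted - rng.permutation(y_sorted)) ** 2)
```

For the convex quadratic cost, the monotone pairing is cheaper than the shuffled one.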
In the case where the marginals $\mu$ and $\nu$ are discrete, let $\mu_i$ and $\nu_j$ be the probability masses respectively assigned to $x_i \in X$ and $y_j \in Y$, and let $\gamma_{ij}$ be the probability of an assignment $(x_i, y_j)$. The objective function in the primal Kantorovich problem is then

$$\sum_{i=1}^{I} \sum_{j=1}^{J} \gamma_{ij} c_{ij},$$

where $c_{ij} = c(x_i, y_j)$, and the constraint $\gamma \in \Gamma(\mu, \nu)$ expresses as

$$\sum_{j=1}^{J} \gamma_{ij} = \mu_i, \quad \forall i = 1, \ldots, I$$

and

$$\sum_{i=1}^{I} \gamma_{ij} = \nu_j, \quad \forall j = 1, \ldots, J,$$

together with $\gamma_{ij} \ge 0$.
In order to input this in a linear programming problem, we need to vectorize the matrix $\gamma$ by stacking either its columns or its rows; we call this operation $\operatorname{vec}$. In the column-major order, the constraints above rewrite as

$$\left( 1_{1 \times J} \otimes I_I \right) \operatorname{vec}(\gamma) = \mu \quad \text{and} \quad \left( I_J \otimes 1_{1 \times I} \right) \operatorname{vec}(\gamma) = \nu,$$

where $\otimes$ is the Kronecker product, $1_{n \times m}$ is a matrix of size $n \times m$ with all entries of ones, and $I_n$ is the identity matrix of size $n$. As a result, setting $z = \operatorname{vec}(\gamma)$, the linear programming formulation of the problem is

$$\min_{z \ge 0} \; \operatorname{vec}(c)^\top z \quad \text{subject to} \quad \begin{pmatrix} 1_{1 \times J} \otimes I_I \\ I_J \otimes 1_{1 \times I} \end{pmatrix} z = \begin{pmatrix} \mu \\ \nu \end{pmatrix},$$

which can be readily input into a large-scale linear programming solver (see chapter 3.4 of Galichon (2016) [12]).
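The Kronecker-product linear program above can be sketched with SciPy's general-purpose LP solver; the cost matrix and the uniform masses below are illustrative choices, not from the article:

```python
import numpy as np
from scipy.optimize import linprog

I, J = 3, 4
rng = np.random.default_rng(1)
c = rng.random((I, J))        # cost matrix c_ij (made up for illustration)
mu = np.full(I, 1.0 / I)      # uniform source masses
nu = np.full(J, 1.0 / J)      # uniform target masses

# Column-major vectorization z = vec(gamma); the marginal constraints become
# (1_{1xJ} kron I_I) z = mu  and  (I_J kron 1_{1xI}) z = nu.
A_eq = np.vstack([
    np.kron(np.ones((1, J)), np.eye(I)),
    np.kron(np.eye(J), np.ones((1, I))),
])
b_eq = np.concatenate([mu, nu])

res = linprog(c.flatten(order="F"), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
gamma = res.x.reshape((I, J), order="F")  # optimal transport plan
```

Note that `order="F"` matches the column-major $\operatorname{vec}$ convention used in the constraints.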
In the semi-discrete case, $X = Y = \mathbb{R}^d$ and $\mu$ is a continuous distribution over $\mathbb{R}^d$, while $\nu = \sum_{j=1}^{J} \nu_j \delta_{y_j}$ is a discrete distribution which assigns probability mass $\nu_j$ to site $y_j \in \mathbb{R}^d$. In this case, we can see [14] that the primal and dual Kantorovich problems respectively boil down to:

$$\inf \left\{ \sum_{j=1}^{J} \int_{\mathbb{R}^d} c(x, y_j) \, \mathrm{d}\gamma_j(x) \;\middle|\; \sum_{j=1}^{J} \gamma_j = \mu, \; \int_{\mathbb{R}^d} \mathrm{d}\gamma_j(x) = \nu_j \right\}$$

for the primal, where the infimum runs over nonnegative measures $\gamma_1, \ldots, \gamma_J$ on $\mathbb{R}^d$; the constraint $\sum_j \gamma_j = \mu$ means that all of $\mu$ is shipped out, and $\int \mathrm{d}\gamma_j = \nu_j$ means that site $y_j$ receives exactly its demand, and:

$$\sup \left\{ \int_{\mathbb{R}^d} \varphi(x) \, \mathrm{d}\mu(x) + \sum_{j=1}^{J} \psi_j \nu_j \;\middle|\; \varphi(x) + \psi_j \le c(x, y_j) \right\}$$

for the dual, which can be rewritten as:

$$\max_{\psi \in \mathbb{R}^J} \left\{ \int_{\mathbb{R}^d} \inf_j \left( c(x, y_j) - \psi_j \right) \mathrm{d}\mu(x) + \sum_{j=1}^{J} \psi_j \nu_j \right\},$$
which is a finite-dimensional convex optimization problem that can be solved by standard techniques, such as gradient descent.
In the case when $c(x, y) = \tfrac{1}{2} |x - y|^2$, one can show that the set of $x$ assigned to a particular site $y_j$ is a convex polyhedron. The resulting configuration is called a power diagram. [15]
Assume the particular case $\mu = \mathcal{N}(0, \Sigma_X)$, $\nu = \mathcal{N}(0, \Sigma_Y)$, and $c(x, y) = \tfrac{1}{2} |x - y|^2$, where $\Sigma_X$ is invertible. One then has

$$T(x) = \Sigma_X^{-1/2} \left( \Sigma_X^{1/2} \Sigma_Y \Sigma_X^{1/2} \right)^{1/2} \Sigma_X^{-1/2} \, x.$$
The proof of this solution appears in Galichon (2016). [12]
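For the quadratic cost between centered Gaussians, the linear optimal map $T(x) = \Sigma_X^{-1/2} (\Sigma_X^{1/2} \Sigma_Y \Sigma_X^{1/2})^{1/2} \Sigma_X^{-1/2} x$ can be checked numerically: $T$ pushes $\mathcal{N}(0, \Sigma_X)$ forward to $\mathcal{N}(0, \Sigma_Y)$ exactly when $T \Sigma_X T^\top = \Sigma_Y$. A sketch with arbitrary example covariances:

```python
import numpy as np
from scipy.linalg import sqrtm

# Arbitrary symmetric positive-definite example covariances.
sigma_x = np.array([[2.0, 0.5], [0.5, 1.0]])
sigma_y = np.array([[1.0, -0.3], [-0.3, 2.0]])

# T = Sigma_X^{-1/2} (Sigma_X^{1/2} Sigma_Y Sigma_X^{1/2})^{1/2} Sigma_X^{-1/2}
rx = np.real(sqrtm(sigma_x))          # Sigma_X^{1/2}
rx_inv = np.linalg.inv(rx)            # Sigma_X^{-1/2}
T = rx_inv @ np.real(sqrtm(rx @ sigma_y @ rx)) @ rx_inv

# Push-forward check: the image of N(0, Sigma_X) under T has covariance
# T Sigma_X T^T, which should equal Sigma_Y.
pushed = T @ sigma_x @ T.T
```

The map is also symmetric positive definite, as expected for the gradient of a convex potential.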
Let $H$ be a separable Hilbert space. Let $\mathcal{P}_p(H)$ denote the collection of probability measures on $H$ that have finite $p$-th moment; let $\mathcal{P}_p^r(H)$ denote those elements $\mu \in \mathcal{P}_p(H)$ that are Gaussian regular: if $g$ is any strictly positive Gaussian measure on $H$ and $g(N) = 0$, then $\mu(N) = 0$ also.
Let $\mu \in \mathcal{P}_p^r(H)$, $\nu \in \mathcal{P}_p(H)$, and $c(x, y) = |x - y|^p / p$ for $p \in (1, \infty)$, $p^{-1} + q^{-1} = 1$. Then the Kantorovich problem has a unique solution $\kappa$, and this solution is induced by an optimal transport map: i.e., there exists a Borel map $r \in L^p(H, \mu; H)$ such that

$$\kappa = (\mathrm{id}_H \times r)_*(\mu).$$

Moreover, if $\nu$ has bounded support, then

$$r(x) = x - |\nabla\varphi(x)|^{q-2} \, \nabla\varphi(x)$$

for $\mu$-almost all $x \in H$ for some locally Lipschitz, $c$-concave and maximal Kantorovich potential $\varphi$. (Here $\nabla\varphi$ denotes the Gateaux derivative of $\varphi$.)
Consider a variant of the discrete problem above, where we have added an entropic regularization term to the objective function of the primal problem:

$$\min_{\gamma \in \Gamma(\mu, \nu)} \; \sum_{i,j} \left( \gamma_{ij} c_{ij} + \varepsilon \gamma_{ij} \ln \gamma_{ij} \right).$$

One can show that the dual regularized problem is

$$\max_{\varphi, \psi} \; \sum_i \varphi_i \mu_i + \sum_j \psi_j \nu_j - \varepsilon \sum_{i,j} \exp \left( \frac{\varphi_i + \psi_j - c_{ij}}{\varepsilon} \right),$$

where, compared with the unregularized version, the "hard" constraint in the former dual ($\varphi_i + \psi_j \le c_{ij}$) has been replaced by a "soft" penalization of that constraint (the sum of the terms $\varepsilon \exp((\varphi_i + \psi_j - c_{ij}) / \varepsilon)$). The optimality conditions in the dual problem can be expressed as

$$\mu_i = \sum_j \exp \left( \frac{\varphi_i + \psi_j - c_{ij}}{\varepsilon} \right) \quad \forall i \tag{5.1}$$

$$\nu_j = \sum_i \exp \left( \frac{\varphi_i + \psi_j - c_{ij}}{\varepsilon} \right) \quad \forall j \tag{5.2}$$

Denoting by $A$ the matrix of term $A_{ij} = \exp(-c_{ij} / \varepsilon)$, solving the dual is therefore equivalent to looking for two diagonal positive matrices $D_1$ and $D_2$, of respective sizes $I$ and $J$, such that $D_1 A D_2 1_J = \mu$ and $(D_1 A D_2)^\top 1_I = \nu$. The existence of such matrices generalizes Sinkhorn's theorem, and the matrices can be computed using the Sinkhorn–Knopp algorithm, [16] which simply consists of iteratively looking for $\varphi_i$ to solve Equation 5.1, and $\psi_j$ to solve Equation 5.2. Sinkhorn–Knopp's algorithm is therefore a coordinate descent algorithm on the dual regularized problem.
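The Sinkhorn–Knopp iteration described above amounts to alternately rescaling the rows and columns of the kernel matrix; a minimal sketch (the cost matrix, masses, and regularization strength are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
I, J, eps = 3, 4, 0.1
c = rng.random((I, J))          # cost matrix c_ij (made up for illustration)
mu = np.full(I, 1.0 / I)        # source masses
nu = np.full(J, 1.0 / J)        # target masses

A = np.exp(-c / eps)            # kernel A_ij = exp(-c_ij / eps)
u = np.ones(I)                  # diagonal of D1
v = np.ones(J)                  # diagonal of D2
for _ in range(2000):
    u = mu / (A @ v)            # row rescaling: enforces the row marginals
    v = nu / (A.T @ u)          # column rescaling: enforces the column marginals

gamma = u[:, None] * A * v[None, :]   # regularized plan D1 A D2
```

Each half-step solves one of the two optimality conditions exactly, which is why the scheme is a coordinate ascent on the dual.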
The Monge–Kantorovich optimal transport problem has found applications in a wide range of fields, among them economics (matching and equilibrium models), fluid dynamics, meteorology, image processing, and machine learning.