Mixed complementarity problem

The mixed complementarity problem (MCP) is a problem formulation in mathematical programming. Many well-known problem types are special cases of, or may be reduced to, MCP. It is a generalization of the nonlinear complementarity problem (NCP).

In applied mathematics, a nonlinear complementarity problem (NCP) with respect to a mapping ƒ : R^n → R^n, denoted by NCPƒ, is to find a vector x ∈ R^n such that x ≥ 0, ƒ(x) ≥ 0, and xᵀƒ(x) = 0.
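As a worked one-dimensional illustration (an example added here, not part of the original article), take ƒ(x) = x − 1. The NCP conditions become

\[
x \ge 0, \qquad f(x) = x - 1 \ge 0, \qquad x\,f(x) = 0 .
\]

Complementarity forces x = 0 or x = 1; since x = 0 violates f(x) ≥ 0, the unique solution is x = 1, where the bound x ≥ 0 is inactive and f vanishes.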

Definition

The mixed complementarity problem is defined by a mapping F : R^n → R^n, lower values ℓ_1, …, ℓ_n with ℓ_i ∈ R ∪ {−∞}, and upper values u_1, …, u_n with u_i ∈ R ∪ {+∞}.

The solution of the MCP is a vector x ∈ R^n such that for each index i ∈ {1, …, n} one of the following alternatives holds:

x_i = ℓ_i and F_i(x) ≥ 0;
ℓ_i < x_i < u_i and F_i(x) = 0;
x_i = u_i and F_i(x) ≤ 0.
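These alternatives translate directly into a componentwise test. The following is a minimal sketch assuming NumPy; the function name is_mcp_solution and the tolerance are illustrative choices, not part of the original article.

import numpy as np

def is_mcp_solution(F, x, lower, upper, tol=1e-9):
    """Check the MCP alternatives componentwise for a candidate solution x.

    F     : callable mapping R^n to R^n
    x     : candidate solution, shape (n,)
    lower : lower values l_i (entries may be -np.inf)
    upper : upper values u_i (entries may be +np.inf)
    """
    Fx = F(x)
    for xi, li, ui, fi in zip(x, lower, upper, Fx):
        at_lower = abs(xi - li) <= tol and fi >= -tol   # x_i = l_i and F_i(x) >= 0
        interior = li < xi < ui and abs(fi) <= tol      # l_i < x_i < u_i and F_i(x) = 0
        at_upper = abs(xi - ui) <= tol and fi <= tol    # x_i = u_i and F_i(x) <= 0
        if not (at_lower or interior or at_upper):
            return False
    return True

# Example: F(x) = x - 1 on the box [0, 2] is solved by x = 1 (interior case).
print(is_mcp_solution(lambda x: x - 1.0, np.array([1.0]),
                      np.array([0.0]), np.array([2.0])))  # True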

Another definition for MCP is: it is a variational inequality on the parallelepiped [ℓ, u] = [ℓ_1, u_1] × ⋯ × [ℓ_n, u_n].
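Spelled out (a standard formulation, supplied here for concreteness), the variational-inequality reading asks for x ∈ [ℓ, u] such that

\[
\langle F(x),\, y - x \rangle \;\ge\; 0 \qquad \text{for all } y \in [\ell, u].
\]

Since the box is a product of intervals, the condition decouples componentwise and recovers exactly the three alternatives above: F_i(x) must be nonnegative when x_i sits at its lower bound, nonpositive when it sits at its upper bound, and zero in between.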

In mathematics, a variational inequality is an inequality involving a functional, which has to be solved for all possible values of a given variable, usually belonging to a convex set. The mathematical theory of variational inequalities was initially developed to deal with equilibrium problems, namely the Signorini problem: in that model problem, the functional involved was obtained as the first variation of the involved potential energy; it therefore has a variational origin, recalled by the name of the general abstract problem. The applicability of the theory has since been expanded to include problems from economics, finance, optimization and game theory.

In geometry, a parallelepiped is a three-dimensional figure formed by six parallelograms. By analogy, it relates to a parallelogram just as a cube relates to a square or as a cuboid to a rectangle. In Euclidean geometry, the definition encompasses all four concepts; in the more general setting of affine geometry, where angles are not differentiated, it admits only parallelograms and parallelepipeds. Three equivalent definitions of a parallelepiped are: a polyhedron with six faces, each of which is a parallelogram; a hexahedron with three pairs of parallel faces; and a prism whose base is a parallelogram.

Related Research Articles

In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the same experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up is 3.5 as the number of rolls approaches infinity. In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.
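For a discrete random variable, the expectation is the probability-weighted sum of outcomes; the die example works out as follows (standard formula, added here for illustration):

\[
\mathbb{E}[X] \;=\; \sum_i x_i\, p_i, \qquad
\mathbb{E}[X_{\mathrm{die}}] \;=\; \frac{1+2+3+4+5+6}{6} \;=\; 3.5 .
\]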

The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series.
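The formula itself, in its standard form (the sample period T and samples x[n] are notation assumed here, not taken from the original text):

\[
x(t) \;=\; \sum_{n=-\infty}^{\infty} x[n]\,
\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) \;=\; \frac{\sin(\pi u)}{\pi u} .
\]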

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.
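In its basic form (standard statement, included for reference): for exponents p, q ≥ 1 with 1/p + 1/q = 1,

\[
\int |fg|\, d\mu \;\le\; \left( \int |f|^p\, d\mu \right)^{\!1/p} \left( \int |g|^q\, d\mu \right)^{\!1/q},
\qquad\text{i.e.}\qquad \|fg\|_1 \le \|f\|_p \|g\|_q .
\]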

In mathematical analysis, a function of bounded variation, also known as BV function, is a real-valued function whose total variation is bounded (finite): the graph of a function having this property is well behaved in a precise sense. For a continuous function of a single variable, being of bounded variation means that the distance along the direction of the y-axis, neglecting the contribution of motion along x-axis, traveled by a point moving along the graph has a finite value. For a continuous function of several variables, the meaning of the definition is the same, except for the fact that the continuous path to be considered cannot be the whole graph of the given function, but can be every intersection of the graph itself with a hyperplane parallel to a fixed x-axis and to the y-axis.
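For a function f on an interval [a, b], the total variation mentioned above is (standard definition, supplied for concreteness)

\[
V_a^b(f) \;=\; \sup_{a = t_0 < t_1 < \cdots < t_k = b} \; \sum_{j=1}^{k} \bigl| f(t_j) - f(t_{j-1}) \bigr| ,
\]

and f is of bounded variation exactly when this supremum is finite.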

In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
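For integer order k and 1 ≤ p < ∞, one standard convention for the norm (stated here as an illustration) is

\[
\| f \|_{W^{k,p}} \;=\; \Bigl( \sum_{|\alpha| \le k} \| D^{\alpha} f \|_{L^p}^{p} \Bigr)^{\!1/p},
\]

where the sum runs over multi-indices α and the D^α f are weak derivatives.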

In functional analysis, a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which point evaluation is a continuous linear functional. Roughly speaking, this means that if two functions f and g in the RKHS are close in norm, i.e., ‖f − g‖ is small, then f and g are also pointwise close, i.e., |f(x) − g(x)| is small for all x. The reverse need not be true.
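Equivalently, by the Riesz representation theorem, continuity of point evaluation means each point x has a representer k_x in the space (a standard fact, recorded here for illustration):

\[
f(x) \;=\; \langle f, k_x \rangle \qquad \text{for every } f \text{ in the RKHS},
\]

and K(x, y) = ⟨k_y, k_x⟩ is the reproducing kernel.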

In mathematics, specifically algebraic topology, the cohomology ring of a topological space X is a ring formed from the cohomology groups of X together with the cup product serving as the ring multiplication. Here 'cohomology' is usually understood as singular cohomology, but the ring structure is also present in other theories such as de Rham cohomology. It is also functorial: for a continuous mapping of spaces one obtains a ring homomorphism on cohomology rings, which is contravariant.
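Concretely, the cup product turns classes of degrees i and j into a class of degree i + j (standard notation, added for illustration):

\[
H^{i}(X; R) \times H^{j}(X; R) \;\xrightarrow{\;\smile\;}\; H^{i+j}(X; R),
\qquad \alpha \smile \beta \;=\; (-1)^{ij}\, \beta \smile \alpha ,
\]

so the cohomology ring is graded-commutative, and a continuous map f : X → Y induces the ring homomorphism f* : H*(Y) → H*(X).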

In numerical analysis, a numerical method is a mathematical procedure for computing an approximate solution to a problem of continuous mathematics. The implementation of a numerical method with an appropriate convergence check in a programming language is called a numerical algorithm.

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first-order necessary conditions for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied. By allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.
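For a problem min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, the conditions read (standard statement, included for reference):

\[
\begin{aligned}
&\nabla f(x^*) + \textstyle\sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0
&& \text{(stationarity)} \\
&g_i(x^*) \le 0, \quad h_j(x^*) = 0
&& \text{(primal feasibility)} \\
&\mu_i \ge 0
&& \text{(dual feasibility)} \\
&\mu_i\, g_i(x^*) = 0
&& \text{(complementary slackness)}
\end{aligned}
\]

The pairing of μ_i ≥ 0 with μ_i g_i(x*) = 0 is itself a complementarity condition, which is one reason KKT systems can be cast as mixed complementarity problems.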

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem. However, in general the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.
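In symbols (standard weak-duality statement, added for reference): if p* is the optimal value of the primal minimization and d* that of the dual maximization, then

\[
d^* \;\le\; p^*, \qquad \text{duality gap} \;=\; p^* - d^* \;\ge\; 0 ,
\]

with equality (zero gap) for convex problems satisfying a constraint qualification such as Slater's condition.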

In mathematics, a differential variational inequality (DVI) is a dynamical system that incorporates ordinary differential equations and variational inequalities or complementarity problems.
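Schematically, a DVI couples a state equation with a variational inequality in a second variable (the notation below follows the usual formulation but is assumed here for illustration):

\[
\dot{x}(t) = f\bigl(t, x(t), u(t)\bigr), \qquad
u(t) \in K \ \text{ with } \
\bigl\langle F\bigl(t, x(t), u(t)\bigr),\, v - u(t) \bigr\rangle \ge 0 \ \text{ for all } v \in K .
\]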

In functional analysis, the dual norm is a measure of the "size" of each continuous linear functional defined on a normed vector space.
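Explicitly (standard definition, stated here for reference): for a continuous linear functional f on a normed space X,

\[
\| f \|_* \;=\; \sup \bigl\{\, |f(x)| \;:\; x \in X,\ \|x\| \le 1 \,\bigr\} .
\]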

In the mathematical field of geometric measure theory, the coarea formula expresses the integral of a function over an open set in Euclidean space in terms of integrals over the level sets of another function. A special case is Fubini's theorem, which says under suitable hypotheses that the integral of a function over the region enclosed by a rectangular box can be written as the iterated integral over the level sets of the coordinate functions. Another special case is integration in spherical coordinates, in which the integral of a function on Rn is related to the integral of the function over spherical shells: level sets of the radial function. The formula plays a decisive role in the modern study of isoperimetric problems.
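For a Lipschitz function u and an integrable function g on an open set Ω ⊂ R^n, the formula reads (standard statement, included for reference)

\[
\int_{\Omega} g(x)\, |\nabla u(x)|\, dx \;=\;
\int_{-\infty}^{\infty} \left( \int_{u^{-1}(t)} g \, d\mathcal{H}^{n-1} \right) dt ,
\]

where H^{n−1} denotes (n − 1)-dimensional Hausdorff measure on the level sets.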

Inequalities are very important in the study of information theory. There are a number of different contexts in which these inequalities appear.

In mathematics, Maass forms or Maass wave forms are studied in the theory of automorphic forms. Maass forms are complex-valued smooth functions of the upper half-plane which transform under the operation of a discrete subgroup Γ of SL(2, R) in a similar way to modular forms. They are eigenforms of the hyperbolic Laplace operator defined on the upper half-plane and satisfy certain growth conditions at the cusps of a fundamental domain of Γ. In contrast to modular forms, Maass forms need not be holomorphic. They were first studied by Hans Maass in 1949.

In mathematical analysis, Lorentz spaces, introduced by George Lorentz in the 1950s, are generalisations of the more familiar L^p spaces.

In mathematics, the symmetric decreasing rearrangement of a function is a function which is symmetric and decreasing, and whose level sets are of the same size as those of the original function.

In mathematics, the symmetrization methods are algorithms for transforming a set A into a ball B of equal volume centered at the origin; B is called the symmetrized version of A, usually denoted A*. These algorithms show up in solving the classical isoperimetric problem, which asks: given all two-dimensional shapes of a given area, which of them has the minimal perimeter? The conjectured answer was the disk, and Steiner in 1838 showed this to be true using the Steiner symmetrization method. From this, many other isoperimetric problems and symmetrization algorithms sprang. For example, Rayleigh's conjecture is that the first eigenvalue of the Dirichlet problem is minimized for the ball. Another result is that the Newtonian capacity of a set A is minimized by its symmetrized version A*, proved by Pólya and G. Szegő (1951) using circular symmetrization.

In mathematics, the field of logarithmic-exponential transseries is a non-Archimedean ordered differential field which extends comparability of asymptotic growth rates of elementary nontrigonometric functions to a much broader class of objects. Each log-exp transseries represents a formal asymptotic behavior; it can be manipulated formally and, when it converges, corresponds to actual behavior. Transseries can also be convenient for representing functions. Through their inclusion of exponentiation and logarithms, transseries are a strong generalization of the power series at infinity and other similar asymptotic expansions.
