In geometry, the Chebyshev center of a bounded set $Q$ having non-empty interior is the center of the minimal-radius ball enclosing the entire set $Q$, or alternatively (and non-equivalently) the center of the largest inscribed ball of $Q$.[1]
In the field of parameter estimation, the Chebyshev center approach tries to find an estimator $\hat{x}$ for $x$ given the feasibility set $Q$, such that $\hat{x}$ minimizes the worst possible estimation error for $x$ (e.g. the best worst case).
There exist several alternative representations for the Chebyshev center. Consider the set $Q$ and denote its Chebyshev center by $\hat{x}$. Then $\hat{x}$ can be computed by solving

$$ \min_{\hat{x},\, r} \left\{ r : \left\| \hat{x} - x \right\|^2 \le r \ \ \forall x \in Q \right\} $$

with respect to the Euclidean norm $\left\| \cdot \right\|$, or alternatively by solving

$$ \operatorname*{argmin}_{\hat{x}} \max_{x \in Q} \left\| x - \hat{x} \right\|^2 . $$
Despite these properties, finding the Chebyshev center may be a hard numerical optimization problem. For example, in the second representation above, the inner maximization is non-convex if the set Q is not convex.
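When $Q$ is a finite set of points, however, the first representation above reduces to the minimum enclosing ball problem, which is convex. The following sketch illustrates it, assuming the cvxpy and numpy packages (the data and variable names are illustrative only):

```python
import cvxpy as cp
import numpy as np

# A finite feasibility set Q: four points in R^3.
Q = np.array([[ 1.0,  1.0,  1.0],
              [-1.0,  1.0,  1.0],
              [ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0]])

x_hat = cp.Variable(3)  # candidate Chebyshev center
r = cp.Variable()       # radius of the enclosing ball

# min r  subject to  ||x - x_hat|| <= r  for every x in Q.
constraints = [cp.norm(x - x_hat, 2) <= r for x in Q]
problem = cp.Problem(cp.Minimize(r), constraints)
problem.solve()

print("Chebyshev center:", x_hat.value, "radius:", r.value)
```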
In inner product spaces and two-dimensional spaces, if $Q$ is closed, bounded and convex, then the Chebyshev center is in $Q$. In other words, the search for the Chebyshev center can be conducted inside $Q$ without loss of generality.[2]
In other spaces, the Chebyshev center may not be in $Q$, even if $Q$ is convex. For instance, if $Q$ is the tetrahedron formed by the convex hull of the points (1,1,1), (−1,1,1), (1,−1,1) and (1,1,−1), then computing the Chebyshev center using the $\ell_\infty$ norm yields the point $(0,0,0)$, which lies outside of $Q$.[3]
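To see why, note that this tetrahedron has the halfspace description

$$ Q = \left\{ x \in \mathbb{R}^3 : x_1 + x_2 + x_3 \ge 1,\ x_1 \le 1,\ x_2 \le 1,\ x_3 \le 1 \right\}. $$

Each coordinate of a point in $Q$ ranges exactly over $[-1, 1]$, so the smallest enclosing $\ell_\infty$-ball is centered at the origin with radius 1 (coordinatewise, the minimax choice $c_i = 0$ is forced). But the origin violates $x_1 + x_2 + x_3 \ge 1$, hence the center is not in $Q$.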
Consider the case in which the set $Q$ can be represented as the intersection of $k$ ellipsoids, so that the Chebyshev center problem reads

$$ \min_{\hat{x}} \max_{x} \left\{ \left\| \hat{x} - x \right\|^2 : f_i(x) \le 0,\ 0 \le i \le k \right\} $$

with

$$ f_i(x) = x^T Q_i x + 2 g_i^T x + d_i \le 0, \qquad 0 \le i \le k, $$

where each $Q_i$ is a positive semidefinite matrix, $g_i$ a vector and $d_i$ a scalar.
By introducing an additional matrix variable $\Delta$, we can write the inner maximization problem of the Chebyshev center as

$$ \min_{\hat{x}} \max_{(\Delta, x) \in G} \left\{ \left\| \hat{x} \right\|^2 - 2 \hat{x}^T x + \operatorname{Tr}(\Delta) \right\} $$

where $\operatorname{Tr}(\cdot)$ is the trace operator and

$$ G = \left\{ (\Delta, x) : f_i(\Delta, x) \le 0,\ 0 \le i \le k,\ \Delta = x x^T \right\}, \qquad f_i(\Delta, x) = \operatorname{Tr}(Q_i \Delta) + 2 g_i^T x + d_i . $$

(Here $\left\| \hat{x} - x \right\|^2 = \left\| \hat{x} \right\|^2 - 2 \hat{x}^T x + \operatorname{Tr}(x x^T)$, which is where $\Delta$ enters.)
Relaxing our demand on $\Delta$ by demanding only $\Delta \succeq x x^T$, i.e. $\Delta - x x^T \in \mathbf{S}_+$ where $\mathbf{S}_+$ is the set of positive semidefinite matrices, and changing the order of the min–max to max–min (see the references for more details), the optimization problem can be formulated as

$$ \max_{(\Delta, x) \in \mathcal{T}} \left\{ -\left\| x \right\|^2 + \operatorname{Tr}(\Delta) \right\} $$

with

$$ \mathcal{T} = \left\{ (\Delta, x) : f_i(\Delta, x) \le 0,\ 0 \le i \le k,\ \Delta \succeq x x^T \right\}. $$
This last convex optimization problem is known as the relaxed Chebyshev center (RCC). The RCC has the following important properties:
- The RCC is an upper bound for the exact Chebyshev center.
- The RCC is unique.
- The RCC is feasible, i.e. it lies in the feasibility set $Q$.
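A minimal sketch of the RCC program, assuming the cvxpy and numpy packages and purely illustrative ellipsoid data $Q_i$, $g_i$, $d_i$; the condition $\Delta \succeq x x^T$ is encoded through the equivalent Schur-complement constraint $\begin{pmatrix} \Delta & x \\ x^T & 1 \end{pmatrix} \succeq 0$:

```python
import cvxpy as cp
import numpy as np

n = 3

# Illustrative data: two balls, f_i(x) = x^T Q_i x + 2 g_i^T x + d_i <= 0.
Qs = [np.eye(n), np.eye(n)]
gs = [np.zeros(n), np.array([0.5, 0.0, 0.0])]  # second ball centered at (-0.5, 0, 0)
ds = [-1.0, -0.75]

# Z = [[Delta, x], [x^T, 1]] >= 0 encodes Delta >= x x^T (Schur complement).
Z = cp.Variable((n + 1, n + 1), PSD=True)
Delta, x = Z[:n, :n], Z[:n, n]

constraints = [Z[n, n] == 1]
for Qi, gi, di in zip(Qs, gs, ds):
    # Linearized constraint f_i(Delta, x) = Tr(Q_i Delta) + 2 g_i^T x + d_i <= 0.
    constraints.append(cp.trace(Qi @ Delta) + 2 * gi @ x + di <= 0)

# RCC objective: maximize -||x||^2 + Tr(Delta).
problem = cp.Problem(cp.Maximize(-cp.sum_squares(x) + cp.trace(Delta)), constraints)
problem.solve()

print("RCC estimate:", x.value)
```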
It can be shown that the well-known constrained least squares (CLS) problem is a relaxed version of the Chebyshev center.[citation needed]
The original CLS problem can be formulated as

$$ \hat{x}_{\mathrm{CLS}} = \operatorname*{argmin}_{x \in C} \left\| y - A x \right\|^2 $$

with

$$ C = \left\{ x : f_i(x) = x^T Q_i x + 2 g_i^T x + d_i \le 0,\ 1 \le i \le k \right\}, $$

where $y = A x + w$ are the measurements, $A$ is a known model matrix and $w$ is a noise vector with bounded norm $\left\| w \right\|^2 \le \rho$.
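As a sketch, again assuming cvxpy and numpy with illustrative data ($A$, $y$, and a single unit-ball constraint):

```python
import cvxpy as cp
import numpy as np

n, m = 3, 5
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))   # illustrative model matrix
y = rng.standard_normal(m)        # illustrative measurements

# One quadratic constraint f_1(x) = x^T Q_1 x + 2 g_1^T x + d_1 <= 0,
# here the unit ball ||x||^2 <= 1 (Q_1 = I, g_1 = 0, d_1 = -1).
Q1, g1, d1 = np.eye(n), np.zeros(n), -1.0

x = cp.Variable(n)
constraints = [cp.quad_form(x, Q1) + 2 * g1 @ x + d1 <= 0]
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x)), constraints)
problem.solve()

print("CLS estimate:", x.value)
```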
It can be shown that the CLS problem above is equivalent to the following optimization problem:

$$ \max_{(\Delta, x) \in V} \left\{ -\left\| x \right\|^2 + \operatorname{Tr}(\Delta) \right\} $$

with

$$ V = \left\{ (\Delta, x) : x \in C,\ \operatorname{Tr}\!\left( A^T A \Delta \right) - 2 y^T A x + \left\| y \right\|^2 \le \rho,\ \Delta \succeq x x^T \right\}. $$

One can see that this problem is a relaxation of the Chebyshev center (though different from the RCC described above).
A solution set $(\Delta, x)$ for the RCC is also a solution for the CLS, and thus $\mathcal{T} \subseteq V$. This means that the CLS estimate is the solution of a looser relaxation than that of the RCC. Hence the CLS is an upper bound for the RCC, which is in turn an upper bound for the real Chebyshev center.
Since both the RCC and CLS are based upon relaxation of the real feasibility set $Q$, the form in which $Q$ is defined affects its relaxed versions. This of course affects the quality of the RCC and CLS estimators. As a simple example, consider the linear box constraints

$$ l \le a^T x \le u, $$

which can alternatively be written as

$$ \left( a^T x - l \right) \left( a^T x - u \right) \le 0 . $$
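The second form fits the quadratic description $f_i(x) = x^T Q_i x + 2 g_i^T x + d_i \le 0$ used above. Expanding the product (a short derivation added for clarity) gives

$$ x^T \left( a a^T \right) x - (l + u)\, a^T x + l u \le 0, $$

i.e. $Q_i = a a^T$, $g_i = -\tfrac{l + u}{2}\, a$ and $d_i = l u$.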
It turns out that the first representation results in an upper bound estimator for the second one, hence using it may dramatically decrease the quality of the calculated estimator.
This simple example shows that great care must be taken in the formulation of constraints when relaxation of the feasibility region is used.
This problem can be formulated as a linear programming problem, provided that the region $Q$ is an intersection of finitely many halfspaces.[4] Given a polytope

$$ Q = \left\{ x \in \mathbb{R}^n : A x \le b \right\} $$

with rows $a_i^T$ of $A$, the Chebyshev center $\hat{x}$ (in the largest-inscribed-ball sense) and the corresponding radius $r$ solve the linear program

$$ \max_{\hat{x},\, r} \left\{ r : a_i^T \hat{x} + r \left\| a_i \right\|_2 \le b_i,\ i = 1, \dots, m \right\}. $$
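A minimal sketch with scipy.optimize.linprog, assuming the numpy and scipy packages (linprog minimizes, so we minimize $-r$; the example polytope is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# Polytope Q = {x : A x <= b}: the triangle x1 >= 0, x2 >= 0, x1 + x2 <= 1.
A = np.array([[-1.0,  0.0],
              [ 0.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([0.0, 0.0, 1.0])

# Decision variables z = (x1, x2, r) with constraints
# a_i^T x + ||a_i||_2 * r <= b_i for each row a_i of A.
norms = np.linalg.norm(A, axis=1)
A_ub = np.hstack([A, norms[:, None]])
c = np.array([0.0, 0.0, -1.0])  # maximize r  <=>  minimize -r

res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * 3)
center, radius = res.x[:2], res.x[2]
print("Chebyshev center:", center, "radius:", radius)
# Expected for this triangle: center ~ (0.293, 0.293), radius ~ 0.293.
```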