If a linear map is a bijection then it is called a linear isomorphism. In the case where $V = W$, a linear map is called a (linear) endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that $V$ and $W$ are real vector spaces (not necessarily with $V = W$), or it can be used to emphasize that $V$ is a function space, which is a common convention in functional analysis. Sometimes the term linear function has the same meaning as linear map, while in analysis it does not.
Let $V$ and $W$ be vector spaces over the same field $K$. A function $f: V \to W$ is said to be a linear map if for any two vectors $\mathbf{u}, \mathbf{v} \in V$ and any scalar $c \in K$ the following two conditions are satisfied:
- additivity / operation of addition: $f(\mathbf{u} + \mathbf{v}) = f(\mathbf{u}) + f(\mathbf{v})$
- homogeneity of degree 1 / operation of scalar multiplication: $f(c\mathbf{u}) = c f(\mathbf{u})$
Thus, a linear map is said to be operation preserving. In other words, it does not matter whether the linear map is applied before (the right hand sides of the above examples) or after (the left hand sides of the examples) the operations of addition and scalar multiplication.
Denoting the zero elements of the vector spaces $V$ and $W$ by $\mathbf{0}_V$ and $\mathbf{0}_W$ respectively, it follows that $f(\mathbf{0}_V) = \mathbf{0}_W$. Let $c = 0$ and $\mathbf{v} \in V$ in the equation for homogeneity of degree 1:
$f(\mathbf{0}_V) = f(0\mathbf{v}) = 0 f(\mathbf{v}) = \mathbf{0}_W.$
Occasionally, $V$ and $W$ can be vector spaces over different fields. It is then necessary to specify which of these ground fields is being used in the definition of "linear". If $V$ and $W$ are spaces over the same field $K$ as above, then we talk about $K$-linear maps. For example, the conjugation of complex numbers is an $\mathbb{R}$-linear map $\mathbb{C} \to \mathbb{C}$, but it is not $\mathbb{C}$-linear, where $\mathbb{R}$ and $\mathbb{C}$ are symbols representing the sets of real numbers and complex numbers, respectively.
A linear map $V \to K$ with $K$ viewed as a one-dimensional vector space over itself is called a linear functional.
These statements generalize to any left-module over a ring without modification, and to any right-module upon reversing the scalar multiplication.
A prototypical example that gives linear maps their name is the function $x \mapsto cx$, whose graph is a line through the origin.
More generally, any homothety $\mathbf{v} \mapsto c\mathbf{v}$, where $c$ is a scalar, centered in the origin of a vector space is a linear map.
The zero map between two vector spaces (over the same field) is linear.
If $f: V \to W$ is an isometry between real normed spaces such that $f(0) = 0$, then $f$ is a linear map. This result is not necessarily true for complex normed spaces.
Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions (a linear operator is a linear endomorphism, that is, a linear map whose domain and codomain are the same). An example is
$\frac{d}{dx}\left(af(x) + bg(x)\right) = a\frac{df(x)}{dx} + b\frac{dg(x)}{dx}.$
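To make this concrete, here is a minimal numerical sketch (ours, not from the article): restricted to polynomials of degree at most 3, represented by coefficient vectors, differentiation acts as a matrix, and linearity can be checked directly. The matrix and test polynomials are arbitrary illustrative choices.

```python
import numpy as np

# Differentiation on polynomials c0 + c1*x + c2*x^2 + c3*x^3,
# represented by coefficient vectors (c0, c1, c2, c3), is the matrix D.
D = np.array([
    [0, 1, 0, 0],   # the x-term contributes its coefficient to the constant term
    [0, 0, 2, 0],   # the x^2-term contributes 2*c2 to the x term
    [0, 0, 0, 3],   # the x^3-term contributes 3*c3 to the x^2 term
    [0, 0, 0, 0],
])

f = np.array([1.0, 2.0, 0.0, 4.0])   # 1 + 2x + 4x^3
g = np.array([0.0, 1.0, 3.0, 0.0])   # x + 3x^2
a, b = 2.0, -1.0

# Linearity: d/dx(a*f + b*g) = a*df/dx + b*dg/dx.
assert np.allclose(D @ (a * f + b * g), a * (D @ f) + b * (D @ g))
```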
A definite integral over some interval $I$ is a linear map from the space of all real-valued integrable functions on $I$ to $\mathbb{R}$. For example,
$\int_u^v \left(af(x) + bg(x)\right)\,dx = a\int_u^v f(x)\,dx + b\int_u^v g(x)\,dx.$
An indefinite integral (or antiderivative) with a fixed integration starting point defines a linear map from the space of all real-valued integrable functions on $\mathbb{R}$ to the space of all real-valued, differentiable functions on $\mathbb{R}$. Without a fixed starting point, the antiderivative maps to the quotient space of the differentiable functions by the linear space of constant functions.
If $V$ and $W$ are finite-dimensional vector spaces over a field $F$, of respective dimensions $m$ and $n$, then the function that maps linear maps $f: V \to W$ to $n \times m$ matrices in the way described in § Matrices (below) is a linear map, and even a linear isomorphism.
The expected value of a random variable (which is in fact a function, and as such an element of a vector space) is linear, as for random variables $X$ and $Y$ we have $E[X + Y] = E[X] + E[Y]$ and $E[aX] = aE[X]$, but the variance of a random variable is not linear.
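As an illustrative Monte Carlo sketch (our example, with arbitrarily chosen distributions), sample means confirm the linearity of expectation, while the variance visibly fails to scale linearly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100_000)
Y = rng.exponential(size=100_000)

# Expectation is linear (up to sampling error): E[X + Y] = E[X] + E[Y], E[3X] = 3 E[X].
print(np.mean(X + Y), np.mean(X) + np.mean(Y))
print(np.mean(3 * X), 3 * np.mean(X))

# Variance is not linear: Var(3X) = 9 Var(X), not 3 Var(X).
print(np.var(3 * X), 3 * np.var(X), 9 * np.var(X))
```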
The function $f: \mathbb{R}^2 \to \mathbb{R}^2$ with $f(x, y) = (2x, y)$ is a linear map. This function scales the $x$ component of a vector by the factor 2.
The function is additive: it does not matter whether vectors are first added and then mapped or whether they are mapped and finally added: $f(\mathbf{a} + \mathbf{b}) = f(\mathbf{a}) + f(\mathbf{b})$.
The function is homogeneous: it does not matter whether a vector is first scaled and then mapped or first mapped and then scaled: $f(\lambda \mathbf{a}) = \lambda f(\mathbf{a})$.
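Both properties can be checked numerically for this map; the following sketch (with arbitrarily chosen vectors and scalar) is ours:

```python
import numpy as np

def f(v: np.ndarray) -> np.ndarray:
    """The example map f(x, y) = (2x, y)."""
    return np.array([2 * v[0], v[1]])

a = np.array([1.0, -2.0])
b = np.array([0.5, 3.0])
lam = 4.0

assert np.allclose(f(a + b), f(a) + f(b))   # additivity
assert np.allclose(f(lam * a), lam * f(a))  # homogeneity
```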
If $V$ and $W$ are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from $V$ to $W$ can be represented by a matrix. This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if $A$ is a real $m \times n$ matrix, then $f(\mathbf{x}) = A\mathbf{x}$ describes a linear map $\mathbb{R}^n \to \mathbb{R}^m$ (see Euclidean space).
Let $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ be a basis for $V$. Then every vector $\mathbf{v} \in V$ is uniquely determined by the coefficients $c_1, \ldots, c_n$ in the field $\mathbb{R}$:
$\mathbf{v} = c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n.$
If $f: V \to W$ is a linear map,
$f(\mathbf{v}) = f(c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n) = c_1 f(\mathbf{v}_1) + \cdots + c_n f(\mathbf{v}_n),$
which implies that the function $f$ is entirely determined by the vectors $f(\mathbf{v}_1), \ldots, f(\mathbf{v}_n)$. Now let $\{\mathbf{w}_1, \ldots, \mathbf{w}_m\}$ be a basis for $W$. Then we can represent each vector $f(\mathbf{v}_j)$ as
$f(\mathbf{v}_j) = a_{1j} \mathbf{w}_1 + \cdots + a_{mj} \mathbf{w}_m.$
Thus, the function $f$ is entirely determined by the values of $a_{ij}$. If we put these values into an $m \times n$ matrix $M$, then we can conveniently use it to compute the vector output of $f$ for any vector in $V$. To get $M$, every column $j$ of $M$ is a vector
$\begin{pmatrix} a_{1j} & \cdots & a_{mj} \end{pmatrix}^{\mathsf{T}}$
corresponding to $f(\mathbf{v}_j)$ as defined above. To define it more clearly, for some column $j$ that corresponds to the mapping $f(\mathbf{v}_j)$,
$M = \begin{pmatrix}\ \cdots & a_{1j} & \cdots\ \\ & \vdots & \\ & a_{mj} & \end{pmatrix}$
where $M$ is the matrix of $f$. In other words, every column $j = 1, \ldots, n$ has a corresponding vector $f(\mathbf{v}_j)$ whose coordinates $a_{1j}, \ldots, a_{mj}$ are the elements of column $j$. A single linear map may be represented by many matrices. This is because the values of the elements of a matrix depend on the bases chosen.
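The recipe just described is easy to carry out in code. In this sketch (our illustration, using the standard bases of $\mathbb{R}^3$ and $\mathbb{R}^2$ and an arbitrary linear map), the columns of $M$ are the images of the basis vectors:

```python
import numpy as np

def f(v: np.ndarray) -> np.ndarray:
    x, y, z = v
    return np.array([x + 2 * y, 3 * z - y])   # an arbitrary linear map R^3 -> R^2

basis = np.eye(3)                              # standard basis v_1, v_2, v_3 (as rows)
M = np.column_stack([f(v) for v in basis])     # column j of M is f(v_j); M is 2 x 3

v = np.array([1.0, -1.0, 2.0])
assert np.allclose(M @ v, f(v))                # M reproduces f on any vector
```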
The matrices of a linear transformation can be represented visually:
Matrix for $T$ relative to $B$: $A$
Matrix for $T$ relative to $B'$: $A'$
Transition matrix from $B'$ to $B$: $P$
Transition matrix from $B$ to $B'$: $P^{-1}$
Such that starting in the bottom left corner $[\mathbf{v}]_{B'}$ and looking for the bottom right corner $[T(\mathbf{v})]_{B'}$, one would left-multiply; that is, $A'[\mathbf{v}]_{B'} = [T(\mathbf{v})]_{B'}$. The equivalent method would be the "longer" method going clockwise from the same point such that $[\mathbf{v}]_{B'}$ is left-multiplied with $P^{-1}AP$, or $P^{-1}AP[\mathbf{v}]_{B'} = [T(\mathbf{v})]_{B'}$.
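The two paths around this square can be compared numerically. The matrices below are arbitrary illustrative choices (ours), with $P$ invertible:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # matrix of T relative to B
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # transition matrix from B' to B
A_prime = np.linalg.inv(P) @ A @ P    # matrix of T relative to B'

v_Bp = np.array([1.0, 2.0])           # coordinates of v relative to B'
# Short path (apply A' directly) equals the long clockwise path (P, then A, then P^-1).
assert np.allclose(A_prime @ v_Bp, np.linalg.inv(P) @ (A @ (P @ v_Bp)))
```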
Examples in dimension two
In two-dimensional space $\mathbb{R}^2$ linear maps are described by $2 \times 2$ matrices. These are some examples (a few are applied numerically in the sketch after this list):
- rotation by 90 degrees counterclockwise: $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
- rotation by angle $\theta$ counterclockwise: $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
- reflection through the $x$ axis: $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
- scaling by 2 in all directions: $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$
- horizontal shear mapping: $\begin{pmatrix} 1 & m \\ 0 & 1 \end{pmatrix}$
- projection onto the $y$ axis: $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$
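A few of the matrices above, applied to a sample vector (an illustrative sketch of ours):

```python
import numpy as np

theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection_x = np.array([[1.0,  0.0],
                         [0.0, -1.0]])
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])

v = np.array([1.0, 1.0])
print(rotation @ v)      # [-1.  1.]: rotated 90 degrees counterclockwise
print(reflection_x @ v)  # [ 1. -1.]: reflected through the x axis
print(shear @ v)         # [1.5 1. ]: sheared horizontally
```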
The composition of linear maps is linear: if $f: V \to W$ and $g: W \to Z$ are linear, then so is their composition $g \circ f: V \to Z$. It follows from this that the class of all vector spaces over a given field $K$, together with $K$-linear maps as morphisms, forms a category.
The inverse of a linear map, when defined, is again a linear map.
If $f: V \to W$ and $g: V \to W$ are linear, then so is their pointwise sum $f + g$, which is defined by $(f + g)(\mathbf{x}) = f(\mathbf{x}) + g(\mathbf{x})$.
If $f: V \to W$ is linear and $a$ is an element of the ground field $K$, then the map $af$, defined by $(af)(\mathbf{x}) = a f(\mathbf{x})$, is also linear.
Thus the set of linear maps from $V$ to $W$ itself forms a vector space over $K$, sometimes denoted $\operatorname{Hom}(V, W)$. Furthermore, in the case that $V = W$, this vector space, denoted $\operatorname{End}(V)$, is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below.
Given again the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to matrix multiplication, the addition of linear maps corresponds to matrix addition, and the multiplication of linear maps with scalars corresponds to the multiplication of matrices with scalars.
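These correspondences are easy to verify numerically; the following sketch (random matrices of compatible shapes, our choice) checks composition and pointwise sum:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((4, 3))    # matrix of f : R^3 -> R^4
G = rng.standard_normal((2, 4))    # matrix of g : R^4 -> R^2
F2 = rng.standard_normal((4, 3))   # matrix of another map R^3 -> R^4
v = rng.standard_normal(3)

# Composition of maps corresponds to matrix multiplication: [g o f] = G F.
assert np.allclose(G @ (F @ v), (G @ F) @ v)
# Pointwise sum of maps corresponds to matrix addition: [f + f2] = F + F2.
assert np.allclose(F @ v + F2 @ v, (F + F2) @ v)
```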
A linear transformation $f: V \to V$ is an endomorphism of $V$; the set of all such endomorphisms $\operatorname{End}(V)$ together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field $K$ (and in particular a ring). The multiplicative identity element of this algebra is the identity map $\operatorname{id}: V \to V$.
An endomorphism of $V$ that is also an isomorphism is called an automorphism of $V$. The composition of two automorphisms is again an automorphism, and the set of all automorphisms of $V$ forms a group, the automorphism group of $V$, which is denoted by $\operatorname{Aut}(V)$ or $\operatorname{GL}(V)$. Since the automorphisms are precisely those endomorphisms which possess inverses under composition, $\operatorname{Aut}(V)$ is the group of units in the ring $\operatorname{End}(V)$.
If $f: V \to W$ is linear, we define the kernel and the image (or range) of $f$ by $\ker(f) = \{x \in V : f(x) = \mathbf{0}\}$ and $\operatorname{im}(f) = \{w \in W : w = f(x),\ x \in V\}$; $\ker(f)$ is a subspace of $V$ and $\operatorname{im}(f)$ is a subspace of $W$. The number $\dim(\operatorname{im} f)$ is also called the rank of $f$ and written as $\operatorname{rank}(f)$, or sometimes, $\rho(f)$; the number $\dim(\ker f)$ is called the nullity of $f$ and written as $\operatorname{null}(f)$ or $\nu(f)$. By the rank–nullity theorem, $\operatorname{rank}(f) + \operatorname{null}(f) = \dim V$. If $V$ and $W$ are finite-dimensional, bases have been chosen and $f$ is represented by the matrix $A$, then the rank and nullity of $f$ are equal to the rank and nullity of the matrix $A$, respectively.
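Numerically, the rank of a matrix can be computed directly and the nullity read off from the rank–nullity theorem; a minimal sketch (with an arbitrary rank-1 matrix of ours):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])    # second row is twice the first: rank 1

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # rank + nullity = dim V = 3
print(rank, nullity)               # 1 2
```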
A subtler invariant of a linear transformation $f: V \to W$ is the cokernel, which is defined as
$\operatorname{coker}(f) := W / f(V) = W / \operatorname{im}(f).$
This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the co-kernel is a quotient space of the target. Formally, one has the exact sequence
$0 \to \ker(f) \to V \to W \to \operatorname{coker}(f) \to 0.$
These can be interpreted thus: given a linear equation f(v) = w to solve,
the kernel is the space of solutions to the homogeneous equation f(v) = 0, and its dimension is the number of degrees of freedom in the space of solutions, if it is not empty;
the co-kernel is the space of constraints that the solutions must satisfy, and its dimension is the maximal number of independent constraints.
The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space: $\dim(\operatorname{coker} f) + \dim(\operatorname{im} f) = \dim W$. For finite dimensions, this means that the dimension of the quotient space $W/f(V)$ is the dimension of the target space minus the dimension of the image.
As a simple example, consider the map $f: \mathbb{R}^2 \to \mathbb{R}^2$ given by $f(x, y) = (0, y)$. Then for an equation $f(x, y) = (a, b)$ to have a solution, we must have $a = 0$ (one constraint), and in that case the solution space is $(x, b)$, or equivalently stated, $(0, b) + (x, 0)$ (one degree of freedom). The kernel may be expressed as the subspace $(x, 0) < V$: the value of $x$ is the freedom in a solution, while the cokernel may be expressed via the map $W \to \mathbb{R}$, $(a, b) \mapsto a$: given a vector $(a, b)$, the value of $a$ is the obstruction to there being a solution.
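The same example in matrix form (our sketch), showing the one constraint and the one degree of freedom:

```python
import numpy as np

F = np.array([[0.0, 0.0],
              [0.0, 1.0]])              # the map f(x, y) = (0, y)

rank = np.linalg.matrix_rank(F)         # dim im(f)    = 1
nullity = 2 - rank                      # dim ker(f)   = 1: the subspace (x, 0)
coker_dim = 2 - rank                    # dim coker(f) = dim W - rank = 1
print(rank, nullity, coker_dim)         # 1 1 1

# f(x, y) = (a, b) is solvable iff a == 0; x is free (here x = 7).
a, b = 0.0, 5.0
print(np.allclose(F @ np.array([7.0, b]), [a, b]))   # True
```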
An example illustrating the infinite-dimensional case is afforded by the map $f: \mathbb{R}^\infty \to \mathbb{R}^\infty$, $\{a_n\} \mapsto \{b_n\}$, with $b_1 = 0$ and $b_{n+1} = a_n$ for $n > 0$. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel ($\aleph_0 + 0 = \aleph_0 + 1$), but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension ($0 \neq 1$). The reverse situation obtains for the map $h: \mathbb{R}^\infty \to \mathbb{R}^\infty$, $\{a_n\} \mapsto \{c_n\}$, with $c_n = a_{n+1}$. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.
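A loose finite sketch of the two shift maps (ours; the genuinely infinite-dimensional behaviour of $\mathbb{R}^\infty$ is only suggested by these truncations):

```python
def right_shift(seq):
    """f: (a1, a2, ...) -> (0, a1, a2, ...); injective, image misses sequences with a1 != 0."""
    return (0,) + tuple(seq)

def left_shift(seq):
    """h: (a1, a2, ...) -> (a2, a3, ...); surjective, kills the first coordinate."""
    return tuple(seq)[1:]

print(right_shift((1, 2, 3)))   # (0, 1, 2, 3)
print(left_shift((1, 2, 3)))    # (2, 3)
```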
For a linear operator with finite-dimensional kernel and co-kernel, one may define index as:
$\operatorname{ind}(f) := \dim(\ker f) - \dim(\operatorname{coker} f),$
namely the degrees of freedom minus the number of constraints.
For a transformation between finite-dimensional vector spaces, this is just the difference dim(V) − dim(W), by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom.
Definition: $T$ is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to $T$ being both one-to-one and onto (a bijection of sets), or also to $T$ being both epic and monic, and so being a bimorphism.
If $T: V \to V$ is an endomorphism, then:
- If, for some positive integer $n$, the $n$-th iterate of $T$, $T^n$, is identically zero, then $T$ is said to be nilpotent.
- If $T^2 = T$, then $T$ is said to be idempotent.
- If $T = kI$, where $k$ is some scalar, then $T$ is said to be a scaling transformation or scalar multiplication map.
The first two conditions are checked numerically in the sketch below.
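A minimal numerical check of the nilpotent and idempotent conditions (matrices chosen by us for illustration):

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # strictly upper triangular, so nilpotent: N^2 = 0
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # projection onto the x axis, idempotent: P^2 = P

assert np.allclose(N @ N, 0)
assert np.allclose(P @ P, P)
```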
Given a linear map which is an endomorphism whose matrix is $A$, in the basis $B$ of the space it transforms vector coordinates $[u]$ as $[v] = A[u]$. As vectors change with the inverse of $B$ (vectors are contravariant), its inverse transformation is $[v] = B[v']$.
Substituting this in the first expression,
$B[v'] = AB[u'],$
hence
$[v'] = B^{-1}AB[u'] = A'[u'].$
Therefore, the matrix in the new basis is $A' = B^{-1}AB$, where $B$ is the matrix of the given basis.
Therefore, linear maps are said to be 1-co-, 1-contra-variant objects, or type (1, 1) tensors.
An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm (a function with small values can have a derivative with large values, while the derivative of 0 is 0). For a specific example, $\sin(nx)/n$ converges to 0, but its derivative $\cos(nx)$ does not, so differentiation is not continuous at 0 (and by a variation of this argument, it is not continuous anywhere).
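The claim about $\sin(nx)/n$ can be checked numerically on a grid; this sketch (grid size and values of $n$ are our choices) shows the sup norm of the function shrinking while that of its derivative stays at 1:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 10_001)
for n in (1, 10, 100):
    f_n = np.sin(n * x) / n      # sup norm ~ 1/n -> 0
    df_n = np.cos(n * x)         # sup norm stays 1
    print(n, np.max(np.abs(f_n)), np.max(np.abs(df_n)))
```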
A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix. Linear mappings are also used as a mechanism for describing change: for example, in calculus they correspond to derivatives; in relativity, they are used as a device to keep track of the local transformations of reference frames.
↑ "Linear transformations of V into V are often called linear operators on V." Rudin 1976, p.207
↑ "Let V and W be two real vector spaces. A mapping $a$ from V into W is called a 'linear mapping' or 'linear transformation' or 'linear operator' [...] from V into W, if $a(u + v) = au + av$ for all $u, v \in V$, $a(\lambda u) = \lambda au$ for all $u \in V$ and all real $\lambda$." Bronshtein & Semendyayev 2004, p. 316.
↑ Rudin 1991, p. 14. Here are some properties of linear mappings $\Lambda: X \to Y$ whose proofs are so easy that we omit them; it is assumed that $A \subset X$ and $B \subset Y$:
- $\Lambda 0 = 0.$
- If $A$ is a subspace (or a convex set, or a balanced set) the same is true of $\Lambda(A)$.
- If $B$ is a subspace (or a convex set, or a balanced set) the same is true of $\Lambda^{-1}(B)$.
- In particular, the set $\Lambda^{-1}(\{0\}) = \{x \in X : \Lambda x = 0\} = \mathcal{N}(\Lambda)$ is a subspace of $X$, called the null space of $\Lambda$.
↑ Rudin 1991, p. 14. Suppose now that $X$ and $Y$ are vector spaces over the same scalar field. A mapping $\Lambda: X \to Y$ is said to be linear if $\Lambda(\alpha x + \beta y) = \alpha \Lambda x + \beta \Lambda y$ for all $x, y \in X$ and all scalars $\alpha$ and $\beta$. Note that one often writes $\Lambda x$, rather than $\Lambda(x)$, when $\Lambda$ is linear.
↑ Rudin 1976, p. 206. A mapping $A$ of a vector space $X$ into a vector space $Y$ is said to be a linear transformation if: $A(x_1 + x_2) = Ax_1 + Ax_2$ and $A(cx) = cAx$ for all $x, x_1, x_2 \in X$ and all scalars $c$. Note that one often writes $Ax$ instead of $A(x)$ if $A$ is linear.
↑ Rudin 1991, p. 14. Linear mappings of $X$ onto its scalar field are called linear functionals.
↑ Rudin 1976, p. 210. Suppose $\{x_1, \ldots, x_n\}$ and $\{y_1, \ldots, y_m\}$ are bases of vector spaces $X$ and $Y$, respectively. Then every $A \in L(X, Y)$ determines a set of numbers $a_{i,j}$ such that
$Ax_j = \sum_{i=1}^{m} a_{i,j} y_i \quad (1 \le j \le n).$
It is convenient to represent these numbers in a rectangular array of $m$ rows and $n$ columns, called an $m$ by $n$ matrix:
$[A] = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}$
Observe that the coordinates $a_{i,j}$ of the vector $Ax_j$ (with respect to the basis $\{y_1, \ldots, y_m\}$) appear in the $j$th column of $[A]$. The vectors $Ax_j$ are therefore sometimes called the column vectors of $[A]$. With this terminology, the range of $A$ is spanned by the column vectors of $[A]$.
↑ Nistor, Victor (2001), "Index theory", Encyclopedia of Mathematics, EMS Press: "The main question in index theory is to provide index formulas for classes of Fredholm operators ... Index theory has become a subject on its own only after M. F. Atiyah and I. Singer published their index theorems."
↑ Rudin 1991, p. 15. 1.18 Theorem. Let $\Lambda$ be a linear functional on a topological vector space $X$. Assume $\Lambda x \neq 0$ for some $x \in X$. Then each of the following four properties implies the other three:
- $\Lambda$ is continuous.
- The null space $\mathcal{N}(\Lambda)$ is closed.
- $\mathcal{N}(\Lambda)$ is not dense in $X$.
- $\Lambda$ is bounded in some neighbourhood $V$ of 0.