One-dimensional subspaces in the two-dimensional vector space over the finite field F5. The origin (0, 0), marked with green circles, belongs to each of the six 1-subspaces, while each of the 24 remaining points belongs to exactly one; this property holds for 1-subspaces over any field and in all dimensions. The whole of F52 (a 5 × 5 square) is pictured four times for better visualization.
In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace [1] [note 1] is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
If V is a vector space over a field K and if W is a subset of V, then W is a linear subspace of V if under the operations of V, W is a vector space over K. Equivalently, a nonempty subset W is a subspace of V if, whenever w1, w2 are elements of W and α, β are elements of K, it follows that αw1 + βw2 is in W. [2] [3] [4] [5] [6]
As a corollary, all vector spaces are equipped with at least two (possibly different) linear subspaces: the zero vector space consisting of the zero vector alone and the entire vector space itself. These are called the trivial subspaces of the vector space. [7]
In the vector space V = R3 (the real coordinate space over the field R of real numbers), take W to be the set of all vectors in V whose last component is 0. Then W is a subspace of V.
Proof: Given u = (u1, u2, 0) and v = (v1, v2, 0) in W and a scalar c in R, the sum u + v = (u1 + v1, u2 + v2, 0) and the scalar multiple cu = (cu1, cu2, 0) both have last component 0, so they belong to W. Since W also contains the zero vector (0, 0, 0), it is a subspace of V.
Let the field be R again, but now let the vector space V be the Cartesian plane R2. Take W to be the set of points (x, y) of R2 such that x = y. Then W is a subspace of R2.
Proof: If u = (u1, u1) and v = (v1, v1) are in W and c is in R, then u + v = (u1 + v1, u1 + v1) and cu = (cu1, cu1) again have equal coordinates, so both lie in W; the zero vector (0, 0) lies in W as well.
In general, any subset of the real coordinate space Rn that is defined by a system of homogeneous linear equations will yield a subspace. (The equation in example I was z = 0, and the equation in example II was x = y.)
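The closure property in the definition can be checked numerically for concrete examples. The following is a minimal sketch (in Python with NumPy, not part of the original examples) that tests the two-element criterion for the set W of Example I; the membership test in_W is a hypothetical helper written only for this illustration.

```python
# Minimal sketch of the subspace criterion: for w1, w2 in W and scalars a, b,
# the combination a*w1 + b*w2 must again lie in W.  Here W is the set from
# Example I: vectors in R^3 whose last component is 0.
import numpy as np

def in_W(v, tol=1e-12):
    """Membership test for W = {(x, y, z) in R^3 : z = 0} (illustrative helper)."""
    return abs(v[2]) < tol

w1 = np.array([1.0, -2.0, 0.0])
w2 = np.array([3.5, 4.0, 0.0])
a, b = 2.0, -0.75

combo = a * w1 + b * w2
print(combo, in_W(combo))   # the combination still has last component 0
```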
Again take the field to be R, but now let the vector space V be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR.
Proof: The zero function is continuous, and sums and scalar multiples of continuous functions are continuous, so C(R) contains the zero vector and is closed under addition and scalar multiplication.
Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too.
Examples that extend these themes are common in functional analysis.
From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples. [8] Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set W is a subspace if and only if every linear combination of finitely many elements of W also belongs to W. Since any finite linear combination can be built from combinations of two elements at a time, it is equivalent to require closure only under linear combinations of two elements.
In a topological vector space X, a subspace W need not be topologically closed, but a finite-dimensional subspace is always closed. [9] The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals).
Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an n-space that passes through the origin.
A natural description of a 1-subspace is the set of all scalar multiples of one non-zero vector v. Two 1-subspaces specified by non-zero vectors are equal if and only if one vector can be obtained from the other by scalar multiplication: v′ = cv for some non-zero scalar c in K.
This idea is generalized for higher dimensions with linear span, but criteria for equality of k-spaces specified by sets of k vectors are not so simple.
A dual description is provided by linear functionals (usually implemented as linear equations). A single non-zero linear functional F specifies its kernel subspace F = 0, which has codimension 1. Two subspaces of codimension 1 specified by non-zero linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space): F′ = cF for some non-zero scalar c in K.
This is generalized to higher codimensions with a system of equations. The following two subsections present this latter description in detail, and the remaining four subsections further describe the idea of linear span.
The solution set to any homogeneous system of linear equations with n variables is a subspace in the coordinate space Kn:
For example, the set of all vectors (x, y, z) (over real or rational numbers) satisfying the equations
is a one-dimensional subspace. More generally, given n independent homogeneous linear equations in k variables, the solution set is a subspace of Kk whose dimension equals the dimension of the null space of A, the coefficient matrix formed from the n equations, namely k − n.
In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation: Ax = 0.
The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix
Every subspace of Kn can be described as the null space of some matrix (see § Algorithms below for more).
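As an illustration of this description, the following hedged sketch uses SymPy to compute a basis of the null space of a small matrix; the matrix chosen here is an arbitrary example, not the one referred to above.

```python
# Hedged sketch: computing a basis of the null space {x : Ax = 0} with SymPy.
# The matrix below is an illustrative choice, not the one from the text.
from sympy import Matrix

A = Matrix([[1, 3, 2],
            [2, -1, 1]])

null_basis = A.nullspace()      # list of column vectors spanning the null space
print(null_basis)
print(len(null_basis))          # nullity = number of columns - rank = 3 - 2 = 1
```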
The subset of Kn described by a system of homogeneous linear parametric equations is a subspace:
For example, the set of all vectors (x, y, z) parameterized by the equations x = 2t1 + 3t2, y = 5t1 − 4t2, z = −t1 + 2t2
is a two-dimensional subspace of K3, if K is a number field (such as real or rational numbers). [note 2]
In linear algebra, the system of parametric equations can be written as a single vector equation: (x, y, z) = t1 (2, 5, −1) + t2 (3, −4, 2).
The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.
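As a quick illustration, the sketch below forms such a linear combination of the two vectors named above and confirms, via a rank computation, that they span a two-dimensional subspace; the parameter values are arbitrary.

```python
# Sketch: the parametric description as a linear combination of the two
# vectors named in the text, (2, 5, -1) and (3, -4, 2).
import numpy as np

v1 = np.array([2.0, 5.0, -1.0])
v2 = np.array([3.0, -4.0, 2.0])

t1, t2 = 1.5, -2.0                      # arbitrary parameter values
x = t1 * v1 + t2 * v2                   # a point of the spanned subspace
print(x)

# The spanned subspace is two-dimensional because v1 and v2 are independent:
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))   # prints 2
```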
In general, a linear combination of vectors v1, v2, ... , vk is any vector of the form t1v1 + t2v2 + ⋯ + tkvk, where t1, ... , tk are scalars.
The set of all possible linear combinations is called the span: span{v1, ... , vk} = {t1v1 + ⋯ + tkvk : t1, ... , tk ∈ K}.
If the vectors v1, ... , vk have n components, then their span is a subspace of Kn. Geometrically, the span is the flat through the origin in n-dimensional space determined by the points v1, ... , vk.
A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation: x = At, where t is the vector of parameters.
In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix A. It is precisely the subspace of Kn spanned by the column vectors of A.
The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).
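The orthogonality claim can be checked directly for a concrete matrix. The sketch below, using SymPy with an arbitrarily chosen matrix, verifies that every null-space basis vector is annihilated by the rows of the matrix.

```python
# Sketch: the row space is the orthogonal complement of the null space.
# For any x with Ax = 0, each row r of A satisfies r . x = 0.
from sympy import Matrix

A = Matrix([[1, 2, 0, -1],
            [0, 1, 1, 2]])

for x in A.nullspace():
    print(A * x)                 # every product is the zero vector,
                                 # i.e. x is orthogonal to every row of A
```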
In general, a subspace of Kn determined by k parameters (or spanned by k vectors) has dimension k. However, there are exceptions to this rule. For example, the subspace of K3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the xz-plane, with each point on the plane described by infinitely many different values of t1, t2, t3.
In general, vectors v1, ... , vk are called linearly independent if t1v1 + t2v2 + ⋯ + tkvk ≠ u1v1 + u2v2 + ⋯ + ukvk for (t1, t2, ... , tk) ≠ (u1, u2, ... , uk). [note 3] If v1, ..., vk are linearly independent, then the coordinates t1, ..., tk for a vector in the span are uniquely determined.
A basis for a subspace S is a set of linearly independent vectors whose span is S. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).
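One way to carry out this pruning is to keep a vector only when it increases the rank of the vectors kept so far. The following is a hedged sketch of that idea; the function prune_to_basis is a hypothetical helper, and the spanning set is the xz-plane example from above.

```python
# Hedged sketch of turning a spanning set into a basis by discarding
# redundant vectors: keep a vector only if it enlarges the span, i.e. the
# rank of the kept vectors increases.
import numpy as np

def prune_to_basis(vectors):
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.array(candidate)) > len(kept):
            kept = candidate
    return kept

spanning = [np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 0.0, 1.0]),
            np.array([2.0, 0.0, 3.0])]   # the third is redundant (xz-plane example)
print(prune_to_basis(spanning))          # the first two vectors form a basis
```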
Set-theoretical inclusion defines a partial order on the set of all subspaces (of all dimensions).
A subspace cannot lie in any subspace of lesser dimension. If dim U = k, a finite number, and U ⊂ W, then dim W = k if and only if U = W.
Given subspaces U and W of a vector space V, then their intersection U ∩ W := {v ∈ V : v is an element of both U and W} is also a subspace of V. [10]
Proof: Let v and w belong to U ∩ W and let α, β be scalars. Then v and w lie in both U and W, and since U and W are subspaces, αv + βw lies in both U and W, hence in U ∩ W. The intersection also contains the zero vector, so it is nonempty and is therefore a subspace of V.
For every vector space V, the set {0} and V itself are subspaces of V. [11] [12]
If U and W are subspaces, their sum is the subspace [13] [14] U + W = {u + w : u ∈ U, w ∈ W}.
For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality max(dim U, dim W) ≤ dim(U + W) ≤ dim U + dim W.
Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related by the following equation: [15] dim(U + W) = dim U + dim W − dim(U ∩ W).
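The identity can be verified numerically for concrete subspaces. The sketch below (with bases chosen only for illustration) computes dim(U + W) as a rank and dim(U ∩ W) as a nullity, and checks that the two sides of the equation agree.

```python
# Hedged sketch checking dim U + dim W = dim(U + W) + dim(U ∩ W) for two
# planes in R^3, with the bases below chosen only for illustration.
import numpy as np

U = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # rows span U (the xy-plane)
W = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])   # rows span W (the yz-plane)

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))           # dim(U + W)

# A vector lies in U ∩ W when it equals U.T @ a = W.T @ b for some a, b,
# so dim(U ∩ W) is the nullity of the block matrix [U.T | -W.T].
M = np.hstack([U.T, -W.T])
dim_int = M.shape[1] - np.linalg.matrix_rank(M)

print(dim_U + dim_W, dim_sum + dim_int)   # both print 4
```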
A set of subspaces is independent when each subspace intersects the sum of the others only in the trivial subspace {0}; for two subspaces U and W this means U ∩ W = {0}. The direct sum of independent subspaces is their sum, written as U ⊕ W. An equivalent restatement is that a sum of subspaces is direct when every vector of the sum has a unique decomposition as a sum of vectors, one from each subspace. [16] [17] [18] [19]
The dimension of a direct sum equals the sum of the dimensions of the subspaces: since the intersection is the trivial subspace, whose dimension is zero, the formula above gives dim(U ⊕ W) = dim U + dim W. [20]
The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the subspace {0}, the least element, is an identity element of the sum operation, and the whole space V, the greatest element, is an identity element of the intersection operation.
If V is an inner product space and W is a subset of V, then the orthogonal complement of W, denoted W⊥, is again a subspace. [21] If V is finite-dimensional and W is a subspace, then the dimensions of W and W⊥ satisfy the complementary relationship dim W + dim W⊥ = dim V. [22] Moreover, no non-zero vector is orthogonal to itself, so W ∩ W⊥ = {0} and V is the direct sum of W and W⊥. [23] Applying orthogonal complements twice returns the original subspace: (W⊥)⊥ = W for every subspace W. [24]
This operation, understood as negation, makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice). [citation needed]
In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces W such that W ∩ W⊥ ≠ {0}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra). [citation needed]
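In Rn with the standard inner product, the orthogonal complement of a subspace spanned by the rows of a matrix is exactly the null space of that matrix, which gives a simple way to check these facts of orthogonal complementation on examples. The following hedged sketch does so for an arbitrarily chosen subspace of R4.

```python
# Hedged sketch: in R^n with the standard inner product, W⊥ is the null
# space of a matrix whose rows span W.
from sympy import Matrix

W_rows = Matrix([[1, 1, 0, 0],
                 [0, 0, 1, 1]])          # rows span a subspace W of R^4

perp = W_rows.nullspace()                # basis of W⊥
print(len(perp) + W_rows.rank())         # dim W⊥ + dim W = 4 = dim R^4

# (W⊥)⊥ = W: take the null space of a matrix whose rows span W⊥.
perp_rows = Matrix.vstack(*[v.T for v in perp])
double_perp = Matrix.vstack(*[v.T for v in perp_rows.nullspace()])
# Adding the rows spanning W does not raise the rank, so W ⊆ (W⊥)⊥;
# equal dimensions then give (W⊥)⊥ = W.
print(Matrix.vstack(double_perp, W_rows).rank() == double_perp.rank())  # True
```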
Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties: the reduced matrix has the same null space as the original; row reduction does not change the span of the row vectors, so the row space is unchanged; and row reduction does not change the linear dependence relationships between the column vectors.
See the article on row space for an example.
If we instead put the matrix A into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of Kn are equal.
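A hedged sketch of this equality test, using SymPy and two arbitrarily chosen matrices with the same row space, is given below; the helper row_space_basis is hypothetical and written only for this illustration.

```python
# Hedged sketch of the equality test described above: two matrices have the
# same row space exactly when their reduced row echelon forms agree
# (ignoring zero rows).
from sympy import Matrix

def row_space_basis(M):
    R, _ = M.rref()
    return [R.row(i) for i in range(R.rows) if any(R.row(i))]

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [0, 1, 1]])
B = Matrix([[1, 0, 1],
            [0, 1, 1]])

print(row_space_basis(A) == row_space_basis(B))   # True: same subspace of K^3
```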
See the article on column space for an example.
This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in S.
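The following sketch implements this membership test with SymPy for an arbitrarily chosen spanning set; the helper in_subspace is hypothetical.

```python
# Hedged sketch of the membership test described above: append v as an
# extra column to a matrix whose columns span S, row reduce, and check
# whether a pivot appears in the last column.
from sympy import Matrix

S_cols = Matrix([[1, 0],
                 [0, 1],
                 [1, 1]])                # columns span S, a subspace of K^3

def in_subspace(v):
    augmented = S_cols.row_join(v)
    _, pivots = augmented.rref()
    return augmented.cols - 1 not in pivots   # no pivot in the last column

print(in_subspace(Matrix([2, 3, 5])))    # True:  (2,3,5) = 2*(1,0,1) + 3*(0,1,1)
print(in_subspace(Matrix([1, 0, 0])))    # False: not in the span
```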
See the article on null space for an example.
Given two subspaces U and W of V, a basis of the sum and the intersection can be calculated using the Zassenhaus algorithm.
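A hedged sketch of the Zassenhaus construction is given below: bases of U and W are arranged as the rows of a block matrix, which is then row reduced; nonzero left halves yield a basis of the sum, and rows whose left half vanishes yield a basis of the intersection in their right halves. The example subspaces are chosen only for illustration.

```python
# Hedged sketch of the Zassenhaus algorithm: row reduce the block matrix
# [[u, u], [w, 0]] built from bases of U and W.  Nonzero left halves give a
# basis of U + W; rows with zero left half carry a basis of U ∩ W in the
# right half.
from sympy import Matrix, zeros

def zassenhaus(U_rows, W_rows):
    n = U_rows.cols
    top = U_rows.row_join(U_rows)
    bottom = W_rows.row_join(zeros(W_rows.rows, n))
    R, _ = Matrix.vstack(top, bottom).rref()
    sum_basis, int_basis = [], []
    for i in range(R.rows):
        left, right = R[i, :n], R[i, n:]
        if any(left):
            sum_basis.append(left)
        elif any(right):
            int_basis.append(right)
    return sum_basis, int_basis

U = Matrix([[1, 0, 0], [0, 1, 0]])       # U = xy-plane
W = Matrix([[0, 1, 0], [0, 0, 1]])       # W = yz-plane
s, i = zassenhaus(U, W)
print(len(s), len(i))                    # 3 and 1: U + W = K^3, U ∩ W = y-axis
print(i)                                 # [Matrix([[0, 1, 0]])]
```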