Linear span

In mathematics, the linear span (also called the linear hull [1] or just span) of a set S of vectors (from a vector space), denoted span(S), [2] [3] is the smallest linear subspace that contains the set. [4] It can be characterized either as the intersection of all linear subspaces that contain S, or as the set of linear combinations of elements of S. The linear span of a set of vectors is therefore a vector space. Spans can be generalized to matroids and modules.


For expressing that a vector space V is a span of a set S, one commonly uses the following phrases: S spans V; S generates V; V is spanned by S; V is generated by S; S is a spanning set of V; S is a generating set of V.


Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. W is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W.

Alternatively, the span of S may be defined as the set of all finite linear combinations of elements (vectors) of S, which follows from the above definition. [5] [6] [7] [8]

If S is infinite, infinite linear combinations (i.e. combinations involving an infinite sum, assuming such sums are defined in some way, as in a Banach space) are excluded by the definition; a generalization that allows these is not equivalent.
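The two definitions can be checked against each other computationally for finite sets: a vector lies in span(S) exactly when appending it to S does not increase the rank of the matrix built from S. A minimal sketch using NumPy (the helper name in_span is our own):

```python
import numpy as np

def in_span(vectors, target, tol=1e-10):
    """Return True if `target` is a linear combination of `vectors`.

    A vector lies in span(S) exactly when appending it as an extra
    column does not increase the rank of the matrix formed by S.
    """
    A = np.column_stack(vectors)
    B = np.column_stack(vectors + [target])
    return np.linalg.matrix_rank(A, tol) == np.linalg.matrix_rank(B, tol)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

print(in_span([u, v], np.array([2.0, -3.0, 0.0])))  # True: equals 2u - 3v
print(in_span([u, v], np.array([0.0, 0.0, 1.0])))   # False: nonzero last component
```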


[Figure: The cross-hatched plane is the linear span of u and v in R3.]

The real vector space R3 has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), the resulting set would be the canonical basis of R3.

Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1/2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent.

The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of R3, since its span is the space of all vectors in R3 whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). The set does, however, span R2 when R2 is identified with the subset of R3 of vectors whose last component is zero.
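Each of these claims can be verified by a rank computation: a finite set spans R3 precisely when the matrix having those vectors as rows has rank 3. A sketch in NumPy (spans_R3 is a hypothetical helper name):

```python
import numpy as np

def spans_R3(vectors):
    """A finite set spans R^3 iff the matrix of its vectors has rank 3."""
    return np.linalg.matrix_rank(np.array(vectors, dtype=float)) == 3

print(spans_R3([(-1, 0, 0), (0, 1, 0), (0, 0, 1)]))            # True: a basis
print(spans_R3([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))             # False: rank is 2
print(spans_R3([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]))  # True: spans, but dependent
```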

The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of every subspace of R3, and {(0, 0, 0)} is the intersection of all of these subspaces.

The set of monomials x^n, where n is a non-negative integer, spans the space of polynomials.


Theorem 1: The subspace spanned by a non-empty subset S of a vector space V is the set of all linear combinations of vectors in S.

This theorem is so well known that it is sometimes referred to as the definition of the span of a set.

Theorem 2: Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.

Theorem 3: Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension.

This also indicates that a basis is a minimal spanning set when V is finite-dimensional.
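Theorem 3 can be illustrated directly: row reduction identifies pivot columns, which form an independent subset with the same span. A sketch using SymPy (the helper name reduce_to_basis is our own):

```python
import sympy as sp

def reduce_to_basis(vectors):
    """Discard dependent vectors from a spanning set (Theorem 3): the
    pivot columns of the matrix formed by the vectors give a basis
    with the same span."""
    M = sp.Matrix.hstack(*[sp.Matrix(v) for v in vectors])
    _, pivots = M.rref()
    return [vectors[i] for i in pivots]

S = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]  # spans R^3, but is dependent
basis = reduce_to_basis(S)
print(basis)       # [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(len(basis))  # 3 = dim R^3
```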


Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set.

The vector space definition can also be generalized to modules. [9] [10] Given an R-module A and a collection of elements a1, …, an of A, the submodule of A spanned by a1, …, an is the sum of cyclic modules

Ra1 + ⋯ + Ran = { r1a1 + ⋯ + rnan : ri ∈ R },

consisting of all R-linear combinations of the elements ai. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset.
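A concrete instance, assuming nothing beyond the definition: in the Z-module Z itself, the submodule spanned by {4, 6} is the set of all integer combinations 4a + 6b, which by Bezout's identity is the cyclic submodule 2Z generated by gcd(4, 6):

```python
from math import gcd

# Integer combinations 4a + 6b with small coefficients; the full
# submodule they generate in the Z-module Z is gcd(4, 6) * Z = 2Z.
combos = {4 * a + 6 * b for a in range(-10, 11) for b in range(-10, 11)}

print(gcd(4, 6))                         # 2
print(all(c % 2 == 0 for c in combos))   # True: every combination is even
print(set(range(-20, 21, 2)) <= combos)  # True: each even number (in range) occurs
```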

Closed linear span (functional analysis)

In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set.

Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by Sp(E) or Span(E), is the intersection of all the closed linear subspaces of X which contain E.

One mathematical formulation of this is

Sp(E) = { u ∈ X : for every ε > 0 there exists x ∈ span(E) with ‖x − u‖ < ε }.

The closed linear span of the set of functions x^n on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the L2 norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials.
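A numerical sketch (not a proof) of the maximum-norm case: the continuous function |x − 1/2| on [0, 1] is not a polynomial, yet polynomial fits of increasing degree drive the uniform error toward zero, as the Weierstrass approximation theorem guarantees. The code below substitutes t = 2x − 1 to work on [−1, 1] and uses a Chebyshev basis, which spans the same polynomials as the monomials but is numerically better conditioned:

```python
import numpy as np

# Approximate the non-polynomial continuous function |x - 1/2| (written in
# the variable t = 2x - 1 as |t|/2 on [-1, 1]) by polynomials of increasing
# degree; the uniform error shrinks, illustrating membership in the closed
# linear span of the polynomials under the maximum norm.
t = np.linspace(-1.0, 1.0, 2001)
f = np.abs(t) / 2.0

errors = []
for deg in (2, 8, 32):
    coeffs = np.polynomial.chebyshev.chebfit(t, f, deg)
    approx = np.polynomial.chebyshev.chebval(t, coeffs)
    errors.append(float(np.max(np.abs(f - approx))))

print(errors)  # uniform errors shrink as the degree grows
```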


The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.

Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma).

A useful lemma

Let X be a normed space and let E be any non-empty subset of X. Then

  1. Sp(E) is a closed linear subspace of X which contains E,
  2. Sp(E) = cl(span(E)), viz. Sp(E) is the closure of span(E),

(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)

References


  1. Encyclopedia of Mathematics (2020). Linear Hull.
  2. Axler (2015) pp. 29-30, §§ 2.5, 2.8
  3. Math Vault (2021) Vector space related operators.
  4. Axler (2015) p. 29, § 2.7
  5. Hefferon (2020) p. 100, ch. 2, Definition 2.13
  6. Axler (2015) pp. 29-30, §§ 2.5, 2.8
  7. Roman (2005) pp. 41-42
  8. MathWorld (2021) Vector Space Span.
  9. Roman (2005) p. 96, ch. 4
  10. Mac Lane & Birkhoff (1999) p. 193, ch. 6



