Linear span

[Figure: The cross-hatched plane is the linear span of u and v in ℝ³.]

In mathematics, the linear span (also called the linear hull[1] or just span) of a set S of vectors (from a vector space), denoted span(S),[2] is defined as the set of all linear combinations of the vectors in S.[3] For example, two linearly independent vectors span a plane. The linear span can be characterized either as the intersection of all linear subspaces that contain S, or as the smallest subspace containing S. The linear span of a set of vectors is therefore a vector space itself. Spans can be generalized to matroids and modules.

To express that a vector space V is a linear span of a subset S, one commonly uses the following phrases—either: S spans V, S is a spanning set of V, V is spanned/generated by S, or S is a generator or generator set of V.

Definition

Given a vector space V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. W is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W.

Alternatively, the span of S may be defined as the set of all finite linear combinations of elements (vectors) of S, which follows from the above definition.[4][5][6][7]

$$\operatorname{span}(S) = \left\{ \left. \sum_{i=1}^{k} \lambda_i \mathbf{v}_i \;\right|\; k \in \mathbb{N},\ \mathbf{v}_i \in S,\ \lambda_i \in K \right\}.$$

In the case of infinite S, infinite linear combinations (i.e. where a combination may involve an infinite sum, assuming that such sums are defined somehow as in, say, a Banach space) are excluded by the definition; a generalization that allows these is not equivalent.
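
In a finite-dimensional coordinate space, the finite-combination definition yields a concrete membership test: a vector u lies in span{v₁, ..., vₖ} ⊆ ℝⁿ exactly when adjoining u as an extra column does not raise the rank of the matrix whose columns are the vᵢ. A minimal NumPy sketch of this test (the helper name in_span and the tolerance are illustrative choices, not from the article):

```python
import numpy as np

def in_span(u, vectors, tol=1e-10):
    """Return True if u is a finite linear combination of the given vectors.

    u lies in span(v_1, ..., v_k) exactly when appending u as an extra
    column does not increase the rank of the matrix [v_1 ... v_k].
    """
    A = np.column_stack(vectors)
    return (np.linalg.matrix_rank(np.column_stack([A, u]), tol=tol)
            == np.linalg.matrix_rank(A, tol=tol))

# (1, 1, 0) lies in the span of (1, 0, 0) and (0, 1, 0)...
print(in_span([1, 1, 0], [[1, 0, 0], [0, 1, 0]]))  # True
# ...but (0, 0, 1) does not.
print(in_span([0, 0, 1], [[1, 0, 0], [0, 1, 0]]))  # False
```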

Examples

The real vector space ℝ³ has {(−1, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. This particular spanning set is also a basis. If (−1, 0, 0) were replaced by (1, 0, 0), it would also form the canonical basis of ℝ³.

Another spanning set for the same space is given by {(1, 2, 3), (0, 1, 2), (−1, 1/2, 3), (1, 1, 1)}, but this set is not a basis, because it is linearly dependent.

The set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of ℝ³, since its span is the space of all vectors in ℝ³ whose last component is zero. That space is also spanned by the set {(1, 0, 0), (0, 1, 0)}, as (1, 1, 0) is a linear combination of (1, 0, 0) and (0, 1, 0). Thus, the spanned space is not ℝ³. It can be identified with ℝ² by removing the third component, which is equal to zero.
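
Claims like these can be checked mechanically: a finite set spans ℝ³ exactly when the matrix formed from its vectors has rank 3. A short NumPy check of the three sets above (an illustrative sketch; spans_R3 is a name chosen here):

```python
import numpy as np

def spans_R3(vectors):
    # A finite set spans R^3 iff the matrix of its vectors has rank 3.
    return np.linalg.matrix_rank(np.array(vectors, dtype=float)) == 3

print(spans_R3([(-1, 0, 0), (0, 1, 0), (0, 0, 1)]))               # True: also a basis
print(spans_R3([(1, 2, 3), (0, 1, 2), (-1, 0.5, 3), (1, 1, 1)]))  # True, but dependent
print(spans_R3([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))                # False: rank is 2
```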

The empty set is a spanning set of {(0, 0, 0)}, since the empty set is a subset of every vector subspace of ℝ³, and {(0, 0, 0)} is the intersection of all of these subspaces.

The set of monomials $x^n$, where n is a non-negative integer, spans the space of polynomials.

Theorems

Equivalence of definitions

The set of all linear combinations of a subset S of V, a vector space over K, is the smallest linear subspace of V containing S.

Proof. We first prove that span S is a subspace of V. Since S is a subset of V, we only need to prove the existence of a zero vector $\mathbf{0}$ in span S, that span S is closed under addition, and that span S is closed under scalar multiplication. Letting $\lambda_1 = \lambda_2 = \cdots = \lambda_k = 0$, it is trivial that the zero vector of V exists in span S, since $\mathbf{0} = 0\mathbf{v}_1 + 0\mathbf{v}_2 + \cdots + 0\mathbf{v}_k$. Adding together two linear combinations of S also produces a linear combination of S: $(\lambda_1 \mathbf{v}_1 + \cdots + \lambda_k \mathbf{v}_k) + (\mu_1 \mathbf{v}_1 + \cdots + \mu_k \mathbf{v}_k) = (\lambda_1 + \mu_1)\mathbf{v}_1 + \cdots + (\lambda_k + \mu_k)\mathbf{v}_k$, where all $\lambda_i, \mu_i \in K$, and multiplying a linear combination of S by a scalar $c \in K$ will produce another linear combination of S: $c(\lambda_1 \mathbf{v}_1 + \cdots + \lambda_k \mathbf{v}_k) = c\lambda_1 \mathbf{v}_1 + \cdots + c\lambda_k \mathbf{v}_k$. Thus span S is a subspace of V.
Suppose that W is a linear subspace of V containing S. It follows that $\operatorname{span} S \subseteq W$: every element of span S is a finite linear combination $\lambda_1 \mathbf{v}_1 + \cdots + \lambda_k \mathbf{v}_k$ with each $\mathbf{v}_i \in S \subseteq W$, and since W is closed under addition and scalar multiplication, every such linear combination is contained in W. Thus, span S is contained in every subspace of V containing S, and the intersection of all such subspaces, i.e. the smallest such subspace, is equal to the set of all linear combinations of S.

Size of spanning set is at least size of linearly independent set

Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.

Proof. Let $S = \{\mathbf{v}_1, \ldots, \mathbf{v}_m\}$ be a spanning set and $W = \{\mathbf{w}_1, \ldots, \mathbf{w}_n\}$ be a linearly independent set of vectors from V. We want to show that $m \geq n$.

Since S spans V, the set $S \cup \{\mathbf{w}_1\}$ must also span V, and $\mathbf{w}_1$ must be a linear combination of S. Thus $S \cup \{\mathbf{w}_1\}$ is linearly dependent, and we can remove one vector from S that is a linear combination of the other elements. This vector cannot be any of the $\mathbf{w}_i$, since W is linearly independent. The resulting set is $\{\mathbf{w}_1, \mathbf{v}_1, \ldots, \mathbf{v}_{i-1}, \mathbf{v}_{i+1}, \ldots, \mathbf{v}_m\}$, which is a spanning set of V. We repeat this step n times, where the resulting set after the pth step is the union of $\{\mathbf{w}_1, \ldots, \mathbf{w}_p\}$ and $m - p$ vectors of S.

It is ensured until the nth step that there will always be some $\mathbf{v}_i$ to remove out of S for each adjoined $\mathbf{w}_p$, and thus there are at least as many $\mathbf{v}_i$'s as there are $\mathbf{w}_i$'s, i.e. $m \geq n$. To verify this, we assume by way of contradiction that $m < n$. Then, at the mth step, we have the set $\{\mathbf{w}_1, \ldots, \mathbf{w}_m\}$ and we can adjoin another vector $\mathbf{w}_{m+1}$. But, since $\{\mathbf{w}_1, \ldots, \mathbf{w}_m\}$ is a spanning set of V, $\mathbf{w}_{m+1}$ is a linear combination of $\mathbf{w}_1, \ldots, \mathbf{w}_m$. This is a contradiction, since W is linearly independent.
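
The exchange step in this proof ("adjoin a $\mathbf{w}_p$, then remove a redundant $\mathbf{v}_i$") is effectively an algorithm, often called the Steinitz exchange. A hedged NumPy sketch, using rank computations in place of the symbolic dependence arguments (steinitz_exchange is an illustrative name, not standard API):

```python
import numpy as np

def rank(vectors):
    return np.linalg.matrix_rank(np.array(vectors, dtype=float))

def steinitz_exchange(S, W):
    """Adjoin the independent vectors of W to the spanning set S one at a
    time, each time removing a redundant vector of the original S, as in
    the proof above. Returns a spanning set of size len(S) containing W."""
    S = [tuple(v) for v in S]
    for p, w in enumerate(W):
        S.insert(0, tuple(w))          # adjoin w_p; the set is now dependent
        r = rank(S)
        # Find a vector of the original S (positions p+1 onward, skipping
        # the already-adjoined w's) whose removal keeps the span intact.
        for i in range(p + 1, len(S)):
            if rank(S[:i] + S[i + 1:]) == r:   # S[i] is a combination of the rest
                del S[i]
                break
    return S

S = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # spans R^3
W = [(1, 1, 0), (0, 1, 1)]              # linearly independent
print(steinitz_exchange(S, W))          # a 3-element spanning set containing W
```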

Spanning set can be reduced to a basis

Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V, by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension. This also indicates that a basis is a minimal spanning set when V is finite-dimensional.
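
For finite spanning sets, the reduction is a simple greedy pass: keep a vector only if it is not already in the span of those kept so far, i.e. only if it raises the rank. A minimal NumPy sketch (reduce_to_basis is a name chosen for illustration):

```python
import numpy as np

def reduce_to_basis(vectors):
    """Greedily discard dependent vectors: keep a vector only if it
    raises the rank, i.e. is not in the span of those kept so far."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.array(candidate, dtype=float)) == len(candidate):
            basis = candidate
    return basis

# The linearly dependent spanning set from the Examples section:
print(reduce_to_basis([(1, 2, 3), (0, 1, 2), (-1, 0.5, 3), (1, 1, 1)]))
# [(1, 2, 3), (0, 1, 2), (-1, 0.5, 3)] -- a basis of R^3
```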

Generalizations

Generalizing the definition of the span of points in space, a subset X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set.[citation needed]

The vector space definition can also be generalized to modules.[8][9] Given an R-module A and a collection of elements a1, ..., an of A, the submodule of A spanned by a1, ..., an is the sum of cyclic modules

$$Ra_1 + \cdots + Ra_n = \left\{ \sum_{k=1}^{n} r_k a_k \;\middle|\; r_k \in R \right\}$$

consisting of all R-linear combinations of the elements ai. As with the case of vector spaces, the submodule of A spanned by any subset of A is the intersection of all submodules containing that subset.
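
As a concrete instance, take the ring R = ℤ acting on the module A = ℤ: the submodule spanned by a₁, ..., aₙ is ℤa₁ + ⋯ + ℤaₙ = gcd(a₁, ..., aₙ)ℤ. A small sketch of this fact (illustrative, not from the cited sources):

```python
from math import gcd
from functools import reduce

# In the Z-module Z, the submodule spanned by a_1, ..., a_n is
# Za_1 + ... + Za_n = gcd(a_1, ..., a_n) * Z.
def z_span_generator(elements):
    return reduce(gcd, elements, 0)

print(z_span_generator([4, 6]))  # 2, so span{4, 6} = 2Z
# For instance 2 = (-1)*4 + 1*6 is an integer linear combination of 4 and 6.
```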

Closed linear span (functional analysis)

In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set.

Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by $\overline{\operatorname{Sp}}(E)$ or $\overline{\operatorname{Span}}(E)$, is the intersection of all the closed linear subspaces of X which contain E.

One mathematical formulation of this is

$$\overline{\operatorname{Sp}}(E) = \{ u \in X \mid \forall \varepsilon > 0,\ \exists x \in \operatorname{Sp}(E) : \lVert x - u \rVert < \varepsilon \}.$$

The closed linear span of the set of functions $x^n$ on the interval [0, 1], where n is a non-negative integer, depends on the norm used. If the $L^2$ norm is used, then the closed linear span is the Hilbert space of square-integrable functions on the interval. But if the maximum norm is used, the closed linear span will be the space of continuous functions on the interval. In either case, the closed linear span contains functions that are not polynomials, and so are not in the linear span itself. However, the cardinality of the set of functions in the closed linear span is the cardinality of the continuum, which is the same cardinality as for the set of polynomials.
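
This can be illustrated numerically. By the Weierstrass approximation theorem, a continuous non-polynomial such as |x − 1/2| lies in the closed linear span of the monomials under the maximum norm: polynomial approximants drive the sup-norm error toward zero. A rough NumPy sketch (the degrees, grid, and least-squares fit are arbitrary illustrative choices, not a best-approximation computation):

```python
import numpy as np

x = np.linspace(0, 1, 2001)
f = np.abs(x - 0.5)   # continuous on [0, 1], but not a polynomial

for deg in (2, 8, 32):
    # Least-squares fit as a crude stand-in for the best approximation.
    p = np.polynomial.Polynomial.fit(x, f, deg)
    err = np.max(np.abs(p(x) - f))
    print(deg, err)    # the sup-norm error shrinks as the degree grows
```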

Notes

The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.

Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, see Riesz's lemma).

A useful lemma

Let X be a normed space and let E be any non-empty subset of X. Then

  1. $\overline{\operatorname{Sp}}(E)$ is a closed linear subspace of X which contains E,
  2. $\overline{\operatorname{Sp}}(E) = \overline{\operatorname{Sp}(E)}$, viz. $\overline{\operatorname{Sp}}(E)$ is the closure of $\operatorname{Sp}(E)$.

(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)

Citations

  1. Encyclopedia of Mathematics (2020). Linear Hull.
  2. Axler (2015) pp. 29-30, §§ 2.5, 2.8
  3. Axler (2015) p. 29, § 2.7
  4. Hefferon (2020) p. 100, ch. 2, Definition 2.13
  5. Axler (2015) pp. 29-30, §§ 2.5, 2.8
  6. Roman (2005) pp. 41-42
  7. MathWorld (2021) Vector Space Span.
  8. Roman (2005) p. 96, ch. 4
  9. Mac Lane & Birkhoff (1999) p. 193, ch. 6

Sources

Textbooks

  Axler, Sheldon (2015). Linear Algebra Done Right (3rd ed.). Springer.
  Hefferon, Jim (2020). Linear Algebra (4th ed.).
  Mac Lane, Saunders; Birkhoff, Garrett (1999). Algebra (3rd ed.). AMS Chelsea.
  Roman, Steven (2005). Advanced Linear Algebra (2nd ed.). Springer.

Web

  Encyclopedia of Mathematics (2020). "Linear hull".
  Weisstein, Eric W. "Vector Space Span". MathWorld.
