Glossary of linear algebra

This glossary of linear algebra is a list of definitions and terms relevant to the field of linear algebra, the branch of mathematics concerned with linear equations and their representations in vector spaces and through matrices.

For a glossary related to the generalization of vector spaces through modules, see glossary of module theory.

A

affine transformation
A composition of functions consisting of a linear transformation between vector spaces followed by a translation. [1] Equivalently, a function between vector spaces that preserves affine combinations.
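For example, any map of the form \(f(x) = Ax + b\), with \(A\) a matrix and \(b\) a fixed vector, is an affine transformation; it is linear precisely when \(b = 0\).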
affine combination
A linear combination in which the sum of the coefficients is 1.
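For example, \(\tfrac{1}{3}u + \tfrac{2}{3}v\) is an affine combination of \(u\) and \(v\), since the coefficients sum to 1, while \(u + v\) is not.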

B

basis
In a vector space, a linearly independent set of vectors spanning the whole vector space. [2]
basis vector
An element of a given basis of a vector space. [2]
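For example, the standard basis of \(\mathbb{R}^2\) consists of the basis vectors \(e_1 = (1, 0)\) and \(e_2 = (0, 1)\): they are linearly independent, and every vector \((x, y)\) equals \(x e_1 + y e_2\).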

C

column vector
A matrix with only one column. [3]
coordinate vector
The tuple of the coordinates of a vector with respect to a given basis.
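For example, with respect to the standard basis of \(\mathbb{R}^2\), the vector \(v = 3e_1 + 2e_2\) has coordinate vector \((3, 2)\).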
covector
An element of the dual space of a vector space (that is, a linear form), often identified with an element of the vector space itself through an inner product.

D

determinant
The unique scalar function over square matrices that is multiplicative with respect to matrix multiplication (\(\det(AB) = \det(A)\det(B)\)), multilinear in the rows and columns, and takes the value 1 on the identity matrix.
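For example, for \(2 \times 2\) matrices, \(\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc\); thus \(\det\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = 1 \cdot 4 - 2 \cdot 3 = -2\).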
diagonal matrix
A matrix in which only the entries on the main diagonal are non-zero. [4]
dimension
The number of elements of any basis of a vector space. [2]
dual space
The vector space of all linear forms on a given vector space. [5]
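For example, the dual space of the space of length-\(n\) column vectors can be identified with the space of length-\(n\) row vectors, a row vector \(w\) acting on a column vector \(v\) as the matrix product \(wv\).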

E

elementary matrix
A square matrix obtained from the identity matrix by applying a single elementary row operation.

I

identity matrix
A diagonal matrix all of the diagonal elements of which are equal to 1. [4]
inverse matrix
Of a matrix \(A\), another matrix \(B\) such that \(AB\) and \(BA\) both equal the identity matrix. [4]
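For example, a \(2 \times 2\) matrix \(\begin{pmatrix} a & b \\ c & d \end{pmatrix}\) is invertible exactly when \(ad - bc \neq 0\), with inverse \(\frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}\).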
isotropic vector
In a vector space with a quadratic form, a non-zero vector for which the form is zero.
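For example, for the quadratic form \(q(x, y) = x^2 - y^2\) on \(\mathbb{R}^2\), the vector \((1, 1)\) is isotropic, since \(q(1, 1) = 1 - 1 = 0\).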
isotropic quadratic form
A quadratic form on a vector space that takes the value zero on some non-zero vector (that is, one admitting an isotropic vector).

L

linear algebra
The branch of mathematics that deals with vectors, vector spaces, linear transformations and systems of linear equations.
linear combination
A sum, each of whose summands is an appropriate vector times an appropriate scalar (or ring element). [6]
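For example, \(2u - 3v + w\) is a linear combination of the vectors \(u\), \(v\), and \(w\), with scalar coefficients \(2\), \(-3\), and \(1\).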
linear dependence
A linear dependence of a tuple of vectors \(v_1, \dots, v_n\) is a nonzero tuple of scalar coefficients \(c_1, \dots, c_n\) for which the linear combination \(c_1 v_1 + \cdots + c_n v_n\) equals the zero vector.
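For example, the vectors \(u = (1, 2)\) and \(v = (2, 4)\) in \(\mathbb{R}^2\) admit the linear dependence \((2, -1)\), since \(2u - v = 0\).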
linear equation
A polynomial equation of degree one (such as \(x = 2y + 3\)). [7]
linear form
A linear map from a vector space to its field of scalars. [8]
linear independence
The property of not being linearly dependent. [9]
linear map
A function between vector spaces which respects addition and scalar multiplication.
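Concretely, a map \(f\) is linear precisely when \(f(u + v) = f(u) + f(v)\) and \(f(cv) = c\,f(v)\) for all vectors \(u, v\) and all scalars \(c\).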
linear transformation
A linear map whose domain and codomain are equal; in some contexts it is additionally assumed to be invertible.

M

matrix
Rectangular arrangement of numbers or other mathematical objects. [4]

N

null vector
1.  Another term for an isotropic vector.
2.  Another term for a zero vector.

R

row vector
A matrix with only one row. [4]

S

singular-value decomposition
A factorization of an \(m \times n\) complex matrix \(M\) as \(M = U \Sigma V^*\), where \(U\) is an \(m \times m\) complex unitary matrix, \(\Sigma\) is an \(m \times n\) rectangular diagonal matrix with non-negative real numbers on the diagonal, and \(V\) is an \(n \times n\) complex unitary matrix. [10]
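For example, the matrix \(M = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}\) has the singular-value decomposition \(M = U \Sigma V^*\) with \(U = V = I\) and \(\Sigma = M\); its singular values are 3 and 2.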
spectrum
Set of the eigenvalues of a matrix. [11]
square matrix
A matrix having the same number of rows as columns. [4]

U

unit vector
A vector in a normed vector space whose norm is 1, or a Euclidean vector of length one. [12]
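For example, \(\left(\tfrac{3}{5}, \tfrac{4}{5}\right)\) is a unit vector in the Euclidean plane; more generally, any non-zero vector \(v\) gives the unit vector \(v / \lVert v \rVert\).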

V

vector
1.  A directed quantity, one with both magnitude and direction.
2.  An element of a vector space. [13]
vector space
A set whose elements can be added together and multiplied by elements of a field (this is scalar multiplication); the set must be an abelian group under addition, and scalar multiplication must distribute over both vector and scalar addition, be compatible with field multiplication, and satisfy \(1v = v\). [14]

Z

zero vector
The additive identity in a vector space. In a normed vector space, it is the unique vector of norm zero. In a Euclidean vector space, it is the unique vector of length zero. [15]

Notes

    References