In mathematics, particularly in linear algebra, a flag is an increasing sequence of subspaces of a finite-dimensional vector space V. Here "increasing" means each is a proper subspace of the next (see filtration):

{0} = V0 ⊂ V1 ⊂ V2 ⊂ ... ⊂ Vk = V.
The term flag is motivated by a particular example resembling a flag: the zero point, a line, and a plane correspond to a nail, a staff, and a sheet of fabric.[1]
If we write dim Vi = di then we have

0 = d0 < d1 < d2 < ... < dk = n,
where n is the dimension of V (assumed to be finite). Hence, we must have k ≤ n. A flag is called a complete flag if di = i for all i, otherwise it is called a partial flag.
A partial flag can be obtained from a complete flag by deleting some of the subspaces. Conversely, any partial flag can be completed (in many different ways) by inserting suitable subspaces.
The signature of the flag is the sequence (d1, ..., dk).
An ordered basis for V is said to be adapted to a flag V0 ⊂ V1 ⊂ ... ⊂ Vk if the first di basis vectors form a basis for Vi for each 0 ≤ i ≤ k. Standard arguments from linear algebra show that every flag has an adapted basis.
Any ordered basis gives rise to a complete flag by letting the Vi be the span of the first i basis vectors. For example, the standard flag in Rn is induced from the standard basis (e1, ..., en), where ei denotes the vector with a 1 in the ith entry and 0's elsewhere. Concretely, the standard flag is the sequence of subspaces:

{0} ⊂ span(e1) ⊂ span(e1, e2) ⊂ ... ⊂ span(e1, ..., en) = Rn.
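The construction above can be sketched numerically. The following is a minimal illustration (not part of the original article) using NumPy: any ordered basis of R3 induces a complete flag Vi = span(b1, ..., bi), and computing the rank of the first i basis vectors recovers the dimensions di = i.

```python
import numpy as np

# Sketch: an ordered basis (b_1, b_2, b_3) of R^3 induces the complete flag
# V_i = span(b_1, ..., b_i); the ranks d_i = i confirm the flag is complete.
basis = np.array([[1.0, 0.0, 0.0],   # b_1
                  [1.0, 1.0, 0.0],   # b_2
                  [1.0, 1.0, 1.0]])  # b_3

dims = [int(np.linalg.matrix_rank(basis[:i])) for i in range(1, 4)]
print(dims)  # [1, 2, 3]: the signature of a complete flag
```

Any other ordered basis would do; only the ordering matters, since Vi depends on which vectors come first.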
An adapted basis is almost never unique; the cases where it is unique are trivial (see below).
A complete flag on an inner product space has an essentially unique orthonormal basis: it is unique up to multiplying each vector by a unit (a scalar of unit length, e.g. 1, −1, i). Such a basis can be constructed using the Gram–Schmidt process. The uniqueness up to units follows inductively, by noting that each vi lies in the one-dimensional space Vi−1⊥ ∩ Vi.
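As a sketch of the Gram–Schmidt step described above (an illustration, not part of the original article): the procedure below orthonormalizes an ordered basis while preserving the spans of initial segments, so a basis adapted to a flag stays adapted.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize an ordered basis. The span of the first i outputs equals
    the span of the first i inputs, so an adapted basis stays adapted."""
    ortho = []
    for v in vectors:
        # Subtract the projections onto the vectors produced so far.
        w = v - sum((np.dot(v, u) * u for u in ortho), np.zeros_like(v))
        ortho.append(w / np.linalg.norm(w))
    return np.array(ortho)

adapted = np.array([[3.0, 0.0],
                    [1.0, 2.0]])  # a basis of R^2 adapted to span(e1) ⊂ R^2
q = gram_schmidt(adapted)
# The rows of q are orthonormal, and q[0] is still a multiple of e1,
# so the complete flag span(e1) ⊂ R^2 is preserved.
```

Replacing any row of `q` by its negative (or, over the complex numbers, by a unit multiple) gives another orthonormal adapted basis, which is exactly the "uniqueness up to units" in the text.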
More abstractly, it is unique up to an action of the maximal torus: the flag corresponds to the Borel group, and the inner product corresponds to the maximal compact subgroup. [2]
The stabilizer subgroup of the standard flag is the group of invertible upper triangular matrices.
More generally, the stabilizer of a flag (the linear operators T on V such that T(Vi) ⊂ Vi for all i) is, in matrix terms, the algebra of block upper triangular matrices (with respect to an adapted basis), where the block sizes are di − di−1. The stabilizer subgroup of a complete flag is the set of invertible upper triangular matrices with respect to any basis adapted to the flag. The subgroup of lower triangular matrices with respect to such a basis depends on that basis, and can therefore not be characterized in terms of the flag alone.
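The stabilizer property can be checked directly. The following sketch (an illustration, not part of the original article) verifies that an invertible upper triangular matrix maps each subspace of the standard flag of R3 into itself.

```python
import numpy as np

# Sketch: an invertible upper triangular matrix stabilizes the standard flag,
# i.e. it maps each V_i = span(e_1, ..., e_i) of R^3 into itself.
T = np.array([[2.0, 1.0, 3.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 4.0]])

for i in range(1, 4):
    image = T[:, :i]                    # columns are the images T e_1, ..., T e_i
    assert np.all(image[i:, :] == 0.0)  # entries below row i vanish: T V_i ⊆ V_i
```

A single nonzero entry below the diagonal would break the check for some i: for instance, a nonzero (3, 1) entry would send e1 outside V1 = span(e1), so the matrix would no longer stabilize the flag.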
The stabilizer subgroup of any complete flag is a Borel subgroup (of the general linear group), and the stabilizer of any partial flag is a parabolic subgroup.
The stabilizer subgroup of a flag acts simply transitively on adapted bases for the flag, and thus these are not unique unless the stabilizer is trivial. That is a very exceptional circumstance: it happens only for a vector space of dimension 0, or for a vector space of dimension 1 over the field with two elements (precisely the cases where only one basis exists, independently of any flag).
In an infinite-dimensional space V, as used in functional analysis, the flag idea generalises to a subspace nest, namely a collection of subspaces of V that is totally ordered by inclusion and is further closed under arbitrary intersections and closed linear spans. See nest algebra.
From the point of view of the field with one element, a set can be seen as a vector space over the field with one element: this formalizes various analogies between Coxeter groups and algebraic groups.
Under this correspondence, an ordering on a set corresponds to a maximal flag: an ordering is equivalent to a maximal filtration of a set. For instance, the filtration (flag) {} ⊂ {a} ⊂ {a, b} ⊂ {a, b, c} corresponds to the ordering (a, b, c).
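The equivalence between orderings and maximal filtrations can be made concrete (a small illustration, not part of the original article): a maximal filtration of a finite set adds exactly one new element per step, and reading off those elements recovers the ordering.

```python
# Sketch: recover the ordering encoded by a maximal filtration of a finite set.
# Each step of the filtration adds exactly one element; the order in which
# elements appear is the corresponding total order on the set.
filtration = [set(), {"a"}, {"a", "b"}, {"a", "b", "c"}]
ordering = [next(iter(bigger - smaller))
            for smaller, bigger in zip(filtration, filtration[1:])]
print(ordering)  # ['a', 'b', 'c']
```

Conversely, any ordering of the set determines a maximal filtration by taking initial segments, mirroring how an ordered basis determines a complete flag.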