Nest algebra

In functional analysis, a branch of mathematics, nest algebras are a class of operator algebras that generalise the upper-triangular matrix algebras to a Hilbert space context. They were introduced by Ringrose (1965) and have many interesting properties: they are non-selfadjoint algebras, they are closed in the weak operator topology, and they are reflexive.

Nest algebras are among the simplest examples of commutative subspace lattice algebras. Indeed, they are formally defined as the algebra of bounded operators leaving invariant each subspace contained in a subspace nest, that is, a set of subspaces which is totally ordered by inclusion and is also a complete lattice. Since the orthogonal projections corresponding to the subspaces in a nest commute, nests are commutative subspace lattices.
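
In symbols (writing B(H) for the bounded operators on the underlying Hilbert space H and Alg N for the nest algebra of a nest N; the article does not fix this notation, so it is adopted here only as an illustrative restatement of the definition above):

$$\operatorname{Alg} N = \{\, T \in B(H) : TS \subseteq S \ \text{for every}\ S \in N \,\}.$$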

By way of an example, let us apply this definition to recover the finite-dimensional upper-triangular matrices. Let us work in the $n$-dimensional complex vector space $\mathbb{C}^n$, and let $e_1, \dots, e_n$ be the standard basis. For $j = 1, \dots, n-1$, let $S_j$ be the $j$-dimensional subspace of $\mathbb{C}^n$ spanned by the first $j$ basis vectors $e_1, \dots, e_j$. Let

$$N = \{\, \{0\},\; S_1,\; S_2,\; \dots,\; S_{n-1},\; \mathbb{C}^n \,\}.$$

Then $N$ is a subspace nest, and the corresponding nest algebra of $n \times n$ complex matrices $M$ leaving each subspace in $N$ invariant (that is, satisfying $MS \subseteq S$ for each $S$ in $N$) is precisely the set of upper-triangular matrices: the condition $MS_j \subseteq S_j$ says exactly that the $j$-th column of $M$ lies in the span of $e_1, \dots, e_j$, i.e. that all of its entries below the main diagonal vanish.

If we omit one or more of the subspaces $S_j$ from $N$, then the corresponding nest algebra consists of block upper-triangular matrices.
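
The following short numerical sketch (not part of the original article; the function names, the choice of n = 4, and the use of Python with NumPy are illustrative assumptions) checks this description directly: it tests which matrices leave every S_j in the nest invariant, and which matrices remain in the algebra when S_2 is omitted from the nest.

import numpy as np

n = 4  # work in C^4 with the standard basis e_1, ..., e_4

def leaves_subspace_invariant(M, j, tol=1e-10):
    # M S_j ⊆ S_j, with S_j = span{e_1, ..., e_j}: the first j columns of M
    # (the images of e_1, ..., e_j) must vanish in rows j+1, ..., n (1-based).
    return np.allclose(M[j:, :j], 0.0, atol=tol)

def in_nest_algebra(M, omitted=()):
    # M lies in the nest algebra iff it leaves every S_j invariant;
    # `omitted` lists indices j of subspaces removed from the nest.
    return all(leaves_subspace_invariant(M, j)
               for j in range(1, n) if j not in omitted)

rng = np.random.default_rng(0)

# An upper-triangular matrix leaves every S_j invariant ...
U = np.triu(rng.standard_normal((n, n)))
assert in_nest_algebra(U)

# ... while a nonzero entry below the diagonal breaks invariance of some S_j.
A = U.copy()
A[2, 1] = 1.0          # entry in row 3, column 2 (1-based)
assert not in_nest_algebra(A)

# Omitting S_2 merges two diagonal blocks: the algebra becomes the block
# upper-triangular matrices with diagonal blocks of sizes 1, 2, 1, so the
# entry at row 3, column 2 is now allowed, but row 4, column 1 is not.
assert in_nest_algebra(A, omitted=(2,))
B = U.copy()
B[3, 0] = 1.0          # entry in row 4, column 1 (1-based)
assert not in_nest_algebra(B, omitted=(2,))

print("all checks passed")

In finite dimensions every subspace is closed and every linear operator is bounded, so this elementary check captures the definition exactly; the infinite-dimensional theory additionally requires the subspaces in the nest to be closed and the operators to be bounded.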

Properties

See also

References
