Hilbert C*-module

Hilbert C*-modules are mathematical objects that generalise the notion of Hilbert spaces (which are themselves generalisations of Euclidean space), in that they endow a linear space with an "inner product" that takes values in a C*-algebra. Hilbert C*-modules were first introduced in the work of Irving Kaplansky in 1953, which developed the theory for commutative, unital algebras (though Kaplansky observed that the assumption of a unit element was not "vital"). [1] In the 1970s the theory was extended to non-commutative C*-algebras independently by William Lindall Paschke [2] and Marc Rieffel, the latter in a paper that used Hilbert C*-modules to construct a theory of induced representations of C*-algebras. [3] Hilbert C*-modules are crucial to Kasparov's formulation of KK-theory, [4] and provide the right framework to extend the notion of Morita equivalence to C*-algebras. [5] They can be viewed as the generalization of vector bundles to noncommutative C*-algebras and as such play an important role in noncommutative geometry, notably in C*-algebraic quantum group theory, [6] [7] and groupoid C*-algebras.

Definitions

Inner-product C*-modules

Let A be a C*-algebra (not assumed to be commutative or unital), with its involution denoted by $a \mapsto a^*$. An inner-product A-module (or pre-Hilbert A-module) is a complex linear space E equipped with a compatible right A-module structure, together with a map

$$\langle \cdot , \cdot \rangle : E \times E \to A$$

that satisfies the following properties for all x, y, z in E and a, b in A:

- $\langle x, ya + zb \rangle = \langle x, y \rangle a + \langle x, z \rangle b$ (i.e. the inner product is A-linear in its second argument);
- $\langle x, y \rangle = \langle y, x \rangle^*$, from which it follows that the inner product is conjugate linear in its first argument, i.e. it is a sesquilinear form (a short computation is sketched below);
- $\langle x, x \rangle \geq 0$ in the sense of being a positive element of A, and
- $\langle x, x \rangle = 0$ if and only if $x = 0$.

(An element of a C*-algebra A is said to be positive if it is self-adjoint with non-negative spectrum.) [8] [9]
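
For example (a standard computation, not spelled out in the cited sources), conjugate linearity in the first argument follows from the first two conditions, using complex linearity of the inner product in its second variable for the scalar case: for x, y in E, a in A and λ in ℂ,

$$\langle xa, y \rangle = \langle y, xa \rangle^* = \big( \langle y, x \rangle a \big)^* = a^* \langle y, x \rangle^* = a^* \langle x, y \rangle, \qquad \langle \lambda x, y \rangle = \overline{\lambda}\, \langle x, y \rangle.$$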

Hilbert C*-modules

An analogue of the Cauchy–Schwarz inequality holds for an inner-product A-module E: [10]

$$\langle x, y \rangle^* \langle x, y \rangle \leq \Vert \langle x, x \rangle \Vert \, \langle y, y \rangle$$

for x, y in E.

On the pre-Hilbert module E, define a norm by

$$\Vert x \Vert = \Vert \langle x, x \rangle \Vert^{1/2}.$$

The norm-completion of E, still denoted by E, is said to be a Hilbert A-module or a Hilbert C*-module over the C*-algebra A. The Cauchy–Schwarz inequality implies that the inner product is jointly continuous in norm and can therefore be extended to the completion.
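
As a minimal consistency check (using the conventions above), when A = ℂ the element $\langle x, x \rangle$ is a non-negative number, so the module norm reduces to the usual Hilbert space norm and the inequality above becomes the classical Cauchy–Schwarz inequality:

$$\Vert x \Vert = \langle x, x \rangle^{1/2}, \qquad |\langle x, y \rangle|^2 \leq \langle x, x \rangle \, \langle y, y \rangle.$$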

The action of A on E is continuous: for all x in E,

$$a_\lambda \to a \quad \Rightarrow \quad x a_\lambda \to x a.$$

Similarly, if $(e_\lambda)$ is an approximate unit for A (a net of self-adjoint elements of A for which $a e_\lambda$ and $e_\lambda a$ tend to $a$ for each a in A), then for x in E,

$$x e_\lambda \to x,$$

whence it follows that $EA$ is dense in E, and $x \cdot 1 = x$ when A is unital.
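
The continuity of the module action can be seen from a norm estimate; here is a minimal sketch using only the axioms and the order structure on positive elements (conjugation by a preserves the order, and $\langle x, x \rangle \leq \Vert \langle x, x \rangle \Vert \cdot 1$ in the unitization of A):

$$\langle xa, xa \rangle = a^* \langle x, x \rangle a \leq \Vert \langle x, x \rangle \Vert \, a^* a, \qquad\text{hence}\qquad \Vert xa \Vert \leq \Vert x \Vert \, \Vert a \Vert.$$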

Let

$$\langle E, E \rangle = \operatorname{span} \{ \langle x, y \rangle \mid x, y \in E \};$$

then the closure of $\langle E, E \rangle$ is a two-sided ideal in A. Two-sided ideals are C*-subalgebras and therefore possess approximate units. One can verify that $E \langle E, E \rangle$ is dense in E. In the case when $\langle E, E \rangle$ is dense in A, E is said to be full. This does not generally hold.
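
A minimal illustration of a non-full module (an example chosen here for concreteness, not taken from the cited sources): let $A = \mathbb{C} \oplus \mathbb{C}$ and $E = pA$ with $p = (1, 0)$ and $\langle x, y \rangle = x^* y$. Then

$$\overline{\langle E, E \rangle} = pA = \mathbb{C} \oplus 0 \subsetneq A,$$

so E is not full, whereas any C*-algebra A regarded as a module over itself (see the examples below) is full, since $\operatorname{span}\{a^* b : a, b \in A\}$ is dense in A by the existence of approximate units.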

Examples

Hilbert spaces

Since the complex numbers $\mathbb{C}$ form a C*-algebra with an involution given by complex conjugation, a complex Hilbert space H is a Hilbert $\mathbb{C}$-module under scalar multiplication by complex numbers and its inner product.
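
Concretely, writing $(\cdot,\cdot)_H$ for the Hilbert space inner product taken linear in its second variable (otherwise one uses its complex conjugate, to match the convention above), a minimal sketch of the module structure is

$$\langle x, y \rangle := (x, y)_H, \qquad x \cdot \lambda := \lambda x \quad (\lambda \in \mathbb{C}),$$

so that $\langle x, x \rangle = \Vert x \Vert_H^2 \geq 0$ and the module norm recovers the Hilbert space norm.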

Vector bundles

If X is a locally compact Hausdorff space and E a vector bundle over X with projection $\pi \colon E \to X$ and a Hermitian metric g, then the space of continuous sections of E is a Hilbert $C_0(X)$-module. Given sections $\sigma, \rho$ of E and $f \in C_0(X)$, the right action is defined by

$$(\sigma f)(x) = \sigma(x) f(x),$$

and the inner product is given by

$$\langle \sigma, \rho \rangle (x) := g\big(\sigma(x), \rho(x)\big).$$

The converse holds as well: every countably generated Hilbert C*-module over a commutative unital C*-algebra A = C(X) is isomorphic to the space of sections vanishing at infinity of a continuous field of Hilbert spaces over X. [citation needed]
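
As a minimal concrete case (a standard specialization, not spelled out above): for the trivial bundle $E = X \times \mathbb{C}^n$ with the standard Hermitian metric, the sections vanishing at infinity form the module $C_0(X)^n$, with

$$(\sigma f)(x) = \big( \sigma_1(x) f(x), \dots, \sigma_n(x) f(x) \big), \qquad \langle \sigma, \rho \rangle(x) = \sum_{i=1}^{n} \overline{\sigma_i(x)}\, \rho_i(x),$$

which is the direct-sum module $A^n$ of the next example with $A = C_0(X)$.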

C*-algebras

Any C*-algebra A is a Hilbert A-module with the action given by right multiplication in A and the inner product $\langle a, b \rangle = a^* b$. By the C*-identity, the Hilbert module norm coincides with the C*-norm on A.
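
The coincidence of norms (writing $\Vert \cdot \Vert_E$ for the Hilbert module norm defined above) is a one-line consequence of the C*-identity $\Vert a^* a \Vert = \Vert a \Vert^2$:

$$\Vert a \Vert_E = \Vert \langle a, a \rangle \Vert^{1/2} = \Vert a^* a \Vert^{1/2} = \Vert a \Vert.$$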

The (algebraic) direct sum of n copies of A,

$$A^n = \bigoplus_{i=1}^{n} A,$$

can be made into a Hilbert A-module by defining

$$\langle (a_i), (b_i) \rangle = \sum_{i=1}^{n} a_i^* b_i.$$

If p is a projection in the C*-algebra $M_n(A)$ of n × n matrices over A, then $pA^n$ is also a Hilbert A-module with the same inner product as the direct sum.
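
For instance (an illustrative special case, assuming as above that the projection is taken in $M_n(A)$ and that A is unital): the diagonal projection $p = \operatorname{diag}(1, 0, \dots, 0)$ gives

$$pA^n = \{ (a, 0, \dots, 0) : a \in A \} \cong A$$

as Hilbert A-modules, recovering the first example of this section.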

The standard Hilbert module

One may also consider the following subspace of elements in the countable direct product of A:

$$\ell^2(A) = \mathcal{H}_A = \Big\{ (a_i)_{i \in \mathbb{N}} : \sum_{i} a_i^* a_i \text{ converges in } A \Big\}.$$

Endowed with the obvious inner product (analogous to that of $A^n$), the resulting Hilbert A-module is called the standard Hilbert module over A.
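
As a sanity check, for $A = \mathbb{C}$ the convergence condition is ordinary square-summability, so the standard Hilbert module reduces to the classical sequence space

$$\ell^2(\mathbb{C}) = \Big\{ (a_i) : \sum_i |a_i|^2 < \infty \Big\} = \ell^2(\mathbb{N}),$$

i.e. the separable infinite-dimensional Hilbert space.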

The standard Hilbert module plays an important role in the proof of the Kasparov stabilization theorem, which states that for any countably generated Hilbert A-module E there is an isometric isomorphism [11]

$$E \oplus \ell^2(A) \cong \ell^2(A).$$
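
For example (a direct specialization, under the same countable-generation hypothesis), applying the theorem to the finitely generated modules above gives

$$A^n \oplus \ell^2(A) \cong \ell^2(A), \qquad pA^n \oplus \ell^2(A) \cong \ell^2(A),$$

so such modules are absorbed by the standard Hilbert module up to isometric isomorphism.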

See also

Notes

  1. Kaplansky, I. (1953). "Modules over operator algebras". American Journal of Mathematics. 75 (4): 839–853. doi:10.2307/2372552. JSTOR 2372552.
  2. Paschke, W. L. (1973). "Inner product modules over B*-algebras". Transactions of the American Mathematical Society. 182: 443–468. doi:10.2307/1996542. JSTOR 1996542.
  3. Rieffel, M. A. (1974). "Induced representations of C*-algebras". Advances in Mathematics. 13 (2): 176–257. doi:10.1016/0001-8708(74)90068-1.
  4. Kasparov, G. G. (1980). "Hilbert C*-modules: Theorems of Stinespring and Voiculescu". Journal of Operator Theory. Theta Foundation. 4: 133–150.
  5. Rieffel, M. A. (1982). "Morita equivalence for operator algebras". Proceedings of Symposia in Pure Mathematics. American Mathematical Society. 38: 176–257.
  6. Baaj, S.; Skandalis, G. (1993). "Unitaires multiplicatifs et dualité pour les produits croisés de C*-algèbres". Annales Scientifiques de l'École Normale Supérieure. 26 (4): 425–488. doi:10.24033/asens.1677.
  7. Woronowicz, S. L. (1991). "Unbounded elements affiliated with C*-algebras and non-compact quantum groups". Communications in Mathematical Physics. 136 (2): 399–432. Bibcode:1991CMaPh.136..399W. doi:10.1007/BF02100032. S2CID 118184597.
  8. Arveson, William (1976). An Invitation to C*-Algebras. Springer-Verlag. p. 35.
  9. In the case when A is non-unital, the spectrum of an element is calculated in the C*-algebra generated by adjoining a unit to A.
  10. This result in fact holds for semi-inner-product A-modules, which may have non-zero elements x such that $\langle x, x \rangle = 0$, as the proof does not rely on the nondegeneracy property.
  11. Kasparov, G. G. (1980). "Hilbert C*-modules: Theorems of Stinespring and Voiculescu". Journal of Operator Theory. Theta Foundation. 4: 133–150.
