Basis (universal algebra)

In universal algebra, a basis is a structure inside of some (universal) algebras, which are called free algebras. It generates all algebra elements from its own elements by the algebra operations in an independent manner. It also represents the endomorphisms of an algebra by certain indexings of algebra elements, which can correspond to the usual matrices when the free algebra is a vector space.

Definitions

A basis (or reference frame) of a (universal) algebra is a function $b$ that takes some algebra elements as values $b_i$ and satisfies either one of the following two equivalent conditions. Here, the set of all $b_i$ is called the basis set, whereas several authors call it the "basis". [1] [2] The set $I$ of its arguments $i$ is called the dimension set. Any function, with all its arguments in the whole $I$, that takes algebra elements as values (even outside the basis set) will be denoted by $m$. Then, $b$ itself will be such an $m$.

Outer condition

This condition will define bases by the set $L$ of the $I$-ary elementary functions of the algebra, which are certain functions $\ell$ that take every $m$ as argument to get some algebra element $\ell(m)$ as value. In fact, they consist of all the projections $p_i$ with $i$ in $I$, which are the functions such that $p_i(m) = m_i$ for each $m$, and of all functions that arise from them by repeated "multiple compositions" with operations of the algebra.

(When an algebra operation has a single algebra element as argument, the value of such a composed function is the one that the operation takes from the value of a single previously computed $I$-ary function, as in ordinary composition. When it does not, such compositions require that as many $I$-ary functions (or none, for a nullary operation) are evaluated before the algebra operation: one for each argument of the operation. In case $I$ and the numbers of elements in the arguments, or "arity", of the operations are finite, this is the finitary multiple composition.)
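For instance, composing an algebra operation of arity $n$ (the symbol $\omega$ below is illustrative, not from the cited sources) with $n$ previously obtained $I$-ary elementary functions $\ell_1, \dots, \ell_n$ yields a new $I$-ary elementary function, as the following sketch shows:

```latex
% Finitary multiple composition: evaluate the n elementary functions
% at m, then apply the operation \omega to the n resulting elements.
\ell(m) \;=\; \omega\bigl(\ell_1(m), \dots, \ell_n(m)\bigr)
\qquad \text{for every } m .
```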

Then, according to the outer condition a basis $b$ has to generate the algebra (namely, when $\ell$ ranges over the whole $L$, $\ell(b)$ gets every algebra element) and must be independent (namely, whenever any two $I$-ary elementary functions coincide at $b$, they will do so everywhere: $\ell(b) = \ell'(b)$ implies $\ell = \ell'$). [3] This is the same as to require that there exists a single function $\tau$ that takes every algebra element $a$ as argument to get an $I$-ary elementary function $\tau(a)$ as value and satisfies $\tau(\ell(b)) = \ell$ for all $\ell$ in $L$.

Inner condition

This other condition will define bases by the set E of the endomorphisms of the algebra, which are the homomorphisms from the algebra into itself, through its analytic representation $\sigma$ by a basis $b$. The latter is a function that takes every endomorphism e as argument to get a function m as value: $\sigma(e) = m$, where this m is the "sample" of the values of e at b, namely $m_i = e(b_i)$ for all i in the dimension set.

Then, according to the inner condition b is a basis when $\sigma$ is a bijection from E onto the set of all m, namely for each m there is one and only one endomorphism e such that $m = \sigma(e)$. This is the same as to require that there exists an extension function $\varepsilon$, namely a function that takes every (sample) m as argument to extend it onto an endomorphism $\varepsilon(m)$ such that $\sigma(\varepsilon(m)) = m$. [4]

The link between these two conditions is given by the identity $(\varepsilon(m))(a) = (\tau(a))(m)$, which holds for all m and all algebra elements a. [5] Several other conditions that characterize bases for universal algebras are omitted.
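In one direction this identity can be read as a recipe: given the function $\tau$ of the outer condition, an extension function for the inner condition can be defined pointwise. The following is a sketch of that step in the notation above, not a quotation from the cited sources:

```latex
% Define the candidate endomorphism pointwise by
\varepsilon(m)(a) \;=\; \tau(a)(m) \qquad \text{for every algebra element } a .
% Sampling it at the basis recovers m, because b_i = p_i(b) and
% therefore \tau(b_i) = \tau(p_i(b)) = p_i:
\sigma(\varepsilon(m))_i \;=\; \varepsilon(m)(b_i) \;=\; \tau(b_i)(m) \;=\; p_i(m) \;=\; m_i .
```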

As the next example will show, present bases are a generalization of the bases of vector spaces. Then, the name "reference frame" can well replace "basis". Yet, contrary to the vector space case, a universal algebra might lack bases and, when it has them, their dimension sets might have different finite positive cardinalities. [6]

Examples

Vector space algebras

In the universal algebra corresponding to a vector space of finite dimension, the present bases essentially are the ordered bases of this vector space. Yet, establishing this takes several details, given below.

When the vector space is finite-dimensional, for instance with $I = \{1, 2, \dots, n\}$, the functions $\ell$ in the set L of the outer condition exactly are the ones that provide the spanning and linear independence properties with linear combinations (see the sketch below), and the present generator property becomes the spanning one. On the contrary, linear independence is a mere instance of the present independence, which becomes equivalent to it in such vector spaces. (Also, several other generalizations of linear independence for universal algebras do not imply present independence.)
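In the notation above, the correspondence reads as follows; this is a reconstruction of the standard identification, with scalars $k_i$, $k'_i$ from the underlying field:

```latex
% An I-ary elementary function of a vector-space algebra is a linear
% combination with fixed scalar coefficients:
\ell(m) = k_1 m_1 + \cdots + k_n m_n .
% Generation at b is spanning: every vector is some \ell(b).
% Independence at b yields linear independence, since
% \ell(b) = \ell'(b) forces \ell = \ell', that is,
k_1 b_1 + \cdots + k_n b_n = k'_1 b_1 + \cdots + k'_n b_n
\;\Longrightarrow\; k_i = k'_i \ \text{for all } i .
```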

The functions m for the inner condition correspond to the square arrays of field elements (namely, usual vector-space square matrices) that serve to build the endomorphisms of vector spaces (namely, linear maps into themselves). Then, the inner condition requires a bijection property from endomorphisms also to arrays. In fact, each column of such an array represents a vector as its n-tuple of coordinates with respect to the basis b. For instance, when the vectors are n-tuples of numbers from the underlying field and b is the Kronecker basis, m is such an array seen by columns, $\sigma(e)$ is the sample of such a linear map e at the reference vectors, and $\varepsilon$ extends this sample to this map as below.
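A minimal computational sketch of this correspondence, assuming the field is the real numbers and using NumPy; the function names sample and extend are illustrative, not from the cited sources:

```python
import numpy as np

n = 3
# The Kronecker (standard) basis: b_i is the i-th column of the identity.
b = [np.eye(n)[:, i] for i in range(n)]

def sample(e):
    """sigma: sample an endomorphism e at the basis, column by column."""
    return np.column_stack([e(b_i) for b_i in b])

def extend(m):
    """epsilon: extend a sample (an n-by-n array m) to a linear map."""
    return lambda v: m @ v

# Round trip one way: sampling the extension of m recovers m ...
m = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
assert np.allclose(sample(extend(m)), m)

# ... and extending the sample of a linear map e recovers e (tested on a vector).
e = extend(m)
v = np.array([1.0, -1.0, 2.0])
assert np.allclose(extend(sample(e))(v), e(v))
```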

When the vector space is not finite-dimensional, further distinctions are needed. In fact, though the functions $\ell$ formally have an infinity of vectors in every argument, the linear combinations they evaluate never require infinitely many addenda, and each $\ell$ determines a finite subset J of $I$ that contains all the required i. Then, every value $\ell(m)$ equals an $\ell'(m|_J)$, where $m|_J$ is the restriction of m to J and $\ell'$ is the J-ary elementary function corresponding to $\ell$. When the $\ell'$ replace the $\ell$, both the linear independence and spanning properties for infinite basis sets follow from the present outer condition, and conversely.

Therefore, as far as vector spaces of a positive dimension are concerned, the only difference between present bases for universal algebras and the ordered bases of vector spaces is that here no order on $I$ is required. Still, an order is allowed, in case it serves some purpose.

When the space is zero-dimensional, its ordered basis is empty. Then, being the empty function, it is a present basis. Yet, since this space only contains the null vector and its only endomorphism is the identity, any function b from any set (even a nonempty one) to this singleton space works as a present basis. This is not so strange from the point of view of universal algebra, where singleton algebras, which are called "trivial", enjoy a lot of other seemingly strange properties.

Word monoid

Let $A$ be an "alphabet", namely a (usually finite) set of objects called "letters". Let W denote the corresponding set of words or "strings", which will be denoted as strings usually are, namely either by writing their letters in sequence or by $\lambda$ in case of the empty word (formal language notation). [7] Accordingly, the juxtaposition $vw$ will denote the concatenation of two words v and w, namely the word that begins with v and is followed by w.

Concatenation is a binary operation on W that together with the empty word $\lambda$ defines a free monoid, the monoid of the words on $A$, which is one of the simplest universal algebras. Then, the inner condition will immediately prove that one of its bases is the function b that makes a single-letter word of each letter $a$, namely $b_a = a$.

(Depending on the set-theoretical implementation of sequences, b may not be an identity function, namely $b_a$ may not be $a$, but rather an object like $\{(0, a)\}$, namely a singleton function, or a pair like $(a, 1)$ or $(1, a)$. [7])

In fact, in the theory of D0L systems (Rozenberg & Salomaa 1980) such m are the tables of "productions", which such systems use to define the simultaneous substitutions of every letter $a$ by the single word $m_a$ in any word u in W: if $u = a_1 a_2 \dots a_k$, then $(\varepsilon(m))(u) = m_{a_1} m_{a_2} \dots m_{a_k}$. Then, b satisfies the inner condition, since the function $\sigma$ is the well-known bijection that identifies every word endomorphism with such a table. (The repeated applications of such an endomorphism starting from a given "seed" word are able to model many growth processes, where words and concatenation serve to build fairly heterogeneous structures as in an L-system, not just "sequences".)
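A small executable sketch of this bijection, assuming Python strings as the implementation of words; the names sample and extend are again illustrative:

```python
# Words over the alphabet are modeled as Python strings; concatenation
# is string concatenation and the empty word is "".
alphabet = "ab"

def sample(e):
    """sigma: record the value of a word endomorphism at each single-letter word."""
    return {a: e(a) for a in alphabet}

def extend(m):
    """epsilon: extend a production table m to the endomorphism that
    simultaneously substitutes m[a] for every letter a of a word."""
    return lambda u: "".join(m[a] for a in u)

# A D0L-style production table (the classic algae L-system rules).
m = {"a": "ab", "b": "a"}

e = extend(m)
assert sample(e) == m                 # sigma(epsilon(m)) = m
assert e("aba") == "ab" + "a" + "ab"  # simultaneous substitution

# Repeated application from a seed word models a growth process.
u = "a"
for _ in range(4):
    u = e(u)
print(u)  # abaababa
```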

Notes

  1. Gould.
  2. Grätzer 1968, p. 198.
  3. For instance, see (Grätzer 1968, p. 198).
  4. For instance, see 0.4 and 0.5 of (Ricci 2007).
  5. For instance, see 0.4 (E) of (Ricci 2007).
  6. Grätzer 1979.
  7. Formal Language notation is used in Computer Science and sometimes collides with the set-theoretical definitions of words. See G. Ricci, An observation on a Formal Language notation, SIGACT News, 17 (1972), 18–23.

References

  1. Gould, V. (1995). Independence algebras, Algebra Universalis 33, 294–318.
  2. Grätzer, G. (1968). Universal Algebra, D. Van Nostrand Company Inc.
  3. Grätzer, G. (1979). Universal Algebra (2nd ed.), Springer Verlag. ISBN 0-387-90355-0.
  4. Ricci, G. (2007). Dilatations kill fields, Int. J. Math. Game Theory Algebra, 16 (5/6), pp. 13–34.
  5. Rozenberg, G. and Salomaa, A. (1980). The Mathematical Theory of L Systems, Academic Press, New York. ISBN 0-12-597140-0.