Triple product property

In abstract algebra, the triple product property is an identity satisfied in some groups.

Let $G$ be a non-trivial group. Three nonempty subsets $S, T, U \subseteq G$ are said to have the triple product property in $G$ if for all elements $s, s' \in S$, $t, t' \in T$, $u, u' \in U$ it is the case that

$$s' s^{-1} \, t' t^{-1} \, u' u^{-1} = 1 \implies s' = s,\; t' = t,\; u' = u,$$

where $1$ is the identity of $G$.
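For small groups the definition can be checked by brute force. Below is a minimal sketch in Python, assuming the group is the cyclic group Z_n written additively (so the identity is 0, inverses are negation mod n, and the product of differences becomes a sum of differences); the function name is illustrative, not from any particular library.

```python
from itertools import product

def has_triple_product_property(n, S, T, U):
    """Brute-force test of the triple product property in the cyclic group Z_n.

    Written additively, the condition s's^{-1} t't^{-1} u'u^{-1} = 1 becomes
    (s' - s) + (t' - t) + (u' - u) == 0 (mod n), and it must force
    s' == s, t' == t and u' == u.
    """
    for s, s2, t, t2, u, u2 in product(S, S, T, T, U, U):
        if (s2 - s + t2 - t + u2 - u) % n == 0:
            if not (s == s2 and t == t2 and u == u2):
                return False
    return True

# Singletons trivially satisfy the property in Z_7 ...
print(has_triple_product_property(7, {0}, {1}, {2}))           # True
# ... while S = T = U = {0, 1} does not (s'=1, s=0 and t'=0, t=1 cancel).
print(has_triple_product_property(7, {0, 1}, {0, 1}, {0, 1}))  # False
```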

It plays a role in research on fast matrix multiplication algorithms.

Related Research Articles

In mathematics, the associative property is a property of some binary operations. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.

In mathematics, an associative algebra is an algebraic structure with compatible operations of addition, multiplication, and a scalar multiplication by elements in some field. The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over the field K. A standard first example of a K-algebra is a ring of square matrices over a field K, with the usual matrix multiplication.

Convolution – mathematical operation

In mathematics, convolution is a mathematical operation on two functions that produces a third function expressing how the shape of one is modified by the other. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reversed and shifted.
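For discrete sequences the same idea becomes a sum instead of an integral. A minimal sketch (Python; numpy.convolve is a real NumPy function, the helper name is illustrative):

```python
import numpy as np

def discrete_convolution(f, g):
    """Direct evaluation of (f * g)[n] = sum_k f[k] * g[n - k]:
    one sequence is reversed, shifted and multiplied against the other."""
    n_out = len(f) + len(g) - 1
    out = np.zeros(n_out)
    for n in range(n_out):
        for k in range(len(f)):
            if 0 <= n - k < len(g):
                out[n] += f[k] * g[n - k]
    return out

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
print(discrete_convolution(f, g))  # [0.  1.  2.5 4.  1.5]
print(np.convolve(f, g))           # same result via NumPy
```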

In linear algebra, the determinant is a scalar value that can be computed from the elements of a square matrix and encodes certain properties of the linear transformation described by the matrix. The determinant of a matrix A is denoted det(A), det A, or |A|. Geometrically, it can be viewed as the volume scaling factor of the linear transformation described by the matrix. This is also the signed volume of the n-dimensional parallelepiped spanned by the column or row vectors of the matrix. The determinant is positive or negative according to whether the linear mapping preserves or reverses the orientation of n-space.
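As a concrete instance of the volume-scaling interpretation, here is a small sketch (Python with NumPy; the matrix is made up for the example):

```python
import numpy as np

# A 2x2 matrix maps the unit square to a parallelogram.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# det(A) = 2*3 - 1*0 = 6: areas are scaled by 6, and the positive
# sign says the orientation of the plane is preserved.
print(np.linalg.det(A))        # 6.0 (up to floating-point rounding)

# Swapping the rows reverses orientation, so the determinant flips sign.
print(np.linalg.det(A[::-1]))  # -6.0
```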

Fast Fourier transform – O(N log N) divide-and-conquer algorithm to calculate the discrete Fourier transform

A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse factors. As a result, it manages to reduce the complexity of computing the DFT from $O(N^2)$, which arises if one simply applies the definition of DFT, to $O(N \log N)$, where $N$ is the data size. The difference in speed can be enormous, especially for long data sets where N may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
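A quick way to see the definitional O(N²) computation against an FFT is to evaluate both on the same data (Python with NumPy; numpy.fft.fft is the real FFT routine, naive_dft is an illustrative name):

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

x = np.random.rand(256)
# Same transform; the FFT just computes it in O(N log N) operations.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```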

Lie group – group that is also a differentiable manifold with smooth group operations

In mathematics, a Lie group is a group whose elements are organized continuously and smoothly, as opposed to discrete groups, where the elements are separated—this makes Lie groups differentiable manifolds. Lie groups are named after Norwegian mathematician Sophus Lie, who laid the foundations of the theory of continuous transformation groups.

Linear algebra – branch of mathematics

Linear algebra is the branch of mathematics concerning linear equations such as $a_1 x_1 + \cdots + a_n x_n = b$, linear maps such as $(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n$, and their representations in vector spaces and through matrices.

Semigroup – algebraic structure with an associative binary operation

In mathematics, a semigroup is an algebraic structure consisting of a set together with an associative binary operation.

Ring (mathematics) – algebraic structure with addition and multiplication

In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, matrices and functions.

In mathematics, matrix multiplication or matrix product is a binary operation that produces a matrix from two matrices with entries in a field, or, more generally, in a ring or even a semiring. The matrix product is designed for representing the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. In more detail, if A is an n × m matrix and B is an m × p matrix, their matrix product AB is an n × p matrix, in which the m entries across a row of A are multiplied with the m entries down a column of B and summed to produce an entry of AB. When two linear maps are represented by matrices, then the matrix product represents the composition of the two maps.
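The entry-by-entry description translates directly into code. A sketch of the naive algorithm (Python; matmul_naive is an illustrative name):

```python
def matmul_naive(A, B):
    """Compute the n x p product of an n x m matrix A and an m x p matrix B:
    entry (AB)[i][j] is the sum of A[i][k] * B[k][j] over k."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]         # 3 x 2
print(matmul_naive(A, B))  # [[58, 64], [139, 154]]
```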

In mathematics, an empty product, or nullary product, is the result of multiplying no factors. It is by convention equal to the multiplicative identity, just as the empty sum—the result of adding no numbers—is by convention zero, or the additive identity.
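Python's standard library follows the same conventions, which makes a handy sanity check (math.prod and sum are real built-ins):

```python
import math

print(math.prod([]))  # 1 -- the empty product is the multiplicative identity
print(sum([]))        # 0 -- the empty sum is the additive identity
```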

In mathematics, an algebra over a field is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure, which consists of a set, together with operations of multiplication, addition, and scalar multiplication by elements of the underlying field, and satisfies the axioms implied by "vector space" and "bilinear".

In linear algebra, an n-by-n square matrix A is called invertible if there exists an n-by-n square matrix B such that $AB = BA = I_n$, where $I_n$ denotes the n-by-n identity matrix.
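A small numerical check of that condition (Python with NumPy; numpy.linalg.inv is the real inverse routine, the matrix is made up):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.linalg.inv(A)   # the matrix playing the role of B in the definition

I = np.eye(2)
print(np.allclose(A @ B, I), np.allclose(B @ A, I))  # True True: AB = BA = I_2
```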

Hypergraph – generalization of a graph

In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices. Formally, a hypergraph is a pair $(X, E)$ where $X$ is a set of elements called nodes or vertices, and $E$ is a set of non-empty subsets of $X$ called hyperedges or edges. Therefore, $E$ is a subset of $\mathcal{P}(X) \setminus \{\emptyset\}$, where $\mathcal{P}(X)$ is the power set of $X$. The size of the vertex set is called the order of the hypergraph, and the size of the edge set is the size of the hypergraph.
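A direct way to represent the pair (X, E) from the definition above (Python; a sketch with illustrative variable names and a made-up example):

```python
# Vertex set X and hyperedge set E of a small hypergraph.
X = {1, 2, 3, 4, 5}
E = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]   # each hyperedge is a non-empty subset of X

assert all(e and e <= X for e in E)        # E lies in P(X) minus the empty set

order = len(X)   # order of the hypergraph (size of the vertex set)
size = len(E)    # size of the hypergraph (number of hyperedges)
print(order, size)  # 5 4
```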

In mathematics, a Moufang loop is a special kind of algebraic structure. It is similar to a group in many ways but need not be associative. Moufang loops were introduced by Ruth Moufang (1935). Smooth Moufang loops have an associated algebra, the Malcev algebra, similar in some ways to how a Lie group has an associated Lie algebra.

In linear algebra, the Gram matrix of a set of vectors $v_1, \ldots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$.
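With the standard dot product on real vectors, the Gram matrix of vectors stacked as the rows of a matrix V is simply V times its transpose. A minimal sketch (Python with NumPy; the vectors are made up):

```python
import numpy as np

# Three vectors v_1, v_2, v_3 in R^2, stacked as the rows of V.
V = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])

G = V @ V.T          # G[i, j] = <v_i, v_j> for the standard inner product
print(G)
# [[1. 1. 0.]
#  [1. 2. 2.]
#  [0. 2. 4.]]
```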

In linear algebra, the Coppersmith–Winograd algorithm, named after Don Coppersmith and Shmuel Winograd, was the asymptotically fastest known matrix multiplication algorithm from 1990 until 2010. It can multiply two $n \times n$ matrices in $O(n^{2.375477})$ time. This is an improvement over the naïve $O(n^{3})$ time algorithm and the $O(n^{2.807355})$ time Strassen algorithm. Algorithms with better asymptotic running time than the Strassen algorithm are rarely used in practice, because the large constant factors in their running times make them impractical. It is possible to improve the exponent further; however, the exponent must be at least 2.
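As an illustration of how such savings arise, below is a compact sketch of Strassen's recursion, which trades 8 block multiplications for 7 (Python with NumPy; it assumes square matrices whose size is a power of two and falls back to the ordinary product below an arbitrary cutoff). This is the Strassen scheme, not the Coppersmith–Winograd algorithm itself, which is far more involved.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices (size a power of two) with Strassen's
    7-multiplication recursion instead of the naive 8."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                      # fall back to the ordinary product
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)

    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
assert np.allclose(strassen(A, B), A @ B)
```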

A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0,1) matrix is a matrix with entries from the Boolean domain B = {0, 1}. Such a matrix can be used to represent a binary relation between a pair of finite sets.
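As an illustration, the sketch below builds the (0,1) matrix of a binary relation between two finite sets (Python; the sets and relation are made up for the example):

```python
# Relation R between X = {a, b, c} and Y = {1, 2}: x R y holds for these pairs.
X = ["a", "b", "c"]
Y = [1, 2]
R = {("a", 1), ("b", 2), ("c", 1), ("c", 2)}

# M[i][j] = 1 exactly when (X[i], Y[j]) is in the relation.
M = [[1 if (x, y) in R else 0 for y in Y] for x in X]
print(M)  # [[1, 0], [0, 1], [1, 1]]
```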

Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors.
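The graph application mentioned above has a compact demonstration: entry (i, j) of the k-th power of an adjacency matrix counts the walks of length k from vertex i to vertex j (Python with NumPy; numpy.linalg.matrix_power is a real routine, the small graph is made up):

```python
import numpy as np

# Adjacency matrix of the directed triangle 0 -> 1 -> 2 -> 0.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

# Entry (i, j) of A^k counts walks of length k from i to j.
print(np.linalg.matrix_power(A, 3))  # identity matrix: every vertex returns to itself in 3 steps
```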

Exponential map (Lie theory) – map from a Lie algebra to its Lie group

In the theory of Lie groups, the exponential map is a map from the Lie algebra of a Lie group to the group, which allows one to recapture the local group structure from the Lie algebra. The existence of the exponential map is one of the primary reasons that Lie algebras are a useful tool for studying Lie groups.
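A standard concrete case is the matrix exponential carrying the Lie algebra so(2) of 2×2 skew-symmetric matrices onto the rotation group SO(2). A sketch (Python with SciPy; scipy.linalg.expm is the real matrix-exponential routine):

```python
import numpy as np
from scipy.linalg import expm

theta = 0.5
X = np.array([[0.0, -theta],
              [theta,  0.0]])   # element of the Lie algebra so(2)

R = expm(X)                     # the exponential map lands in the Lie group SO(2)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, expected))  # True: exp(X) is rotation by the angle theta
```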
