Type | Law, rule of replacement
---|---
Field | Elementary algebra
Symbolic statement | $(x \ast y) \ast z = x \ast (y \ast z)$ for all $x, y, z$
In mathematics, the associative property [1] is a property of some binary operations that means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations:

$$(2 + 3) + 4 = 2 + (3 + 4) = 9$$

$$2 \times (3 \times 4) = (2 \times 3) \times 4 = 24$$
Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that "addition and multiplication of real numbers are associative operations".
Associativity is not the same as commutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is, a × b = b × a, so we say that the multiplication of real numbers is a commutative operation. However, operations such as function composition and matrix multiplication are associative, but not (generally) commutative.
Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error.
Formally, a binary operation ∗ on a set S is called associative if it satisfies the associative law:

$$(x \ast y) \ast z = x \ast (y \ast z) \quad \text{for all } x, y, z \in S.$$
Here, ∗ is used as a placeholder for the symbol of the operation, which may be any symbol, or even the absence of a symbol (juxtaposition), as for multiplication.
The associative law can also be expressed in functional notation thus:

$$f(f(x, y), z) = f(x, f(y, z)).$$
If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. [2] This is called the generalized associative law.
The number of possible bracketings is just the Catalan number, $C_n$, for $n$ operations on $n + 1$ values. For instance, a product of 3 operations on 4 elements may be written, ignoring permutations of the arguments, in $C_3 = 5$ possible ways:

$$((ab)c)d \qquad (a(bc))d \qquad (ab)(cd) \qquad a((bc)d) \qquad a(b(cd))$$
If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as

$$abcd.$$
As the number of elements increases, the number of possible ways to insert parentheses grows quickly, but they remain unnecessary for disambiguation.
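To make the generalized associative law concrete, here is a short Python sketch (illustrative, not from the article) that evaluates a sequence of values under every possible bracketing with a given binary operation. For an associative operation all bracketings agree, and the number of bracketings is the Catalan number.

```python
def all_products(op, values):
    """Evaluate `values`, in order, under every possible bracketing."""
    if len(values) == 1:
        return [values[0]]
    results = []
    for i in range(1, len(values)):  # split point between left and right factors
        for left in all_products(op, values[:i]):
            for right in all_products(op, values[i:]):
                results.append(op(left, right))
    return results

# Addition is associative: all C_3 = 5 bracketings of 4 values agree.
sums = all_products(lambda x, y: x + y, [1, 2, 3, 4])
print(len(sums), set(sums))  # 5 {10}

# Subtraction is not: different bracketings give different values.
print(set(all_products(lambda x, y: x - y, [1, 2, 3, 4])))  # {-8, -2, 0, 6}
```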
An example where this does not work is the logical biconditional ↔. It is associative; thus, A ↔ (B ↔ C) is equivalent to (A ↔ B) ↔ C, but A ↔ B ↔ C most commonly means (A ↔ B) and (B ↔ C), which is not equivalent.
Some examples of associative operations include the following.
"hello"
, " "
, "world"
can be computed by concatenating the first two strings (giving "hello "
) and appending the third string ("world"
), or by joining the second and third string (giving " world"
) and concatenating the first string ("hello"
) with the result. The two methods produce the same result; string concatenation is associative (but not commutative).× | A | B | C |
---|---|---|---|
A | A | A | A |
B | A | B | C |
C | A | A | A |
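Whether a finite operation table like the one above is associative can be checked by brute force over all triples. A minimal Python sketch (the names are illustrative):

```python
# Encode the operation table above and test associativity exhaustively.
table = {
    ("A", "A"): "A", ("A", "B"): "A", ("A", "C"): "A",
    ("B", "A"): "A", ("B", "B"): "B", ("B", "C"): "C",
    ("C", "A"): "A", ("C", "B"): "A", ("C", "C"): "A",
}
op = lambda x, y: table[(x, y)]
elements = ("A", "B", "C")

assert all(op(op(x, y), z) == op(x, op(y, z))
           for x in elements for y in elements for z in elements)
assert op("B", "C") != op("C", "B")  # B × C = C but C × B = A
print("associative but not commutative")
```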
In standard truth-functional propositional logic, association, [4] [5] or associativity, [6] refers to two valid rules of replacement. The rules allow one to move parentheses in logical expressions in logical proofs. The rules, written in logical connective notation, are

$$(P \lor (Q \lor R)) \Leftrightarrow ((P \lor Q) \lor R)$$

and

$$(P \land (Q \land R)) \Leftrightarrow ((P \land Q) \land R),$$

where "$\Leftrightarrow$" is a metalogical symbol representing "can be replaced in a proof with".
Associativity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that associativity is a property of particular connectives. The following (and their converses, since ↔ is commutative) are truth-functional tautologies:

- Associativity of disjunction: $((P \lor Q) \lor R) \leftrightarrow (P \lor (Q \lor R))$
- Associativity of conjunction: $((P \land Q) \land R) \leftrightarrow (P \land (Q \land R))$
- Associativity of equivalence: $((P \leftrightarrow Q) \leftrightarrow R) \leftrightarrow (P \leftrightarrow (Q \leftrightarrow R))$
Joint denial is an example of a truth-functional connective that is not associative.
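Associativity of a connective can likewise be verified, or refuted, by checking every truth assignment. A small Python sketch, assuming the usual truth-functional definitions of the biconditional and of joint denial (NOR):

```python
from itertools import product

bools = (False, True)
iff = lambda p, q: p == q        # biconditional, p <-> q
nor = lambda p, q: not (p or q)  # joint denial, p NOR q

def is_associative(op):
    return all(op(op(p, q), r) == op(p, op(q, r))
               for p, q, r in product(bools, repeat=3))

print(is_associative(iff))  # True: the biconditional is associative
print(is_associative(nor))  # False: joint denial is not
```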
A binary operation ∗ on a set S that does not satisfy the associative law is called non-associative. Symbolically,

$$(x \ast y) \ast z \neq x \ast (y \ast z) \quad \text{for some } x, y, z \in S.$$

For such an operation the order of evaluation does matter. For example:

- Subtraction: $(5 - 3) - 2 \neq 5 - (3 - 2)$
- Division: $(4 / 2) / 2 \neq 4 / (2 / 2)$
- Exponentiation: $2^{(1^2)} \neq (2^1)^2$

Also, although addition is associative for finite sums, it is not associative inside infinite sums (series). For example,

$$(1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0,$$

whereas

$$1 + (-1 + 1) + (-1 + 1) + \cdots = 1.$$
Some non-associative operations are fundamental in mathematics. They appear often as the multiplication in structures called non-associative algebras, which also have an addition and a scalar multiplication. Examples are the octonions and Lie algebras. In Lie algebras, the multiplication satisfies the Jacobi identity instead of the associative law; this allows abstracting the algebraic nature of infinitesimal transformations.
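For reference, writing the Lie algebra multiplication as a bracket $[x, y]$, the Jacobi identity reads

$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0.$$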
Other examples are quasigroups, quasifields, non-associative rings, and commutative non-associative magmas.
In mathematics, addition and multiplication of real numbers are associative. By contrast, in computer science, addition and multiplication of floating point numbers are not associative, as different rounding errors may be introduced when dissimilar-sized values are joined in a different order. [7]
To illustrate this, consider a floating point representation with a 4-bit significand:

$$(1.000_2 \times 2^0 + 1.000_2 \times 2^0) + 1.000_2 \times 2^4 = 1.000_2 \times 2^1 + 1.000_2 \times 2^4 = 1.001_2 \times 2^4$$

$$1.000_2 \times 2^0 + (1.000_2 \times 2^0 + 1.000_2 \times 2^4) = 1.000_2 \times 2^0 + 1.000_2 \times 2^4 = 1.000_2 \times 2^4$$
Even though most computers compute with 24 or 53 bits of significand, [8] this is still an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing. [9] [10]
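The effect is easy to reproduce in any IEEE 754 environment. A brief Python illustration (Python floats are 53-bit-significand doubles); `math.fsum`, which computes an exactly rounded sum, is one built-in way to make the result independent of grouping:

```python
import math

# Grouping changes the rounding error.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False
print((a + b) + c, a + (b + c))    # 0.6000000000000001 vs 0.6

# A large intermediate value can absorb a small term entirely.
xs = [1e16, 1.0, -1e16]
print(sum(xs))        # 0.0: the 1.0 was lost to rounding
print(math.fsum(xs))  # 1.0: exactly rounded summation
```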
In general, parentheses must be used to indicate the order of evaluation if a non-associative operation appears more than once in an expression (unless the notation specifies the order in another way, like $\dfrac{2}{3/4}$). However, mathematicians agree on a particular order of evaluation for several common non-associative operations. This is simply a notational convention to avoid parentheses.
A left-associative operation is a non-associative operation that is conventionally evaluated from left to right, i.e.,

$$x \ast y \ast z = (x \ast y) \ast z,$$

while a right-associative operation is conventionally evaluated from right to left:

$$x \ast y \ast z = x \ast (y \ast z).$$
Both left-associative and right-associative operations occur. Left-associative operations include the following:

- Subtraction and division of real numbers: $x - y - z = (x - y) - z$ and $x / y / z = (x / y) / z$.
- Function application: $(f \, x \, y) = ((f \, x) \, y)$.
This notation can be motivated by the currying isomorphism, which enables partial application.
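As a minimal Python sketch of currying and partial application (the function name is illustrative): a curried function takes its arguments one at a time, and left-associative application means `subtract(5)(3)` reads as `(subtract(5))(3)`.

```python
def subtract(x):
    """Curried subtraction: subtract(x)(y) == x - y."""
    def with_x(y):
        return x - y
    return with_x

step = subtract(5)  # partial application: x is fixed to 5
print(step(3))      # 2, i.e. (subtract(5))(3)
```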
Right-associative operations include the following:

- Exponentiation of real numbers in superscript notation: $x^{y^z} = x^{(y^z)}$.
- Function types: the notation $x \to y \to z$ means $x \to (y \to z)$.
Exponentiation is commonly used with brackets or right-associatively because a repeated left-associative exponentiation operation is of little use. Repeated powers would mostly be rewritten with multiplication:

$$(x^y)^z = x^{yz}.$$
Formatted correctly, the superscript inherently behaves as a set of parentheses; e.g. in the expression $2^{x+3}$ the addition is performed before the exponentiation despite there being no explicit parentheses wrapped around it. Thus given an expression such as $x^{y^z}$, the full exponent $y^z$ of the base $x$ is evaluated first. However, in some contexts, especially in handwriting, the difference between ${x^y}^z = (x^y)^z$ and $x^{y^z}$ can be hard to see. In such a case, right-associativity is usually implied.
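Programming languages typically bake these conventions into their grammars; in Python, for example, `**` is right-associative while `-` is left-associative:

```python
print(2 ** 3 ** 2)    # 512: parsed as 2 ** (3 ** 2)
print((2 ** 3) ** 2)  # 64: the left-associative reading
print(10 - 5 - 2)     # 3: parsed as (10 - 5) - 2
print(10 - (5 - 2))   # 7
```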
Using right-associative notation for these operations can be motivated by the Curry–Howard correspondence and by the currying isomorphism.
Non-associative operations for which no conventional evaluation order is defined include the following.

- Taking the cross product of three vectors: $\vec{a} \times (\vec{b} \times \vec{c}) \neq (\vec{a} \times \vec{b}) \times \vec{c}$ in general.
- Taking the pairwise average of real numbers: $\frac{(x + y)/2 + z}{2} \neq \frac{x + (y + z)/2}{2}$ in general.
- Taking the relative complement of sets: $(A \setminus B) \setminus C$ is not the same as $A \setminus (B \setminus C)$. (Compare material nonimplication in logic.)
William Rowan Hamilton seems to have coined the term "associative property" [17] around 1844, a time when he was contemplating the non-associative algebra of the octonions he had learned about from John T. Graves. [18]