In mathematics, a family, or indexed family, is informally a collection of objects, each associated with an index from some index set. For example, a family of real numbers, indexed by the set of integers, is a collection of real numbers, where a given function selects one real number for each integer (possibly the same number for different integers).
More formally, an indexed family is a mathematical function together with its domain $I$ and image $X$ (that is, indexed families and mathematical functions are technically identical, just points of view are different). Often the elements of the set $X$ are referred to as making up the family. In this view, indexed families are interpreted as collections of indexed elements instead of functions. The set $I$ is called the index set of the family, and $X$ is the indexed set.
Sequences are one type of family, indexed by the natural numbers. In general, the index set is not restricted to be countable. For example, one could consider an uncountable family of subsets of the natural numbers indexed by the real numbers.
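For concreteness, one such family (an illustrative choice, not taken from the text) assigns to each real number the set of natural numbers it dominates:

```latex
(A_r)_{r \in \mathbb{R}}, \qquad A_r = \{\, n \in \mathbb{N} : n \leq r \,\} \subseteq \mathbb{N}
```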
Let $I$ and $X$ be sets and $f$ a function such that
$$f : I \to X, \quad i \mapsto x_i = f(i),$$
where $i$ is an element of $I$ and the image $f(i)$ of $i$ under the function $f$ is denoted by $x_i$. For example, $f(3)$ is denoted by $x_3$. The symbol $x_i$ is used to indicate that $x_i$ is the element of $X$ indexed by $i$. The function $f$ thus establishes a family of elements in $X$ indexed by $I$, which is denoted by $(x_i)_{i \in I}$, or simply $(x_i)$ if the index set is assumed to be known. Sometimes angle brackets or braces are used instead of parentheses, although the use of braces risks confusing indexed families with sets.
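As a minimal sketch (Python chosen here for illustration; the dictionary and the names are assumptions, not from the text), a finite indexed family is exactly a function from an index set, which a dict models directly:

```python
# A family (x_i)_{i in I} modeled as a dict, i.e. a function I -> X.
family = {1: 2.5, 2: -1.0, 3: 2.5}   # x_1 = 2.5, x_2 = -1.0, x_3 = 2.5

index_set = set(family)              # I = {1, 2, 3}
image = set(family.values())         # {2.5, -1.0}: the indexed elements

# The same element may carry different indices (x_1 == x_3), so the
# function need not be injective and the image can be smaller than I.
assert family[1] == family[3]
assert len(image) < len(index_set)
```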
Functions and indexed families are formally equivalent, since any function $f$ with a domain $I$ induces a family $(f(i))_{i \in I}$, and conversely. Being an element of a family is equivalent to being in the range of the corresponding function. In practice, however, a family is viewed as a collection, rather than a function.
Any set $X$ gives rise to a family $(x_x)_{x \in X}$, where $X$ is indexed by itself (meaning that $f$ is the identity function). However, families differ from sets in that the same object can appear multiple times with different indices in a family, whereas a set is a collection of distinct objects. A family contains any element exactly once if and only if the corresponding function is injective.
An indexed family $(a_i)_{i \in I}$ defines a set $\mathcal{A} = \{a_i : i \in I\}$, that is, the image of $I$ under $f$. Since the mapping $f$ is not required to be injective, there may exist $i, j \in I$ with $i \neq j$ such that $a_i = a_j$. Thus, $|\mathcal{A}| \leq |I|$, where $|\mathcal{A}|$ denotes the cardinality of the set $\mathcal{A}$. For example, the sequence $((-1)^i)_{i \in \mathbb{N}}$ indexed by the natural numbers has image set $\{(-1)^i : i \in \mathbb{N}\} = \{-1, 1\}$. In addition, the set does not carry information about any structures on $I$. Hence, by using a set instead of the family, some information might be lost. For example, an ordering on the index set of a family induces an ordering on the family, but no ordering on the corresponding image set.
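As a small illustration (again a sketch, not from the text), collapsing a family to its image set forgets both multiplicity and the order inherited from the index set:

```python
# The family ((-1)^i) for i = 0..3, modeled as a dict on the index set.
family = {i: (-1) ** i for i in range(4)}

as_sequence = [family[i] for i in sorted(family)]  # [1, -1, 1, -1]: ordered
as_set = set(family.values())                      # {1, -1}: order and
                                                   # multiplicity are lost
print(as_sequence, as_set)
```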
An indexed family $(B_i)_{i \in J}$ is a subfamily of an indexed family $(A_i)_{i \in I}$ if and only if $J$ is a subset of $I$ and $B_i = A_i$ holds for all $i \in J$.
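For instance (an illustrative example, not from the text), restricting the index set yields a subfamily:

```latex
(A_i)_{i \in \{1,2\}} \text{ is a subfamily of } (A_i)_{i \in \{1,2,3\}},
\quad \text{since } \{1,2\} \subseteq \{1,2,3\} \text{ and the entries agree on } \{1,2\}.
```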
For example, consider the following sentence:
The vectors $v_1, \ldots, v_n$ are linearly independent.
Here $(v_i)_{i=1}^{n}$ denotes a family of vectors. The $i$-th vector $v_i$ only makes sense with respect to this family, as sets are unordered, so there is no $i$-th vector of a set. Furthermore, linear independence is defined as a property of a collection; it therefore matters whether those vectors are linearly independent as a set or as a family. For example, if we consider $n = 2$ and $v_1 = v_2 = (1, 0)$, then the set $\{v_1, v_2\}$ consists of only one element (as a set is a collection of unordered distinct elements) and is linearly independent, but the family $(v_1, v_2)$ contains the same element twice (under different indices) and is linearly dependent (equal vectors are linearly dependent).
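The distinction can be checked numerically; the following sketch (using NumPy, an assumption of this illustration) compares the rank of the family with the rank of the corresponding set:

```python
import numpy as np

# Family (v_1, v_2) with v_1 = v_2 = (1, 0): two indexed copies of one vector.
family = np.array([[1.0, 0.0],
                   [1.0, 0.0]])
vector_set = np.unique(family, axis=0)   # collapse duplicates: a single row

print(np.linalg.matrix_rank(family))      # 1 < 2 rows: dependent as a family
print(np.linalg.matrix_rank(vector_set))  # 1 == 1 row: independent as a set
```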
Suppose a text states the following:
A square matrix $A$ is invertible if and only if the rows of $A$ are linearly independent.
As in the previous example, it is important that the rows of $A$ are linearly independent as a family, not as a set. For example, consider the matrix
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$
The set of its rows consists of a single element, $(1, 1)$, since a set consists of unique elements, and a single nonzero vector is linearly independent; yet the matrix is not invertible, as its determinant is 0. The family of the rows, on the other hand, contains two elements under different indices (the 1st row and the 2nd row), so it is linearly dependent. The statement is therefore correct if it refers to the family of rows, but wrong if it refers to the set of rows. (The statement is also correct when "the rows" is interpreted as referring to a multiset, in which the elements are also kept distinct but which lacks some of the structure of an indexed family.)
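A quick numerical check of this example (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])          # both rows equal (1, 1)

print(np.linalg.det(A))             # 0.0: the matrix is not invertible
print(np.linalg.matrix_rank(A))     # 1: the rows, as a family, are dependent
```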
Let $\mathbf{n}$ be the finite set $\{1, 2, \ldots, n\}$, where $n$ is a positive integer.
Index sets are often used in sums and other similar operations. For example, if $(a_i)_{i \in I}$ is an indexed family of numbers, the sum of all those numbers is denoted by
$$\sum_{i \in I} a_i.$$
When $(A_i)_{i \in I}$ is a family of sets, the union of all those sets is denoted by
$$\bigcup_{i \in I} A_i.$$
Likewise for intersections, $\bigcap_{i \in I} A_i$, and Cartesian products, $\prod_{i \in I} A_i$.
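The same index-set idiom translates directly into code; the following sketch (dict-based, an illustrative assumption) forms a sum, a union, and an intersection over a finite index set:

```python
numbers = {1: 2, 2: 3, 3: 5}               # (a_i)_{i in I}, I = {1, 2, 3}
sets = {1: {0, 1}, 2: {1, 2}, 3: {1, 3}}   # (A_i)_{i in I}

total = sum(numbers[i] for i in numbers)                   # sum_{i in I} a_i = 10
union = set().union(*(sets[i] for i in sets))              # {0, 1, 2, 3}
intersection = set.intersection(*(sets[i] for i in sets))  # {1}
print(total, union, intersection)
```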
The analogous concept in category theory is called a diagram. A diagram is a functor giving rise to an indexed family of objects in a category C, indexed by another category J, and related by morphisms depending on two indices.