Rado's theorem is a theorem from the branch of mathematics known as Ramsey theory. It is named for the German mathematician Richard Rado. It was proved in his thesis, Studien zur Kombinatorik.
Let Ax = 0 be a system of linear equations, where A is a matrix with integer entries. This system is said to be r-regular if, for every r-coloring of the natural numbers 1, 2, 3, ..., the system has a monochromatic solution. A system is regular if it is r-regular for all r ≥ 1.
Rado's theorem states that the system Ax = 0 is regular if and only if the matrix A satisfies the columns condition. Let ci denote the i-th column of A. The matrix A satisfies the columns condition provided that there exists a partition C1, C2, ..., Cn of the column indices such that, if $s_i = \sum_{j \in C_i} c_j$ denotes the sum of the columns in the i-th block, then
1. s1 = 0 (the zero vector), and
2. for every i ≥ 2, si can be written as a rational linear combination of the columns cj with j in C1 ∪ C2 ∪ ... ∪ Ci−1.
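For instance, the single equation x + y = 2z, whose coefficient matrix is A = (1 1 −2), satisfies the columns condition with the single block C1 = {1, 2, 3}:
$$s_1 = c_1 + c_2 + c_3 = 1 + 1 - 2 = 0,$$
so the equation is regular; here the second condition is vacuous because there is only one block.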
Folkman's theorem, the statement that there exist arbitrarily large sets of integers all of whose nonempty sums are monochromatic, may be seen as a special case of Rado's theorem concerning the regularity of the system of equations
$$x_T = \sum_{i \in T} x_i,$$
where T ranges over each nonempty subset of the set {1, 2, ..., n}. [2]
Other special cases of Rado's theorem are Schur's theorem and Van der Waerden's theorem. To prove the former, apply Rado's theorem to the matrix (1 1 −1), which encodes the equation x + y = z. For Van der Waerden's theorem, with m chosen to be the length of the monochromatic arithmetic progression, one can for example consider the following matrix, which encodes the system of equations xi = a + i·d for i = 1, ..., m in the variables a, d, x1, ..., xm:
$$\begin{pmatrix} 1 & 1 & -1 & 0 & \cdots & 0 & 0 \\ 1 & 2 & 0 & -1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & m & 0 & 0 & \cdots & 0 & -1 \end{pmatrix}$$
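As a check, the Schur matrix (1 1 −1) satisfies the columns condition with the partition C1 = {1, 3}, C2 = {2}:
$$s_1 = c_1 + c_3 = 1 - 1 = 0, \qquad s_2 = c_2 = 1 = 1 \cdot c_1,$$
so s2 is a rational (indeed integral) combination of the columns in C1.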
Given a system of linear equations, it is a priori unclear how to check computationally whether it is regular. Fortunately, Rado's theorem provides a criterion which is testable in finite time: instead of considering colorings of the infinitely many natural numbers, one only needs to check that the given matrix satisfies the columns condition. Since the matrix consists of only finitely many columns, this property can be verified in finite time.
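As an illustration of this finite check, here is a minimal brute-force sketch in Python (assuming SymPy is available; the helper names set_partitions and satisfies_columns_condition are chosen here for illustration only). It enumerates all ordered partitions of the column indices and tests the two parts of the columns condition using exact rational ranks.

```python
from itertools import permutations
from sympy import Matrix

def set_partitions(items):
    """Yield every partition of the list `items` into nonempty blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        # put `first` into one of the existing blocks ...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + smaller

def satisfies_columns_condition(A):
    """Brute-force test of Rado's columns condition (illustrative, exponential time)."""
    A = Matrix(A)
    zero = Matrix.zeros(A.rows, 1)
    for blocks in set_partitions(list(range(A.cols))):
        for order in permutations(blocks):
            sums = [sum((A.col(j) for j in block), zero) for block in order]
            if sums[0] != zero:   # condition 1: the first block must sum to the zero vector
                continue
            seen, ok = [], True
            for i, block in enumerate(order):
                if i >= 1:
                    # condition 2: s_i must be a rational combination of the columns in C_1, ..., C_{i-1}
                    B = Matrix.hstack(*seen)
                    if B.rank() != Matrix.hstack(B, sums[i]).rank():
                        ok = False
                        break
                seen.extend(A.col(j) for j in block)
            if ok:
                return True
    return False

print(satisfies_columns_condition(Matrix([[1, 1, -1]])))   # True:  Schur's equation x + y = z is regular
print(satisfies_columns_condition(Matrix([[2, -1]])))      # False: y = 2x is not regular
```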
However, the subset sum problem can be reduced to the problem of computing the required partition C1, C2, ..., Cn of the columns: given an input set S for the subset sum problem, we can write the elements of S as a matrix of shape 1 × |S|; the elements of S whose columns lie in the first block C1 must then sum to zero. The subset sum problem is NP-complete. Hence, verifying that a system of linear equations is regular is also an NP-complete problem.
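Continuing the sketch above, a hypothetical subset-sum-style instance written as a 1 × |S| matrix can be fed to the same checker; the columns condition then amounts to finding a nonempty zero-sum subset of S to serve as C1.

```python
# Hypothetical instance S = {3, -1, -2}, written as a 1 x |S| matrix.
# The columns condition holds because the nonempty subset {3, -1, -2} sums to zero,
# so it can serve as the first block C1.
print(satisfies_columns_condition(Matrix([[3, -1, -2]])))   # True
# For S = {3, -1, 5} no nonempty subset sums to zero, so no valid C1 exists.
print(satisfies_columns_condition(Matrix([[3, -1, 5]])))    # False
```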
In mathematics, any vector space V has a corresponding dual vector space consisting of all linear forms on V, together with the vector space structure of pointwise addition and scalar multiplication by constants.
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible, and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants. The determinant of a matrix A is denoted det(A), det A, or |A|.
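As a quick numerical illustration (a small NumPy sketch; the matrices are arbitrary examples chosen here):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

print(np.linalg.det(A))                 # -2.0: nonzero, so A is invertible
print(np.isclose(np.linalg.det(A @ B),  # multiplicativity: det(AB) = det(A) * det(B)
                 np.linalg.det(A) * np.linalg.det(B)))
```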
In mathematics, a linear map is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.
In mathematics, a set B of vectors in a vector space V is called a basis if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors.
Linear algebra is the branch of mathematics concerning linear equations such as $a_1 x_1 + \cdots + a_n x_n = b$, linear maps such as $(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n$, and their representations in vector spaces and through matrices.
In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
In linear algebra, the column space of a matrix A is the span of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
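A small NumPy sketch of the shape rule (the example arrays are arbitrary):

```python
import numpy as np

A = np.arange(6).reshape(2, 3)    # a 2 x 3 matrix
B = np.arange(12).reshape(3, 4)   # a 3 x 4 matrix: columns of A (3) match rows of B (3)
print((A @ B).shape)              # (2, 4): rows of the first factor, columns of the second
```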
In linear algebra, an n-by-n square matrix A is called invertible if there exists an n-by-n square matrix B such that
$$AB = BA = I_n,$$
where I_n denotes the n-by-n identity matrix.
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of vector spaces of finite dimension is the characteristic polynomial of the matrix of the endomorphism over any base; it does not depend on the choice of a basis. The characteristic equation, also known as the determinantal equation, is the equation obtained by equating to zero the characteristic polynomial.
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.
In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called matrix direct product.
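A short NumPy sketch (with arbitrary example matrices) showing the block structure of the Kronecker product:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
K = np.kron(A, B)                          # 4 x 4 block matrix whose (i, j) block is A[i, j] * B
print(K.shape)                             # (4, 4)
print(np.array_equal(K[:2, 2:], 2 * B))    # True: the top-right block equals A[0, 1] * B = 2 * B
```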
In linear algebra, the Gram matrix of a set of vectors v1, ..., vn in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$. If the vectors v1, ..., vn are real and are the columns of a matrix X, then the Gram matrix is $X^{\mathsf{T}} X$.
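A small NumPy sketch of the real case (the example vectors are arbitrary):

```python
import numpy as np

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])   # columns v1, v2 in R^3
G = X.T @ X                  # Gram matrix: G[i, j] = <v_i, v_j>
print(G)                     # [[2. 1.]
                             #  [1. 5.]]
```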
In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map L : V → W between two vector spaces V and W, the kernel of L is the vector space of all elements v of V such that L(v) = 0, where 0 denotes the zero vector in W, or more symbolically:
$$\ker(L) = \{\, v \in V : L(v) = 0 \,\}.$$
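A brief sketch using SymPy's nullspace method on an arbitrary example matrix to compute a basis of the kernel:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])   # rank 1, so the kernel is 2-dimensional
print(A.nullspace())      # two basis vectors v with A * v = 0
```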
In mathematics, a real coordinate space of dimension n, written Rn or ℝⁿ, is a coordinate space over the real numbers. This means that it is the set of the n-tuples of real numbers. With component-wise addition and scalar multiplication, it is a real vector space.
In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the given dataset and those predicted by the linear function of the independent variable.
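A minimal NumPy sketch of OLS via least squares (the data points here are made up for illustration):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])            # hypothetical observations, roughly 2*x + 1
X = np.column_stack([np.ones_like(x), x])     # design matrix with an intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes the sum of squared residuals ||y - X beta||^2
print(beta)                                   # approximately [1.09, 1.94] = [intercept, slope]
```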
In combinatorics, a branch of mathematics, partition regularity is one notion of largeness for a collection of sets.
In linear algebra, a coefficient matrix is a matrix consisting of the coefficients of the variables in a set of linear equations. The matrix is used in solving systems of linear equations.
In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimension of the matrix below is 2 × 3, because there are two rows and three columns:
$$\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$$
Folkman's theorem is a theorem in mathematics, and more particularly in arithmetic combinatorics and Ramsey theory. According to this theorem, whenever the natural numbers are partitioned into finitely many subsets, there exist arbitrarily large sets of numbers all of whose sums belong to the same subset of the partition. The theorem had been discovered and proved independently by several mathematicians, before it was named "Folkman's theorem", as a memorial to Jon Folkman, by Graham, Rothschild, and Spencer.