Bondy's theorem

In mathematics, Bondy's theorem is a bound on the number of elements needed to distinguish the sets in a family of sets from each other. It belongs to the field of combinatorics, and is named after John Adrian Bondy, who published it in 1972.[1]

Statement

The theorem is as follows:

Let X be a set with n elements and let A1, A2, ..., An be distinct subsets of X. Then there exists a subset S of X with n − 1 elements such that the sets Ai ∩ S are all distinct.

In other words, if we have an n × n 0-1 matrix whose rows are pairwise distinct, we can remove one column such that the rows of the resulting n × (n − 1) matrix are still pairwise distinct.[2][3]
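To make the matrix formulation concrete, here is a minimal Python sketch (the function name removable_column is illustrative, not standard) that finds such a column by brute force, simply trying each of the n columns in turn:

```python
def removable_column(rows):
    """Given a 0-1 matrix as a list of pairwise-distinct row tuples
    (with at most as many rows as columns), return the index of a
    column whose deletion leaves the rows pairwise distinct.
    Bondy's theorem guarantees that such a column exists."""
    n_cols = len(rows[0])
    for j in range(n_cols):
        # Delete column j from every row and test for collisions.
        reduced = [row[:j] + row[j + 1:] for row in rows]
        if len(set(reduced)) == len(rows):
            return j
    return None  # unreachable when the theorem's hypotheses hold
```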

Example

Consider the 4 × 4 matrix

$$M = \begin{pmatrix} 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \end{pmatrix},$$

where all rows are pairwise distinct. If we delete, for example, the first column, the resulting matrix

$$\begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$

no longer has this property: the first row is identical to the second row. Nevertheless, by Bondy's theorem we know that we can always find a column that can be deleted without introducing any identical rows. In this case, we can delete the third column: all rows of the resulting 4 × 3 matrix

$$\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$

are distinct. Another possibility would have been deleting the fourth column.
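Continuing the sketch above, one can check each column of this matrix individually. Since the theorem only asserts existence, a matrix may well have several deletable columns; here the second column would also work, which the prose above does not rule out:

```python
M = [(1, 1, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1), (0, 1, 1, 0)]
for j in range(4):
    reduced = [row[:j] + row[j + 1:] for row in M]
    status = "deletable" if len(set(reduced)) == len(M) else "not deletable"
    print(f"column {j + 1}: {status}")
# column 1: not deletable  (rows 1 and 2 collide)
# column 2: deletable
# column 3: deletable
# column 4: deletable
```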

Learning theory application

From the perspective of computational learning theory, Bondy's theorem can be rephrased as follows: [4]

Let C be a concept class over a finite domain X. Then there exists a subset S of X of size at most |C| − 1 such that S is a witness set for every concept in C.

This implies that every finite concept class C has its teaching dimension bounded by |C| − 1.
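As a rough illustration of this rephrasing, the sketch below searches for a smallest set of domain points on which all concepts remain distinguishable (the name universal_witness_set is mine; concepts are assumed to be encoded as pairwise-distinct 0-1 tuples over the domain):

```python
from itertools import combinations

def universal_witness_set(concepts, domain_size):
    """Return a smallest set S of domain points such that the concepts,
    restricted to S, stay pairwise distinct; S is then a witness set
    for every concept simultaneously. By Bondy's theorem, some such S
    has size at most len(concepts) - 1, so the search always succeeds."""
    for k in range(domain_size + 1):
        for S in combinations(range(domain_size), k):
            restricted = [tuple(c[i] for i in S) for c in concepts]
            if len(set(restricted)) == len(concepts):
                return set(S)

# The four rows of the example matrix above, viewed as concepts:
# Bondy's bound gives |S| <= |C| - 1 = 3, and here size 3 is in
# fact needed, since every 2-element subset of the domain leaves
# some pair of concepts indistinguishable.
C = [(1, 1, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1), (0, 1, 1, 0)]
print(universal_witness_set(C, 4))  # {0, 1, 2}
```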

Notes

  1. Bondy, J. A. (1972), "Induced subsets", Journal of Combinatorial Theory, Series B, 12 (2): 201–202, doi:10.1016/0095-8956(72)90025-1, MR 0319773.
  2. Jukna, Stasys (2001), Extremal Combinatorics with Applications in Computer Science, Springer, ISBN 978-3-540-66313-3, Section 12.1.
  3. Clote, Peter; Remmel, Jeffrey B. (1995), Feasible Mathematics II, Birkhäuser, ISBN 978-3-7643-3675-2, Section 4.1.
  4. Kushilevitz, Eyal; Linial, Nathan; Rabinovich, Yuri; Saks, Michael (1996), "Witness sets for families of binary vectors", Journal of Combinatorial Theory, Series A, 73 (2): 376–380, doi:10.1016/S0097-3165(96)80015-X, MR 1370141.
