Growth function

The growth function, also called the shatter coefficient or the shattering number, measures the richness of a set family or a class of functions. It is especially used in the context of statistical learning theory, where it serves to study properties of statistical learning methods. The term 'growth function' was coined by Vapnik and Chervonenkis in their 1968 paper, where they also proved many of its properties. [1] It is a basic concept in machine learning. [2] [3]

Definitions

Set-family definition

Let $H$ be a set family (a set of sets) and $C$ a set. Their intersection is defined as the following set-family:

$$H \cap C := \{h \cap C \mid h \in H\}$$

The intersection-size (also called the index) of $H$ with respect to $C$ is $|H \cap C|$. If a set $C_m$ has $m$ elements then the index is at most $2^m$. If the index is exactly $2^m$ then the set $C_m$ is said to be shattered by $H$, because $H \cap C_m$ contains all the subsets of $C_m$, i.e.:

$$|H \cap C_m| = 2^m$$

The growth function measures the size of $H \cap C$ as a function of $|C|$. Formally:

$$\operatorname{Growth}(H,m) := \max_{C : |C| = m} |H \cap C|$$

Hypothesis-class definition

Equivalently, let $H$ be a hypothesis-class (a set of binary functions) and $C$ a set with $m$ elements. The restriction of $H$ to $C$ is the set of binary functions on $C$ that can be derived from $H$: [3] :45

$$H_C := \{(h(x_1), \ldots, h(x_m)) \mid h \in H\},$$

where $x_1, \ldots, x_m$ are the elements of $C$ in some fixed order.

The growth function measures the size of $H_C$ as a function of $|C|$: [3] :49

$$\operatorname{Growth}(H,m) := \max_{C : |C| = m} |H_C|$$
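
The two definitions coincide when each binary function $h \in H$ is identified with the set of points it labels $1$. For a finite class over a finite domain, the growth function can be computed by brute force directly from this definition. The following is a minimal sketch (the helper name `growth_function` and the threshold class are illustrative assumptions, not from the cited sources):

    from itertools import combinations

    def growth_function(hypotheses, domain, m):
        """Brute-force Growth(H, m): the maximum number of distinct
        restrictions (label vectors) of H on any m-element subset of the domain."""
        best = 0
        for C in combinations(domain, m):
            restrictions = {tuple(h(x) for x in C) for h in hypotheses}
            best = max(best, len(restrictions))
        return best

    # Threshold functions h_t(x) = 1 iff x > t: the half-lines of Example 1 below.
    domain = list(range(10))
    thresholds = [lambda x, t=t: int(x > t) for t in range(-1, 10)]
    for m in range(1, 5):
        print(m, growth_function(thresholds, domain, m))  # prints Growth(H, m) = m + 1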

Examples

1. The domain is the real line $\mathbb{R}$. The set-family $H$ contains all the half-lines (rays) from a given number to positive infinity, i.e., all sets of the form $\{x > x_0\}$ for some $x_0 \in \mathbb{R}$. For any set $C$ of $m$ real numbers, the intersection $H \cap C$ contains $m+1$ sets: the empty set, the set containing the largest element of $C$, the set containing the two largest elements of $C$, and so on. Therefore: $\operatorname{Growth}(H,m) = m + 1$. [1] :Ex.1 The same is true whether $H$ contains open half-lines, closed half-lines, or both.

2. The domain is the segment $[0,1]$. The set-family $H$ contains all the open sets. For any finite set $C$ of $m$ real numbers, the intersection $H \cap C$ contains all possible subsets of $C$. There are $2^m$ such subsets, so $\operatorname{Growth}(H,m) = 2^m$. [1] :Ex.2

3. The domain is the Euclidean space $\mathbb{R}^n$. The set-family $H$ contains all the half-spaces of the form $\{x : x \cdot \phi \geq 1\}$, where $\phi$ is a fixed vector. Then $\operatorname{Growth}(H,m) = \operatorname{Comp}(n,m)$, where $\operatorname{Comp}(n,m)$ is the number of components in a partitioning of an $n$-dimensional space by $m$ hyperplanes. [1] :Ex.3

4. The domain is the real line $\mathbb{R}$. The set-family $H$ contains all the real intervals, i.e., all sets of the form $\{x_0 < x < x_1\}$ for some $x_0, x_1 \in \mathbb{R}$. For any set $C$ of $m$ real numbers, the intersection $H \cap C$ contains all runs of between $0$ and $m$ consecutive elements of $C$. The number of such runs is $\binom{m}{2} + m + 1$, so $\operatorname{Growth}(H,m) = \binom{m}{2} + m + 1$.
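
Both counts can be checked numerically with the `growth_function` sketch from the definitions section; the finite integer domain stands in for the real line (an illustrative assumption):

    from math import comb

    # Intervals (x0, x1) restricted to a finite domain, as 0/1 indicator functions.
    domain = list(range(8))
    intervals = [lambda x, a=a, b=b: int(a < x < b)
                 for a in range(-1, 9) for b in range(-1, 10)]

    for m in range(1, 6):
        got = growth_function(intervals, domain, m)
        print(m, got, comb(m, 2) + m + 1)  # the two columns agree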

Polynomial or exponential

The main property that makes the growth function interesting is that it is either polynomial or exponential, with nothing in between.

The following is a property of the intersection-size: [1] :Lem.1

If, for some set $C_m$ of size $m$ and for some number $n \leq m$,

$$|H \cap C_m| \geq \sum_{i=0}^{n-1} \binom{m}{i} + 1,$$

then there exists a subset $C_n \subseteq C_m$ of size $n$ that is shattered by $H$, i.e., $|H \cap C_n| = 2^n$.

This implies the following property of the growth function. [1] :Th.1 For every family $H$ there are two cases:

  - The exponential case: $\operatorname{Growth}(H,m) = 2^m$ identically.
  - The polynomial case: $\operatorname{Growth}(H,m)$ is majorized by $m^{n-1} + 1$, where $n$ is the smallest integer for which $\operatorname{Growth}(H,n) < 2^n$.
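
As a quick check of the dichotomy, consider the half-lines of Example 1: $\operatorname{Growth}(H,1) = 2 = 2^1$ but $\operatorname{Growth}(H,2) = 3 < 2^2$, so $n = 2$, and indeed $\operatorname{Growth}(H,m) = m + 1 = m^{n-1} + 1$ for all $m$.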

Other properties

Trivial upper bound

For any finite $H$:

$$\operatorname{Growth}(H,m) \leq |H|$$

since for every $C$, the number of elements in $H \cap C$ is at most $|H|$. Therefore, the growth function is mainly interesting when $H$ is infinite.

Exponential upper bound

For any nonempty $H$:

$$\operatorname{Growth}(H,m) \leq 2^m$$

I.e., the growth function has an exponential upper bound.

We say that a set-family $H$ shatters a set $C$ if their intersection contains all possible subsets of $C$, i.e., $|H \cap C| = 2^{|C|}$. If $H$ shatters a set $C_m$ of size $m$, then $\operatorname{Growth}(H,m) = 2^m$, which is the upper bound.

Cartesian intersection

Define the Cartesian intersection of two set-families $H_1$ and $H_2$ as:

$$H_1 \otimes H_2 := \{h_1 \cap h_2 \mid h_1 \in H_1, h_2 \in H_2\}.$$

Then: [2] :57

$$\operatorname{Growth}(H_1 \otimes H_2, m) \leq \operatorname{Growth}(H_1, m) \cdot \operatorname{Growth}(H_2, m)$$

Union

For every two set-families $H_1$ and $H_2$: [2] :58

$$\operatorname{Growth}(H_1 \cup H_2, m) \leq \operatorname{Growth}(H_1, m) + \operatorname{Growth}(H_2, m)$$
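
Both inequalities can be verified empirically on small random families. The sketch below checks them for a few values of $m$ (the random-family setup and the `growth_of_sets` helper are illustrative assumptions):

    import random
    from itertools import combinations

    random.seed(0)
    domain = list(range(6))

    def rand_family(k):
        """k random subsets of the domain."""
        return [frozenset(x for x in domain if random.random() < 0.5) for _ in range(k)]

    def growth_of_sets(family, m):
        """Growth(H, m): the maximum number of distinct traces h ∩ C over all m-point sets C."""
        return max(len({h & set(C) for h in family}) for C in combinations(domain, m))

    H1, H2 = rand_family(5), rand_family(5)
    cart = [h1 & h2 for h1 in H1 for h2 in H2]   # Cartesian intersection of H1 and H2
    union = H1 + H2                              # union of the two families
    for m in range(1, 5):
        assert growth_of_sets(cart, m) <= growth_of_sets(H1, m) * growth_of_sets(H2, m)
        assert growth_of_sets(union, m) <= growth_of_sets(H1, m) + growth_of_sets(H2, m)
    print("both bounds hold on this sample")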

VC dimension

The VC dimension of $H$ is defined according to these two cases:

  - In the polynomial case, $\operatorname{VCDim}(H) = n - 1$ = the largest integer $d$ for which $\operatorname{Growth}(H,d) = 2^d$.
  - In the exponential case, $\operatorname{VCDim}(H) = \infty$.

So $\operatorname{VCDim}(H) \geq d$ if-and-only-if $\operatorname{Growth}(H,d) = 2^d$.

The growth function can be regarded as a refinement of the concept of VC dimension. The VC dimension only tells us whether $\operatorname{Growth}(H,d)$ is equal to or smaller than $2^d$, while the growth function tells us exactly how $\operatorname{Growth}(H,m)$ changes as a function of $m$.

Another connection between the growth function and the VC dimension is given by the Sauer–Shelah lemma: [3] :49

If $\operatorname{VCDim}(H) = d$, then for all $m$:

$$\operatorname{Growth}(H,m) \leq \sum_{i=0}^{d} \binom{m}{i}$$

In particular, for all $m > d + 1$:

$$\operatorname{Growth}(H,m) \leq \left(\frac{em}{d}\right)^d = O(m^d)$$

so when the VC dimension is finite, the growth function grows polynomially with $m$.

This upper bound is tight, i.e., for all $m$ there exists $H$ with VC dimension $d$ such that: [2] :56

$$\operatorname{Growth}(H,m) = \sum_{i=0}^{d} \binom{m}{i}$$
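
The gap between the two regimes is easy to see numerically. The following is a plain evaluation of the bounds above for an assumed VC dimension $d = 3$ (the value of $d$ is an arbitrary illustration):

    from math import comb, e

    d = 3  # assumed VC dimension
    for m in [10, 20, 40, 80]:
        sauer = sum(comb(m, i) for i in range(d + 1))  # Sauer-Shelah bound
        loose = (e * m / d) ** d                       # (em/d)^d, valid for m > d + 1
        print(m, sauer, round(loose), 2 ** m)          # polynomial in m vs. exponential 2^m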

Entropy

While the growth function is related to the maximum intersection-size, the entropy is related to the average intersection-size: [1] :272–273

$$\operatorname{Entropy}(H,m) := \mathbb{E}\left[\log_2 |H \cap C_m|\right],$$

where $C_m$ is a set of $m$ elements drawn independently at random according to a fixed probability measure over the domain.

The intersection-size has the following property. For every set-family $H$ and every two sets $C_1, C_2$:

$$|H \cap (C_1 \cup C_2)| \leq |H \cap C_1| \cdot |H \cap C_2|$$

Hence:

$$\operatorname{Entropy}(H, m_1 + m_2) \leq \operatorname{Entropy}(H, m_1) + \operatorname{Entropy}(H, m_2)$$

Moreover, the sequence $\operatorname{Entropy}(H,m)/m$ converges to a constant $c \in [0,1]$ when $m \to \infty$.

Moreover, the random variable $\log_2 |H \cap C_m| \,/\, m$ is concentrated near $c$.
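
A Monte-Carlo sketch of the entropy rate follows (the finite random family and the uniform sampling distribution are illustrative assumptions; for a finite family the limit constant is $c = 0$):

    import random
    from math import log2

    random.seed(1)
    domain = range(30)
    # A random finite set-family over the domain.
    H = [frozenset(x for x in domain if random.random() < 0.3) for _ in range(50)]

    def entropy_rate(m, trials=2000):
        """Estimate Entropy(H, m) / m = E[log2 |H ∩ C_m|] / m over i.i.d. uniform samples."""
        total = 0.0
        for _ in range(trials):
            C = {random.choice(domain) for _ in range(m)}  # m draws with replacement
            total += log2(len({h & C for h in H}))
        return total / trials / m

    for m in [2, 5, 10, 20]:
        print(m, round(entropy_rate(m), 3))  # decreasing toward the limit c = 0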

Applications in probability theory

Let $\Omega$ be a set on which a probability measure $\Pr$ is defined. Let $H$ be a family of subsets of $\Omega$ (= a family of events).

Suppose we choose a set $C_m$ that contains $m$ elements of $\Omega$, where each element is chosen at random according to the probability measure $\Pr$, independently of the others (i.e., with replacement). For each event $h \in H$, we compare the following two quantities:

  - its relative frequency in $C_m$, i.e., $|h \cap C_m| / m$;
  - its probability $\Pr[h]$.

We are interested in the difference, $D(h) := \big|\, |h \cap C_m| / m - \Pr[h] \,\big|$. This difference satisfies the following upper bound:

$$\Pr\left[\sup_{h \in H} D(h) > \varepsilon\right] \leq 4 \cdot \operatorname{Growth}(H, 2m) \cdot e^{-\varepsilon^2 m / 8}$$

which is equivalent to: [1] :Th.2

$$\Pr\left[\forall h \in H : D(h) \leq \varepsilon\right] \geq 1 - 4 \cdot \operatorname{Growth}(H, 2m) \cdot e^{-\varepsilon^2 m / 8}$$

In words: the probability that, for all events in $H$, the relative frequency is near the probability, is lower-bounded by an expression that depends on the growth function of $H$.

A corollary of this is that, if the growth function is polynomial in $m$ (i.e., there exists some $n$ such that $\operatorname{Growth}(H,m) \leq m^{n-1} + 1$), then the above probability approaches 1 as $m \to \infty$. I.e., the family $H$ enjoys uniform convergence in probability.
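
As a numerical sketch of this corollary, take the half-lines of Example 1, for which $\operatorname{Growth}(H,m) = m + 1$, and plug into the bound above (a plain evaluation; $\varepsilon = 0.1$ is chosen only for illustration):

    from math import exp

    def vc_bound(m, eps):
        """Upper bound on Pr[sup_h D(h) > eps] for half-lines: Growth(H, 2m) = 2m + 1."""
        return 4 * (2 * m + 1) * exp(-eps**2 * m / 8)

    eps = 0.1
    for m in [10_000, 50_000, 100_000, 500_000]:
        print(m, vc_bound(m, eps))  # tends to 0, so the uniform-convergence probability -> 1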

Related Research Articles

<span class="mw-page-title-main">Entropy (information theory)</span> Expected amount of information needed to specify the output of a stochastic data source

In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable , which takes values in the alphabet and is distributed according to :

<span class="mw-page-title-main">Random variable</span> Variable representing a random phenomenon

A random variable is a mathematical formalization of a quantity or object which depends on random events. The term 'random variable' can be misleading as its mathematical definition is not actually random nor a variable, but rather it is a function from possible outcomes in a sample space to a measurable space, often to the real numbers.

In mathematical analysis and in probability theory, a σ-algebra on a set X is a nonempty collection Σ of subsets of X closed under complement, countable unions, and countable intersections. The ordered pair is called a measurable space.

In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum.

In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression.

In mathematics, Fatou's lemma establishes an inequality relating the Lebesgue integral of the limit inferior of a sequence of functions to the limit inferior of integrals of these functions. The lemma is named after Pierre Fatou.

In Vapnik–Chervonenkis theory, the Vapnik–Chervonenkis (VC) dimension is a measure of the size of a class of sets. The notion can be extended to classes of binary functions. It is defined as the cardinality of the largest set of points that the algorithm can shatter, which means the algorithm can always learn a perfect classifier for any labeling of at least one configuration of those data points. It was originally defined by Vladimir Vapnik and Alexey Chervonenkis.

In information theory, the information content, self-information, surprisal, or Shannon information is a basic quantity derived from the probability of a particular event occurring from a random variable. It can be thought of as an alternative way of expressing probability, much like odds or log-odds, but which has particular mathematical advantages in the setting of information theory.

Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics a statistical ensemble is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.

In linear algebra and related areas of mathematics a balanced set, circled set or disk in a vector space is a set such that for all scalars satisfying

In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class, then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.

A Dynkin system, named after Eugene Dynkin, is a collection of subsets of another universal set satisfying a set of axioms weaker than those of 𝜎-algebra. Dynkin systems are sometimes referred to as 𝜆-systems or d-system. These set families have applications in measure theory and probability.

In mathematics, a π-system on a set is a collection of certain subsets of such that

<span class="mw-page-title-main">Binary entropy function</span>

In information theory, the binary entropy function, denoted or , is defined as the entropy of a Bernoulli process with probability of one of two values. It is a special case of , the entropy function. Mathematically, the Bernoulli trial is modelled as a random variable that can take on only two values: 0 and 1, which are mutually exclusive and exhaustive.

This article discusses how information theory is related to measure theory.

<span class="mw-page-title-main">Z-channel (information theory)</span>

In coding theory and information theory, a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems.

In mathematics, a filter on a set is a family of subsets such that:

  1. and
  2. if and , then
  3. If , and , then

In computational learning theory, Rademacher complexity, named after Hans Rademacher, measures richness of a class of sets with respect to a probability distribution. The concept can also be extended to real valued functions.

The exponential mechanism is a technique for designing differentially private algorithms. It was developed by Frank McSherry and Kunal Talwar in 2007. Their work was recognized as a co-winner of the 2009 PET Award for Outstanding Research in Privacy Enhancing Technologies.

In mathematics the symmetrization methods are algorithms of transforming a set to a ball with equal volume and centered at the origin. B is called the symmetrized version of A, usually denoted . These algorithms show up in solving the classical isoperimetric inequality problem, which asks: Given all two-dimensional shapes of a given area, which of them has the minimal perimeter. The conjectured answer was the disk and Steiner in 1838 showed this to be true using the Steiner symmetrization method. From this many other isoperimetric problems sprung and other symmetrization algorithms. For example, Rayleigh's conjecture is that the first eigenvalue of the Dirichlet problem is minimized for the ball. Another problem is that the Newtonian capacity of a set A is minimized by and this was proved by Polya and G. Szego (1951) using circular symmetrization.

References

  1. Vapnik, V. N.; Chervonenkis, A. Ya. (1971). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Theory of Probability & Its Applications. 16 (2): 264. doi:10.1137/1116025. This is an English translation, by B. Seckler, of the Russian paper: "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Dokl. Akad. Nauk. 181 (4): 781. 1968. The translation was reproduced as: Vapnik, V. N.; Chervonenkis, A. Ya. (2015). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Measures of Complexity. p. 11. doi:10.1007/978-3-319-21852-6_3. ISBN 978-3-319-21851-9.
  2. Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). Foundations of Machine Learning. Cambridge, MA: MIT Press. ISBN 9780262018258. Especially Section 3.2.
  3. Shalev-Shwartz, Shai; Ben-David, Shai (2014). Understanding Machine Learning – From Theory to Algorithms. Cambridge University Press. ISBN 9781107057135.