Property testing

Property testing is a field of theoretical computer science, concerned with the design of super-fast algorithms for approximate decision making, where the decision refers to properties or parameters of huge objects. [1]

A property testing algorithm for a decision problem is an algorithm whose query complexity (the number of queries made to its input) is much smaller than the instance size of the problem. Typically, property testing algorithms are used to decide whether some combinatorial structure S (such as a graph or a Boolean function) satisfies some property P, or is "far" from having this property (meaning that at least an ε-fraction of the representation of S would need to be modified in order to make S satisfy P), using only a small number of "local" queries to the object. [2] [3]

For example, the following promise problem admits an algorithm whose query complexity is independent of the instance size (for an arbitrary constant ε > 0):

"Given a graph G on n vertices, decide if G is bipartite, or G cannot be made bipartite even after removing an arbitrary subset of at most edges of G."

Property testing algorithms are central to the definition of probabilistically checkable proofs, as a probabilistically checkable proof is essentially a proof that can be verified by a property testing algorithm.

Definition and variants

Formally, a property testing algorithm with query complexity q(n) and proximity parameter ε for a decision problem L is a randomized algorithm that, on input x (an instance of L), makes at most q(|x|) queries to x and behaves as follows:

  1. If x ∈ L, the algorithm accepts x with probability at least ⅔.
  2. If x is ε-far from L, the algorithm rejects x with probability at least ⅔.

Here, "x is ε-far from L" means that the Hamming distance between x and any string in L is at least ε|x|.

A property testing algorithm is said to have one-sided error if it satisfies the stronger condition that the accepting probability for instances x ∈ L is 1 instead of ⅔.

A property testing algorithm is said to be non-adaptive if it performs all its queries before it "observes" any answers to previous queries. Such an algorithm can be viewed as operating in the following manner. First the algorithm receives its input. Before looking at the input, using its internal randomness, the algorithm decides which symbols of the input are to be queried. Next, the algorithm observes these symbols. Finally, without making any additional queries (but possibly using its randomness), the algorithm decides whether to accept or reject the input. [2]
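These three phases can be summarized in a small skeleton; this is only a structural sketch, and the decision rule `decide` and query budget `q` are hypothetical placeholders rather than any specific tester from the literature:

```python
import random

def non_adaptive_tester(x, q, decide, rng=random):
    """Skeleton of a non-adaptive tester (illustrative sketch).

    Phase 1: using internal randomness only, fix which positions to query.
    Phase 2: read exactly those positions of the input x.
    Phase 3: accept or reject from the observed symbols (and randomness),
             without making any further queries."""
    positions = [rng.randrange(len(x)) for _ in range(q)]    # phase 1
    observed = {i: x[i] for i in positions}                  # phase 2
    return decide(observed, len(x))                          # phase 3
```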

Features and limitations

The main efficiency parameter of a property testing algorithm is its query complexity, which is the maximum number of input symbols inspected over all inputs of a given length (and all random choices made by the algorithm). One is interested in designing algorithms whose query complexity is as small as possible. In many cases the running time of property testing algorithms is sublinear in the instance length. Typically, the goal is first to make the query complexity as small as possible as a function of the instance size n, and then study the dependency on the proximity parameter ε.

Unlike other complexity-theoretic settings, the asymptotic query complexity of property testing algorithms is affected dramatically by the representation of instances. For example, when ε = 0.01, the problem of testing bipartiteness of dense graphs (which are represented by their adjacency matrix) admits an algorithm of constant query complexity. In contrast, sparse graphs on n vertices (which are represented by their adjacency list) require property testing algorithms of query complexity Ω(√n).

For all non-trivial properties, the query complexity of property testing algorithms grows as the proximity parameter ε becomes smaller. This dependence on ε is necessary, as a change to fewer than an ε-fraction of the input symbols cannot be detected with constant probability using o(1/ε) queries. Many interesting properties of dense graphs can be tested with query complexity that depends only on ε and not on the graph size n. However, the query complexity can grow enormously fast as a function of ε. For example, for a long time the best known algorithm for testing whether a graph contains no triangle had query complexity that is a tower function of poly(1/ε), and only in 2010 was this improved to a tower function of log(1/ε). One of the reasons for this enormous growth is that many of the positive results for property testing of graphs are established using the Szemerédi regularity lemma, which also has tower-type bounds in its conclusions. The connection of property testing to the Szemerédi regularity lemma and related graph removal lemmas is elaborated on below.
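The Ω(1/ε) lower bound intuition can be made explicit with a union bound; the following sketch assumes, for simplicity, that each query probes a uniformly random position of the input.

```latex
% If x' is obtained from x by modifying fewer than \varepsilon |x| positions,
% then a tester making q uniformly random queries notices a difference only if
% some query lands on a modified position, and
\[
  \Pr[\text{some query hits a modified position}]
    \;\le\; \sum_{i=1}^{q} \Pr[\text{query } i \text{ hits a modified position}]
    \;\le\; q\,\varepsilon ,
\]
% so detecting the change with constant probability requires q = \Omega(1/\varepsilon).
```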

Testing graph properties

For a graph G with n vertices, the notion of distance we will use is the edit distance. That is, we say that the distance between two graphs on n vertices is the smallest ε such that one can add and/or delete at most εn² edges to get from the first graph to the second. Under a reasonable representation of graphs, this is equivalent to the earlier Hamming distance definition (up to possibly a change of constants).
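For concreteness, the following sketch computes this normalized distance for two graphs given as symmetric 0/1 adjacency matrices over the same vertex set (the function name is illustrative):

```python
def graph_edit_fraction(A, B):
    """Smallest eps such that changing eps * n^2 edges turns the graph with
    adjacency matrix A into the graph with adjacency matrix B: the number of
    vertex pairs on which A and B disagree, divided by n^2."""
    n = len(A)
    diffs = sum(1 for i in range(n) for j in range(i + 1, n) if A[i][j] != B[i][j])
    return diffs / (n * n)

# Example: a triangle and a path on 3 vertices differ in a single edge.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path     = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(graph_edit_fraction(triangle, path))  # 1/9 ≈ 0.111
```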

To make the general notions of property testing precise in the context of graphs, we say a tester for graph property P should distinguish, with probability at least ⅔, between the case that G satisfies P and the case that G is ε-far in edit distance from satisfying P. The tester can access an oracle to query whether a given pair of vertices is joined by an edge in G. The query complexity is the number of such oracle queries. The tester is said to have one-sided error if it can produce false positives but not false negatives, i.e. if G satisfies P, the tester always outputs the correct answer. [4] [5]

Note that a tester can only distinguish graphs that satisfy P from graphs that are far from P, rather than from all graphs that do not satisfy P. To see why, consider a graph G satisfying P and a graph H that fails P but differs from G in only a few edges. For example, when testing triangle-freeness, let H be a graph containing exactly one triangle and let G be obtained from H by removing one edge of that triangle. Then the tester cannot tell G and H apart unless it queries every edge, which its sublinear query complexity does not allow.

Short history

The field of graph property testing was first introduced by Goldreich, Goldwasser, and Ron. In their seminal paper, published in 1998, an abstract graph partition problem is analyzed and testers for it are provided. These include as special cases several important graph properties such as bipartiteness, k-colorability, having a large clique, and having a large cut. [4] In particular, the natural algorithms that sample a subgraph and check whether it satisfies the property are all correct, albeit with perhaps suboptimal query complexities.

Since then, several related discoveries have been made.

Testing hereditary graph properties

A graph property is hereditary if it is preserved under deletion of vertices, or equivalently, if it is preserved under taking induced subgraphs, hence the name hereditary. A few important hereditary properties are H-freeness (for some graph H), k-colorability, and planarity. All hereditary properties are testable.

Theorem (Alon & Shapira 2008). Every hereditary graph property is testable with one-sided error. [2]

The proof relies on a version of the graph removal lemma for infinite families of induced subgraphs, reproduced here without proof. In particular, note that its constants arise naturally as the sizes of the samples drawn by the testers. Also, the query complexity obtained via this regularity approach is enormous, owing to the tower-type bound in the Szemerédi regularity lemma.

Theorem (Infinite graph removal lemma). For each (possibly infinite) family of graphs ℋ and each ε > 0, there exist δ > 0 and k so that if G is an n-vertex graph with fewer than δn^h induced copies of H for every H ∈ ℋ on h ≤ k vertices, then G can be made induced ℋ-free by adding/removing fewer than εn² edges. [9]

Oblivious testers

Informally, an oblivious tester is oblivious to the size of the input. For a graph property P, it is an algorithm that takes as input a parameter ε and graph G, and then runs as a property testing algorithm on G for the property P with proximity parameter ε that makes exactly q(ε) queries to G.

Definition. An oblivious tester is an algorithm that takes as input a parameter ε. It computes an integer q(ε) and then asks an oracle for the subgraph H induced on exactly q(ε) vertices chosen uniformly at random from the input graph G. It then accepts or rejects (possibly randomly) according to ε and H. We say it tests for the property P if it accepts with probability at least ⅔ for G that has property P, and rejects with probability at least ⅔ for G that is ε-far from having property P. [2] [1] [10]
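A structural sketch of this definition, with the sample-size function `q` and decision rule `decide` as hypothetical placeholders, might look as follows:

```python
import random

def oblivious_tester(n, adj, eps, q, decide, rng=random):
    """Oblivious tester skeleton (illustrative sketch).  `adj(u, v)` is the
    adjacency oracle; `decide` sees only eps and the sampled subgraph H."""
    k = q(eps)                                    # sample size computed from eps alone
    X = rng.sample(range(n), min(k, n))           # q(eps) uniformly random vertices
    H = {(i, j): adj(X[i], X[j])                  # the induced subgraph on X
         for i in range(len(X)) for j in range(i + 1, len(X))}
    return decide(eps, H)                         # accept/reject from eps and H only
```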

Crucially, the number of queries an oblivious tester makes is a constant only dependent on ε and not the size of the input graph G. In complete analogy with property testing algorithms, we can talk about oblivious testers with one-sided error.

Testing semi-hereditary graph properties

We can certainly contrive graph properties for which any tester must know the number of vertices. Here is one example.

Example. A graph G satisfies property P if it is bipartite with an even number of vertices or perfect with an odd number of vertices. [2]

In this case, the tester cannot even differentiate which property (bipartiteness or perfectness) to test unless it knows the number of vertices. There are many examples of such unnatural properties. In fact, the characterization of graph properties testable by an oblivious tester with one-sided error leads to a class of natural properties.

Definition. A graph property P is semi-hereditary if there exists a hereditary graph property H such that any graph satisfying P satisfies H, and for every ε > 0 there is an M(ε) such that every graph of size at least M(ε) that is ε-far from satisfying P contains an induced subgraph that does not satisfy H. [2]

Trivially, hereditary properties are also semi-hereditary. This characterization provides a partial converse to the Alon & Shapira theorem above: the properties that are easy to test (those admitting oblivious testers with one-sided error) are almost hereditary. In the same paper, Alon and Shapira showed the following.

Theorem (Alon & Shapira 2008). A graph property P has an oblivious one-sided error tester if and only if P is semi-hereditary. [2]

Examples: testing some graph properties

In this section, we give some natural oblivious testing algorithms with one-sided error for triangle-freeness, bipartiteness, and k-colorability. They are natural in the sense that they follow the natural idea of randomly sampling a subset X of vertices of G and checking, by brute-force search, whether the graph property holds on the subgraph spanned by X. We have one-sided error since these properties are hereditary: if G satisfies the property, so must the induced subgraph spanned by X, so our tester always accepts.

For triangle-freeness, the tester is an application of the triangle removal lemma. In particular, the lemma tells us that if a graph G is ε-far from being triangle-free, there is a (computable) constant δ = δ(ε) > 0 so that G has at least δn³ triangles. A code sketch of the resulting tester follows its steps below.

Example (Triangle-freeness Testing Algorithm).

  1. Given graph G, choose a set X of O(1/δ) triples of vertices, each chosen independently and uniformly at random, where δ is as above.
  2. For every triple of vertices in X, query if all three pairs of vertices are adjacent in G.
  3. The algorithm accepts if no triple of vertices induces a triangle, and rejects otherwise. [1]
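A minimal Python sketch of this tester, with the adjacency oracle `adj(u, v)` and the removal-lemma constant `delta` passed in as parameters (their exact values are not specified here), might look like:

```python
import math
import random

def test_triangle_freeness(n, adj, delta, rng=random):
    """One-sided tester sketch: sample roughly 2/delta vertex triples and
    accept iff none of them spans a triangle.  A triangle-free G is always
    accepted; if G is eps-far from triangle-free it has at least delta * n^3
    triangles, so enough samples reject it with probability at least 2/3."""
    for _ in range(math.ceil(2 / delta)):
        u, v, w = (rng.randrange(n) for _ in range(3))
        if adj(u, v) and adj(v, w) and adj(u, w):   # three adjacency queries
            return False                            # a triangle was found: reject
    return True                                     # no triangle seen: accept
```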

For bipartiteness and k-colorability, let δ be the desired upper bound on the error probability of the following testers. Note that query complexity should not be confused with running time: the latter may be exponential in the sample size, since the property is checked on the induced subgraph by brute-force search (for k-colorability no polynomial-time decision algorithm is known). A sketch of this sampling approach appears after the two algorithms below. [4]

Example (Bipartite Testing Algorithm).

  1. Given graph G, choose a random set X of vertices whose size depends only on ε and δ.
  2. For every pair of vertices in X, query if they are adjacent in G.
  3. It accepts if the induced subgraph of G on X is bipartite and rejects otherwise. [4]

Example (k-colorability Testing Algorithm).

  1. Given graph G, choose a random set X of vertices whose size depends only on ε and δ.
  2. For every pair of vertices in X, query if they are adjacent in G.
  3. It accepts if the induced subgraph of G on X is k-colorable and rejects otherwise. [4]
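A sketch of the shared sampling approach for k-colorability (bipartiteness is the case k = 2) is given below; the sample size is left as a parameter, since the precise bound from Goldreich, Goldwasser and Ron depends on ε, k, and δ in a way not reproduced here, and the brute-force coloring check reflects the running time mentioned above.

```python
import itertools
import random

def test_k_colorability(n, adj, k, sample_size, rng=random):
    """One-sided tester sketch: sample a vertex set X, query all pairs inside X,
    and accept iff the induced subgraph is k-colorable (checked by brute force).
    k-colorability is hereditary, so a k-colorable G is always accepted."""
    X = rng.sample(range(n), min(sample_size, n))
    edges = [(i, j) for i, j in itertools.combinations(range(len(X)), 2)
             if adj(X[i], X[j])]                      # adjacency oracle queries
    return any(all(c[i] != c[j] for i, j in edges)    # brute-force k-coloring check
               for c in itertools.product(range(k), repeat=len(X)))
```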

References

  1. Goldreich, Oded (2017). Introduction to Property Testing. Cambridge University Press. ISBN 9781107194052.
  2. Alon, Noga; Shapira, Asaf (2008). "A characterization of the (natural) graph properties testable with one-sided error" (PDF). SIAM Journal on Computing. 37 (6): 1703–1727. doi:10.1137/06064888X.
  3. Goldreich, Oded (1999). "Combinatorial Property Testing (a survey)". Randomization Methods in Algorithm Design. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. 43: 45–59. doi:10.1090/dimacs/043/04. ISBN 0821870874.
  4. Goldreich, Oded; Goldwasser, Shafi; Ron, Dana (1 July 1998). "Property testing and its connection to learning and approximation". Journal of the ACM. 45 (4): 653–750. doi:10.1145/285055.285060.
  5. Rubinfeld, Ronitt; Shapira, Asaf (2011). "Sublinear Time Algorithms". SIAM Journal on Discrete Mathematics. 25 (4): 1562–1588. CiteSeerX 10.1.1.221.1797. doi:10.1137/100791075. S2CID 1319122.
  6. Alon, N.; Duke, R. A.; Lefmann, H.; Rödl, V.; Yuster, R. (1 January 1994). "The Algorithmic Aspects of the Regularity Lemma". Journal of Algorithms. 16 (1): 80–109. doi:10.1006/jagm.1994.1005.
  7. Alon, Noga; Fischer, Eldar; Krivelevich, Michael; Szegedy, Mario (1 April 2000). "Efficient Testing of Large Graphs". Combinatorica. 20 (4): 451–476. doi:10.1007/s004930070001.
  8. Alon, Noga; Shapira, Asaf (22 May 2005). "Every monotone graph property is testable". Proceedings of the thirty-seventh annual ACM symposium on Theory of computing. pp. 128–137. doi:10.1145/1060590.1060611. ISBN 1581139608. S2CID 14096855.
  9. Fox, Jacob (2010). "A new proof of the graph removal lemma". arXiv:1006.1300 [math.CO].
  10. Ron, Dana (2000). Property Testing (Technical report).