Banerjee test

In compiler theory, the Banerjee test is a dependence test. The Banerjee test assumes that all loop indices are independent; in practice, however, this is often not true. The Banerjee test is a conservative test: it will never break a dependence that actually exists.

This means that the only thing the test can guarantee is the absence of a dependence.

        Antidependence is broken                    True dependence is broken
True    There are no antidependencies               There are no true dependencies
False   There may or may not be antidependencies   There may or may not be true dependencies

General form

For a loop of the form:

for (i = 0; i < n; i++) {
    c[f(i)] = a[i] + b[i];     /* statement s1 */
    d[i] = c[g(i)] + e[i];     /* statement s2 */
}

A true dependence exists between statement s1 and statement s2 if and only if:

∃ i, j : 0 ≤ i ≤ j ≤ n − 1 and f(i) = g(j).

An antidependence exists between statement s1 and statement s2 if and only if:

∃ i, j : 0 ≤ j < i ≤ n − 1 and f(i) = g(j).

For a loop of the form:

for (i = 0; i < n; i++) {
    c[i] = a[g(i)] + b[i];     /* statement s1 */
    a[f(i)] = d[i] + e[i];     /* statement s2 */
}

A true dependence exists between statement s1 and statement s2 if and only if:

∃ i, j : 0 ≤ i < j ≤ n − 1 and f(i) = g(j).
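
For affine subscripts f(i) = a0 + a1·i and g(j) = b0 + b1·j, each condition reduces to asking whether the dependence equation a1·i − b1·j = b0 − a0 has a solution in the constrained iteration space, which is what the worked example below does by hand. The following minimal C sketch illustrates that check for a single loop; banerjee_broken, lhs, and the direction encoding are invented for this illustration, not taken from any compiler. Because the left-hand side is linear, its extreme values over each constrained (triangular or square) region occur at the region's corner points, so the bounds can be found by scanning the corners. As with the real Banerjee bounds, a value inside the range only means the dependence may exist.

/*
 * Minimal sketch of a Banerjee-style test for a single loop whose
 * subscripts are affine: f(i) = a0 + a1*i (the write side) and
 * g(j) = b0 + b1*j (the read side), with i, j running over 0 .. n-1.
 * All names here are illustrative, not from any real library.
 */
enum direction { ANY, I_LE_J, I_LT_J, I_GT_J };

/* Left-hand side of the dependence equation a1*i - b1*j = b0 - a0. */
static long lhs(long a1, long b1, long i, long j)
{
    return a1 * i - b1 * j;
}

/*
 * Returns 1 if the dependence is broken (it cannot exist) and 0 if it
 * may exist.  Since lhs is linear in i and j, its extremes over the
 * constrained region occur at the region's corners, so scanning the
 * corners yields the bounds.
 */
int banerjee_broken(long a0, long a1, long b0, long b1,
                    long n, enum direction dir)
{
    long U = n - 1;                /* 0 <= i, j <= U */
    long pt[4][2];                 /* corner points (i, j) */
    int k, nc;

    if (U < 0)
        return 1;                  /* no iterations at all */
    if ((dir == I_LT_J || dir == I_GT_J) && U < 1)
        return 1;                  /* strict direction needs two iterations */

    switch (dir) {
    case I_LE_J:  /* i <= j: triangle with corners (0,0), (0,U), (U,U) */
        pt[0][0] = 0; pt[0][1] = 0;
        pt[1][0] = 0; pt[1][1] = U;
        pt[2][0] = U; pt[2][1] = U;
        nc = 3;
        break;
    case I_LT_J:  /* i < j: triangle with corners (0,1), (0,U), (U-1,U) */
        pt[0][0] = 0;     pt[0][1] = 1;
        pt[1][0] = 0;     pt[1][1] = U;
        pt[2][0] = U - 1; pt[2][1] = U;
        nc = 3;
        break;
    case I_GT_J:  /* i > j: triangle with corners (1,0), (U,0), (U,U-1) */
        pt[0][0] = 1; pt[0][1] = 0;
        pt[1][0] = U; pt[1][1] = 0;
        pt[2][0] = U; pt[2][1] = U - 1;
        nc = 3;
        break;
    default:      /* no direction constraint: the full square */
        pt[0][0] = 0; pt[0][1] = 0;
        pt[1][0] = 0; pt[1][1] = U;
        pt[2][0] = U; pt[2][1] = 0;
        pt[3][0] = U; pt[3][1] = U;
        nc = 4;
        break;
    }

    long lo = lhs(a1, b1, pt[0][0], pt[0][1]);
    long hi = lo;
    for (k = 1; k < nc; k++) {
        long v = lhs(a1, b1, pt[k][0], pt[k][1]);
        if (v < lo) lo = v;
        if (v > hi) hi = v;
    }

    return (b0 - a0) < lo || (b0 - a0) > hi;  /* outside the bounds: broken */
}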

Example

An example of Banerjee's test follows below.

The loop to be tested for dependence is:

for (i = 0; i < 10; i++) {
    c[i + 9] = a[i] + b[i];    /* statement s1 */
    d[i] = c[i] + e[i];        /* statement s2 */
}

Let

f(i) = a0 + a1·i and g(j) = b0 + b1·j.

Here f(i) = i + 9 and g(j) = j, so therefore

a0 = 9, a1 = 1

and

b0 = 0, b1 = 1.

Testing for antidependence

An antidependence requires i > j. Then

b0 − a0 = 0 − 9 = −9,

which gives the dependence equation

a1·i − b1·j = i − j = −9.

Now, the bounds on i − j, subject to 0 ≤ j < i ≤ 9, are

1 ≤ i − j ≤ 9.

Clearly, −9 is not inside the bounds, so the antidependence is broken.

Testing for true dependence

A true dependence requires i ≤ j. The dependence equation is unchanged, which gives:

i − j = −9.

Now, the bounds on i − j, subject to 0 ≤ i ≤ j ≤ 9, are

−9 ≤ i − j ≤ 0.

Clearly, −9 is inside the bounds, so the true dependence is not broken.

Conclusion

Because the antidependence was broken, we can assert that an antidependence does not exist between the statements.

Because the true dependence was not broken, we do not know if a true dependence exists between the statements.

Therefore, the loop is parallelisable, but the statements must be executed in order of their (potential) true dependence.
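
As a check, the worked example can be run through the sketch from the General form section (banerjee_broken and the direction constants are the illustrative names defined there):

#include <stdio.h>

/* Assumes banerjee_broken and enum direction from the sketch above. */
int main(void)
{
    /* Example loop: f(i) = i + 9 writes c, g(j) = j reads c, n = 10. */
    long a0 = 9, a1 = 1, b0 = 0, b1 = 1, n = 10;

    /* Antidependence needs i > j; true dependence needs i <= j. */
    printf("antidependence broken:  %d\n",
           banerjee_broken(a0, a1, b0, b1, n, I_GT_J)); /* prints 1 */
    printf("true dependence broken: %d\n",
           banerjee_broken(a0, a1, b0, b1, n, I_LE_J)); /* prints 0 */
    return 0;
}

The output matches the hand calculation: the antidependence is broken (1), while the true dependence is not (0).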

