Elementary comparison testing

Elementary comparison testing (ECT) is a white-box, control-flow test-design methodology used in software development. [1] [2] The purpose of ECT is to enable detailed testing of complex software. Software code or pseudocode is tested to assess the proper handling of all decision outcomes. As with multiple-condition coverage [3] and basis path testing, [1] coverage of all independent and isolated conditions is accomplished through modified condition/decision coverage (MC/DC). [4] Isolated conditions are aggregated into connected situations, creating formal test cases. The independence of a condition is shown by changing its value in isolation; each relevant condition value is covered by test cases.

Test case

A test case consists of a logical path through one or many decisions from start to end of a process. Contradictory situations are deduced from the test case matrix and excluded. The MC/DC approach isolates every condition, neglecting all possible subpath combinations and path coverage. [1]

$T_i = (s, \dots, D_j^{(k)}, \dots, e)$

where $s$ is the start and $e$ the end of the process, and $D_j^{(k)}$ denotes decision $D_j$ taking outcome $k \in \{0, 1\}$.

The decision $D_j$ consists of a combination of elementary conditions $c_1, \dots, c_n \in \{0, 1\}$:

$D_j = f(c_1, \dots, c_n)$

The transition function is defined as

$\delta(D_j^{(k)}) = D_{j'}$

the decision (or the end node $e$) that is reached when $D_j$ takes outcome $k$.

Given the transition $\delta(D_j^{(k)}) = D_{j'}$, the isolated test path consists of a pair of condition vectors that differ only in the condition under test and flip the outcome of $D_j$, extended by a preceding subpath from $s$ and a succeeding subpath to $e$.
Test case graph

A test case graph illustrates all the independent paths (test cases) necessary to cover all isolated conditions. Conditions are represented by nodes, and condition values (situations) by edges. Each edge represents a program situation and connects a preceding condition to a succeeding one. Because conditions are isolated, test cases may overlap.

Inductive proof of a number of condition paths

The elementary comparison testing method can be used to determine the number of condition paths by inductive proof.

Figure 2: ECT Inductive Proof Anchor

For a decision with $n$ elementary conditions there are $2^n$ possible condition value combinations.

When each condition is isolated, the number of required test cases per decision is:

$N(n) = n + 1$

The anchor is a single condition ($n = 1$), for which both outcomes must be covered, so $N(1) = 2 = 1 + 1$.

Figure 3: ECT Inductive Proof End

When a condition node $c_{n+1}$ is added, there are edges from parent nodes to $c_{n+1}$ and edges from $c_{n+1}$ to child nodes.

Each individual condition connects to at least one path, from the maximal possible $2^{n+1}$ connecting combinations down to the isolating $n + 2$ test cases.

All predecessor conditions and respective paths are already isolated. Therefore, when one node (condition) is added, the total number of paths, and required test cases, from start to finish increases by:

$N(n+1) - N(n) = (n + 2) - (n + 1) = 1$

Q.E.D.
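The $N(n) = n + 1$ result can be checked numerically for the simplest decision shape, an $n$-ary OR; the sketch below is an illustrative check under that assumption, not part of the proof. Each condition is raised in isolation, and one shared all-false vector covers the 0 outcome.

def or_mcdc_test_set(n):
    # One vector per condition with only that condition true,
    # plus the shared all-false vector for the 0 outcome.
    tests = {tuple(int(i == j) for j in range(n)) for i in range(n)}
    tests.add((0,) * n)
    return tests

for n in range(1, 8):
    assert len(or_mcdc_test_set(n)) == n + 1  # matches N(n) = n + 1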

Test-case design steps

  1. Identify decisions
  2. Determine test situations per decision point (Modified Condition / Decision Coverage)
  3. Create logical test-case matrix
  4. Create physical test-case matrix

Example

Figure 4: ECT Example Control-Flow Graph
Figure 5: ECT Example D2 Conditions

This example shows ECT applied to a holiday booking system whose discount rules offer reduced-price vacations: a 20% discount for long or expensive vacations and for members, a 10% discount for moderate vacations with workday departures, and no discount otherwise. The example shows the creation of logical and physical test cases for all isolated conditions.

Pseudocode

if days > 15 or price > 1000 or member then
    return −0.2
else if (days > 8 and days ≤ 15 or price ≥ 500 and price ≤ 1000) and workday then
    return −0.1
else
    return 0.0

Factors

The factors days, price, departure day, and membership yield six elementary conditions across the two decisions, giving $2^6 = 64$ possible combinations (test cases).

Example in Python:

def discount(days, price, member, workday):
    # Discount rate for a booking, following the pseudocode above.
    if days > 15 or price > 1000 or member:
        return -0.2
    elif (days > 8 and days <= 15 or price >= 500 and price <= 1000) and workday:
        return -0.1
    else:
        return 0.0
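For example, with a hypothetical filler price of 400 chosen only so that no other condition triggers:

print(discount(16, 400, False, True))  # -0.2, since days > 15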

Step 1: Decisions

Table 1: Example D1 MC/DC

Decision D1    Outcome 1      Outcome 0
Conditions     c1  c2  c3     c1  c2  c3
c1             1   0   0      0   0   0
c2             0   1   0      0   0   0
c3             0   0   1      0   0   0

Step 2: MC/DC Matrix

Table 2: Example D2 MC/DC

Decision D2    Outcome 1      Outcome 0
Conditions     c4  c5  c6     c4  c5  c6
c4             1   0   1      0   0   1
c5             0   1   1      0   0   1
c6             1   0   1      1   0   0

The highlighted diagonals in the MC/DC matrices describe the isolated conditions; all duplicate situations are regarded as proven and are removed.
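The matrices can be verified mechanically. In the sketch below, the lambda encodings of the two decisions follow the pseudocode above, and each isolated-condition pair from Tables 1 and 2 is asserted to flip its decision's outcome.

# Decisions from the pseudocode:
d1 = lambda c1, c2, c3: c1 or c2 or c3     # days > 15, price > 1000, member
d2 = lambda c4, c5, c6: (c4 or c5) and c6  # 8 < days <= 15, 500 <= price <= 1000, workday

table1 = {"c1": ((1, 0, 0), (0, 0, 0)),
          "c2": ((0, 1, 0), (0, 0, 0)),
          "c3": ((0, 0, 1), (0, 0, 0))}
table2 = {"c4": ((1, 0, 1), (0, 0, 1)),
          "c5": ((0, 1, 1), (0, 0, 1)),
          "c6": ((1, 0, 1), (1, 0, 0))}

for decision, table in ((d1, table1), (d2, table2)):
    for condition, (on, off) in table.items():
        # Flipping only the isolated condition flips the outcome.
        assert decision(*on) and not decision(*off)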

Step 3: Logical test-case matrix

Table 3: Example Logical Test Case Matrix

Situation \ Test Case     T1   T2   T3   T4   T5   T6   T7
D1: 100 (c1 isolated)     x
D1: 010 (c2 isolated)          x
D1: 001 (c3 isolated)               x
D1: 000                                  x    x    x    x
D2: 101 (c4 isolated)                    x
D2: 011 (c5 isolated)                         x
D2: 001                                            x
D2: 100 (c6 isolated)                                   x

Test cases are formed by tracing decision paths. For every decision a succeeding and a preceding subpath is searched until every connected path has a start $s$ and an end $e$:

T1 = (s, D1:100, e)
T2 = (s, D1:010, e)
T3 = (s, D1:001, e)
T4 = (s, D1:000, D2:101, e)
T5 = (s, D1:000, D2:011, e)
T6 = (s, D1:000, D2:001, e)
T7 = (s, D1:000, D2:100, e)
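The tracing step can also be sketched in code (illustrative; the situation labels follow Table 3): every D1 situation with outcome 1 already forms a complete path, while every isolated D2 situation is prefixed with the D1 = 000 subpath.

d1_true = ["100", "010", "001"]        # D1 outcome 1, straight to the end
d2_all = ["101", "011", "001", "100"]  # D2 situations, reached via D1 = 000

test_cases = [("s", ("D1", v), "e") for v in d1_true]
test_cases += [("s", ("D1", "000"), ("D2", v), "e") for v in d2_all]
assert len(test_cases) == 7  # matches Table 3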

Step 4: Physical test-case matrix

Table 4: Example Physical Test Cases

Factor \ Test Case    T1     T2     T3      T4     T5     T6     T7
days                  16                    14     8      8
price                        1100                  600
departure                                                        sa
member                              silver
Result                −0.2   −0.2   −0.2    −0.1   −0.1   0.0    0.0

Physical test cases are created from logical test cases by filling in actual value representations and their respective results.
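The physical matrix can be executed directly against the discount function above. In this sketch the filler values (price 400, non-member, workday departure) are assumptions chosen only so that no condition other than the isolated one triggers.

# (days, price, member, workday) -> expected discount per Table 4
cases = [
    ((16, 400, False, True), -0.2),    # T1: days > 15
    ((8, 1100, False, True), -0.2),    # T2: price > 1000
    ((8, 400, True, True), -0.2),      # T3: member
    ((14, 400, False, True), -0.1),    # T4: 8 < days <= 15, workday
    ((8, 600, False, True), -0.1),     # T5: 500 <= price <= 1000, workday
    ((8, 400, False, True), 0.0),      # T6: no discount condition holds
    ((14, 400, False, False), 0.0),    # T7: days in range, but weekend departure
]
for args, expected in cases:
    assert discount(*args) == expected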

Test-case graph

Figure 6: ECT Example Test Case Graph

In the example test case graph, all test cases and their isolated conditions are marked in color, and the remaining paths are passed implicitly.

References

  1. Lee Copeland (2004). A Practitioner's Guide to Software Test Design, chapter 10. Artech House Publishers, Norwood. ISBN 1-58053-791-X.
  2. "All about the elementary comparison test | Testlearning". www.testlearning.net. Retrieved 2022-09-02.
  3. Glenford J. Myers (2004). The Art of Software Testing, Second Edition, p. 40. John Wiley & Sons, New Jersey. ISBN 0-471-46912-2.
  4. Tim Koomen et al. (2006). TMap Next, for result-driven testing, p. 668. UTN Publishers, Rotterdam. ASIN B01K3PXI5U.