Mutation testing (or mutation analysis or program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. [1] Each mutated version is called a mutant and tests detect and reject mutants by causing the behaviour of the original version to differ from the mutant. This is called killing the mutant. Test suites are measured by the percentage of mutants that they kill. New tests can be designed to kill additional mutants. Mutants are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution. Mutation testing is a form of white-box testing. [2] [3]
Most of this article is about "program mutation", in which the program is modified. A more general definition of mutation analysis is using well-defined rules defined on syntactic structures to make systematic changes to software artifacts. [4] Mutation analysis has been applied to other problems, but is usually applied to testing. So mutation testing is defined as using mutation analysis to design new software tests or to evaluate existing software tests. [4] Thus, mutation analysis and testing can be applied to design models, specifications, databases, tests, XML, and other types of software artifacts, although program mutation is the most common. [5]
Tests can be created to verify the correctness of the implementation of a given software system, but the creation of tests still poses the question of whether the tests themselves are correct and sufficiently cover the requirements that motivated the implementation. [6] (This technological problem is itself an instance of a deeper philosophical problem named "Quis custodiet ipsos custodes?" ["Who will guard the guards?"].) The idea behind mutation testing is that if a mutant is introduced, it normally causes a bug in the program's functionality which the tests should find. This way, the tests themselves are tested. If a mutant is not detected by the test suite, this typically indicates that the test suite is unable to locate the faults represented by the mutant, but it can also indicate that the mutation introduces no fault at all, that is, the mutation is a valid change that does not affect functionality. One common way a mutant can be valid is that the changed code is "dead code" that is never executed.
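As an illustration of this idea, the following sketch (in TypeScript, with hypothetical function names) shows a weak test suite that passes both for an original function and for one of its mutants; the surviving mutant exposes a missing boundary test:

// Original function under test
function isAdult(age: number): boolean {
  return age >= 18;
}

// Mutant: the relational operator >= has been replaced with >
function isAdultMutant(age: number): boolean {
  return age > 18;
}

// A weak test suite: both assertions pass whether they are run against
// isAdult or isAdultMutant, so the mutant survives and reveals that the
// boundary value 18 is never exercised.
console.assert(isAdult(30) === true);
console.assert(isAdult(10) === false);

// Adding a boundary test kills the mutant:
// isAdult(18) === true, but isAdultMutant(18) === false.
console.assert(isAdult(18) === true);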
For mutation testing to function at scale, a large number of mutants are usually introduced, leading to the compilation and execution of an extremely large number of copies of the program. This expense has limited the practical use of mutation testing as a method of software testing. However, the increased use of object-oriented programming languages and unit testing frameworks has led to the creation of mutation testing tools that test individual portions of an application.
The goals of mutation testing are multiple: to identify weakly tested pieces of code (those for which mutants are not killed), to identify weak tests (those that never kill mutants), and to compute the mutation score, the ratio of killed mutants to the total number of non-equivalent mutants, as a measure of test suite effectiveness.
Mutation testing was originally proposed by Richard Lipton as a student in 1971, [8] and first developed and published by DeMillo, Lipton and Sayward. [1] The first implementation of a mutation testing tool was by Timothy Budd as part of his PhD work (titled Mutation Analysis) in 1980 from Yale University. [9]
Recently, with the availability of massive computing power, there has been a resurgence of mutation analysis within the computer science community, and work has been done to define methods of applying mutation testing to object-oriented programming languages and non-procedural languages such as XML, SMV, and finite state machines.
In 2004, a company called Certess Inc. (now part of Synopsys) extended many of the principles into the hardware verification domain. Whereas mutation analysis only expects to detect a difference in the output produced, Certess extends this by verifying that a checker in the testbench will actually detect the difference. This extension means that all three stages of verification, namely: activation, propagation, and detection are evaluated. They called this functional qualification.
Fuzzing can be considered to be a special case of mutation testing. In fuzzing, the messages or data exchanged inside communication interfaces (both inside and between software instances) are mutated to catch failures or differences in processing the data. Codenomicon [10] (2001) and Mu Dynamics (2005) evolved fuzzing concepts to a fully stateful mutation testing platform, complete with monitors for thoroughly exercising protocol implementations.
Mutation testing is based on two hypotheses. The first is the competent programmer hypothesis. This hypothesis states that competent programmers write programs that are close to being correct. [1] "Close" is intended to be based on behavior, not syntax. The second hypothesis is called the coupling effect. The coupling effect asserts that simple faults can cascade or couple to form other emergent faults. [11] [12]
Subtle and important faults are also revealed by higher-order mutants, which further supports the coupling effect. [13] [14] [7] [15] [16] Higher-order mutants are created by applying more than one mutation to the program at a time.
Mutation testing is done by selecting a set of mutation operators and then applying them to the source program one at a time for each applicable piece of the source code. The result of applying one mutation operator to the program is called a mutant. If the test suite is able to detect the change (i.e. one of the tests fails), then the mutant is said to be killed.
For example, consider the following C++ code fragment:
if (a && b) {
    c = 1;
} else {
    c = 0;
}
The condition mutation operator would replace && with || and produce the following mutant:
if (a || b) {
    c = 1;
} else {
    c = 0;
}
Now, for the test to kill this mutant, the following three conditions should be met:

1. A test must reach the mutated statement.
2. Test input data should infect the program state by causing different program states for the mutant and the original program. For example, a test with a = 1 and b = 0 would do this.
3. The incorrect program state (the value of c) must propagate to the program's output and be checked by the test.

These conditions are collectively called the RIP model (reachability, infection, propagation). [8]
Weak mutation testing (or weak mutation coverage) requires that only the first and second conditions are satisfied. Strong mutation testing requires that all three conditions are satisfied. Strong mutation is more powerful, since it ensures that the test suite can really catch the problems. Weak mutation is closely related to code coverage methods. It requires much less computing power to ensure that the test suite satisfies weak mutation testing than strong mutation testing.
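As a rough illustration of the difference, the following TypeScript sketch mirrors the C++ fragment above (the wrapper functions and test are hypothetical): a test with a true and b false reaches and infects the mutated condition, which is enough for weak mutation, but strong mutation additionally requires that the corrupted value of c is checked by an assertion.

// Original fragment, wrapped in a function so it can be tested
function original(a: boolean, b: boolean): number {
  let c: number;
  if (a && b) { c = 1; } else { c = 0; }
  return c;
}

// Mutant produced by the condition mutation operator (&& replaced by ||)
function mutant(a: boolean, b: boolean): number {
  let c: number;
  if (a || b) { c = 1; } else { c = 0; }
  return c;
}

// Reaching and infecting (weak mutation): with a = true, b = false the
// original sets c = 0 while the mutant sets c = 1, so the internal state
// differs even if nothing is asserted.
original(true, false); // mutant(true, false) === 1

// Propagating and checking (strong mutation): this assertion passes
// against the original but would fail if the suite were run against the
// mutant, so the mutant is killed.
console.assert(original(true, false) === 0);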
However, there are cases where it is not possible to find a test case that could kill a given mutant: the resulting program is behaviorally equivalent to the original one. Such mutants are called equivalent mutants.
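For example, the following sketch (hypothetical code, for illustration only) shows a mutation that yields an equivalent mutant: replacing < with != in the loop condition does not change the observable behaviour for the intended inputs, because the counter increases by exactly one on each iteration.

// Original: sums the integers from 0 to n-1 (n assumed to be a
// non-negative integer)
function sum(n: number): number {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i;
  }
  return total;
}

// Mutant: < replaced with !=. Because i increases by exactly 1 per
// iteration, the loop still terminates when i === n, so for every
// non-negative integer n the mutant behaves identically to the original
// and no test over that input domain can kill it.
function sumMutant(n: number): number {
  let total = 0;
  for (let i = 0; i !== n; i++) {
    total += i;
  }
  return total;
}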
The detection of equivalent mutants is one of the biggest obstacles to the practical use of mutation testing. The effort needed to check whether mutants are equivalent can be very high even for small programs. [17] A 2014 systematic literature review of a wide range of approaches to overcome the Equivalent Mutant Problem [18] identified 17 relevant techniques (in 22 articles) falling into three categories: detecting equivalent mutants (DEM), suggesting equivalent mutants (SEM), and avoiding equivalent mutant generation (AEMG). The experiments indicated that higher-order mutation in general, and the JudyDiffOp strategy in particular, provide a promising approach to the Equivalent Mutant Problem.
In addition to equivalent mutants, there are subsumed mutants: mutants that exist in the same source code location as another mutant and are said to be "subsumed" by that other mutant. Subsumed mutants are not visible to a mutation testing tool and do not contribute to coverage metrics. For example, suppose mutants A and B both change the same line of code in such a way that every test produces the same result for both. Once mutant A has been tested, testing mutant B yields no additional information; mutant B is therefore subsumed by mutant A and does not need to be tested.
To make syntactic changes to a program, a mutation operator serves as a guideline that substitutes portions of the source code. Given that mutations depend on these operators, scholars have created a collection of mutation operators to accommodate different programming languages, like Java. The effectiveness of these mutation operators plays a pivotal role in mutation testing. [19]
Many mutation operators have been explored by researchers. Here are some examples of mutation operators for imperative languages:
Statement duplication or insertion, e.g. goto fail; [20]
Replacement of some arithmetic operations with others, e.g. + with *, - with /
Replacement of some boolean relations with others, e.g. > with >=, == and <=
These mutation operators are also called traditional mutation operators. There are also mutation operators for object-oriented languages, [22] for concurrent constructions, [23] complex objects like containers, [24] etc.
Operators for containers are called class-level mutation operators. Operators at the class level alter the program's structure by adding, removing, or changing the expressions being examined. Specific operators have been established for each category of changes. [19] For example, the muJava tool offers various class-level mutation operators such as Access Modifier Change, Type Cast Operator Insertion, and Type Cast Operator Deletion. Mutation operators have also been developed to perform security vulnerability testing of programs. [25]
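For instance, an Access Modifier Change mutation alters the visibility of a class member. The sketch below illustrates the idea in TypeScript rather than Java (muJava itself targets Java, and the class and member names here are hypothetical):

// Original class
class Account {
  private balance: number = 0;

  deposit(amount: number): void {
    this.balance += amount;
  }
}

// Mutant in the style of an Access Modifier Change operator: 'private'
// has been changed to 'public', breaking the encapsulation of 'balance'.
// Note that in TypeScript visibility is enforced at compile time, so a
// purely runtime test suite cannot observe this change.
class AccountMutant {
  public balance: number = 0;

  deposit(amount: number): void {
    this.balance += amount;
  }
}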
Apart from the class-level operators, MuJava also includes method-level mutation operators, referred to as traditional operators. These traditional operators are designed based on features commonly found in procedural languages. They carry out changes to statements by adding, substituting, or removing primitive operators. These operators fall into six categories: Arithmetic operators, Relational operators, Conditional operators, Shift operators, Logical operators and Assignment operators. [19]
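To illustrate, the sketch below applies one traditional operator from three of these categories to a small hypothetical function (the exact operator sets vary from tool to tool):

// Original function
function fee(quantity: number, premium: boolean): number {
  if (quantity > 10 && premium) {
    return quantity * 2;
  }
  return quantity + 5;
}

// Relational operator replacement: > becomes >=
function feeRelationalMutant(quantity: number, premium: boolean): number {
  if (quantity >= 10 && premium) {
    return quantity * 2;
  }
  return quantity + 5;
}

// Conditional operator replacement: && becomes ||
function feeConditionalMutant(quantity: number, premium: boolean): number {
  if (quantity > 10 || premium) {
    return quantity * 2;
  }
  return quantity + 5;
}

// Arithmetic operator replacement: + becomes -
function feeArithmeticMutant(quantity: number, premium: boolean): number {
  if (quantity > 10 && premium) {
    return quantity * 2;
  }
  return quantity - 5;
}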
There are three types of mutation testing: statement mutation, value mutation, and decision mutation.
Statement mutation is a process where a block of code is intentionally modified by either deleting or copying certain statements. Moreover, it allows for the reordering of statements within the code block to generate various sequences. [26] This technique is crucial in software testing as it helps identify potential weaknesses or errors in the code. By deliberately making changes to the code and observing how it behaves, developers can uncover hidden bugs or flaws that might go unnoticed during regular testing. [27] Statement mutation is like a diagnostic tool that provides insights into the code's robustness and resilience, helping programmers improve the overall quality and reliability of their software.
For example, in the code snippet below, the entire 'else' section has been removed:
function checkCredentials(username, password) {
  if (username === "admin" && password === "password") {
    return true;
  }
}
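For comparison, the sketch below (hypothetical, since the article only shows the mutant) reconstructs what the original function may have looked like, assuming the deleted else branch returned false, together with a test that kills the statement-deletion mutant:

// Assumed original, with the else branch intact
function checkCredentialsOriginal(username: string, password: string): boolean {
  if (username === "admin" && password === "password") {
    return true;
  } else {
    return false;
  }
}

// The mutant above drops the else branch, so invalid credentials yield
// undefined instead of false. A test that checks the rejection path
// passes against the original but fails against the mutant, killing it.
console.assert(checkCredentialsOriginal("guest", "wrong") === false);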
Value mutation occurs when parameter and/or constant values within the code are modified. This typically involves adjusting the values by adding or subtracting 1, but it can also involve making more substantial changes to the values. The specific alterations made during value mutation include two main scenarios:
Firstly, there's the transformation from a small value to a higher value. This entails replacing a small value in the code with a larger one. The purpose of this change is to assess how the code responds when it encounters larger inputs. It helps ensure that the code can accurately and efficiently process these larger values without encountering errors or unexpected issues. [26]
Conversely, the second scenario involves changing a higher value to a smaller one. In this case, we replace a higher value within the code with a smaller value. This test aims to evaluate how the code handles smaller inputs. Ensuring that the code performs correctly with smaller values is essential to prevent unforeseen problems or errors when dealing with such input data. [26]
For example:
// Original code
function multiplyByTwo(value) {
  return value * 2;
}

// Value mutation: Small value to higher value
function multiplyByTwoMutation1(value) {
  return value * 10;
}

// Value mutation: Higher value to small value
function multiplyByTwoMutation2(value) {
  return value / 10;
}
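A single assertion on a concrete input is enough to kill both value mutants shown above (a minimal sketch):

// Kills multiplyByTwoMutation1 (3 * 10 === 30) and
// multiplyByTwoMutation2 (3 / 10 === 0.3), since both differ from 6.
console.assert(multiplyByTwo(3) === 6);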
Decision mutation testing centers on the identification of design errors within the code, with a particular emphasis on detecting flaws or weaknesses in the program's decision-making logic. This method involves deliberately altering arithmetic and logical operators to expose potential issues. [26] By manipulating these operators, developers can systematically evaluate how the code responds to different decision scenarios. This process helps ensure that the program's decision-making pathways are robust and accurate, preventing costly errors that could arise from faulty logic. Decision mutation testing serves as a valuable tool in software development, enabling developers to enhance the reliability and effectiveness of their decision-making code segments.
For example:
// Original code
function isPositive(number) {
  return number > 0;
}

// Decision mutation: Changing the comparison operator
function isPositiveMutation1(number) {
  return number >= 0;
}

// Decision mutation: Negating the result
function isPositiveMutation2(number) {
  return !(number > 0);
}
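A boundary-value test distinguishes the original from both decision mutants shown above (a minimal sketch):

// The original returns false for 0, while mutation1 (>=) returns true
// for 0, so this assertion kills it.
console.assert(isPositive(0) === false);

// Any positive input kills mutation2, which negates every result.
console.assert(isPositive(5) === true);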
In software engineering, code coverage, also called test coverage, is a percentage measure of the degree to which the source code of a program is executed when a particular test suite is run. A program with high code coverage has more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage. Many different metrics can be used to calculate test coverage. Some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite.
Software testing is the act of checking whether software satisfies expectations.
A software bug is a defect in the design or implementation of computer software. A computer program with many or serious bugs may be described as buggy.
Regression testing is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change. If not, that would be called a regression.
Unit testing, a.k.a. component or module testing, is a form of software testing by which isolated source code is tested to validate expected behavior.
In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects.
Test-driven development (TDD) is a way of writing code that involves writing an automated unit-level test case that fails, then writing just enough code to make the test pass, then refactoring both the test code and the production code, then repeating with another new test case.
A race condition or race hazard is the condition of an electronics, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable.
In computer programming, unreachable code is part of the source code of a program which can never be executed because there exists no control flow path to the code from the rest of the program.
In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.
Runtime verification is a computing system analysis and execution approach based on extracting information from a running system and using it to detect and possibly react to observed behaviors satisfying or violating certain properties. Some very particular properties, such as datarace and deadlock freedom, are typically desired to be satisfied by all systems and may be best implemented algorithmically. Other properties can be more conveniently captured as formal specifications. Runtime verification specifications are typically expressed in trace predicate formalisms, such as finite state machines, regular expressions, context-free patterns, linear temporal logics, etc., or extensions of these. This allows for a less ad-hoc approach than normal testing. However, any mechanism for monitoring an executing system is considered runtime verification, including verifying against test oracles and reference implementations. When formal requirements specifications are provided, monitors are synthesized from them and infused within the system by means of instrumentation. Runtime verification can be used for many purposes, such as security or safety policy monitoring, debugging, testing, verification, validation, profiling, fault protection, behavior modification, etc. Runtime verification avoids the complexity of traditional formal verification techniques, such as model checking and theorem proving, by analyzing only one or a few execution traces and by working directly with the actual system, thus scaling up relatively well and giving more confidence in the results of the analysis, at the expense of less coverage. Moreover, through its reflective capabilities runtime verification can be made an integral part of the target system, monitoring and guiding its execution during deployment.
Dynamic program analysis is the act of analyzing software that involves executing a program – as opposed to static program analysis, which does not execute it.
The following outline is provided as an overview of and topical guide to computer programming:
In computer science, fault injection is a testing technique for understanding how computing systems behave when stressed in unusual ways. This can be achieved using physical- or software-based means, or using a hybrid approach. Widely studied physical fault injections include the application of high voltages, extreme temperatures and electromagnetic pulses on electronic components, such as computer memory and central processing units. By exposing components to conditions beyond their intended operating limits, computing systems can be coerced into mis-executing instructions and corrupting critical data.
In engineering, debugging is the process of finding the root cause, workarounds and possible fixes for bugs.
High performance computing applications, which run on massively parallel supercomputers, consist of concurrent programs designed using multi-threaded, multi-process models. The applications may consist of various constructs with varying degrees of parallelism. Although high performance concurrent programs use similar design patterns, models, and principles as sequential programs, unlike sequential programs they typically demonstrate non-deterministic behavior. The probability of bugs increases with the number of interactions between the various parallel constructs. Race conditions, data races, deadlocks, missed signals, and livelock are common error types.
In programming jargon, Yoda conditions is a programming style where the two parts of an expression are reversed from the typical order in a conditional statement. A Yoda condition places the constant portion of the expression on the left side of the conditional statement.
Automatic bug-fixing is the automatic repair of software bugs without the intervention of a human programmer. It is also commonly referred to as automatic patch generation, automatic bug repair, or automatic program repair. The typical goal of such techniques is to automatically generate correct patches to eliminate bugs in software programs without causing software regression.
This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to software quality assurance and general application of the test method.