Mutation testing

Mutation testing (or mutation analysis or program mutation) is used to design new software tests and evaluate the quality of existing software tests. Mutation testing involves modifying a program in small ways. [1] Each mutated version is called a mutant. A test detects and rejects a mutant if it causes the mutant's behaviour to differ from that of the original version and fails as a result; this is called killing the mutant. Test suites are measured by the percentage of mutants that they kill, and new tests can be designed to kill additional mutants. Mutants are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as dividing each expression by zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution. Mutation testing is a form of white-box testing. [2] [3]
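
This percentage is commonly called the mutation score; mutants known to be equivalent (see below) are excluded from the denominator:

$$\text{mutation score} = \frac{\text{killed mutants}}{\text{total mutants} - \text{equivalent mutants}} \times 100\%$$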

Introduction

Most of this article is about "program mutation", in which the program is modified. A more general definition of mutation analysis is using well-defined rules defined on syntactic structures to make systematic changes to software artifacts. [4] Mutation analysis has been applied to other problems, but is usually applied to testing. So mutation testing is defined as using mutation analysis to design new software tests or to evaluate existing software tests. [4] Thus, mutation analysis and testing can be applied to design models, specifications, databases, tests, XML, and other types of software artifacts, although program mutation is the most common. [5]

Overview

Tests can be created to verify the correctness of the implementation of a given software system, but the creation of tests still poses the question of whether the tests are correct and sufficiently cover the requirements from which the implementation originated. [6] (This technological problem is itself an instance of a deeper philosophical problem named "Quis custodiet ipsos custodes?" ["Who will guard the guards?"].) The idea behind mutation testing is that if a mutant is introduced, it normally causes a bug in the program's functionality which the tests should find. This way, the tests are tested. If a mutant is not detected by the test suite, this typically indicates that the test suite is unable to locate the faults represented by the mutant, but it can also indicate that the mutation introduces no faults, that is, the mutation is a valid change that does not affect functionality. One common way a mutant can be valid is when the changed code is "dead code" that is never executed.

For mutation testing to function at scale, a large number of mutants are usually introduced, leading to the compilation and execution of an extremely large number of copies of the program. This expense has reduced the practical use of mutation testing as a method of software testing. However, the increased use of object-oriented programming languages and unit testing frameworks has led to the creation of mutation testing tools that test individual portions of an application.

Goals

The goals of mutation testing are to identify weakly tested pieces of code (those whose mutants are never killed) and weak tests (those that never kill any mutants). [7]

History

Mutation testing was originally proposed by Richard Lipton as a student in 1971, [8] and was first developed and published by DeMillo, Lipton and Sayward. [1] The first implementation of a mutation testing tool was by Timothy Budd as part of his PhD work (titled Mutation Analysis) at Yale University in 1980. [9]

Recently, with the availability of massive computing power, there has been a resurgence of mutation analysis within the computer science community, and work has been done to define methods of applying mutation testing to object-oriented programming languages and non-procedural languages such as XML, SMV, and finite state machines.

In 2004, a company called Certess Inc. (now part of Synopsys) extended many of the principles into the hardware verification domain. Whereas mutation analysis only expects a difference in the produced output to be detected, Certess extends this by verifying that a checker in the testbench will actually detect the difference. This extension means that all three stages of verification, namely activation, propagation, and detection, are evaluated. They called this functional qualification.

Fuzzing can be considered to be a special case of mutation testing. In fuzzing, the messages or data exchanged inside communication interfaces (both inside and between software instances) are mutated to catch failures or differences in processing the data. Codenomicon [10] (2001) and Mu Dynamics (2005) evolved fuzzing concepts to a fully stateful mutation testing platform, complete with monitors for thoroughly exercising protocol implementations.

Mutation testing overview

Mutation testing is based on two hypotheses. The first is the competent programmer hypothesis. This hypothesis states that competent programmers write programs that are close to being correct. [1] "Close" is intended to be based on behavior, not syntax. The second hypothesis is called the coupling effect. The coupling effect asserts that simple faults can cascade or couple to form other emergent faults. [11] [12]

Subtle and important faults are also revealed by higher-order mutants, which further supports the coupling effect. [13] [14] [7] [15] [16] Higher-order mutants are created by introducing more than one mutation into a single variant, as sketched below.
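
As a minimal sketch (the function and its inputs are ours, not taken from the literature cited above), a second-order mutant combines two mutations in one variant:

// Original: returns the larger of two values.
int maxOf(int a, int b) {
    return (a > b) ? a : b;
}

// First-order mutant: relational operator replacement (> becomes <).
int maxOfMutant1(int a, int b) {
    return (a < b) ? a : b;
}

// Second-order mutant: the operator change combined with swapping the
// ?: branches. Here the two changes mask each other completely, so the
// mutant is equivalent to the original; when the masking is only
// partial, higher-order mutants produce exactly the kind of subtle
// faults described above.
int maxOfMutant2(int a, int b) {
    return (a < b) ? b : a;
}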

Mutation testing is done by selecting a set of mutation operators and then applying them to the source program one at a time for each applicable piece of the source code. The result of applying one mutation operator to the program is called a mutant. If the test suite is able to detect the change (i.e. one of the tests fails), then the mutant is said to be killed.

For example, consider the following C++ code fragment:

if (a && b) {
    c = 1;
} else {
    c = 0;
}

The condition mutation operator would replace && with || and produce the following mutant:

if (a || b) {
    c = 1;
} else {
    c = 0;
}

Now, for the test to kill this mutant, the following three conditions should be met:

  1. A test must reach the mutated statement.
  2. Test input data should infect the program state by causing different program states for the mutant and the original program. For example, a test with a = 1 and b = 0 would do this.
  3. The incorrect program state (the value of 'c') must propagate to the program's output and be checked by the test.

These conditions are collectively called the RIP model. [8]
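
A minimal sketch of a test meeting all three conditions for the && mutant above (the wrapper function and assert-based harness are ours, for illustration):

#include <cassert>

// Wraps the fragment under test so a test can call it.
int computeC(bool a, bool b) {
    int c;
    if (a && b) { c = 1; } else { c = 0; }
    return c;
}

int main() {
    // Reach: the input executes the mutated condition.
    // Infect: with a = 1 (true) and b = 0 (false), the original sets
    // c = 0 while the || mutant would set c = 1.
    // Propagate: c is returned and asserted on, so the corrupted state
    // reaches observable output and the test kills the mutant.
    assert(computeC(true, false) == 0);
    return 0;
}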

Weak mutation testing (or weak mutation coverage) requires that only the first and second conditions are satisfied. Strong mutation testing requires that all three conditions are satisfied. Strong mutation is more powerful, since it ensures that the test suite can really catch the problems. Weak mutation is closely related to code coverage methods. It requires much less computing power to ensure that the test suite satisfies weak mutation testing than strong mutation testing.
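
The distinction can be made concrete with the same fragment. In the sketch below (ours, for illustration), a test reaches and infects, but the infection never propagates, so it satisfies weak mutation only:

#include <cassert>

// The fragment writes c, but c never influences the observable result.
int alwaysZero(bool a, bool b) {
    int c;
    if (a && b) { c = 1; } else { c = 0; }
    (void)c;  // deliberately unused beyond this point
    return 0;
}

int main() {
    // With a = true, b = false the mutant's internal state (c = 1)
    // differs from the original's (c = 0): the mutant is weakly killed.
    // The difference never reaches the return value, so no assertion on
    // the output can strongly kill it.
    assert(alwaysZero(true, false) == 0);
    return 0;
}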

However, there are cases where it is not possible to find a test case that kills a given mutant, because the mutated program is behaviorally equivalent to the original one. Such mutants are called equivalent mutants.
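
A classic example (the loop and its bounds are ours): replacing < with != in a loop whose counter increases by exactly one produces an equivalent mutant:

// Original: sums the first n elements of xs (n assumed >= 0).
int sum(const int* xs, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i) {  // mutant: "i < n" becomes "i != n"
        total += xs[i];
    }
    return total;
}
// Because i starts at 0 and increases by exactly 1, "i < n" and
// "i != n" first become false on the same iteration, so no test input
// can distinguish the mutant from the original.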

Detecting equivalent mutants is one of the biggest obstacles to the practical use of mutation testing. The effort needed to check whether mutants are equivalent can be very high even for small programs. [17] A 2014 systematic literature review of a wide range of approaches to overcome the Equivalent Mutant Problem [18] identified 17 relevant techniques (in 22 articles) and three categories of techniques: detecting (DEM), suggesting (SEM), and avoiding equivalent mutant generation (AEMG). The accompanying experiment indicated that higher-order mutation in general, and the JudyDiffOp strategy in particular, provide a promising approach to the Equivalent Mutant Problem.

In addition to equivalent mutants, there are subsumed mutants: mutants that exist at the same source code location as another mutant and that every test treats the same way as that other mutant. Subsumed mutants are not visible to a mutation testing tool and do not contribute to coverage metrics. For example, if mutants A and B change the same line of code and every test produces the same result for B as it does for A, then B is subsumed by A and does not need to be tested separately, since testing it yields no new information.
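
For instance (example ours), two operators can produce behaviourally identical mutants at the same location:

// Original
bool isEven(int n) {
    return n % 2 == 0;
}

// Mutant A: relational operator replacement (== becomes !=).
bool isEvenMutantA(int n) {
    return n % 2 != 0;
}

// Mutant B: the result is negated instead. B behaves exactly like A on
// every input, so any test that kills A also kills B; B is subsumed by
// A and adds no information to the mutation score.
bool isEvenMutantB(int n) {
    return !(n % 2 == 0);
}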

Mutation operators

To make syntactic changes to a program, a mutation operator serves as a rule for substituting portions of the source code. Since mutants depend on these operators, researchers have created collections of mutation operators for different programming languages, such as Java. The effectiveness of these mutation operators plays a pivotal role in mutation testing. [19]

Many mutation operators have been explored by researchers. Here are some examples of mutation operators for imperative languages:

  - Statement deletion
  - Statement duplication or insertion, e.g. goto fail; [20]
  - Replacement of boolean subexpressions with true and false
  - Replacement of some arithmetic operations with others, e.g. + with *, - with /
  - Replacement of some boolean relations with others, e.g. > with >=, == and <=
  - Replacement of variables with others from the same scope (the variable types must be compatible)
  - Removal of method bodies [21]

These mutation operators are also called traditional mutation operators. There are also mutation operators for object-oriented languages, [22] for concurrent constructions, [23] complex objects like containers, [24] etc.

Types of mutation operators

Operators that mutate class-level constructs such as containers are called class-level mutation operators. Operators at the class level alter the program's structure by adding, removing, or changing the expressions being examined, and specific operators have been established for each category of change. [19] For example, the muJava tool offers various class-level mutation operators such as Access Modifier Change, Type Cast Operator Insertion, and Type Cast Operator Deletion. Mutation operators have also been developed to perform security vulnerability testing of programs. [25]
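
As a rough sketch of the Access Modifier Change operator (muJava itself targets Java; this C++ analogue and the class are ours):

// Original: the counter state is private and reachable only via the API.
class Counter {
public:
    void increment() { ++count; }
    int value() const { return count; }
private:
    int count = 0;
};

// Access Modifier Change mutant: "private" becomes "public". The class
// still compiles, but client code can now bypass increment(); whether a
// test suite can kill this mutant depends on whether any test observes
// the weakened encapsulation.
class CounterMutant {
public:
    void increment() { ++count; }
    int value() const { return count; }
public:  // mutated access modifier
    int count = 0;
};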

Apart from the class-level operators, muJava also includes method-level mutation operators, referred to as traditional operators. These traditional operators are designed based on features commonly found in procedural languages. They carry out changes to statements by adding, substituting, or removing primitive operators. These operators fall into six categories: arithmetic operators, relational operators, conditional operators, shift operators, logical operators, and assignment operators. [19]
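
A compact sketch of one mutation site per category (a C++ analogue of these Java-oriented operators; the function is ours, for illustration):

int examples(int a, int b) {
    int sum   = a + b;       // Arithmetic:  + could become -
    bool gt   = a > b;       // Relational:  > could become >=
    bool both = gt && true;  // Conditional: && could become ||
    int shl   = a << 1;      // Shift:       << could become >>
    int bits  = a & b;       // Logical (bitwise): & could become |
    sum += 1;                // Assignment:  += could become -=
    return sum + gt + both + shl + bits;
}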

Types of mutation testing

There are three types of mutation testing: statement mutation, value mutation, and decision mutation.

Statement mutation

Statement mutation is a process where a block of code is intentionally modified by deleting, duplicating, or reordering statements to generate different sequences. [26] Deliberately changing the code in this way and observing how the test suite responds helps identify weaknesses or errors that might go unnoticed during regular testing, [27] giving developers insight into the code's robustness and helping them improve the overall quality and reliability of their software.

For example, in the code snippet below, the entire 'else' branch has been removed:

function checkCredentials(username, password) {
    if (username === "admin" && password === "password") {
        return true;
    }
}

Value mutation

Value mutation modifies parameter and/or constant values within the code. This typically involves adjusting a value by adding or subtracting 1, but it can also involve more substantial changes. The alterations fall into two main scenarios:

In the first, a small value is replaced with a larger one. The purpose of this change is to assess how the code responds to larger inputs and to ensure that it processes them without errors or unexpected behaviour. [26]

Conversely, in the second scenario a larger value is replaced with a smaller one. This evaluates how the code handles smaller inputs, guarding against unforeseen problems or errors when dealing with such data. [26]

For example:

// Original code
function multiplyByTwo(value) {
    return value * 2;
}

// Value mutation: small value to higher value
function multiplyByTwoMutation1(value) {
    return value * 10;
}

// Value mutation: higher value to small value
function multiplyByTwoMutation2(value) {
    return value / 10;
}

Decision mutation

Decision mutation testing centers on identifying design errors in the code, with particular emphasis on flaws or weaknesses in the program's decision-making logic. The method deliberately alters arithmetic and logical operators to expose potential issues. [26] By manipulating these operators, developers can systematically evaluate how the code responds to different decision scenarios and ensure that the program's decision-making pathways are robust and accurate.

For example:

// Original code
function isPositive(number) {
    return number > 0;
}

// Decision mutation: changing the comparison operator
function isPositiveMutation1(number) {
    return number >= 0;
}

// Decision mutation: negating the result
function isPositiveMutation2(number) {
    return !(number > 0);
}

Frameworks implementing mutation testing

Mutation testing frameworks exist for many languages; widely used examples include PIT (Pitest) for Java and the JVM, Stryker for JavaScript, and mutmut for Python.

References

  1. Richard A. DeMillo, Richard J. Lipton, and Fred G. Sayward. "Hints on test data selection: Help for the practicing programmer". IEEE Computer, 11(4): 34–41, April 1978.
  2. Ostrand, Thomas (2002). "White-Box Testing". Encyclopedia of Software Engineering. doi:10.1002/0471028959.sof378. ISBN 978-0-471-02895-6. Retrieved 2021-03-16.
  3. Misra, S. (2003). "Evaluating four white-box test coverage methodologies". CCECE 2003 – Canadian Conference on Electrical and Computer Engineering, Vol. 3. Montreal, Canada: IEEE. pp. 1739–1742. doi:10.1109/CCECE.2003.1226246. ISBN 978-0-7803-7781-3.
  4. Paul Ammann and Jeff Offutt. Introduction to Software Testing. Cambridge University Press, 2008.
  5. Jia, Yue; Harman, Mark (September 2009). "An Analysis and Survey of the Development of Mutation Testing". CREST Centre, King's College London, Technical Report TR-09-06. Also published in IEEE Transactions on Software Engineering, 37(5): 649–678. doi:10.1109/TSE.2010.62.
  6. Dasso, Aristides; Funes, Ana (2007). Verification, Validation and Testing in Software Engineering. Idea Group Inc. ISBN 978-1591408512.
  7. Smith, B. "On Guiding Augmentation of an Automated Test Suite via Mutation Analysis". 2008.
  8. A. Jefferson Offutt and Roland H. Untch. "Mutation 2000: Uniting the Orthogonal". Archived 2011-09-28 at the Wayback Machine.
  9. Tim A. Budd. Mutation Analysis of Program Test Data. PhD thesis, Yale University, New Haven, CT, 1980.
  10. Kaksonen, Rauli. A Functional Method for Assessing Protocol Implementation Security (Licentiate thesis). Espoo, 2001.
  11. A. Jefferson Offutt. "Investigations of the software testing coupling effect". ACM Transactions on Software Engineering and Methodology, 1(1): 5–20, January 1992.
  12. A. T. Acree, T. A. Budd, R. A. DeMillo, R. J. Lipton, and F. G. Sayward. "Mutation Analysis". Georgia Institute of Technology, Atlanta, Georgia, Technical Report GIT-ICS-79/08, 1979.
  13. Jia, Yue; Harman, M. "Constructing Subtle Faults Using Higher Order Mutation Testing". Eighth IEEE International Working Conference on Source Code Analysis and Manipulation, pp. 249–258, 28–29 September 2008.
  14. Umar, Maryam. "An Evaluation of Mutation Operators for Equivalent Mutants". MS thesis, 2006.
  15. Polo, M. and Piattini, M. "Mutation Testing: practical aspects and cost analysis". University of Castilla-La Mancha (Spain), presentation, 2009.
  16. Anderson, S. "Mutation Testing". University of Edinburgh, School of Informatics, presentation, 2011.
  17. P. G. Frankl, S. N. Weiss, and C. Hu. "All-uses versus mutation testing: An experimental comparison of effectiveness". Journal of Systems and Software, 38: 235–253, 1997.
  18. Madeyski, L.; Orzeszyna, W.; Torkar, R.; Józala, M. "Overcoming the Equivalent Mutant Problem: A Systematic Literature Review and a Comparative Experiment of Second Order Mutation". IEEE Transactions on Software Engineering.
  19. Hamimoune, Soukaina; Falah, Bouchaib (24 September 2016). "Mutation testing techniques: A comparative study". 2016 International Conference on Engineering & MIS (ICEMIS). pp. 1–9. doi:10.1109/ICEMIS.2016.7745368. ISBN 978-1-5090-5579-1.
  20. Langley, Adam. "Apple's SSL/TLS bug".
  21. Niedermayr, Rainer; Juergens, Elmar; Wagner, Stefan (2016-05-14). "Will my tests tell me if I break this code?". Proceedings of the International Workshop on Continuous Software Evolution and Delivery (CSED '16). Austin, Texas: Association for Computing Machinery. pp. 23–29. arXiv:1611.07163. doi:10.1145/2896941.2896944. ISBN 978-1-4503-4157-8.
  22. Ma, Yu-Seung; Offutt, Jeff; Kwon, Yong Rae. "MuJava: An Automated Class Mutation System". Archived 2012-03-11 at the Wayback Machine.
  23. Bradbury, Jeremy S.; Cordy, James R.; Dingel, Juergen. "Mutation Operators for Concurrent Java (J2SE 5.0)".
  24. Alexander, Roger T.; Bieman, James M.; Ghosh, Sudipto; Ji, Bixia. "Mutation of Java Objects".
  25. Shahriar, H. and Zulkernine, M. "Mutation-based Testing of Buffer Overflows, SQL Injections, and Format String Bugs".
  26. Walters, Amy (2023-06-01). "Understanding Mutation Testing: A Comprehensive Guide". testRigor. Retrieved 2023-10-08.
  27. Deng, Lin; Offutt, Jeff; Li, Nan (22 March 2013). "Empirical Evaluation of the Statement Deletion Mutation Operator". 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation. pp. 84–93. doi:10.1109/ICST.2013.20. ISBN 978-0-7695-4968-2.