Model-based testing

General model-based testing setting

Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a system under test (SUT), or to represent testing strategies and a test environment. The figure above depicts the former approach.


A model describing an SUT is usually an abstract, partial representation of the SUT's desired behavior. Test cases derived from such a model are functional tests on the same level of abstraction as the model. These test cases are collectively known as an abstract test suite. An abstract test suite cannot be executed directly against an SUT because it is on the wrong level of abstraction; an executable test suite that can communicate directly with the system under test must be derived from it. This is achieved by mapping the abstract test cases to concrete test cases suitable for execution. In some model-based testing environments, models contain enough information to generate executable test suites directly. In others, elements of the abstract test suite must be mapped to specific statements or method calls in the software to create a concrete test suite; this is called solving the "mapping problem". [1] In the case of online testing (see below), abstract test suites exist only conceptually, not as explicit artifacts.
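As a hedged illustration of the mapping problem, the sketch below translates abstract, keyword-level test steps into concrete calls on a driver object; every name in it (SutDriver, login, add_item, checkout) is hypothetical and stands in for a real test harness.

```python
# Minimal sketch of solving the "mapping problem": each abstract test step
# (a keyword from the model) is mapped to a concrete call on the SUT's API.
# All names below (the driver class and its methods) are hypothetical.

class SutDriver:
    """Hypothetical adapter that knows how to talk to the real SUT."""
    def login(self, user):
        print(f"POST /session user={user}")   # concrete protocol action
    def add_item(self, item):
        print(f"POST /cart item={item}")
    def checkout(self):
        print("POST /checkout")

# One abstract test case as produced from the model: a list of
# (keyword, arguments) pairs on the model's level of abstraction.
abstract_test = [("login", ("alice",)), ("add_item", ("book",)), ("checkout", ())]

def concretize(abstract_test, driver):
    """Map each abstract step to a concrete method call and execute it."""
    for keyword, args in abstract_test:
        getattr(driver, keyword)(*args)

concretize(abstract_test, SutDriver())
```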

Tests can be derived from models in different ways. Because testing is usually experimental and based on heuristics, there is no single best approach to test derivation. It is common to consolidate all parameters related to test derivation into a package often known as "test requirements", "test purpose" or even "use case(s)". This package can contain information about the parts of the model that should be focused on, or about the conditions for finishing testing (test stopping criteria).

Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing.

Model-based testing for complex software systems is still an evolving field.

Models

Especially in Model Driven Engineering or in the Object Management Group's (OMG's) model-driven architecture, models are built before or in parallel with the corresponding systems. Models can also be constructed from completed systems. Typical modeling languages for test generation include UML, SysML, mainstream programming languages, finite state machine notations, and mathematical formalisms such as Z, B (Event-B), Alloy or Coq.

Deploying model-based testing

An example of a model-based testing workflow (offline test case generation). IXIT refers to implementation extra information, the information needed to convert an abstract test suite into an executable one. Typically, IXIT contains information on the test harness, data mappings and SUT configuration.

There are various known ways to deploy model-based testing, which include online testing, offline generation of executable tests, and offline generation of manually deployable tests. [2]

Online testing means that a model-based testing tool connects directly to an SUT and tests it dynamically.

Offline generation of executable tests means that a model-based testing tool generates test cases as computer-readable assets that can be later run automatically; for example, a collection of Python classes that embodies the generated testing logic.
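As a hedged sketch of what such a generated artifact might look like, the following hypothetical excerpt shows a test case emitted as a Python unittest class; the FakeSut adapter and the model path named in the docstring are invented for illustration.

```python
# Hypothetical excerpt of an offline-generated executable test suite:
# a model-based testing tool could emit Python classes like this one,
# ready to be run later with the standard unittest runner.
import unittest

class GeneratedTestCase001(unittest.TestCase):
    """Derived from model path: Idle -> Authenticated -> Idle."""
    def test_login_then_logout(self):
        sut = FakeSut()              # adapter to the real SUT (stubbed here)
        sut.login("alice")
        self.assertEqual(sut.state, "Authenticated")
        sut.logout()
        self.assertEqual(sut.state, "Idle")

class FakeSut:
    """Stand-in for the real system under test, for illustration only."""
    def __init__(self):
        self.state = "Idle"
    def login(self, user):
        self.state = "Authenticated"
    def logout(self):
        self.state = "Idle"

if __name__ == "__main__":
    unittest.main()
```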

Offline generation of manually deployable tests means that a model-based testing tool generates test cases as human-readable assets that can later assist in manual testing; for instance, a PDF document in a human language describing the generated test steps.

Deriving tests algorithmically

The effectiveness of model-based testing is primarily due to the potential for automation it offers. If a model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.

From finite state machines

Often the model is translated to or interpreted as a finite state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths. A possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models.
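The following minimal sketch illustrates this idea: a deterministic finite state machine (a hypothetical vending machine) is represented as a transition table, and every executable path up to a bounded length is enumerated as an abstract test case.

```python
# Sketch: derive abstract test cases by enumerating bounded paths through a
# deterministic finite state machine. The tiny vending-machine model is
# hypothetical; transitions map (state, input) -> next state.
fsm = {
    ("idle", "coin"):       "paid",
    ("paid", "choose"):     "dispensing",
    ("paid", "refund"):     "idle",
    ("dispensing", "take"): "idle",
}

def paths(state, depth):
    """Yield every input sequence of length <= depth executable from state."""
    if depth == 0:
        return
    for (src, inp), dst in fsm.items():
        if src == state:
            yield [inp]
            for rest in paths(dst, depth - 1):
                yield [inp] + rest

for test_case in paths("idle", 3):
    print(test_case)   # each executable path is one abstract test case
```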

Depending on the complexity of the system under test and of the corresponding model, the number of paths can be very large because of the huge number of possible configurations of the system. To find test cases that cover an appropriate, but finite, number of paths, test criteria are needed to guide the selection. This technique was first proposed by Offutt and Abdurazik in the paper that started model-based testing. [3] Multiple techniques for test case generation have since been developed and are surveyed by Rushby. [4] Test criteria are described in terms of general graphs in the testing textbook by Ammann and Offutt. [1]
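Continuing the vending-machine sketch, the example below applies one common test criterion, all-transitions coverage, greedily keeping a candidate path only if it exercises a transition not yet covered; the model and the candidate paths are hypothetical.

```python
# Sketch: use a coverage criterion (here, all-transitions coverage) to pick a
# finite test suite from the potentially huge set of paths. Greedy selection:
# keep a candidate path only if it covers a not-yet-covered transition.
fsm = {
    ("idle", "coin"):       "paid",
    ("paid", "choose"):     "dispensing",
    ("paid", "refund"):     "idle",
    ("dispensing", "take"): "idle",
}

def walk(inputs, start="idle"):
    """Return the list of transitions a given input sequence exercises."""
    state, covered = start, []
    for inp in inputs:
        covered.append((state, inp))
        state = fsm[(state, inp)]
    return covered

candidates = [["coin", "refund"], ["coin", "choose", "take"],
              ["coin", "refund", "coin"]]
suite, covered = [], set()
for path in candidates:
    new = set(walk(path)) - covered
    if new:                      # path contributes uncovered transitions
        suite.append(path)
        covered |= new

print(suite)   # a small suite achieving all-transitions coverage
```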

Theorem proving

Theorem proving was originally used for the automated proving of logical formulas. For model-based testing approaches, the system is modeled by a set of predicates specifying its behavior. [5] To derive test cases, the model is partitioned into equivalence classes over the valid interpretations of the predicates describing the system under test. Each class describes a certain system behavior and can therefore serve as a test case. The simplest partitioning uses the disjunctive normal form approach, in which the logical expressions describing the system's behavior are transformed into disjunctive normal form.
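A minimal sketch of the disjunctive normal form approach, assuming the sympy library is available: a hypothetical behavior predicate is rewritten into DNF, and each disjunct denotes one equivalence class, i.e. one candidate test case.

```python
# Sketch of the DNF partitioning approach: the behavior predicate is rewritten
# into disjunctive normal form, and each disjunct denotes one equivalence
# class, i.e. one candidate test case. Requires the sympy library.
from sympy import symbols
from sympy.logic.boolalg import to_dnf

logged_in, admin, expired = symbols("logged_in admin expired")

# Hypothetical behavior predicate: access is granted to logged-in users whose
# session is not expired, or to administrators.
behavior = (logged_in & ~expired) | admin

dnf = to_dnf(behavior, simplify=True)
print(dnf)                       # admin | (logged_in & ~expired)

# Each disjunct of the DNF is one equivalence class to test.
for clause in dnf.args:
    print("test class:", clause)
```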

Constraint logic programming and symbolic execution

Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints. [6] The set of constraints can be solved by Boolean solvers (e.g. SAT solvers based on the Boolean satisfiability problem) or by numerical analysis, such as Gaussian elimination. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.
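As a sketch, the example below derives test data with the Z3 SMT solver's Python bindings (an assumed dependency, standing in for any constraint solver); the constraints themselves are hypothetical.

```python
# Sketch: derive test data by solving constraints that describe the SUT.
# Uses the Z3 SMT solver's Python bindings (an assumed dependency).
from z3 import Int, Solver, sat

x, y = Int("x"), Int("y")
s = Solver()

# Hypothetical constraints extracted from a specification: inputs must lie in
# a valid range and trigger the "discount" branch of the system.
s.add(x >= 0, x <= 100)        # valid quantity
s.add(y == x * 10)             # price rule
s.add(y > 500)                 # condition selecting the behavior to test

if s.check() == sat:
    m = s.model()
    print("test input:", m[x], "expected price:", m[y])
```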

Constraint programming can be combined with symbolic execution. In this approach, a system model is executed symbolically, collecting data constraints over the different control paths; the constraint programming method is then used to solve the constraints and produce test cases. [7]
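A minimal sketch of this combination: the path constraints of a tiny two-branch function are written out by hand (a real symbolic execution tool would collect them automatically), and Z3 solves each path's constraints to produce one test input per path.

```python
# Sketch of symbolic execution combined with constraint solving: the path
# constraints of a tiny function are written out by hand here (a real tool
# would collect them automatically) and Z3 produces one input per path.
from z3 import Int, Solver, sat

def classify(n):                  # the program under analysis
    if n > 10:
        return "big"
    return "small"

n = Int("n")
path_constraints = {
    "big":   [n > 10],            # constraint along the true branch
    "small": [n <= 10],           # constraint along the false branch
}

for path, constraints in path_constraints.items():
    s = Solver()
    s.add(*constraints)
    if s.check() == sat:
        value = s.model()[n].as_long()
        print(f"path {path!r}: input {value} -> {classify(value)}")
```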

Model checking

Model checkers can also be used for test case generation. [8] Originally, model checking was developed as a technique to check whether a property of a specification is valid in a model. When used for testing, a model of the system under test and a property to test are provided to the model checker. During the checking procedure, the model checker detects witnesses and counterexamples: a witness is a path on which the property is satisfied, whereas a counterexample is a path in the execution of the model on which the property is violated. These paths can again be used as test cases.
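The toy sketch below mimics this use of model checking with an explicit-state reachability search: it looks for a state violating a hypothetical safety property and returns the counterexample path, which can be replayed as a test case; real model checkers are far more sophisticated.

```python
# Sketch: an explicit-state "model checker" that searches a transition system
# for a state violating a safety property; the counterexample path it returns
# can be replayed as a test case. Real model checkers are far more elaborate.
from collections import deque

transitions = {                    # hypothetical model of the SUT
    "idle":     ["heating"],
    "heating":  ["idle", "overheat"],
    "overheat": [],
}
bad = lambda state: state == "overheat"   # property: never reach overheat

def counterexample(initial):
    """Breadth-first search for a shortest path to a property violation."""
    queue, seen = deque([[initial]]), {initial}
    while queue:
        path = queue.popleft()
        if bad(path[-1]):
            return path                   # counterexample = test case
        for nxt in transitions[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                           # property holds in the model

print(counterexample("idle"))             # ['idle', 'heating', 'overheat']
```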

Test case generation by using a Markov chain test model

Markov chains are an efficient way to handle model-based testing. Test models realized with Markov chains can be understood as usage models, so this approach is referred to as usage/statistical model-based testing. Usage models, i.e. Markov chains, are mainly constructed from two artifacts: a finite state machine (FSM), which represents all possible usage scenarios of the tested system, and operational profiles (OP), which qualify the FSM to represent how the system is or will be used statistically. The FSM helps to know what can be or has been tested, and the OP helps to derive operational test cases. Usage/statistical model-based testing starts from the facts that it is not possible to exhaustively test a system and that failures can appear with a very low rate. [9] This approach offers a pragmatic way to statistically derive test cases focused on improving the reliability of the system under test. Usage/statistical model-based testing was recently extended to be applicable to embedded software systems. [10] [11]
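As a hedged sketch of this approach, the example below performs a random walk over a hypothetical usage model in which each outgoing transition carries an operational-profile probability, so frequently used behavior is exercised most often.

```python
# Sketch of usage/statistical model-based testing: a random walk over a usage
# model, where the operational profile gives the probability of each outgoing
# transition. Frequently used behavior is therefore tested most often.
import random

# Hypothetical usage model: state -> list of (input, next_state, probability).
usage_model = {
    "start":    [("browse", "browsing", 0.9), ("login", "account", 0.1)],
    "browsing": [("buy", "checkout", 0.2), ("browse", "browsing", 0.8)],
    "account":  [("logout", "start", 1.0)],
    "checkout": [],
}

def generate_test(max_steps=5, seed=None):
    """Derive one operational test case by a profile-weighted random walk."""
    rng = random.Random(seed)
    state, steps = "start", []
    for _ in range(max_steps):
        choices = usage_model[state]
        if not choices:
            break
        inp, nxt, _ = rng.choices(choices,
                                  weights=[p for _, _, p in choices])[0]
        steps.append(inp)
        state = nxt
    return steps

print(generate_test(seed=42))   # one statistically derived test case
```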

See also

  - Software testing
  - Formal methods
  - Formal verification
  - Modeling language
  - Model-driven architecture
  - Symbolic execution
  - Round-trip engineering
  - Test automation
  - Executable UML
  - MagicDraw
  - Dynamic program analysis
  - Keyword-driven testing
  - Graphical user interface testing
  - Test oracle
  - Behavior tree
  - Concolic testing
  - Test suite
  - Classification Tree Method
  - Automatic bug fixing
  - Protocol engineering

References

  1. Paul Ammann and Jeff Offutt. Introduction to Software Testing, 2nd edition. Cambridge University Press, 2016.
  2. Mark Utting and Bruno Legeard. Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann, 2007. ISBN 978-0-12-372501-1. Archived 2012-08-25 at the Wayback Machine.
  3. Jeff Offutt and Aynur Abdurazik. Generating Tests from UML Specifications. Second International Conference on the Unified Modeling Language (UML '99), pages 416-429, Fort Collins, CO, October 1999.
  4. John Rushby. Automated Test Generation and Verified Software. Verified Software: Theories, Tools, Experiments: First IFIP TC 2/WG 2.3 Conference, VSTTE 2005, Zurich, Switzerland, October 10-13, 2005. pp. 161-172, Springer-Verlag.
  5. Brucker, Achim D.; Wolff, Burkhart (2012). "On Theorem Prover-based Testing". Formal Aspects of Computing. 25 (5): 683-721. CiteSeerX 10.1.1.208.3135. doi:10.1007/s00165-012-0222-y. S2CID 5774837.
  6. Jefferson Offutt. Constraint-Based Automatic Test Data Generation. IEEE Transactions on Software Engineering, 17:900-910, 1991.
  7. Antti Huima. Implementing Conformiq Qtronic. Testing of Software and Communicating Systems, Lecture Notes in Computer Science, 2007, Volume 4581/2007, 1-12, doi:10.1007/978-3-540-73066-8_1.
  8. Gordon Fraser, Franz Wotawa, and Paul E. Ammann. Testing with model checkers: a survey. Software Testing, Verification and Reliability, 19(3):215-261, 2009.
  9. Hélène Le Guen. Validation d'un logiciel par le test statistique d'usage : de la modélisation de la décision à la livraison, 2005. URL: ftp://ftp.irisa.fr/techreports/theses/2005/leguen.pdf
  10. Böhr, Frank (2011). "Model Based Statistical Testing of Embedded Systems". 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops. pp. 18-25. doi:10.1109/ICSTW.2011.11. ISBN 978-1-4577-0019-4. S2CID 9582606.
  11. Böhr, Frank (2012). Model-Based Statistical Testing of Embedded Real-Time Software with Continuous and Discrete Signals in a Concurrent Environment: The Usage Net Approach. ISBN 978-3843903486.
