Test design

In software engineering, test design is the activity of deriving and specifying test cases from test conditions to test software.

Definition

A test condition is a statement about the test object. Test conditions can be stated for any part of a component or system that could be verified: functions, transactions, features, quality attributes or structural elements.

The fundamental challenge of test design is that infinitely many different tests could be run, but there is not enough time to run them all. A subset of tests must be selected: small enough to run, yet well chosen enough that the tests find bugs and expose other quality-related information. [1]

Test design is one of the most important prerequisites of software quality. Good test design supports:

  1. defining and improving quality-related processes and procedures (quality assurance);
  2. evaluating the quality of the product with regard to customer expectations and needs (quality control);
  3. finding defects in the product (software testing).

The essential prerequisites of test design are: [2]

  1. Appropriate specification (test bases).
  2. Risk and complexity analysis.
  3. Historical data from previous developments (if available).

The test bases, such as requirements or user stories, determine what should be tested (test objects and test conditions). The test bases also influence which test design techniques should or should not be used.

Risk analysis is essential for deciding the thoroughness of testing. The riskier the use of a function or object, the more thorough the testing it needs. The same holds for complexity. Risk and complexity analysis determines the test design techniques to be applied for a given specification.

Historical data from previous developments help in selecting the set of test design techniques that reaches both a cost optimum and high quality. In the absence of historical data, some assumptions can be made, which should be refined for subsequent projects.

Based on these prerequisites an optimal test design strategy can be implemented.

The result of test design is a set of test cases based on the specification. These test cases can be designed before implementation starts and should be implementation-independent. This test-first way of working is important because it efficiently supports defect prevention. Based on the application and the existing test coverage, further test cases can be created, but that activity is no longer test design.
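
For illustration, here is a minimal sketch (not taken from the cited sources) of such specification-based, test-first test cases in Python. The function name apply_discount and the rule it is assumed to satisfy ("orders of 100 or more get a 10% discount") are hypothetical; the expected values come from that assumed specification, not from any existing code.

    import unittest


    def apply_discount(amount):
        # Placeholder only: in a test-first workflow the real implementation is
        # written after these tests exist, so at this point every test fails.
        raise NotImplementedError


    class DiscountSpecTests(unittest.TestCase):
        def test_below_threshold_keeps_full_price(self):
            self.assertEqual(apply_discount(99), 99)     # boundary value, lower partition

        def test_at_threshold_gets_ten_percent_off(self):
            self.assertEqual(apply_discount(100), 90)    # boundary value, upper partition

        def test_above_threshold_gets_ten_percent_off(self):
            self.assertEqual(apply_discount(200), 180)   # representative of the partition


    if __name__ == "__main__":
        unittest.main()

Because the cases refer only to the specified behaviour, they remain valid no matter how apply_discount is eventually implemented.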

In practice, several test design techniques should be applied together for complex specifications.

Altogether, test design does not depend on the extraordinary (near magical) skill of the person creating the test but is based on well-understood principles. [3]

ISO/IEC/IEEE 29119-4:2015, Part 4 details the standard definitions of test design techniques. The test design site offers the LEA (Learn-Exercise-Apply) methodology to support learning, exercising, and applying the techniques. [4]

Automatic test design

Entire test suites or test cases exposing real bugs can be automatically generated by software using model checking or symbolic execution. Model checking can ensure that all the paths of a simple program are exercised, while symbolic execution can detect bugs and generate a test case that exposes the bug when the software is run with that test case.
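
The following sketch only illustrates the idea: a brute-force search over a small input space stands in for model checking or symbolic execution, and the buggy function, the oracle and the search bounds are all hypothetical.

    import itertools


    def buggy_abs_diff(a, b):
        # Intentional bug: the a < b branch returns a negative value.
        if a >= b:
            return a - b
        return a - b  # should be b - a


    def generate_exposing_test(bound=10):
        """Search the input space; return the first input pair that violates the oracle."""
        for a, b in itertools.product(range(-bound, bound + 1), repeat=2):
            if buggy_abs_diff(a, b) < 0:  # oracle: an absolute difference is never negative
                return {"inputs": (a, b), "expected": abs(a - b)}
        return None


    print(generate_exposing_test())  # {'inputs': (-10, -9), 'expected': 1}

Real tools explore the path space far more cleverly, but the result is the same in spirit: a concrete test case that exposes the bug when replayed against the software.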

However, as good as automatic test design can be, it is not appropriate for all circumstances. If the complexity becomes too high, then human test design must come into play as it is far more flexible and it can concentrate on generating higher level test suites.

Related Research Articles

Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. A wide range of test techniques is used for this purpose.

Regression testing is re-running functional and non-functional tests to ensure that previously developed and tested software still performs as expected after a change. If not, that would be called a regression.

In computer programming, unit testing is a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use.
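
As a minimal illustration, the following sketch unit-tests a small hypothetical helper with Python's standard unittest framework.

    import unittest


    def count_words(text):
        """The unit under test: counts whitespace-separated words."""
        return len(text.split())


    class CountWordsTest(unittest.TestCase):
        def test_empty_string_has_no_words(self):
            self.assertEqual(count_words(""), 0)

        def test_counts_whitespace_separated_words(self):
            self.assertEqual(count_words("unit tests check one unit"), 5)


    if __name__ == "__main__":
        unittest.main()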

In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.

Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied to virtually every level of software testing: unit, integration, system and acceptance. It is sometimes referred to as specification-based testing.

Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases. This is as opposed to software being developed first and test cases created later.
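
A compact sketch of this ordering, using a hypothetical fizzbuzz requirement: the tests are written from the requirement and fail first; the minimal implementation shown is added only afterwards to make them pass.

    import unittest


    def fizzbuzz(n):
        # Minimal implementation, added only after the tests below were written and had failed.
        if n % 15 == 0:
            return "fizzbuzz"
        if n % 3 == 0:
            return "fizz"
        if n % 5 == 0:
            return "buzz"
        return str(n)


    class FizzbuzzRequirementTests(unittest.TestCase):
        def test_multiple_of_three_is_fizz(self):
            self.assertEqual(fizzbuzz(9), "fizz")

        def test_multiple_of_five_is_buzz(self):
            self.assertEqual(fizzbuzz(10), "buzz")

        def test_other_numbers_pass_through(self):
            self.assertEqual(fizzbuzz(7), "7")


    if __name__ == "__main__":
        unittest.main()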

In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification is: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" On the other hand, software validation is: "Was X what we should have built? Does X meet the high-level requirements?"

White-box testing is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality. In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determine the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven, that is, driven exclusively by agreed specifications of how each component of software is required to behave, white-box test techniques can accomplish assessment for unimplemented or missing requirements.
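
A minimal white-box sketch: the inputs below are chosen by inspecting the branch structure of a hypothetical function so that each of its three paths is exercised at least once.

    def classify_triangle(a, b, c):
        if a == b == c:
            return "equilateral"      # path 1
        if a == b or b == c or a == c:
            return "isosceles"        # path 2
        return "scalene"              # path 3


    # One input per path, derived from the code itself rather than from the specification.
    assert classify_triangle(2, 2, 2) == "equilateral"
    assert classify_triangle(2, 2, 3) == "isosceles"
    assert classify_triangle(2, 3, 4) == "scalene"
    print("all three paths exercised")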

Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976.
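
A worked example with a hypothetical function, using the decision-point form of the metric, M = D + 1, which for structured code agrees with the control-flow-graph form M = E - N + 2P.

    def ship_order(in_stock, paid, express):
        if not in_stock:       # decision 1
            return "backorder"
        if not paid:           # decision 2
            return "await payment"
        if express:            # decision 3
            return "ship today"
        return "ship normally"


    # Three decision points, so M = 3 + 1 = 4: there are four linearly
    # independent paths, and at least four test cases are needed to cover them.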

In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing.
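
A minimal sketch of test cases recorded as explicit input/expected-result specifications, so the same battery can be re-run against successive versions. The test object here is Python's built-in int(x, base); the case identifiers are invented.

    from dataclasses import dataclass


    @dataclass
    class TestCase:
        test_id: str
        inputs: tuple    # inputs supplied to the test object
        expected: int    # expected result from the specification


    cases = [
        TestCase("TC-01", ("ff", 16), 255),
        TestCase("TC-02", ("10", 2), 2),
        TestCase("TC-03", ("777", 8), 511),
    ]

    for case in cases:
        actual = int(*case.inputs)
        print(case.test_id, "PASS" if actual == case.expected else "FAIL")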

In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.
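
A minimal mutation-fuzzing sketch: a valid seed is randomly mutated into semi-valid inputs and fed to a parser, while the harness watches for anything other than the expected rejection. The standard library's json.loads is used only as a stand-in target.

    import json
    import random

    SEED_INPUT = '{"name": "test", "count": 3}'
    EXPECTED = (json.JSONDecodeError,)  # rejections the parser is allowed to raise


    def mutate(data):
        """Replace, insert or delete one random character to get a semi-valid input."""
        pos = random.randrange(len(data))
        roll = random.random()
        if roll < 0.33:
            return data[:pos] + chr(random.randrange(32, 127)) + data[pos + 1:]
        if roll < 0.66:
            return data[:pos] + chr(random.randrange(32, 127)) + data[pos:]
        return data[:pos] + data[pos + 1:]


    random.seed(0)
    for _ in range(1000):
        candidate = mutate(SEED_INPUT)
        try:
            json.loads(candidate)
        except EXPECTED:
            pass                      # ordinary rejection of an invalid input
        except Exception as exc:      # anything else would be a finding
            print("unexpected failure on:", candidate, type(exc).__name__)
    print("fuzzing run complete")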

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.

In software engineering, continuous integration (CI) is the practice of merging all developers' working copies to a shared mainline several times a day. Grady Booch first proposed the term CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day – perhaps as many as tens of times per day.

In object-oriented programming, mock objects are simulated objects that mimic the behaviour of real objects in controlled ways, most often as part of a software testing initiative. A programmer typically creates a mock object to test the behaviour of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behaviour of a human in vehicle impacts. The technique is also applicable in generic programming.
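
A minimal sketch using Python's standard unittest.mock: the mock stands in for a hypothetical payment gateway so that its behaviour can be both controlled and verified.

    from unittest.mock import Mock


    def charge_order(gateway, amount):
        """Function under test: relies on a gateway object exposing a charge() method."""
        return "paid" if gateway.charge(amount) else "declined"


    # The mock simulates a gateway that declines the charge.
    gateway = Mock()
    gateway.charge.return_value = False

    assert charge_order(gateway, 25.0) == "declined"
    gateway.charge.assert_called_once_with(25.0)  # the interaction itself can be verified
    print("mock-based test passed")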

Software project management is the art and science of planning and leading software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored and controlled.

Software testability is the degree to which a software artifact supports testing in a given test context. If the testability of the software artifact is high, then finding faults in the system by means of testing is easier.

Database testing usually consists of a layered process, including the user interface (UI) layer, the business layer, the data access layer and the database itself. The UI layer deals with the interface design of the database, while the business layer includes databases supporting business strategies.

Random testing is a black-box software testing technique in which programs are tested by generating random, independent inputs. The results are compared against software specifications to determine whether the test output passes or fails. In the absence of specifications, the language's exceptions are used: if an exception arises during test execution, it indicates a fault in the program. Random testing is also used as a way to avoid biased testing.
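
A minimal random-testing sketch: random, independent inputs are generated and the output is checked against a specification-derived property. The target here is the standard library's sorted(); the input sizes and bounds are arbitrary choices.

    import random
    from collections import Counter

    random.seed(42)
    for trial in range(500):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        ys = sorted(xs)
        ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
        permutation = Counter(ys) == Counter(xs)
        assert ordered and permutation, f"specification violated on trial {trial}: {xs}"
    print("500 random trials passed")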

This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to software quality assurance (more widely known colloquially as quality assurance) and the general application of the test method.

In software engineering, test design technique is a procedure for determining test conditions, test cases and test data during software testing.

References

  1. Kaner, Cem; Fiedler, Rebecca L. (July 2016). Test Design: A BBST Workbook.
  2. Forgács, István; Kovács, Attila (August 2019). Practical Test Design: Selection of traditional and automated test design techniques.
  3. Copeland, Lee (January 2004). A Practitioner's Guide to Software Test Design.
  4. Forgács, István; Kovács, Attila (2021). Paradigm Shift in Software Testing. MeasureIT. ISBN 978-615-01-2781-1.