In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. [1] Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing. [2]
In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test. [3] If a requirement has sub-requirements, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted.
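As a minimal sketch of such a pair, the Python example below (using the standard unittest module) attaches one positive and one negative test case to a single hypothetical requirement, REQ-001: "a username consists of 3 to 20 alphanumeric characters". The requirement, the validate_username function, and its behavior are assumptions made for illustration, not drawn from any cited source.

    import re
    import unittest

    def validate_username(name):
        # Hypothetical unit under test: accept 3-20 alphanumeric characters.
        return re.fullmatch(r"[A-Za-z0-9]{3,20}", name) is not None

    class UsernameRequirementTests(unittest.TestCase):
        # REQ-001 (hypothetical): usernames are 3-20 alphanumeric characters.

        def test_req_001_positive_valid_username_is_accepted(self):
            self.assertTrue(validate_username("alice42"))

        def test_req_001_negative_too_short_username_is_rejected(self):
            self.assertFalse(validate_username("ab"))

    if __name__ == "__main__":
        unittest.main()

Embedding the requirement identifier in the test names is one simple way of preserving the requirement-to-test link that a traceability matrix records.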
A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. [4] The known input should test a precondition and the expected output should test a postcondition.
For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities and results are reported after the tests have been run.
In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram of a testing environment or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenario tests usually differ from test cases in that test cases are single steps while scenarios cover a number of steps. [5] [6]
A test case usually contains a single step or a sequence of steps to test the correct behavior/functionality and features of an application. An expected result or expected outcome is usually given.
Additional information, such as a test case identifier and references to the related requirements, may also be included. [7]
Larger test cases may also contain prerequisite states or steps, and descriptions. [7]
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other common repository.
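Whichever repository is used, the elements described above can be kept together as one structured record per test case. The Python sketch below is one illustrative layout; the dataclass fields and the sample values are assumptions for the example rather than a prescribed format.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TestCase:
        case_id: str                         # identifier used for traceability
        description: str                     # functionality to be tested
        preparation: str                     # setup required before the test can run
        steps: List[str]                     # single step or ordered sequence of steps
        expected_result: str                 # outcome worked out before execution
        actual_result: Optional[str] = None  # place for the result once the test is run

    login_case = TestCase(
        case_id="TC-007",
        description="Login succeeds with valid credentials",
        preparation="Create user 'alice' with a known password",  # hypothetical setup
        steps=["Open the login page", "Enter the credentials", "Submit the form"],
        expected_result="The user is redirected to the dashboard",
    )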
In a database system, it may also be possible to see past test results, who generated them, and the system configuration used to generate those results. These past results would usually be stored in a separate table.
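A minimal relational layout along those lines might keep the test cases and their historical results in separate tables, as in the SQLite sketch below; the table and column names are assumptions chosen for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway database for the example
    conn.executescript("""
    CREATE TABLE test_case (
        case_id         TEXT PRIMARY KEY,
        description     TEXT NOT NULL,
        expected_result TEXT NOT NULL
    );
    CREATE TABLE test_result (           -- past results kept in a separate table
        result_id     INTEGER PRIMARY KEY AUTOINCREMENT,
        case_id       TEXT NOT NULL REFERENCES test_case(case_id),
        executed_by   TEXT NOT NULL,     -- who generated the result
        configuration TEXT NOT NULL,     -- system configuration used
        outcome       TEXT NOT NULL,     -- e.g. 'pass' or 'fail'
        executed_at   TEXT NOT NULL      -- when the run took place
    );
    """)
    conn.close()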
Test suites often also contain supplementary information, such as an overall test summary and the configuration under which the tests are run. [8]
Besides a description of the functionality to be tested and the preparation required to ensure that the test can be conducted, the most time-consuming part of working with test cases is creating them and modifying them when the system changes.
Under special circumstances, there may be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when the performance numbers for a new product are being determined: the first test run is taken as the baseline for subsequent test and product release cycles.
Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or clients of the system to ensure that the developed system meets the specified requirements or the contract. [9] [10] User acceptance tests are distinguished by the inclusion of happy path or positive test cases to the almost complete exclusion of negative test cases. [11]
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests.
Software testing is the act of checking whether software satisfies expectations.
Unit testing, a.k.a. component or module testing, is a form of software testing by which isolated source code is tested to validate expected behavior.
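As a sketch, a unit test exercises one isolated component and checks its expected behavior; the Counter class below is a stand-in component invented for the example, tested with the standard unittest module.

    import unittest

    class Counter:
        # Isolated unit under test, invented for this example.
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1

    class CounterTests(unittest.TestCase):
        def test_a_new_counter_starts_at_zero(self):
            self.assertEqual(Counter().value, 0)

        def test_increment_adds_one(self):
            counter = Counter()
            counter.increment()
            self.assertEqual(counter.value, 1)

    if __name__ == "__main__":
        unittest.main()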
Black-box testing, sometimes referred to as specification-based testing, is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance. Black-box testing is also used as a method in penetration testing, where an ethical hacker simulates an external hacking or cyber warfare attack with no knowledge of the system being attacked.
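In black-box terms, test cases are derived from the stated specification alone, for example by picking boundary values, without looking at the implementation. The sketch below assumes a hypothetical function classify_age specified to return "minor" for ages below 18 and "adult" otherwise; the cases exercise the boundary and would be equally valid for any implementation of that specification.

    def classify_age(age):
        # The implementation is irrelevant to black-box test design;
        # only the specification matters to the tester.
        return "minor" if age < 18 else "adult"

    # Boundary-value test cases derived purely from the specification.
    specification_cases = [
        (17, "minor"),   # just below the boundary
        (18, "adult"),   # on the boundary
        (19, "adult"),   # just above the boundary
    ]

    for age, expected in specification_cases:
        assert classify_age(age) == expected, (age, expected)
    print("all black-box cases passed")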
In software and systems engineering, the phrase use case is a polyseme with two senses: a usage scenario for a piece of software, and a potential scenario in which a system receives an external request (such as user input) and responds to it.
Test-driven development (TDD) is a way of writing code that involves writing an automated unit-level test case that fails, then writing just enough code to make the test pass, then refactoring both the test code and the production code, then repeating with another new test case.
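A minimal sketch of one such cycle, using an assumed is_even function as the example: the test is written first (running it before the function exists fails with a NameError), then just enough code is written to make it pass, and then both pieces can be refactored before the cycle repeats.

    # Step 1 (red): write the failing test first. Called before is_even exists,
    # it raises NameError, i.e. the test fails.
    def test_is_even():
        assert is_even(4) is True
        assert is_even(5) is False

    # Step 2 (green): write just enough production code to make the test pass.
    def is_even(n):
        return n % 2 == 0

    # Step 3 (refactor): tidy both test and production code while keeping the
    # test green, then repeat with another new test case.
    test_is_even()
    print("test passes")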
White-box testing is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality. In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determine the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven, that is, driven exclusively by agreed specifications of how each component of software is required to behave, white-box test techniques can accomplish assessment for unimplemented or missing requirements.
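As a sketch, white-box test design chooses inputs by inspecting the code so that specific paths are exercised; for the assumed function below, two test cases are enough to execute each branch of the if statement at least once.

    def absolute_value(x):
        # Two paths through this unit: the if-branch and the else-branch.
        if x < 0:
            return -x
        else:
            return x

    # Path coverage: one input per branch, chosen by reading the code.
    assert absolute_value(-5) == 5   # exercises the if-branch (x < 0)
    assert absolute_value(3) == 3    # exercises the else-branch (x >= 0)
    print("both paths exercised")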
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
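A minimal sketch of that separation: driver code distinct from the software under test executes a set of test cases and compares actual outcomes with predicted ones. The to_celsius function and the expected values are assumptions made for the example.

    def to_celsius(fahrenheit):
        # Software under test (hypothetical).
        return (fahrenheit - 32) * 5 / 9

    # The automation harness is separate from the code being tested.
    cases = [
        {"input": 32, "expected": 0.0},
        {"input": 212, "expected": 100.0},
    ]

    failures = 0
    for case in cases:
        actual = to_celsius(case["input"])
        if abs(actual - case["expected"]) > 1e-9:   # compare actual with predicted
            failures += 1
            print("FAIL:", case["input"], "->", actual, "expected", case["expected"])

    print(len(cases) - failures, "passed,", failures, "failed")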
A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.
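For example, a short scripted sequence might drive a command-line program and check the observed behavior against the expected one; the sketch below uses the Python interpreter itself as the system under test, an arbitrary choice made so the example is self-contained.

    import subprocess
    import sys

    # Scripted instruction performed on the system under test (the interpreter),
    # followed by a check that it behaved as expected.
    result = subprocess.run(
        [sys.executable, "-c", "print(2 + 2)"],
        capture_output=True, text=True, check=True,
    )
    assert result.stdout.strip() == "4", result.stdout
    print("test script passed")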
In computing, a scenario is a narrative of foreseeable interactions of user roles and the technical system, which usually includes computer hardware and software.
FitNesse is a web server, a wiki and an automated testing tool for software. It is based on Ward Cunningham's Framework for Integrated Test and is designed to support acceptance testing rather than unit testing in that it facilitates detailed readable description of system function.
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
Behavior-driven development (BDD) involves naming software tests using domain language to describe the behavior of the code.
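As a sketch of that naming style in plain Python (no BDD framework assumed), the test names and the given/when/then comments below use the domain language of a hypothetical shopping cart rather than implementation terms.

    class Cart:
        # Hypothetical domain object used only for this example.
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

        def total_items(self):
            return len(self.items)

    def test_a_new_cart_contains_no_items():
        cart = Cart()                     # Given a new cart
        assert cart.total_items() == 0    # Then it contains no items

    def test_adding_a_book_puts_one_item_in_the_cart():
        cart = Cart()                     # Given a new cart
        cart.add("book")                  # When a book is added
        assert cart.total_items() == 1    # Then the cart holds one item

    test_a_new_cart_contains_no_items()
    test_adding_a_book_puts_one_item_in_the_cart()
    print("behaviour examples pass")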
In software development, functional testing is a form of software system testing that verifies whether software matches its design.
In software development, the V-model represents a development process that may be considered an extension of the waterfall model and is an example of the more general V-model. Instead of moving down linearly, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction, respectively.
There is considerable variety among software testing writers and consultants about what constitutes responsible software testing. Proponents of a context-driven approach consider much of the writing about software testing to be doctrine, while others believe this contradicts the IEEE 829 documentation standard.
Scenario testing is a software testing activity that uses scenarios: hypothetical stories to help the tester work through a complex problem or test system. The ideal scenario test is a credible, complex, compelling or motivating story; the outcome of which is easy to evaluate. These tests are usually different from test cases in that test cases are single steps whereas scenarios cover a number of steps.
In computer programming and software testing, smoke testing is preliminary testing or sanity testing to reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset of test cases that cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly. When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called a pretest or an intake test. Alternatively, it is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. In the DevOps paradigm, use of a build verification test step is one hallmark of the continuous integration maturity stage.
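One simple way to realize that subset is to tag a few test cases as smoke tests and run only those against each new build before deeper testing; in the Python sketch below, the tags and the trivial placeholder checks are assumptions for illustration.

    # Each entry is (name, tags, check); only some carry the "smoke" tag.
    def service_starts():
        return True   # placeholder for a real start-up check

    def homepage_loads():
        return True   # placeholder for a real request against the build

    def report_layout_is_correct():
        return True   # finer-grained check, not part of the smoke subset

    all_tests = [
        ("service starts", {"smoke"}, service_starts),
        ("homepage loads", {"smoke"}, homepage_loads),
        ("report layout is correct", {"regression"}, report_layout_is_correct),
    ]

    # Build verification: run only the smoke subset for this build.
    for name, tags, check in all_tests:
        if "smoke" in tags:
            print(name, "->", "pass" if check() else "fail")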
Acceptance test–driven development (ATDD) is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example (SBE), behavior-driven development (BDD), example-driven development (EDD), and support-driven development also called story test–driven development (SDD). All these processes aid developers and testers in understanding the customer's needs prior to implementation and allow customers to be able to converse in their own domain language.
This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to software quality assurance and general application of the test method.