Test case

In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. [1] Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing. [2]
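Expressed in code, the elements of this definition map naturally onto an automated test. The following is a minimal sketch in Python's unittest framework; the function under test (add_vat) and the 20% tax rate are hypothetical, chosen only to show the structure of a single test case with its input, execution step, and expected result defined before the test is run.

    import unittest

    def add_vat(net_price):
        """System under test (hypothetical): apply a 20% value-added tax."""
        return round(net_price * 1.20, 2)

    class TestAddVat(unittest.TestCase):
        def test_adds_20_percent_vat(self):
            # Objective: verify compliance with the (hypothetical)
            # requirement "gross price = net price + 20% VAT".
            known_input = 100.00       # input defined in advance
            expected_output = 120.00   # expected result defined in advance
            self.assertEqual(add_vat(known_input), expected_output)

    if __name__ == "__main__":
        unittest.main()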

Formal test cases

In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement: one positive test and one negative test. [3] If a requirement has sub-requirements, each sub-requirement must have at least two test cases. Keeping track of the link between the requirement and the test is frequently done using a traceability matrix. Written test cases should include a description of the functionality to be tested, and the preparation required to ensure that the test can be conducted.

A formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. [4] The known input should test a precondition and the expected output should test a postcondition.
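A minimal sketch of this idea in Python, continuing the unittest style used above: one positive and one negative test case for a single hypothetical requirement ("usernames must be 3 to 20 characters long"); the validate_username function exists only for illustration.

    import unittest

    def validate_username(name):
        """System under test (hypothetical): accept names of 3-20 characters."""
        if not 3 <= len(name) <= 20:
            raise ValueError("username must be 3-20 characters")
        return name

    class TestUsernameRequirement(unittest.TestCase):
        def test_positive_valid_username_is_accepted(self):
            # Precondition: input satisfies the requirement.
            # Postcondition: the value is returned unchanged.
            self.assertEqual(validate_username("alice"), "alice")

        def test_negative_short_username_is_rejected(self):
            # Precondition: input violates the requirement.
            # Postcondition: the system signals an error.
            with self.assertRaises(ValueError):
                validate_username("ab")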

Informal test cases

For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class. In some schools of testing, test cases are not written at all but the activities and results are reported after the tests have been run.

In scenario testing, hypothetical stories are used to help the tester think through a complex problem or system. These scenarios are usually not written down in any detail. They can be as simple as a diagram for a testing environment or they could be a description written in prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps. [5] [6]

Typical written test case format

A test case usually contains a single step or a sequence of steps to test the correct behaviour/functionality and features of an application. An expected result or expected outcome is usually given.

Additional information that may be included: test case ID, test case description, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. [7]

Larger test cases may also contain prerequisite states or steps, and descriptions. [7]

A written test case should also contain a place for the actual result.

These steps can be stored in a word processor document, spreadsheet, database or other common repository.
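As one illustration of how such a record might be held in a common repository, the sketch below models the typical fields as a Python dataclass; the field names are illustrative rather than any standard schema.

    from dataclasses import dataclass

    @dataclass
    class WrittenTestCase:
        case_id: str                   # unique identifier
        description: str               # functionality to be tested
        prerequisites: list[str]       # preparation / prerequisite states
        steps: list[str]               # single step or sequence of steps
        expected_result: str           # defined before execution
        actual_result: str = ""        # filled in when the test is run
        related_requirement: str = ""  # link for the traceability matrix

    case = WrittenTestCase(
        case_id="TC-001",
        description="Login with valid credentials",
        prerequisites=["Test user account exists"],
        steps=["Open login page", "Enter valid credentials", "Submit"],
        expected_result="User is redirected to the dashboard",
        related_requirement="REQ-AUTH-01",
    )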

In a database system, past test results may also be recorded, along with who generated the results and the system configuration used to generate them. These past results would usually be stored in a separate table.
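A minimal sketch of such a layout, assuming SQLite as the database: test cases and their past results are kept in separate tables, and each result records who generated it and the system configuration used. All table and column names are illustrative.

    import sqlite3

    conn = sqlite3.connect("tests.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS test_case (
        case_id     TEXT PRIMARY KEY,
        description TEXT,
        expected    TEXT
    );
    CREATE TABLE IF NOT EXISTS test_result (  -- past results, kept separately
        result_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        case_id     TEXT REFERENCES test_case(case_id),
        run_at      TEXT,                     -- when the result was generated
        run_by      TEXT,                     -- who generated the result
        config      TEXT,                     -- system configuration used
        actual      TEXT,
        passed      INTEGER
    );
    """)
    conn.commit()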

Test suites often also contain a test summary and configuration information. [8]

Besides a description of the functionality to be tested and the preparation required to ensure that the test can be conducted, the most time-consuming aspect of test cases is creating them and modifying them when the system changes.

Under special circumstances, there may be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining the performance numbers of a new product: the first test result is taken as the baseline for subsequent tests and product release cycles.

Acceptance tests, which use a variation of a written test case, are commonly performed by a group of end-users or clients of the system to ensure that the developed system meets the specified requirements or the contract. [9] [10] User acceptance tests are distinguished by their reliance on happy path or positive test cases, to the almost complete exclusion of negative test cases. [11]

Related Research Articles

<span class="mw-page-title-main">Acceptance testing</span> Test to determine if the requirements of a specification or contract are met

In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests.

Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not necessarily limited to, static techniques such as reviews and dynamic techniques such as executing a program or application with the intent of finding failures.

Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance. It is sometimes referred to as specification-based testing.

In software and systems engineering, the phrase use case is a polyseme with two senses:

  1. A usage scenario for a piece of software; often used in the plural to suggest situations where a piece of software may be useful.
  2. A potential scenario in which a system receives an external request and responds to it.
<span class="mw-page-title-main">Systems development life cycle</span> Systems engineering terms

In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.

In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
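A toy sketch of this idea in Python: a small driver, separate from the (hypothetical) system under test, executes each case and compares the actual outcome with the predicted one.

    def slugify(title):
        """System under test (hypothetical): turn a title into a URL slug."""
        return title.strip().lower().replace(" ", "-")

    # Each tuple is one automated test case: (input, predicted outcome).
    cases = [
        ("Hello World", "hello-world"),
        ("  Trim Me  ", "trim-me"),
        ("already-a-slug", "already-a-slug"),
    ]

    for given, predicted in cases:
        actual = slugify(given)
        status = "PASS" if actual == predicted else "FAIL"
        print(f"{status}: slugify({given!r}) -> {actual!r} (expected {predicted!r})")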

A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.

Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."

In software engineering, behavior-driven development (BDD) is a software development process that encourages collaboration among developers, quality assurance experts, and customer representatives in a software project. It encourages teams to use conversation and concrete examples to formalize a shared understanding of how the application should behave. It emerged from test-driven development (TDD) and can work alongside an agile software development process. Behavior-driven development combines the general techniques and principles of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process to collaborate on software development.

In software development, functional testing is a quality assurance (QA) process and a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output, and internal program structure is rarely considered. Functional software testing is conducted to evaluate the compliance of a system or component with specified functional requirements. Functional testing usually describes what the system does.

Test data plays a crucial role in software development by providing inputs that are used to verify the correctness, performance, and reliability of software systems. Test data encompasses various types, such as positive and negative scenarios, edge cases, and realistic user scenarios, and it aims to exercise different aspects of the software to uncover bugs and validate its behavior. By designing and executing test cases with appropriate test data, developers can identify and rectify defects, improve the quality of the software, and ensure it meets the specified requirements. Moreover, test data can also be used for regression testing to validate that new code changes or enhancements do not introduce any unintended side effects or break existing functionalities. Overall, the effective utilization of test data in software development significantly contributes to the production of reliable and robust software systems.
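The sketch below illustrates these categories of test data in Python for a hypothetical discount function: positive scenarios, boundary (edge-case) values at the limits of the valid range, and negative scenarios whose inputs must be rejected.

    def apply_discount(price, percent):
        """System under test (hypothetical): reduce price by a 0-100% discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    test_data = [
        # (price, percent, expected) -- positive scenarios
        (100.0, 10, 90.0),
        (80.0, 25, 60.0),
        # edge cases: boundaries of the valid range
        (50.0, 0, 50.0),
        (50.0, 100, 0.0),
    ]
    negative_data = [(100.0, -1), (100.0, 101)]  # invalid inputs must be rejected

    for price, percent, expected in test_data:
        assert apply_discount(price, percent) == expected

    for price, percent in negative_data:
        try:
            apply_discount(price, percent)
            raise AssertionError("expected ValueError")
        except ValueError:
            pass  # rejection is the expected outcome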

<span class="mw-page-title-main">Functional specification</span>

A functional specification in systems engineering and software development is a document that specifies the functions that a system or component must perform.

Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user, whereby they use most of the application's features to ensure correct behaviour. To guarantee completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.

<span class="mw-page-title-main">V-model (software development)</span> Software development methodology

In software development, the V-model represents a development process that may be considered an extension of the waterfall model, and is an example of the more general V-model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction, respectively.

Gray-box testing is a combination of white-box testing and black-box testing. The aim of this testing is to search for the defects, if any, due to improper structure or improper usage of applications.

Scenario testing is a software testing activity that uses scenarios: hypothetical stories to help the tester work through a complex problem or test system. The ideal scenario test is a credible, complex, compelling or motivating story; the outcome of which is easy to evaluate. These tests are usually different from test cases in that test cases are single steps whereas scenarios cover a number of steps.

<span class="mw-page-title-main">TestLink</span> Web-based test management software

TestLink is a web-based test management system that facilitates software quality assurance. It is developed and maintained by Teamst. The platform offers support for test cases, test suites, test plans, test projects and user management, as well as various reports and statistics.

In computer programming and software testing, smoke testing is preliminary testing or sanity testing to reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset of test cases that cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly. When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called a pretest or an intake test. Alternatively, it is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. In the DevOps paradigm, use of a build verification test step is one hallmark of the continuous integration maturity stage.
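A toy Python sketch of a smoke subset: test cases tagged "smoke" (covering the most important functionality) are selected from the full battery and run first to decide whether the build warrants further testing. The tags and cases are illustrative.

    # Full battery: (id, description, tags, check) -- checks are stand-ins
    # for real test procedures and simply report pass/fail here.
    full_suite = [
        ("TC-001", "Application starts",       {"smoke"}, lambda: True),
        ("TC-002", "User can log in",          {"smoke"}, lambda: True),
        ("TC-014", "Report footer formatting", set(),     lambda: True),
    ]

    def run(suite, tag=None):
        selected = [c for c in suite if tag is None or tag in c[2]]
        failures = [cid for cid, _, _, check in selected if not check()]
        return len(selected), failures

    ran, failures = run(full_suite, tag="smoke")  # pretest / intake test
    print(f"Smoke: ran {ran} cases, {len(failures)} failures")
    # Only if the smoke subset passes is the build released to the test
    # team for further, more fine-grained testing.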

Acceptance test–driven development (ATDD) is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example (SBE), behavior-driven development (BDD), example-driven development (EDD), and support-driven development also called story test–driven development (SDD). All these processes aid developers and testers in understanding the customer's needs prior to implementation and allow customers to be able to converse in their own domain language.

This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to software quality assurance (more widely known colloquially as quality assurance) and general application of the test method.

References

  1. Systems and software engineering -- Vocabulary. ISO/IEC/IEEE 24765:2010(E). 2010-12-01. pp. 1–418. doi:10.1109/IEEESTD.2010.5733835. ISBN 978-0-7381-6205-8.
  2. Kaner, Cem (May 2003). "What Is a Good Test Case?" (PDF). STAR East: 2.
  3. "Writing Test Rules to Verify Stakeholder Requirements". StickyMinds.
  4. Beizer, Boris (May 22, 1995). Black Box Testing. New York: Wiley. p. 3. ISBN 9780471120940.
  5. "An Introduction to Scenario Testing" (PDF). Cem Kaner. Retrieved 2009-05-07.
  6. Crispin, Lisa; Gregory, Janet (2009). Agile Testing: A Practical Guide for Testers and Agile Teams. Addison-Wesley. pp. 192–5. ISBN 978-81-317-3068-3.
  7. Liu, Juan (2014). "Equilibrium of Decision-Making Process in Financial Market". 2014 International Conference on Computational Science and Computational Intelligence. pp. 113–121. doi:10.1109/CSCI.2014.104. ISBN 9781605951676. S2CID 15204091. Retrieved 2019-10-22.
  8. Kaner, Cem; Falk, Jack; Nguyen, Hung Q. (1993). Testing Computer Software (2nd ed.). Boston: Thomson Computer Press. pp. 123–4. ISBN 1-85032-847-1.
  9. Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step-by-step Guide. BCS Learning & Development Limited. ISBN 9781780171678.
  10. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 978-0-470-40415-7.
  11. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. pp. Chapter 2. ISBN 9780132702621.