Software testing controversies

There is considerable disagreement among software testing writers and consultants about what constitutes responsible software testing. Proponents of a context-driven approach [1] consider much of the writing about software testing to be rigid doctrine, while others believe that this view contradicts the IEEE 829 documentation standard. [2]

Best practices

Proponents of the context-driven approach believe that there are no best practices of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation. James Marcus Bach wrote "...there is no practice that is better than all other possible practices, regardless of the context." [3] However, some testing practitioners do not see an issue with the concept of "best practices" and do not believe that term implies that a practice is universally applicable. [4]

Types of software testing

Agile vs. traditional

Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner. [5] Instead of assuming that testers have full access to source code and complete specifications, these writers, including Kaner and James Bach, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model (CMM). The agile testing movement (which includes, but is not limited to, forms of testing practiced on agile development projects) is popular mainly in commercial circles, whereas the CMM was embraced by government and military software providers.

However, framing "maturity models" such as the CMM as gaining ground against, or opposing, agile testing may be misleading: the agile movement is a way of working, while the CMM is a process-improvement idea.

Another point of view must also be considered: the operational culture of an organization. While it may be true that testers must be able to work in a world of uncertainty, it is also true that their flexibility must have direction. In many cases, test cultures are self-directed, and fruitless, unproductive work can result. Furthermore, positive evidence of defects may indicate either that the tip of a much larger problem has been found or that all possibilities have been exhausted. A framework acts as a test of the testing itself: it provides a boundary against which the capacity of the work can be measured (validated). Both sides have argued, and will continue to argue, the virtues of their approach, but the proof lies in each assessment of delivery quality. It does little good to test systematically if the focus is too narrow; on the other hand, finding many errors does not show that agile methods were the driving force, since the tester may simply have stumbled upon an obviously poor piece of work.

Exploratory vs. scripted

Exploratory testing means simultaneous test design and test execution with an emphasis on learning. Scripted testing means that learning and test design happen prior to test execution, and quite often the learning has to be done again during test execution. Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood. Some writers consider it a primary and essential practice. Structured exploratory testing is a compromise when the testers are familiar with the software. A vague test plan, known as a test charter, is written up, describing what functionalities need to be tested but not how, allowing the individual testers to choose the method and steps of testing.
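
For illustration, a test charter might be captured as a simple structured record; the field names below are hypothetical and not drawn from any particular tool:

```python
# A minimal sketch of a test charter as a structured record. The schema
# is illustrative; session-based test management tools define their own.
charter = {
    "mission": "Explore the checkout flow for pricing and rounding errors",
    "areas": ["cart totals", "discount codes", "currency conversion"],
    "timebox_minutes": 90,
    # Note that "how" is deliberately absent: the tester chooses the
    # method and steps, recording bugs and open questions along the way.
}
```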

There are two main disadvantages associated with a primarily exploratory testing approach. The first is that it offers little opportunity to prevent defects: designing tests in advance serves as a form of structured static testing that often reveals problems in system requirements and design before any code is executed. The second is that, even with test charters, demonstrating test coverage and achieving repeatability of tests are difficult with a purely exploratory approach. For these reasons, a blended approach of scripted and exploratory testing is often used, reaping the benefits of each while mitigating its disadvantages.

Manual vs. automated

Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. [6] Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which a problem in the software can be recognized). Such tools have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software.
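
As a sketch of the oracle idea, consider a predicate that decides, without human judgment, whether an observed response indicates a problem; the status code and response field below are illustrative assumptions, not any particular system's API:

```python
# A minimal automated test oracle: a mechanical check that recognizes
# a problem without a human inspecting each result.
def oracle(status: int, body: dict) -> bool:
    """Return True when the observed response looks correct."""
    return status == 200 and "order_id" in body

# In a load test, the same oracle can be applied to thousands of
# concurrent responses; any False result is flagged for inspection.
assert oracle(200, {"order_id": 42})
assert not oracle(500, {})
```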

The success of automated software testing depends on complete and comprehensive test planning. Software development strategies such as test-driven development are highly compatible with the idea of devoting a large part of an organization's testing resources to automated testing. Many large software organizations perform automated testing. Some have developed their own automated testing environments specifically for internal development, and not for resale.
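
To illustrate the test-first rhythm that makes such strategies automation-friendly, here is a minimal sketch in pytest style; the slugify function is a hypothetical example, not taken from the sources above:

```python
# Test-driven development in miniature: the test is written first and
# initially fails, then the implementation is written to make it pass.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

# Implementation added afterward, driven by the failing test above.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")
```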

Software design vs. software implementation

If software testing is understood as gathering information about a product or program for stakeholders, then it does not follow that there is a choice between testing the implementation and testing the design; framing it as an either/or choice is a faulty assumption. Ideally, software testers should not be limited to testing the software implementation but should also test the software design. With this assumption, the role and involvement of testers changes dramatically, as does the test cycle: to test the design, testers review requirement and design specifications together with the designer and programmer, potentially helping to identify bugs earlier in software development.

Oversight

One principle in software testing is summed up by the classical Latin question posed by Juvenal: Quis custodiet ipsos custodes? ("Who watches the watchmen?"). It is also referred to informally as the "Heisenbug" concept, a name that stems from a common conflation of Heisenberg's uncertainty principle with the observer effect. The idea is that any form of observation is also an interaction: the act of testing can itself affect that which is being tested. [7]
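
A small illustration of observation as interaction, using timing as the observed property; the effect shown here is usually modest, but it is nonzero by construction:

```python
import time

def operation():
    time.sleep(0.01)  # stand-in for the code under test

# Uninstrumented run: one measurement around the whole loop.
start = time.perf_counter()
for _ in range(100):
    operation()
plain = time.perf_counter() - start

# Instrumented run: per-call probes added for closer observation.
# The probes themselves take time, so the act of measuring disturbs
# the quantity being measured.
samples = []
start = time.perf_counter()
for _ in range(100):
    t0 = time.perf_counter()
    operation()
    samples.append(time.perf_counter() - t0)
instrumented = time.perf_counter() - start

print(f"plain: {plain:.4f}s  instrumented: {instrumented:.4f}s")
```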

In practical terms, the test engineer is testing software (and sometimes hardware or firmware) with other software (and hardware and firmware). The process can fail in ways that are not the result of defects in the target but rather result from defects in (or indeed intended features of) the testing tool.

Metrics are being developed to measure the effectiveness of testing. One method is to analyze code coverage. Although using coverage as a quality metric is highly controversial, everyone can agree on which areas of the code are not covered at all, and effort can then be directed at improving coverage in those areas.
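
As a sketch of coverage analysis in practice, the widely used coverage.py package can report which lines were never executed; the module and entry point below are hypothetical placeholders:

```python
import coverage  # assumes the coverage.py package is installed

cov = coverage.Coverage()
cov.start()

import mymodule            # hypothetical module under test
mymodule.run_checks()      # hypothetical entry point the tests exercise

cov.stop()
cov.save()
# Report lines that never ran: the uncovered areas are the part of the
# metric that practitioners most readily agree on.
cov.report(show_missing=True)
```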

Bugs can also be seeded into the code on purpose, and the number of bugs that have not been found can be predicted from the percentage of intentionally placed bugs that were found. The weakness of this method is that it assumes the intentional bugs are of the same type as the unintentional ones.
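
The underlying estimate is a capture-recapture style calculation: if testing recovered a given fraction of the seeded bugs, assume it found naturally occurring bugs at the same rate. A minimal sketch, with illustrative numbers:

```python
def estimate_remaining_defects(seeded_total: int, seeded_found: int,
                               real_found: int) -> float:
    """Fault-seeding estimate: total_real ~= real_found * seeded_total
    / seeded_found, so the latent remainder is total_real - real_found."""
    if seeded_found == 0:
        raise ValueError("no seeded bugs found; the estimate is undefined")
    total_real = real_found * seeded_total / seeded_found
    return total_real - real_found

# Example: 20 bugs seeded, 15 recovered, 45 genuine bugs found so far
# suggests about 60 genuine bugs in total, i.e. roughly 15 still latent.
print(estimate_remaining_defects(20, 15, 45))  # -> 15.0
```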

Finally, there is the analysis of historical find-rates. By measuring how many bugs are found and comparing the counts to numbers predicted from past experience with similar projects, certain inferences about the effectiveness of testing can be drawn. While not an absolute measurement of quality, if a project is halfway complete and no defects have been found, then the procedures being employed by QA may need to change.
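
A hypothetical heuristic for that comparison; the tolerance threshold is an assumption chosen for illustration, not an established standard:

```python
def find_rate_alarm(found_so_far: int, predicted_total: int,
                    fraction_complete: float, tolerance: float = 0.5) -> bool:
    """Flag the QA process for review when defects found so far deviate
    strongly from the count predicted from similar past projects."""
    expected = predicted_total * fraction_complete
    if expected <= 0:
        return False
    return abs(found_so_far - expected) / expected > tolerance

# Halfway through a project predicted to yield 80 defects, zero finds
# is a signal to examine the testing procedures, not to celebrate.
print(find_rate_alarm(0, 80, 0.5))   # -> True
print(find_rate_alarm(35, 80, 0.5))  # -> False
```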

References

  1. "Principles". Context Driven Testing. Retrieved 2022-10-05.
  2. Bath, Graham; Veenendaal, Erik Van (2013-12-13). Improving the Test Process: Implementing Improvement and Change - A Study Guide for the ISTQB Expert Level Module. Rocky Nook, Inc. ISBN   978-1-4920-0133-1.
  3. Bach, James (8 July 2005). "No Best Practices" . Retrieved 5 February 2018.
  4. Colantonio, Joe (13 April 2017). "Best Practices Vs Good Practices – Ranting with Rex Black" . Retrieved 5 February 2018.
  5. Kaner, Cem; Jack Falk; Hung Quoc Nguyen (1993). Testing Computer Software (Third ed.). John Wiley and Sons. ISBN   1-85032-908-7.
  6. An example is Mark Fewster, Dorothy Graham: Software Test Automation. Addison Wesley, 1999, ISBN   0-201-33140-3
  7. Garcia, Boni (2017-10-27). Mastering Software Testing with JUnit 5: Comprehensive guide to develop high quality Java applications. Packt Publishing Ltd. ISBN   978-1-78712-439-4.