Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution. Cem Kaner, who coined the term in 1984,[1] defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."[2]
While the software is being tested, the tester learns things that, together with experience and creativity, generate new, useful tests to run. Exploratory testing is often thought of as a black-box testing technique, but those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester and the tester's responsibility for managing his or her time.[3]
Exploratory testing has always been performed by skilled testers. In the early 1990s, however, "ad hoc" was too often synonymous with sloppy and careless work. As a result, a group of test methodologists (now calling themselves the Context-Driven School) began using the term "exploratory," seeking to emphasize the dominant thought process involved in unscripted testing and to develop the practice into a teachable discipline. This new terminology was first published by Cem Kaner in his book Testing Computer Software[4] and expanded upon in Lessons Learned in Software Testing.[5] Exploratory testing can be as disciplined as any other intellectual activity.
Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing depends on the tester's skill in inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be.
To further explain, freestyle exploratory testing can be contrasted with its antithesis, scripted testing. In the latter, test cases, including both the individual steps and the expected results, are designed in advance. These tests are later performed by a tester who compares the actual result with the expected one. When performing exploratory testing, expectations are open: some results may be predicted and expected, others may not. The tester configures, operates, observes, and evaluates the product and its behaviour, critically investigating the results and reporting information that seems likely to be a bug (which threatens the value of the product to some person) or an issue (which threatens the quality of the testing effort).
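For illustration, a scripted check might look like the following minimal pytest-style sketch, in which ShoppingCart is a hypothetical system under test and every step and expected result is fixed before execution.

```python
# Minimal sketch of scripted testing: the steps and the expected result
# are fixed in advance; execution is just "run the steps, compare the
# outcomes". ShoppingCart is a hypothetical stand-in for the system
# under test, defined inline so the example runs on its own.

class ShoppingCart:
    def __init__(self):
        self._total = 0.0

    def add_item(self, price: float, quantity: int = 1) -> None:
        self._total += price * quantity

    def apply_discount(self, percent: float) -> None:
        self._total *= 1 - percent / 100

    def total(self) -> float:
        return round(self._total, 2)


def test_cart_total_includes_discount():
    cart = ShoppingCart()                    # precondition: empty cart
    cart.add_item(price=10.00, quantity=2)   # step 1
    cart.apply_discount(percent=10)          # step 2
    assert cart.total() == 18.00             # expected result, fixed in advance
```

An exploratory tester running the same scenario would instead vary the steps as observations suggested, rather than stopping at the pre-written assertion.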
In practice, testing is almost always a combination of exploratory and scripted testing, with a tendency towards one or the other depending on context.
According to Kaner and James Marcus Bach, exploratory testing is more a mindset or "...a way of thinking about testing" than a methodology.[6] They also say that it crosses a continuum from slightly exploratory (slightly ambiguous or vaguely scripted testing) to highly exploratory (freestyle exploratory testing).[7]
The documentation of exploratory testing ranges from documenting all tests performed to documenting just the bugs. During pair testing, two people create test cases together: one performs them and the other documents them. Session-based testing is a method specifically designed to make exploratory testing auditable and measurable on a wider scale.
Exploratory testers often use tools, including screen-capture or video tools that record the exploratory session, and tools that quickly generate situations of interest, e.g. James Bach's Perlclip.
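Perlclip's best-known output is the "counterstring": a self-describing string in which each marker character is immediately preceded by its own 1-based position, so that when an input field truncates the text, the tester can read the surviving length directly off the tail. Below is a minimal Python sketch of the idea (not Perlclip itself, which is a Perl/Windows tool).

```python
# Sketch of a Perlclip-style "counterstring": each marker is preceded by
# its own 1-based position, e.g. counterstring(11) == "2*4*6*8*11*".
# Pasting a long counterstring into an input field shows at a glance
# where the field silently truncates its input.

def counterstring(length: int, marker: str = "*") -> str:
    result = ""
    while len(result) < length:
        # Position the next marker would land on after appending "<pos><marker>".
        next_pos = len(result) + len(str(len(result) + 1)) + 1
        # If the position gained a digit (e.g. 10 -> 11), recompute once.
        if len(str(next_pos)) != len(str(len(result) + 1)):
            next_pos = len(result) + len(str(next_pos)) + 1
        result += str(next_pos) + marker
    return result[:length]   # trim any overshoot from the final chunk
```

For example, if a form field fed counterstring(5000) displays text ending in "...254*", the tester knows immediately that the field kept only 254 characters.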
The main advantages of exploratory testing are that less preparation is needed, important bugs are found quickly, and, at execution time, the approach tends to be more intellectually stimulating than the execution of scripted tests.
Another major benefit is that testers can use deductive reasoning based on the results of previous tests to guide their future testing on the fly. They do not have to complete a current series of scripted tests before focusing on, or moving on to, a more target-rich environment. Used intelligently, this also accelerates bug detection.
Another benefit is that, after initial testing, most bugs are discovered by some sort of exploratory testing. This can be demonstrated logically by stating: "Programs that pass certain tests tend to continue to pass the same tests and are more likely to fail other tests or scenarios that are yet to be explored."
Disadvantages are that tests invented and performed on the fly cannot be reviewed in advance (and thereby prevent errors in code and test cases), and that it can be difficult to show exactly which tests have been run.
Freestyle exploratory test ideas, when revisited, are unlikely to be performed in exactly the same manner. This can be an advantage if it is important to find new errors, or a disadvantage if it is more important to repeat specific details of the earlier tests. It can be controlled with specific instructions to the tester, or by preparing automated tests where feasible, appropriate, and necessary, ideally as close to the unit level as possible.
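For example, a failure first noticed during an exploratory session can afterwards be pinned down as a unit-level automated check, so that exactly the same detail repeats on every run. The parse_quantity function below is a hypothetical illustration, not drawn from any source.

```python
# Hypothetical example: a defect found while exploring (the input "1e3"
# crashed a quantity field) is captured as a unit-level regression test,
# so this exact detail is re-checked identically on every run.

def parse_quantity(text: str) -> int:
    """Hypothetical production function under test."""
    value = float(text)          # accepts "1e3" as well as "1000"
    if value < 0 or value != int(value):
        raise ValueError(f"not a whole, non-negative quantity: {text!r}")
    return int(value)


def test_scientific_notation_quantity_regression():
    # Found exploratorily: "1e3" used to crash; now it must equal 1000.
    assert parse_quantity("1e3") == 1000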
A replicated experiment has shown that, while scripted and exploratory testing result in similar defect-detection effectiveness (the total number of defects found), exploratory testing results in higher efficiency (the number of defects found per unit of time), as no effort is spent on pre-designing the test cases.[8] An observational study of exploratory testers proposed that the use of knowledge about the domain, the system under test, and the customers is an important factor in explaining the effectiveness of exploratory testing.[9] A case study of three companies found that the ability to provide rapid feedback was a benefit of exploratory testing, while managing test coverage was identified as a shortcoming.[10] A survey found that exploratory testing is also used in critical domains, and that the approach places high demands on the person performing the testing.[11]
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests.
Software testing is the act of checking whether software satisfies expectations.
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing.
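In an automated suite, formally defined test cases are often expressed as data, with inputs and expected results paired in a table that a single procedure executes, which makes re-running the battery against each new version mechanical. The sketch below uses pytest's parametrize decorator with a hypothetical slugify function as the item under test.

```python
# Sketch of formally defined test cases as data: each tuple specifies an
# input and its expected result, and the whole battery is re-run
# unchanged against successive versions for regression testing.
import re

import pytest


def slugify(title: str) -> str:
    """Hypothetical function under test: turn a title into a URL slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


@pytest.mark.parametrize("title, expected", [
    ("Hello World", "hello-world"),        # nominal case
    ("  spaced  out  ", "spaced-out"),     # whitespace handling
    ("C++ & Rust!", "c-rust"),             # punctuation stripped
    ("", ""),                              # degenerate input
])
def test_slugify(title, expected):
    assert slugify(title) == expected
```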
A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.
Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery, creative on-the-fly test design, management control and metrics reporting. The method can also be used in conjunction with scenario testing. Session-based testing was developed in 2000 by Jonathan and James Marcus Bach.
Cem Kaner is a professor of software engineering at Florida Institute of Technology, and the Director of Florida Tech's Center for Software Testing Education & Research (CSTER) since 2004. He is perhaps best known outside academia as an advocate of software usability and software testing.
Ad hoc testing is a commonly used term for software testing performed without planning or initial test case documentation; the term can also be applied to other scientific research and quality-control efforts. Ad hoc tests are useful for adding confidence in a resulting product or process, and for quickly spotting important defects or inefficiencies, but they have disadvantages, such as inherent uncertainty in how they are performed and limited usefulness without proper documentation after execution. Ad hoc testing is occasionally compared to exploratory testing as being less rigorous, though others argue that ad hoc testing still has value as "improvised testing that deals well with verifying a specific subject."
Risk-based testing (RBT) is a type of software testing that functions as an organizational principle for prioritizing the tests of features and functions, based on the importance of each function and the likelihood or impact of its failure. In theory, there is an infinite number of possible tests. Risk-based testing therefore uses risk (re-)assessments to steer all phases of the test process: test planning, test design, test implementation, test execution, and test evaluation. This includes, for instance, the ranking of tests and subtests for functionality; test techniques such as boundary-value analysis, all-pairs testing, and state transition tables aim to find the areas most likely to be defective.
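Boundary-value analysis, one of the techniques just mentioned, concentrates test cases at the edges of valid ranges, where off-by-one defects cluster. Below is a brief sketch, assuming a hypothetical rule that ages 0 to 120 are valid and a senior discount starts at 65.

```python
# Sketch of boundary-value analysis: tests cluster on and around the
# edges of the valid range, where off-by-one defects are most likely.
# The age rule (valid 0-120, senior discount at 65) is hypothetical.
import pytest


def senior_discount(age: int) -> bool:
    """Hypothetical rule under test: seniors are 65 and older."""
    if not 0 <= age <= 120:
        raise ValueError(f"age out of range: {age}")
    return age >= 65


@pytest.mark.parametrize("age, expected", [
    (0, False), (64, False),   # lower edge and just below the boundary
    (65, True), (66, True),    # on and just above the boundary
    (120, True),               # upper edge of the valid range
])
def test_senior_discount_boundaries(age, expected):
    assert senior_discount(age) == expected


@pytest.mark.parametrize("age", [-1, 121])  # just outside the valid range
def test_out_of_range_rejected(age):
    with pytest.raises(ValueError):
        senior_discount(age)
```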
Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user, whereby they use most of the application's features to ensure correct behaviour. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.
There is considerable variety among software testing writers and consultants about what constitutes responsible software testing. Proponents of a context-driven approach consider much of the writing about software testing to be doctrine, while others believe that such a view contradicts the IEEE 829 standard for test documentation.
Pair testing is a software development technique in which two team members work together at one keyboard to test the software application. One does the testing while the other analyzes or reviews the testing. This can be done between one tester and a developer or business analyst, or between two testers, with both participants taking turns at driving the keyboard.
API testing is a type of software testing that involves testing application programming interfaces (APIs) directly and as part of integration testing to determine if they meet expectations for functionality, reliability, performance, and security. Since APIs lack a GUI, API testing is performed at the message layer. API testing is now considered critical for automating testing because APIs serve as the primary interface to application logic and because GUI tests are difficult to maintain with the short release cycles and frequent changes commonly used with Agile software development and DevOps.
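At the message layer, an API test typically sends a request and asserts on the status code, headers, body, and timing of the response. The following minimal sketch uses Python's requests library against a hypothetical endpoint; the URL and response shape are assumptions, not a real service.

```python
# Sketch of message-layer API testing: issue an HTTP request, then
# check status, headers, body, and response time against expectations.
# The endpoint and its JSON shape are hypothetical.
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test


def test_get_user_returns_expected_record():
    response = requests.get(f"{BASE_URL}/users/42", timeout=5)

    assert response.status_code == 200                              # functionality
    assert response.headers["Content-Type"].startswith("application/json")
    body = response.json()
    assert body["id"] == 42                                         # correctness
    assert response.elapsed.total_seconds() < 1.0                   # crude performance check
```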
In software engineering, test design is the activity of deriving and specifying test cases from test conditions to test software.
Gray-box testing is a combination of white-box testing and black-box testing. Its aim is to search for defects caused by improper structure or improper usage of applications.
Scenario testing is a software testing activity that uses scenarios: hypothetical stories that help the tester work through a complex problem or test system. The ideal scenario test is a credible, complex, compelling, or motivating story whose outcome is easy to evaluate. These tests usually differ from test cases in that test cases are single steps, whereas scenarios cover a number of steps.
James Marcus Bach is an American software tester, author, trainer, and consultant.
Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time and following a pipeline through a "production-like environment", without doing so manually. It aims at building, testing, and releasing software with greater speed and frequency. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.
In computer programming and software testing, smoke testing is preliminary testing or sanity testing to reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset of test cases that cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly. When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called a pretest or an intake test. Alternatively, it is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. In the DevOps paradigm, use of a build verification test step is one hallmark of the continuous integration maturity stage.
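As a sketch, a build-verification smoke suite might consist of a handful of fast checks on the functions whose failure would make deeper testing pointless. The deployed-build URL and endpoints below are hypothetical stand-ins, not taken from any source.

```python
# Sketch of a smoke (build verification) suite: a few fast checks on the
# most important functions, run against every new build before it is
# handed to the test team. URL and endpoints are hypothetical.
import requests

BASE_URL = "https://staging.example.com"   # hypothetical deployed build


def test_service_is_up():
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200


def test_login_page_renders():
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
    assert "password" in response.text.lower()   # core function is present


def test_search_returns_a_page():
    response = requests.get(f"{BASE_URL}/search", params={"q": "smoke"}, timeout=5)
    assert response.status_code == 200
```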
This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to software quality assurance (more widely known colloquially as quality assurance) and to the general application of the test method.