A test strategy is an outline that describes the testing approach for the software development cycle. Its purpose is to trace a rational path from high-level organizational objectives down to the actual test activities that meet those objectives from a quality assurance perspective. The creation and documentation of a test strategy should be done in a systematic way to ensure that all objectives are fully covered and understood by all stakeholders. It should also be reviewed, challenged, and updated regularly as the organization and the product evolve. Furthermore, a test strategy should aim to align the different stakeholders of quality assurance in terms of terminology, test and integration levels, roles and responsibilities, traceability, planning of resources, and so on.
Test strategies describe how the product risks of the stakeholders are mitigated at the test level, which types of testing are to be performed, and which entry and exit criteria apply. They are created based on development design documents. System design documents are primarily used, and occasionally conceptual design documents may be referred to. Design documents describe the functionality of the software to be enabled in the upcoming release. For every stage of development design, a corresponding test strategy should be created to test the new feature sets.
The test strategy describes the test level to be performed. There are primarily three levels of testing: unit testing, integration testing, and system testing. In most software development organizations, the developers are responsible for unit testing. Individual testers or test teams are responsible for integration and system testing.
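As a minimal illustration of the lowest level, a unit test exercises a single function in isolation. The following sketch uses Python's built-in unittest framework; the add function under test is hypothetical.

```python
import unittest

# Hypothetical unit under test: a single function exercised in isolation.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    """Unit-level tests: verify one function's behavior on its own."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```

Integration and system tests follow the same pattern but exercise combinations of units and the deployed system as a whole, respectively.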
The roles and responsibilities of the test leader, individual testers, and project manager are to be clearly defined at a project level in this section. Specific names need not be attached to the roles, but each role must be very clearly defined.
Testing strategies should be reviewed by the developers. They should also be reviewed by leads for all levels of testing to make sure the coverage is complete, yet not overlapping. Both the testing manager and the development managers should approve the test strategy before testing can begin.
Environment requirements are an important part of the test strategy. This section describes which operating systems are used for testing, together with the required OS patch levels and security updates. For example, a certain test plan may require Windows 8.1 to be installed as a prerequisite for testing.
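One way to make such an environment requirement explicit and enforceable is to gate automated tests on the detected platform. This is a sketch only, using Python's standard library; the test body is illustrative.

```python
import sys
import platform
import unittest

class WindowsOnlyTests(unittest.TestCase):
    # Skip unless the required environment is present, mirroring an
    # environment requirement such as "must run on Windows".
    @unittest.skipUnless(sys.platform == "win32", "requires Windows")
    def test_windows_specific_behavior(self):
        # platform.release() reports the Windows release, e.g. "8.1" or "10".
        self.assertTrue(platform.release())

if __name__ == "__main__":
    unittest.main()
```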
There are two methods of executing test cases: manual and automated. Depending on the nature of the testing, a combination of manual and automated testing is usually the most effective approach.
Any risks that may affect the testing process must be listed along with their mitigation. By documenting a risk, its occurrence can be anticipated well ahead of time, and proactive action can be taken to prevent it from occurring or to mitigate its damage. Sample risks include dependency on the completion of coding by sub-contractors, or the capability of testing tools.
A test plan should make an estimate of how long it will take to complete the testing phase. Several requirements must be met to complete that phase. First, testers have to execute all test cases at least once. Furthermore, if a defect is found, the developers will need to fix the problem; the testers should then re-test the failed test case until it functions correctly. Last but not least, the testers need to conduct regression testing towards the end of the cycle to make sure the developers did not accidentally break parts of the software that were previously functioning properly while fixing another part.
The test schedule should also document the number of testers available for testing. If possible, assign test cases to each tester.
It is often difficult to make an accurate estimate of the test schedule since the testing phase involves many uncertainties. Planners should take into account the extra time needed to accommodate contingent issues. One way to make this approximation is to look at the time needed by the previous releases of the software. If the software is new, multiplying the initial testing schedule approximation by two is a good way to start.
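Expressed as a toy calculation, that heuristic might look like the following sketch; the default figure of 10 days is purely illustrative.

```python
def estimate_test_schedule_days(previous_release_days=None, initial_estimate_days=10):
    """Rough schedule estimate following the heuristics above:
    prefer the duration of a comparable previous release; for new
    software, double the initial estimate to absorb uncertainty."""
    if previous_release_days is not None:
        return previous_release_days
    return initial_estimate_days * 2

print(estimate_test_schedule_days())                          # 20: new software
print(estimate_test_schedule_days(previous_release_days=15))  # 15: based on a prior release
```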
When a particular problem is identified, the program is debugged and a fix is applied. To make sure that the fix works, the program will be tested again against that criterion. Regression tests reduce the likelihood that one fix creates other problems in that program or in any other interface. So, a set of related test cases may have to be repeated to test whether anything else is affected by a particular fix. How this is to be carried out must be elaborated in this section.
Consider different testing levels when selecting regression test cases: unit, integration, and system test cases are all good candidates. Select cases that have a direct relationship with the fix, and also include a few business-critical cases that prove basic business scenarios still work. Remember also that non-functional testing (security, performance, usability) plays an important role in proving business continuity.
In some companies, whenever there is a fix in one unit, all unit test cases for that unit will be repeated.
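A repeatable way to select the regression subset is to tag candidate cases and re-run only the tagged ones. The sketch below assumes the third-party pytest framework; the marker name, module name, and test contents are illustrative.

```python
# test_booking.py - illustrative regression candidates for a fix in a
# hypothetical fare-calculation unit.
import pytest

@pytest.mark.regression          # directly related to the fix
def test_fare_rounding_after_fix():
    assert round(123.456, 2) == 123.46

@pytest.mark.regression          # business-critical scenario that must still work
def test_basic_booking_flow_still_succeeds():
    booking = {"ticket": "A12", "paid": True}
    assert booking["paid"]
```

The tagged subset can then be executed with pytest -m regression, after registering the marker in the project's pytest configuration to avoid warnings.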
From the list of requirements, we can identify related areas whose functionality is similar. These areas are the test groups. For example, in a railway reservation system, anything related to ticket booking is one functional group, and anything related to report generation is another. In the same way, we have to identify the test groups based on functionality.
Among test cases, we need to establish priorities. While testing software projects, certain test cases will be treated as the most important ones: if they fail, the product cannot be released. Other test cases may be of lesser functional importance, or even cosmetic, and if they fail, we can release the product without much compromise on functionality. These priority levels must be clearly stated, and may also be mapped to the test groups.
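A simple sketch of how groups and priorities might be recorded and queried follows; all group names, case IDs, and priority levels here are hypothetical.

```python
# Illustrative test groups for a railway reservation system.
TEST_GROUPS = {
    "ticket_booking":    ["TC-001", "TC-002", "TC-003"],
    "report_generation": ["TC-101", "TC-102"],
}

# Priority 1 = release-blocking, 2 = important, 3 = cosmetic.
PRIORITIES = {
    "TC-001": 1,  # booking a ticket: the product cannot ship if this fails
    "TC-002": 1,
    "TC-003": 2,
    "TC-101": 2,
    "TC-102": 3,  # report formatting: cosmetic only
}

def release_blockers(group):
    """Return the cases in a group that must pass before release."""
    return [case for case in TEST_GROUPS[group] if PRIORITIES[case] == 1]

print(release_blockers("ticket_booking"))  # ['TC-001', 'TC-002']
```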
When test cases are executed, the test leader and the project manager must know where exactly the project stands in terms of testing activities. To know this, the inputs from the individual testers must come to the test leader. These will include which test cases were executed, how long they took, how many passed, how many failed, and how many were not executable. How often the project collects this status must also be clearly stated; some projects collect the status on a daily or weekly basis.
When the test cases are executed, it is important to keep track of execution details such as when each test was executed, who executed it, how long it took, and what the result was. This data must be available to the test leader and the project manager, along with all team members, in a central location. It may be stored in a specific directory on a central server, and the document must clearly state the locations and directories. The naming convention for the documents and files must also be mentioned.
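A minimal sketch of such a central record, using a shared CSV file, is shown below; the field names, file path, and values are assumptions, and a real project might use a test-management tool instead.

```python
import csv
from datetime import date

RECORD_FIELDS = ["test_case", "executed_on", "tester", "duration_min", "result"]

def append_record(path, record):
    """Append one execution record to a shared CSV log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=RECORD_FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow(record)

append_record("test_records.csv", {
    "test_case": "TC-001",
    "executed_on": date.today().isoformat(),
    "tester": "a.tester",
    "duration_min": 12,
    "result": "pass",
})
```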
Ideally, the software must completely satisfy the set of requirements. From the design stage onward, each requirement must be addressed in every document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases, and system test cases. In a requirements traceability matrix, the rows hold the requirements and the columns represent the documents. A cell is marked when the corresponding document addresses a particular requirement, typically with the section ID covering that requirement in the document. Ideally, if every requirement is addressed in every document, all cells contain valid section IDs or names, and we know that every requirement is covered. An empty cell indicates that a requirement has not been correctly addressed.
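As a toy example, a traceability matrix can be represented as a table of requirements against documents, with each cell holding the covering section ID; all IDs below are hypothetical.

```python
# Rows: requirements. Columns: documents. Cells: covering section IDs.
RTM = {
    "REQ-1": {"HLD": "3.1", "LLD": "4.2", "unit_tests": "UT-07", "system_tests": "ST-02"},
    "REQ-2": {"HLD": "3.4", "LLD": "",    "unit_tests": "UT-11", "system_tests": ""},
}

def coverage_gaps(rtm):
    """List (requirement, document) pairs where no section is recorded."""
    return [(req, doc)
            for req, cells in rtm.items()
            for doc, section in cells.items()
            if not section]

print(coverage_gaps(RTM))  # [('REQ-2', 'LLD'), ('REQ-2', 'system_tests')]
```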
Senior management may like to have a test summary on a weekly or monthly basis. If the project is very critical, they may need it even on a daily basis. This section must address what kind of test summary reports will be produced for senior management, along with their frequency.
The test strategy must give a clear vision of what the testing team will do for the entire duration of the project. This document can be presented to the client, if needed. The person who prepares it must have strong functional knowledge of the product domain and considerable experience, as this is the document that will drive the entire team's testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.