In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. [1]
In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, production lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. [2]
In software testing, the ISTQB defines acceptance testing as:
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether a system satisfies the acceptance criteria [3] and to enable the user, customers or other authorized entity to determine whether to accept the system.
The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met. [5]
Forms of acceptance testing include user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD), and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. [6]
Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test. [7] Each test, known as a test case, exercises a set of predefined test activities developed to drive the execution of the test item toward meeting test objectives, including correct implementation, error identification, quality verification, and other valued detail. [7] The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures, and/or documentation intended for or used to perform the testing of software. [7]
UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. These tests must cover both business logic and operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured that development is progressing in the right direction. [8]
The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration. [9]
The acceptance test suite is run using predefined acceptance test procedures to direct the testers on which data to use, the step-by-step processes to follow, and the expected result following execution. The actual results are retained for comparison with the expected results. [9] If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass. If it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer.
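As an illustration of the comparison and threshold described above, the following minimal sketch (with hypothetical names and data, not drawn from the cited sources) shows how a suite verdict can be derived from per-case expected and actual results:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    name: str
    input_data: dict          # predefined data the procedure directs the tester to use
    expected_result: object   # expected result agreed before execution

def suite_verdict(test_cases, execute: Callable[[dict], object], failure_threshold: int) -> str:
    """Return PASS when the number of non-passing cases stays within the agreed threshold."""
    failures = []
    for case in test_cases:
        actual = execute(case.input_data)        # actual result is retained...
        if actual != case.expected_result:       # ...and compared with the expected result
            failures.append(case.name)
    return "PASS" if len(failures) <= failure_threshold else "FAIL"

# Example: two cases exercising a hypothetical discount calculation.
cases = [
    TestCase("standard order", {"total": 100, "loyal": False}, 100),
    TestCase("loyalty discount", {"total": 100, "loyal": True}, 90),
]
calculate_total = lambda d: d["total"] * (0.9 if d["loyal"] else 1.0)
print(suite_verdict(cases, calculate_total, failure_threshold=0))  # PASS
```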
The anticipated result of a successful test execution is confidence that the developed product meets both the functional and non-functional requirements. The purpose of conducting acceptance testing is that, once completed and provided the acceptance criteria are met, the sponsors are expected to sign off on the product development or enhancement as satisfying the defined requirements (previously agreed between the business and the product provider/developer).
User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. [10] It is not system testing (ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e. tests that the user accepts the solution); software vendors often refer to this as "Beta testing".
This testing should be undertaken by the intended end user or a subject-matter expert (SME), preferably the owner or client of the solution under test, who then provides a summary of the findings as confirmation to proceed after trial or review. In software development, UAT is one of the final stages of a project and often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios. [11]
The materials given to the tester must be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake. [12]
The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production. [13]
User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases.
UAT should be executed against test scenarios. [14] [15] Test scenarios usually differ from System or Functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behavior. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes. [16]
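As a brief, hypothetical illustration (not from the cited sources), a UAT scenario expressed as a user journey split into logical "days" might be recorded as follows, with each day marking a change of actor or system rather than click-by-click steps:

```python
# Assumed structure for illustration only; field names are not standardized.
uat_scenario = {
    "title": "New customer places and tracks a first order",
    "days": [
        {"day": 1, "actor": "customer",    "goal": "register an account and place an order"},
        {"day": 2, "actor": "back office", "goal": "approve the order and schedule dispatch"},
        {"day": 3, "actor": "customer",    "goal": "track delivery and confirm receipt"},
    ],
}
```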
In industry, a common UAT is a factory acceptance test (FAT). This test takes place before the installation of the equipment. Most of the time testers not only check that the equipment meets the specification but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test), and a final inspection. [17] The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.
Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a product, service, or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects, and is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. This type of testing focuses on the operational readiness of the system to be supported and/or to become part of the production environment. Functional testing within OAT is limited to those tests required to verify the non-functional aspects of the system. [18]
Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase. [19]
The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration, or the development team will report zero progress. [20]
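The following hedged sketch shows what one such acceptance test for a user story might look like, treating the system as a black box; the story, the Checkout class, and the figures are hypothetical illustrations rather than material from the cited sources:

```python
# Story: "As a shopper, I can apply a voucher code so that my order total is reduced."

class Checkout:
    """Stand-in for the real system under test; only what this scenario needs."""
    def __init__(self):
        self.orders = {}
    def create_order(self, items):
        order_id = len(self.orders) + 1
        self.orders[order_id] = sum(item["price"] for item in items)
        return order_id
    def apply_voucher(self, order_id, code):
        if code == "WELCOME10":
            self.orders[order_id] *= 0.9
    def total(self, order_id):
        return round(self.orders[order_id], 2)

def test_voucher_reduces_order_total():
    checkout = Checkout()
    order_id = checkout.create_order(items=[{"sku": "BOOK-1", "price": 20.00}])
    checkout.apply_voucher(order_id, code="WELCOME10")
    # Expected result agreed with the customer: a 10% reduction on the order total.
    assert checkout.total(order_id) == 18.00
```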
Typical types of acceptance testing include user acceptance testing (UAT), operational acceptance testing (OAT), factory acceptance testing (FAT), and acceptance test-driven development (ATDD), each described in this article.
According to the Project Management Institute, acceptance criteria are a "set of conditions that is required to be met before deliverables are accepted." [26] Requirements found in the acceptance criteria for a given component of the system are usually very detailed. [27]
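As a small, hypothetical sketch (the criteria and names below are illustrative, not from the cited sources), detailed acceptance criteria for one component can be recorded and checked programmatically, with acceptance requiring every condition to be met:

```python
acceptance_criteria = [
    {"id": "AC-1", "condition": "Password reset email arrives within 60 seconds"},
    {"id": "AC-2", "condition": "Reset link expires after 24 hours"},
    {"id": "AC-3", "condition": "Old password is rejected after a successful reset"},
]

def deliverable_accepted(results: dict) -> bool:
    """All conditions must be met before the deliverable is accepted."""
    return all(results.get(c["id"], False) for c in acceptance_criteria)

print(deliverable_accepted({"AC-1": True, "AC-2": True, "AC-3": True}))   # True
print(deliverable_accepted({"AC-1": True, "AC-2": False, "AC-3": True}))  # False
```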
Software testing is the act of checking whether software satisfies expectations.
Software maintenance is the modification of software after delivery.
In the context of software engineering, software quality refers to two related but distinct notions: functional quality, which reflects how well the software complies with or conforms to its functional requirements or specifications, and structural quality, which refers to how well it meets the non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability.
In software engineering, a test case is a specification of the inputs, execution conditions, testing procedure, and expected results that define a single test to be executed to achieve a particular software testing objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test cases underlie testing that is methodical rather than haphazard. A battery of test cases can be built to produce the desired coverage of the software being tested. Formally defined test cases allow the same tests to be run repeatedly against successive versions of the software, allowing for effective and consistent regression testing.
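For illustration, a formally defined test case can be captured in a test framework so the same inputs, procedure, and expected results can be re-run against successive versions; the function and figures below are hypothetical, not from the source:

```python
import unittest

def apply_discount(total):
    """Hypothetical unit under test: 10% discount on order totals above 50."""
    return round(total * 0.9, 2) if total > 50 else total

class DiscountTestCase(unittest.TestCase):
    """Each test fixes its inputs, procedure, and expected result, so the same cases
    can be repeated against later versions for consistent regression testing."""
    def test_discount_applied_above_threshold(self):
        self.assertEqual(apply_discount(100.00), 90.00)   # expected result: 10% off
    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(apply_discount(40.00), 40.00)    # expected result: unchanged

if __name__ == "__main__":
    unittest.main()
```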
ISO/IEC 9126 Software engineering — Product quality was an international standard for the evaluation of software quality. It has been replaced by ISO/IEC 25010:2011.
Behavior-driven development (BDD) involves naming software tests using domain language to describe the behavior of the code.
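A small, hypothetical illustration of such naming follows: the test name and its given/when/then comments use the domain language of the business rather than implementation terms.

```python
class TestLibraryLoans:
    def test_a_member_can_borrow_an_available_book(self):
        # Given a book that is available in the catalogue
        catalogue = {"Clean Code": "available"}
        # When a member borrows the book
        catalogue["Clean Code"] = "on loan"
        # Then the book is recorded as on loan
        assert catalogue["Clean Code"] == "on loan"
```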
Portability testing is the process of determining the degree of ease or difficulty with which a software component or application can be effectively and efficiently transferred from one hardware, software, or other operational or usage environment to another. The test results, defined by the individual needs of the system, are a measurement of how easy the component or application will be to integrate into the new environment; these results are then compared with the software system's non-functional requirement for portability. The level of correctness is usually measured by the cost of adapting the software to the new environment compared with the cost of redevelopment.
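One common way to express that comparison is a "degree of portability" ratio; the exact form below is an illustrative assumption rather than a standardized metric:

```python
def degree_of_portability(cost_to_adapt: float, cost_to_redevelop: float) -> float:
    """Close to 1: porting is cheap relative to a rewrite; 0 or below: porting is not worthwhile."""
    return 1.0 - (cost_to_adapt / cost_to_redevelop)

print(degree_of_portability(cost_to_adapt=20_000, cost_to_redevelop=100_000))  # 0.8
```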
In software development, functional testing is a form of software system testing that verifies whether software matches its design.
A functional specification in systems engineering and software development is a document that specifies the functions that a system or component must perform.
In software development, the V-model represents a development process that may be considered an extension of the waterfall model and is an example of the more general V-model used in systems engineering. Instead of moving down linearly, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left to right) and level of abstraction, respectively.
Verification and validation are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000. The words "verification" and "validation" are sometimes preceded with "independent", indicating that the verification and validation is to be performed by a disinterested third party. "Independent verification and validation" can be abbreviated as "IV&V".
Software quality control is the set of procedures used by organizations to ensure that a software product will meet its quality goals at the best value to the customer, and to continually improve the organization’s ability to produce software products in the future.
A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of the whole system from the perspective of a related set of concerns.
In software engineering, a software development process or software development life cycle (SDLC) is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.
Acceptance test–driven development (ATDD) is a development methodology based on communication between the business customers, the developers, and the testers. ATDD encompasses many of the same practices as specification by example (SBE), behavior-driven development (BDD), example-driven development (EDD), and support-driven development also called story test–driven development (SDD). All these processes aid developers and testers in understanding the customer's needs prior to implementation and allow customers to be able to converse in their own domain language.
ISO/IEC/IEEE 29119 Software and systems engineering — Software testing is a series of five international standards for software testing. First developed in 2007 and released in 2013, the standard "defines vocabulary, processes, documentation, techniques, and a process assessment model for testing that can be used within any software development lifecycle."
Software testing also draws on a broad set of tactics, spanning approaches to software quality assurance and the general application of the test method.