Acceptance testing

Acceptance testing of an aircraft catapult: sailors assigned to the air department of the aircraft carrier USS George H.W. Bush (CVN 77) test the ship's catapult systems during acceptance trials
Six of the primary mirrors of the James Webb Space Telescope being prepared for acceptance testing

In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. [1]

In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. [2]

In software testing, the ISTQB defines acceptance testing as:

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether a system satisfies the acceptance criteria [3] and to enable the user, customers or other authorized entity to determine whether to accept the system.

— Standard Glossary of Terms used in Software Testing [4]

The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met. [5]

Some forms of acceptance testing are user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD), and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. [6]

Overview

Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test. [7] Each test, known as a test case, exercises a set of predefined test activities developed to drive the execution of the test item toward its test objectives, including correct implementation, error identification, quality verification, and other valued details. [7] The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures, and/or documentation intended for or used to perform the testing of software. [7]

UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. These tests must cover both the business logic and the conditions of the operational environment. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured that the development is progressing in the right direction. [8]

Process

The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration. [9]

The acceptance test suite is run using predefined acceptance test procedures to direct the testers on which data to use, the step-by-step processes to follow, and the expected result following execution. The actual results are retained for comparison with the expected results. [9] If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass. If it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer.
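To make this procedure concrete, the following minimal sketch (in Python, using hypothetical names and data not taken from any source) shows how an acceptance test suite might pair predefined test data with expected results, retain actual results for comparison, and apply a predetermined failure threshold to decide whether the suite as a whole passes.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class AcceptanceTestCase:
        name: str
        test_data: Any                    # predefined data prescribed by the test procedure
        procedure: Callable[[Any], Any]   # the step-by-step process under test
        expected: Any                     # the expected result agreed in advance
        actual: Any = None                # actual result, retained for comparison

        def execute(self) -> bool:
            self.actual = self.procedure(self.test_data)
            return self.actual == self.expected

    def run_suite(cases, max_failures=0):
        """The suite passes only if failing cases stay within the agreed threshold."""
        failures = [case.name for case in cases if not case.execute()]
        return len(failures) <= max_failures

    # Hypothetical example: acceptance of a VAT calculation at an agreed 20% rate.
    cases = [AcceptanceTestCase("vat_applied", 100.0, lambda net: round(net * 1.20, 2), 120.0)]
    print(run_suite(cases, max_failures=0))  # True when actual results match expected results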

The anticipated result of a successful test execution is that the actual results match the expected results for each test case, demonstrating that the agreed acceptance criteria have been met.

The objective is to provide confidence that the developed product meets both the functional and non-functional requirements. Once acceptance testing is complete and the acceptance criteria are met, the sponsors are expected to sign off on the product development or enhancement as satisfying the defined requirements previously agreed between the business and the product provider/developer.

User acceptance testing

User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. [10] It is not system testing (ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e. tests that the user accepts the solution); software vendors often refer to this as "Beta testing".

This testing should be undertaken by the intended end user or a subject-matter expert (SME), preferably the owner or client of the solution under test, who then provides a summary of the findings as confirmation to proceed after the trial or review. In software development, UAT is one of the final stages of a project and often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios. [11]

The materials given to the tester must be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake. [12]

The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production. [13]

User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases.

UAT should be executed against test scenarios. [14] [15] Test scenarios usually differ from System or Functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behavior. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes. [16]
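As an illustration only, the sketch below (Python, with an entirely hypothetical journey) shows how such a scenario might be recorded as a sequence of logical "days", each marking a change of actor or system, without prescribing click-by-click steps.

    # Hypothetical UAT scenario expressed as a user journey rather than detailed steps.
    test_scenario = {
        "title": "Customer orders a product and later obtains a refund",
        "days": [
            {"day": 1, "actor": "customer",    "goal": "browse the catalogue and place an order"},
            {"day": 2, "actor": "back office", "goal": "approve the order and trigger dispatch"},
            {"day": 3, "actor": "customer",    "goal": "request a refund through the front end"},
            {"day": 4, "actor": "back office", "goal": "process the refund and notify the customer"},
        ],
    }

    for step in test_scenario["days"]:
        # The tester chooses how to reach each goal, allowing variance in user behaviour.
        print(f'Day {step["day"]}: {step["actor"]} - {step["goal"]}')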

In industry, a common UAT is a factory acceptance test (FAT). This test takes place before the installation of the equipment. Most of the time testers not only check that the equipment meets the specification but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test), and a final inspection. [17] The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.

Operational acceptance testing

Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. [18]

Acceptance testing in extreme programming

Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase. [19]

The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration, or the development team will report zero progress. [20]
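The following sketch (Python, with a hypothetical user story and a stand-in system, not drawn from any XP source) illustrates the idea: the customer states the expected results, the test exercises the system as a black box, and re-running it before each release doubles as a regression test.

    class DemoBankSystem:
        """Stand-in for the system under test, included only so the sketch can run."""
        def __init__(self):
            self.accounts = {"checking": 0, "savings": 0}
        def deposit(self, account, amount):
            self.accounts[account] += amount
        def transfer(self, source, target, amount):
            self.accounts[source] -= amount
            self.accounts[target] += amount
        def balance(self, account):
            return self.accounts[account]

    def acceptance_test_transfer_story(system):
        """Story: 'As an account holder I can move money between my own accounts.'"""
        system.deposit("checking", 100)
        system.transfer("checking", "savings", 40)
        # Expected results specified by the customer; the story passes only if both hold.
        assert system.balance("checking") == 60
        assert system.balance("savings") == 40

    acceptance_test_transfer_story(DemoBankSystem())  # raises AssertionError on failure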

Types of acceptance testing

Typical types of acceptance testing include the following:

User acceptance testing
This may include factory acceptance testing (FAT), i.e. the testing done by a vendor before the product or system is moved to its destination site, after which site acceptance testing (SAT) may be performed by the users at the site. [21]
Operational acceptance testing
Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures. [22]
Contract and regulation acceptance testing
In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards. [23]
Factory acceptance testing
Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether a component or system satisfies the requirements, normally including hardware as well as software. [24]
Alpha and beta testing
Alpha testing takes place at developers' sites and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called "field testing". [25]

Acceptance criteria

According to the Project Management Institute, acceptance criteria is a "set of conditions that is required to be met before deliverables are accepted." [26] Requirements found in acceptance criteria for a given component of the system are usually very detailed. [27]
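As a purely hypothetical illustration (the component, limits, and measured values below are invented, not drawn from the source), detailed acceptance criteria for a single component might be recorded and checked like this:

    # Each criterion is a named condition that the delivered component must satisfy.
    acceptance_criteria = {
        "login_response_time_ms": lambda measured: measured <= 500,
        "password_hashing":       lambda algorithm: algorithm in {"bcrypt", "argon2"},
        "failed_login_lockout":   lambda attempts: attempts == 5,
    }

    # Values observed during acceptance testing of the deliverable.
    measurements = {
        "login_response_time_ms": 420,
        "password_hashing": "argon2",
        "failed_login_lockout": 5,
    }

    accepted = all(check(measurements[name]) for name, check in acceptance_criteria.items())
    print("Deliverable accepted" if accepted else "Deliverable rejected")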

List of acceptance-testing frameworks

See also


References

  1. "BPTS - Is Business process testing the best name / description". SFIA. Retrieved February 18, 2023.
  2. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. Hoboken, NJ: Wiley. ISBN 978-0-470-40415-7.
  3. "acceptance criteria". Innolution, LLC. June 10, 2019.
  4. "Standard Glossary of Terms used in Software Testing, Version 3.2: All Terms" (PDF). ISTQB . Retrieved November 23, 2020.
  5. "User Acceptance Testing (UAT) - Software Testing". GeeksforGeeks. November 24, 2022. Retrieved May 23, 2024.
  6. ISO/IEC/IEEE International Standard - Systems and software engineering. ISO/IEC/IEEE. 2010. pp. 1–418.
  7. ISO/IEC/IEEE 29119-1:2013 Software and Systems Engineering - Software Testing - Part 1: Concepts and Definitions. ISO. 2013. Retrieved October 14, 2014.
  8. ISO/IEC/IEEE 29119-4:2013 Software and Systems Engineering - Software Testing - Part 4: Test Techniques. ISO. 2013. Retrieved October 14, 2014.
  9. ISO/IEC/IEEE 29119-2:2013 Software and Systems Engineering - Software Testing - Part 2: Test Processes. ISO. 2013. Retrieved May 21, 2014.
  10. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. Chapter 2. ISBN 9780132702621.
  11. Hambling, Brian; van Goethem, Pauline (2013). User Acceptance Testing: A Step-by-Step Guide. BCS Learning & Development Limited. ISBN 9781780171678.
  12. "2.6: Systems Testing". Engineering LibreTexts. August 2, 2021. Retrieved February 18, 2023.
  13. Pusuluri, Nageshwar Rao (2006). Software Testing Concepts And Tools. Dreamtech Press. p. 62. ISBN   9788177227123.
  14. "Get Reliable Usability and Avoid Risk with These Testing Scenarios". Panaya. April 25, 2022. Retrieved May 11, 2022.
  15. Elazar, Eyal (April 23, 2018). "What is User Acceptance Testing (UAT) - The Full Process Explained". Panaya. Retrieved February 18, 2023.
  16. Wysocka, Emilia M.; Page, Matthew; Snowden, James; Simpson, T. Ian (December 15, 2022). "Comparison of rule- and ordinary differential equation-based dynamic model of DARPP-32 signalling network". PeerJ. 10. "Table 1: The specifications of the ODE and RB models can be broken down into elements, the number of which can be compared.". doi: 10.7717/peerj.14516 . ISSN   2167-8359. PMC   9760030 . PMID   36540795.
  17. "Factory Acceptance Test (FAT)". TÜV Rheinland. Archived from the original on February 4, 2013. Retrieved September 18, 2012.
  18. Vijay (February 2, 2018). "What is Acceptance Testing (A Complete Guide)". Software Testing Help. Retrieved February 18, 2023.
  19. "Introduction to Acceptance/Customer Tests as Requirements Artifacts". agilemodeling.com. Agile Modeling. Retrieved December 9, 2013.
  20. Wells, Don. "Acceptance Tests". Extremeprogramming.org. Retrieved September 20, 2011.
  21. Prasad, Durga (March 29, 2012). "The Difference Between a FAT and a SAT". Kneat.com. Archived from the original on June 16, 2017. Retrieved July 27, 2016.
  22. Turner, Paul (October 5, 2020). "Operational Readiness". Commissioning and Startup. Retrieved February 18, 2023.
  23. Brosnan, Adeline (January 12, 2021). "Acceptance Testing in Information Technology Contracts | LegalVision". LegalVision. Retrieved February 18, 2023.
  24. "ISTQB Standard glossary of terms used in Software Testing". Archived from the original on November 5, 2018. Retrieved March 15, 2019.
  25. Hamilton, Thomas (April 3, 2020). "Alpha Testing Vs Beta Testing – Difference Between Them". www.guru99.com. Retrieved February 18, 2023.
  26. Project Management Institute 2021, §Glossary Section 3. Definitions.
  27. Project Management Institute 2021, §2.6.2.1 Requirements.

Sources

Further reading