Acceptance testing

Acceptance testing of an aircraft catapult: sailors assigned to the air department of the aircraft carrier USS George H.W. Bush (CVN 77) test the ship's catapult systems during acceptance trials
Six of the primary mirrors of the James Webb Space Telescope being prepared for acceptance testing

In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. [1]

In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. [2]

In software testing, the ISTQB defines acceptance testing as:

Formal testing with respect to user needs, requirements, and business processes conducted to determine whether a system satisfies the acceptance criteria [3] and to enable the user, customers or other authorized entity to determine whether to accept the system.

Standard Glossary of Terms used in Software Testing[4], p. 2

The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met.[citation needed]

Forms of acceptance testing include user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD), and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity.[5]

Overview

Testing is a set of activities conducted to facilitate discovery and/or evaluation of properties of one or more items under test.[6] Each individual test, known as a test case, exercises a set of predefined test activities developed to drive the execution of the test item toward its test objectives, including verifying correct implementation, identifying errors, and verifying quality and other valued details.[6] The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures and/or documentation intended for or used to perform the testing of software.[6]
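
For illustration, the "test environment mirrors production" point can be sketched as a simple comparison between a declared test environment and the anticipated production environment (a minimal Python sketch; all of the environment entries below are invented for the example):

# Minimal sketch: report where a test environment deviates from the
# anticipated production environment. All entries are illustrative.
production_env = {"os": "Ubuntu 22.04", "database": "PostgreSQL 15", "app_version": "2.4.1", "tls": True}
test_env = {"os": "Ubuntu 22.04", "database": "PostgreSQL 14", "app_version": "2.4.1", "tls": True}

differences = {
    key: (test_env.get(key), production_env[key])
    for key in production_env
    if test_env.get(key) != production_env[key]
}

if differences:
    print("Test environment deviates from the anticipated production environment:")
    for key, (test_value, prod_value) in differences.items():
        print(f"  {key}: test={test_value!r} vs production={prod_value!r}")
else:
    print("Test environment matches the anticipated production environment.")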

UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. It is essential that these tests cover both business logic and operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured that the development is progressing in the right direction.[7]

Process

The acceptance test suite may need to be performed multiple times, as not all of the test cases may be executed within a single test iteration.[8]

The acceptance test suite is run using predefined acceptance test procedures that direct the testers on which data to use, the step-by-step process to follow, and the expected result of each execution. The actual results are retained for comparison with the expected results.[8] If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass; if it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer.
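
As an illustration of this bookkeeping, the following minimal Python sketch runs a small suite of test cases with predefined input data, retains the actual results, compares them with the expected results, and applies an agreed failure threshold (the AcceptanceTestCase structure, the run_suite helper, and the stand-in system under test are all assumptions made for the example):

from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Tuple


@dataclass
class AcceptanceTestCase:
    name: str
    input_data: Dict[str, Any]   # predefined data the test procedure tells the tester to use
    expected_result: Any


def run_suite(test_cases: List[AcceptanceTestCase],
              system_under_test: Callable[..., Any],
              allowed_failures: int = 0) -> Tuple[bool, list]:
    """Execute every test case, retain the actual results, and compare them
    with the expected results; the suite passes if the number of non-passing
    cases stays within the agreed threshold."""
    failures = []
    for case in test_cases:
        actual = system_under_test(**case.input_data)
        if actual != case.expected_result:
            failures.append((case.name, case.expected_result, actual))
    return len(failures) <= allowed_failures, failures


# Example with a trivial stand-in for the system under test.
cases = [
    AcceptanceTestCase("adds small numbers", {"a": 2, "b": 3}, 5),
    AcceptanceTestCase("adds negatives", {"a": -1, "b": -1}, -2),
]
passed, failures = run_suite(cases, lambda a, b: a + b)
print("suite passed:", passed, failures)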

The anticipated result of a successful test execution is that the test cases are executed using the predetermined data, the actual results are recorded, the actual and expected results are compared, and the test results are determined.

The objective is to provide confidence that the developed product meets both the functional and non-functional requirements. The purpose of conducting acceptance testing is that, once it is completed and provided the acceptance criteria are met, the sponsors are expected to sign off on the product development or enhancement as satisfying the defined requirements (previously agreed between the business and the product provider/developer).

User acceptance testing

User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. [9] It is not system testing (ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e. tests that the user accepts the solution); software vendors often refer to this as "Beta testing".

This testing should be undertaken by the intended end user or a subject-matter expert (SME), preferably the owner or client of the solution under test, who provides a summary of the findings for confirmation to proceed after trial or review. In software development, UAT is one of the final stages of a project and often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios.[10]

It is important that the materials given to the tester be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake. [11]

The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production. [12]

User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases.

UAT should be executed against test scenarios. [13] [14] Test scenarios usually differ from System or Functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behaviour. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes. [15]
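
For illustration, a UAT scenario of this kind might be recorded as a high-level journey grouped into logical "days", deliberately without click-by-click steps (a Python sketch; the scenario content is invented for the example):

# Illustrative only: one way to record a UAT scenario as a user journey
# grouped into logical "days", each owned by a different actor.
uat_scenario = {
    "title": "Customer places and tracks an order",
    "days": [
        {
            "actor": "customer",
            "steps": [
                "Browse the catalogue and add an item to the basket",
                "Check out using a saved payment method",
            ],
        },
        {
            "actor": "back office",
            "steps": [
                "Confirm the order appears in the fulfilment queue",
                "Dispatch the order and record a tracking number",
            ],
        },
        {
            "actor": "customer",
            "steps": ["Track the order and confirm the delivery status updates"],
        },
    ],
}

for day_number, day in enumerate(uat_scenario["days"], start=1):
    print(f"Day {day_number} ({day['actor']}):")
    for step in day["steps"]:
        print("  -", step)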

In industry, a common UAT is a factory acceptance test (FAT). This test takes place before installation of the equipment. Most of the time testers not only check that the equipment meets the specification, but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test) and a final inspection. [16] The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.

Operational acceptance testing

Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment.[17]
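
As an illustration, operational readiness checks of this kind can be expressed as simple scripted verifications (a minimal Python sketch using only the standard library; the particular checks, paths and thresholds are invented for the example and are not part of any standard OAT procedure):

# Sketch of non-functional, operational checks: free disk space and the
# presence of a restorable backup. Thresholds and paths are illustrative.
import shutil
from pathlib import Path


def check_free_disk_space(path="/", minimum_free_gib=5):
    """Operational check: the host keeps a minimum amount of free disk space."""
    free_gib = shutil.disk_usage(path).free / 2**30
    return free_gib >= minimum_free_gib, f"free space: {free_gib:.1f} GiB"


def check_backup_present(backup_dir="/var/backups/app"):
    """Operational check: at least one backup artefact exists to restore from."""
    backups = list(Path(backup_dir).glob("*.dump")) if Path(backup_dir).exists() else []
    return len(backups) > 0, f"backups found: {len(backups)}"


checks = [check_free_disk_space, check_backup_present]
results = [(check.__name__, *check()) for check in checks]
ready = all(passed for _, passed, _ in results)
print("operationally ready:", ready)
for name, passed, detail in results:
    print(f"  {name}: {'PASS' if passed else 'FAIL'} ({detail})")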

Acceptance testing in extreme programming

Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase. [18]

The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration or the development team will report zero progress. [19]
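
For illustration, the following minimal Python sketch (using the standard unittest module) shows black-box acceptance tests for a single user story, exercised only through a public interface; the user story, the AccountService stand-in and its methods are all invented for the example:

# Black-box sketch: acceptance tests for one user story
# ("a registered user can reset their password"), exercised only through a
# hypothetical public interface, never through internal details.
import unittest


class AccountService:
    """Stand-in for the system under test; a real system would sit behind a
    comparable public interface."""

    def __init__(self):
        self._users = {"alice@example.com": "old-secret"}

    def request_password_reset(self, email):
        return email in self._users  # a real system would also send a token

    def set_new_password(self, email, new_password):
        if email not in self._users or len(new_password) < 8:
            return False
        self._users[email] = new_password
        return True


class PasswordResetStoryAcceptanceTests(unittest.TestCase):
    def setUp(self):
        self.service = AccountService()

    def test_known_user_can_request_reset(self):
        self.assertTrue(self.service.request_password_reset("alice@example.com"))

    def test_short_passwords_are_rejected(self):
        self.assertFalse(self.service.set_new_password("alice@example.com", "short"))

    def test_new_password_is_accepted(self):
        self.assertTrue(self.service.set_new_password("alice@example.com", "much-longer-secret"))


if __name__ == "__main__":
    unittest.main()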

Types of acceptance testing

Typical types of acceptance testing include the following:

User acceptance testing
This may include factory acceptance testing (FAT), i.e. the testing done by a vendor before the product or system is moved to its destination site, after which site acceptance testing (SAT) may be performed by the users at the site. [20]
Operational acceptance testing
Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures. [21]
Contract and regulation acceptance testing
In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards. [22]
Factory acceptance testing
Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether a component or system satisfies the requirements, normally including hardware as well as software. [23]
Alpha and beta testing
Alpha testing takes place at developers' sites, and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites, and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called "field testing". [24]

Acceptance criteria

According to the Project Management Institute, acceptance criteria are a "set of conditions that is required to be met before deliverables are accepted."[25] Requirements found in the acceptance criteria for a given component of the system are usually very detailed.[26]
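
For illustration, detailed acceptance criteria of this kind can be written down as explicit, checkable conditions (a minimal Python sketch; the deliverable attributes and thresholds are invented for the example):

# Sketch: acceptance criteria as explicit conditions evaluated against a
# deliverable. All attributes and thresholds are illustrative.
deliverable = {"p95_response_ms": 180, "error_rate": 0.002, "docs_included": True}

acceptance_criteria = [
    ("95th percentile response time is 200 ms or less",
     lambda d: d["p95_response_ms"] <= 200),
    ("error rate is below 0.5%",
     lambda d: d["error_rate"] < 0.005),
    ("user documentation is delivered",
     lambda d: d["docs_included"]),
]

for description, condition in acceptance_criteria:
    status = "met" if condition(deliverable) else "not met"
    print(f"{description}: {status}")

accepted = all(condition(deliverable) for _, condition in acceptance_criteria)
print("deliverable accepted:", accepted)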

List of acceptance-testing frameworks

See also

References

  1. "BPTS - Is Business process testing the best name / description". SFIA. Retrieved February 18, 2023.
  2. Black, Rex (August 2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing . Hoboken, NJ: Wiley. ISBN   978-0-470-40415-7.
  3. "acceptance criteria". Innolution, LLC. June 10, 2019.
  4. "Standard Glossary of Terms used in Software Testing, Version 3.2: All Terms" (PDF). ISTQB . Retrieved November 23, 2020.
  5. ISO/IEC/IEEE International Standard - Systems and software engineering. ISO/IEC/IEEE. 2010. pp. vol., no., pp.1–418.
  6. 1 2 3 ISO/IEC/IEEE 29119-1:2013 Software and Systems Engineering - Software Testing - Part 1: Concepts and Definitions. ISO. 2013. Retrieved October 14, 2014.
  7. ISO/IEC/IEEE 29119-4:2013 Software and Systems Engineering - Software Testing - Part 4: Test Techniques. ISO. 2013. Retrieved October 14, 2014.
  8. 1 2 ISO/IEC/IEEE 29119-2:2013 Software and Systems Engineering - Software Testing - Part 2: Test Processes. ISO. 2013. Retrieved May 21, 2014.
  9. Cimperman, Rob (2006). UAT Defined: A Guide to Practical User Acceptance Testing. Pearson Education. pp. Chapter 2. ISBN   9780132702621.
  10. Goethem, Brian; van Hambling, Pauline (2013). User acceptance testing : a step-by-step guide. BCS Learning & Development Limited. ISBN   9781780171678.
  11. "2.6: Systems Testing". Engineering LibreTexts. August 2, 2021. Retrieved February 18, 2023.
  12. Pusuluri, Nageshwar Rao (2006). Software Testing Concepts And Tools. Dreamtech Press. p. 62. ISBN   9788177227123.
  13. "Get Reliable Usability and Avoid Risk with These Testing Scenarios". Panaya. April 25, 2022. Retrieved May 11, 2022.
  14. Elazar, Eyal (April 23, 2018). "What is User Acceptance Testing (UAT) - The Full Process Explained". Panaya. Retrieved February 18, 2023.
  15. Wysocka, Emilia M.; Page, Matthew; Snowden, James; Simpson, T. Ian (December 15, 2022). "Comparison of rule- and ordinary differential equation-based dynamic model of DARPP-32 signalling network". PeerJ. 10. "Table 1: The specifications of the ODE and RB models can be broken down into elements, the number of which can be compared.". doi: 10.7717/peerj.14516 . ISSN   2167-8359. PMC   9760030 . PMID   36540795.
  16. "Factory Acceptance Test (FAT)". TÜV Rheinland. Archived from the original on February 4, 2013. Retrieved September 18, 2012.
  17. Vijay (February 2, 2018). "What is Acceptance Testing (A Complete Guide)". Software Testing Help. Retrieved February 18, 2023.
  18. "Introduction to Acceptance/Customer Tests as Requirements Artifacts". agilemodeling.com. Agile Modeling. Retrieved December 9, 2013.
  19. Wells, Don. "Acceptance Tests". Extremeprogramming.org. Retrieved September 20, 2011.
  20. Prasad, Durga (March 29, 2012). "The Difference Between a FAT and a SAT". Kneat.com. Archived from the original on June 16, 2017. Retrieved July 27, 2016.
  21. Turner, Paul (October 5, 2020). "Operational Readiness". Commissioning and Startup. Retrieved February 18, 2023.
  22. Brosnan, Adeline (January 12, 2021). "Acceptance Testing in Information Technology Contracts | LegalVision". LegalVision. Retrieved February 18, 2023.
  23. "ISTQB Standard glossary of terms used in Software Testing". Archived from the original on November 5, 2018. Retrieved March 15, 2019.
  24. Hamilton, Thomas (April 3, 2020). "Alpha Testing Vs Beta Testing – Difference Between Them". www.guru99.com. Retrieved February 18, 2023.
  25. Project Management Institute 2021, §Glossary Section 3. Definitions.
  26. Project Management Institute 2021, §2.6.2.1 Requirements.

Sources

Further reading