Results-based testing


Results-based testing is a business model for software testing in which companies pay for the defects that are detected rather than for the time spent on a project.


Description

"Results-Based Testing" (RBT) is an alternative pricing system for software testing which allows companies to pay for the bugs which are detected, instead of for time spent on a project. This was adopted in response to dissatisfaction which clients expressed toward the pricing structure employed by most testing companies, and has led to higher customer satisfaction and better accuracy in bug detection.

Results-based testing usually involves three elements:

  1. A scope of work
  2. A contractual SLA
  3. A pricing mechanism

RBT is normally used when part or all of the software testing process is outsourced to a third party. A core contractual SLA, together with a pricing mechanism, sets the exact payment made at each SLA level. The pricing mechanism may be a flexible rate for each SLA level or a penalty/reward scheme; in either case the goal is to give the testing supplier an incentive to meet the business targets (results) that have been set. RBT may (and should) also be used for internal testing teams, although a penalty/reward mechanism is harder to implement in that setting. RBT is also useful for establishing a framework for measured continuous improvement, in which results from previous periods serve as a baseline for the following period's targets.
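
As an illustration, the Python sketch below shows one way a penalty/reward pricing mechanism tied to SLA levels could be expressed. The base fee, the SLA bands, and the adjustment percentages are hypothetical assumptions, not figures from any actual RBT contract.

    # Hypothetical sketch of a penalty/reward pricing mechanism tied to SLA levels.
    # The base fee, bands, and adjustment percentages are illustrative assumptions.
    BASE_FEE = 100_000  # agreed baseline payment for the period (hypothetical)

    # (minimum defect-detection rate, payment adjustment) pairs, highest band first
    SLA_BANDS = [
        (0.95, 0.10),   # >= 95% detection: 10% reward on top of the base fee
        (0.90, 0.00),   # 90-95%: base fee only
        (0.80, -0.10),  # 80-90%: 10% penalty
        (0.00, -0.25),  # below 80%: 25% penalty
    ]

    def period_payment(detection_rate: float) -> float:
        """Return the payment owed to the testing supplier for the period."""
        for threshold, adjustment in SLA_BANDS:
            if detection_rate >= threshold:
                return BASE_FEE * (1 + adjustment)
        return float(BASE_FEE)  # unreachable with the bands above; safe default

    print(period_payment(0.92))  # 100000.0 -- falls in the 90-95% band

Under such a scheme the measured result determines the payment directly, which is what gives the testing supplier a financial stake in meeting the agreed targets.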

Usage

Several software testing companies use this approach, including QualiTest, which reports considerable success with the model.

According to QualiTest, the benefits of results-based testing stem from the measurement discipline the model requires.

When evaluating the level of testing, several key performance indicators (KPIs) should be measured. The focus should be on two main questions:

  1. What percentage of the defects should be found by testing?
  2. What is the cost of achieving that goal?

Most organizations fail to measure these two KPIs and are unable to provide accurate visibility into the quality and efficiency of testing.

To measure the percentage of defects found by testing (a test-coverage KPI, as opposed to the escaped-defects KPI), the organization should use the following process:

  1. Reporting of defects – each defect reported by the testing team should be documented in a central defect management system.
  2. All issues or support tickets raised by customers/users of the system should be documented in a centralized system. Usually the support or help-desk team has this information.
  3. Each ticket should be evaluated by the testing team (sometimes the support team filters the tickets and provides only the tickets that result from a defect).
  4. Each ticket related to a defect should have one of the following statuses:

Only defects in the last status (New Defect) are counted for this metric.
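
The triage step can be sketched in a few lines of Python. Only the "New Defect" status is named in the text; the other status labels and the ticket fields used here are assumptions made for illustration.

    # Sketch of the ticket-triage step: keep only tickets whose evaluation ended
    # in the "New Defect" status, since only those count toward the metric.
    # Status labels other than "New Defect" and the Ticket fields are assumptions.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Ticket:
        ticket_id: str
        status: str    # e.g. "Not a Defect", "Known Defect", "New Defect" (assumed labels)
        severity: int  # 1 (minor) .. 5 (critical)
        reported_on: date

    def new_defects(tickets: list[Ticket]) -> list[Ticket]:
        """Return the tickets that the testing team classified as new defects."""
        return [t for t in tickets if t.status == "New Defect"]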

The process above is essential if the organization wants to implement RBT and continually improve the efficiency and effectiveness of its testing process. Test coverage is measured by dividing the number of defects found by the testing provider by the total number of defects found, i.e. those found by the testing provider plus those found by the users of the system. Because critical defects matter more to the organization than less severe ones, each defect is weighted by its severity. For example, on a 1-5 scale, a critical defect (severity = 5) counts the same as five minor defects (severity = 1).

Only defects found within a certain period after the release of the system are counted (normally defined as 3-6 months). Once the data is available, the following formula is used to calculate the KPI value:

(Σ defects found by testing) / (Σ defects found by testing + Σ real defects found by users)
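
The following Python sketch applies this formula together with the severity weighting and time window described above; the six-month window, the field names, and the example figures are assumptions chosen for illustration rather than values prescribed by RBT.

    # Severity-weighted coverage KPI, following the formula above:
    #   (sum of severities of defects found by testing) /
    #   (that sum + severities of real defects found by users within the window)
    from datetime import date, timedelta

    def coverage_kpi(testing_defects, user_defects, release_date, window_days=180):
        cutoff = release_date + timedelta(days=window_days)  # e.g. a 6-month window
        tested = sum(d["severity"] for d in testing_defects)
        escaped = sum(d["severity"] for d in user_defects if d["found_on"] <= cutoff)
        return tested / (tested + escaped) if (tested + escaped) else 0.0

    # Example: 40 severity points found by testing, 10 found by users in the window
    testing = [{"severity": 5}] * 8
    users = [{"severity": 2, "found_on": date(2024, 3, 1)}] * 5
    print(coverage_kpi(testing, users, release_date=date(2024, 1, 15)))  # 0.8

A higher value means the testing provider caught a larger share of the severity-weighted defects before users did, which is the result that the SLA levels and pricing mechanism are tied to.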

Related Research Articles

Acceptance testing

In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests.

Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding failures, and verifying that the software product is fit for use.

A software bug is an error, flaw or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and fixing bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs, and since the 1950s, some computer systems have been designed to also deter, detect or auto-correct various computer bugs during operations.

A software company is a company whose primary products are various forms of software, software technology, distribution, and software product development. They make up the software industry.

Point of sale

The point of sale (POS) or point of purchase (POP) is the time and place where a retail transaction is completed. At the point of sale, the merchant calculates the amount owed by the customer, indicates that amount, may prepare an invoice for the customer, and indicates the options for the customer to make payment. It is also the point at which a customer makes a payment to the merchant in exchange for goods or after provision of a service. After receiving payment, the merchant may issue a receipt for the transaction, which is usually printed but can also be dispensed with or sent electronically.

A service-level agreement (SLA) is a commitment between a service provider and a client. Particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user. The most common component of an SLA is that the services should be provided to the customer as agreed upon in the contract. As an example, Internet service providers and telcos will commonly include service level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain language terms. In this case the SLA will typically have a technical definition in mean time between failures (MTBF), mean time to repair or mean time to recovery (MTTR); identifying which party is responsible for reporting faults or paying fees; responsibility for various data rates; throughput; jitter; or similar measurable details.

Performance indicator

A performance indicator or key performance indicator (KPI) is a type of performance measurement. KPIs evaluate the success of an organization or of a particular activity in which it engages.

In the context of software engineering, software quality refers to two related but distinct notions: functional quality, which reflects how well the software conforms to its functional requirements or specifications, and structural quality, which concerns non-functional attributes such as robustness and maintainability.

Lean software development is a translation of lean manufacturing principles and practices to the software development domain. Adapted from the Toyota Production System, it is emerging with the support of a pro-lean subculture within the Agile community. Lean offers a solid conceptual framework, values and principles, as well as good practices, derived from experience, that support agile organizations.

An issue tracking system is a computer software package that manages and maintains lists of issues. Issue tracking systems are generally used in collaborative settings—especially in large or distributed collaborations—but can also be employed by individuals as part of a time management or personal productivity regime. These systems often encompass resource allocation, time accounting, priority management, and oversight workflow in addition to implementing a centralized issue registry.

Software as a service is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. It is sometimes referred to as "on-demand software", and was formerly referred to as "software plus services" by Microsoft. SaaS applications are also known as on-demand software and Web-based/Web-hosted software.

Software project management is the art and science of planning and leading software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored and controlled.

Dashboard (business)

A dashboard is a type of graphical user interface which often provides at-a-glance views of key performance indicators (KPIs) relevant to a particular objective or business process. In other usage, "dashboard" is another name for "progress report" or "report" and considered a form of data visualization.

Reverse semantic traceability (RST) is a quality control method for verification improvement that helps to ensure high quality of artifacts by backward translation at each stage of the software development process.

There is considerable variety among software testing writers and consultants about what constitutes responsible software testing. Prominent members of the Context-Driven School of Testing consider much of the writing about software testing to be doctrine, mythology, and folklore. Some contend that this belief directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the Food and Drug Administration that promote them. The Context-Driven School's retort is that Lessons Learned in Software Testing includes one lesson supporting the use of IEEE 829 and another opposing it; that not all software testing occurs in a regulated environment and that practices appropriate for such environments would be ruinously expensive, unnecessary, and inappropriate for other contexts; and that in any case the FDA generally promotes the principle of the least burdensome approach.

Revenue assurance (RA), in telecommunication services, is the use of data quality and process improvement methods that improve profits, revenues and cash flows without influencing demand. This was defined by a TM Forum working group based on research documented in its Revenue Assurance Technical Overview. Practically, it can be defined as the process of ensuring that all products and services provided by the telecom service provider are billed as per the commercial agreement with customers, by ensuring network, billing and configuration integrity and accuracy across network platforms and systems. In many telecommunications service providers, revenue assurance is led by a dedicated revenue assurance department.

ITU-T Y.156sam Ethernet Service Activation Test Methodology is a draft recommendation under study by the ITU-T describing a new testing methodology adapted to the multiservice reality of packet-based networks.

ITU-T Y.1564 is an Ethernet service activation test methodology, which is the new ITU-T standard for turning up, installing and troubleshooting Ethernet-based services. It is the only standard test methodology that allows for complete validation of Ethernet service-level agreements (SLAs) in a single test.

KPI-driven code analysis is a method of analyzing software source code and source-code-related IT systems to gain insight into business-critical aspects of the development of a software system, such as team performance, time-to-market, risk management, failure prediction, and more.

Surrogation is a psychological phenomenon found in business practices whereby a measure of a construct of interest evolves to replace that construct. Research on performance measurement in management accounting identifies surrogation with "the tendency for managers to lose sight of the strategic construct(s) the measures are intended to represent, and subsequently act as though the measures are the constructs". An everyday example of surrogation is a manager tasked with increasing customer satisfaction who begins to believe that the customer satisfaction survey score actually is customer satisfaction.
