Software reliability is the probability that software will operate without causing a system failure over a specified operating time. Software does not fail due to wear-out; it fails due to faults in functionality, timing, sequencing, data handling, and exception handling. Software fails as a function of operating time rather than calendar time. More than 225 models have been developed since the early 1970s; however, many of them share similar, if not identical, assumptions. The models fall into two basic types: prediction models and estimation models.
1.0 Overview of Software Reliability Prediction Models
Prediction models are derived from historical data collected from real software projects. The user answers a list of questions that calibrate the historical data to yield a software reliability prediction. The accuracy of the prediction depends on how many parameters (questions) and datasets are in the model, how current the data is, and how confident the user is in the inputs. One of the earliest prediction models was Rome Laboratory TR-92-52, developed in 1987, last updated in 1992, and geared towards software in avionics systems. Because of the age of the model and its data it is no longer recommended, but it is the basis for several modern models such as the Shortcut model, the Full-scale model, and the Neufelder assessment model. There are also lookup tables for software defect density based on capability maturity level or application type. These are very simple models (a sketch of how such a lookup works follows the table below) but are generally not as accurate as the assessment-based models. [1]
| Model | Number of inputs | Industry supported | Effort required to use the model | Relative accuracy | Year developed / last updated |
|---|---|---|---|---|---|
| Industry tables | 1 | Several | Quick | Varies | 1992, 2015 |
| CMMI® tables | 1 | Any | Quick | Low at low CMMI® levels | 1997, 2012 |
| Shortcut model | 23 | Any | Moderate | Medium | 1993, 2012 |
| Full-scale model | 94-299 | Any | Detailed | Medium-high | 1993, 2012 |
| Metric-based models | Varies | Any | Varies | Varies | NA |
| Historical data | A minimum of 2 | Any | Detailed | High | NA |
| Rayleigh model | 3 | Any | Moderate | Medium | NA |
| RADC TR-92-52 | 43-222 | Aircraft | Detailed | Obsolete | 1978, 1992 |
| Neufelder model | 156 | Any | Detailed | Medium to high | 2015 |
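As a minimal illustration of how a lookup-table model works, the sketch below selects a defect density by CMMI® level and multiplies it by the estimated code size. The densities and the `predict_defects` helper are hypothetical placeholders for illustration, not values from any published table.

```python
# Minimal sketch of a lookup-table defect prediction.
# Published models calibrate these densities from project data;
# the numbers below are invented placeholders.

DEFECTS_PER_KSLOC = {  # hypothetical defect densities by CMMI(R) level
    1: 7.0,
    2: 5.0,
    3: 3.5,
    4: 2.0,
    5: 1.0,
}

def predict_defects(cmmi_level: int, ksloc: float) -> float:
    """Predict total latent defects from maturity level and size in KSLOC."""
    return DEFECTS_PER_KSLOC[cmmi_level] * ksloc

if __name__ == "__main__":
    # A 100 KSLOC project at CMMI level 3 (placeholder density 3.5/KSLOC)
    print(predict_defects(3, 100.0))  # -> 350.0 predicted latent defects
```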
2.0 Overview of Software Reliability Growth (Estimation) Models
Software reliability growth (or estimation) models use failure data from testing to forecast the failure rate or MTBF into the future. These models depend on assumptions about the fault rate during testing, which can be increasing, peaking, decreasing, or some combination of decreasing and increasing. Some models assume that the number of inherent defects is finite and fixed, while others assume it is infinite. Some models require substantial effort to estimate their parameters, while others have only a few parameters to estimate. Some models require the exact time between each failure found in testing, while others need only the number of failures found during a given time interval, such as a day. (A minimal fitting sketch follows the table below.)
| Model name | Inherent defect count | Effort required | Requires exact time between failures |
|---|---|---|---|
| **Increasing fault rate** | | | |
| Weibull | Finite/not fixed | High | NA |
| **Peak** | | | |
| Shooman Constant Defect Removal Rate Model | Finite/fixed | Low | Yes |
| **Decreasing fault rate** | | | |
| Shooman Constant Defect Removal Rate Model | Finite/fixed | Low | Yes |
| **Linearly decreasing** | | | |
| General exponential models, including Goel-Okumoto (exponential) [2], Musa Basic Model, and Jelinski-Moranda | Finite/fixed | Medium | Yes |
| Shooman Linearly Decreasing Model | Finite/fixed | Low | Yes |
| Duane | Infinite | Medium | No |
| **Non-linearly decreasing** | | | |
| Musa-Okumoto (logarithmic) | Infinite | Low | Yes |
| Shooman Exponentially Decreasing Model | Finite/fixed | High | Yes |
| Log-logistic | Finite/fixed | High | Yes |
| Geometric | Infinite | High | Yes |
| **Increasing and then decreasing** | | | |
| Yamada (Delayed) S-shaped | Infinite | High | Yes |
| Weibull | Finite/not fixed | High | NA |
Software reliability tools implementing some of these models include CASRE (Computer-Aided Software Reliability Estimation) and the open-source SFRAT (Software Failure and Reliability Assessment Tool).
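As a concrete illustration, the sketch below fits the Goel-Okumoto exponential model, whose mean value function is m(t) = a(1 - e^(-bt)), to made-up weekly failure counts using least squares. Dedicated tools typically use maximum likelihood estimation instead; the data and starting guesses here are purely illustrative.

```python
# Sketch: fitting the Goel-Okumoto NHPP model m(t) = a * (1 - exp(-b t))
# to cumulative failure counts by least squares. The failure data below
# are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def mean_failures(t, a, b):
    """Goel-Okumoto mean value function: expected failures by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical test data: cumulative failures at the end of each week
weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([12, 21, 28, 33, 37, 40, 42, 43], dtype=float)

(a_hat, b_hat), _ = curve_fit(mean_failures, weeks, cum_failures, p0=[50.0, 0.3])

print(f"estimated inherent defects a = {a_hat:.1f}")
print(f"remaining after week 8      = {a_hat - mean_failures(8, a_hat, b_hat):.1f}")
print(f"failure intensity at week 8 = {a_hat * b_hat * np.exp(-b_hat * 8):.2f}/week")
```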
Psychological statistics is the application of formulas, theorems, numbers, and laws to psychology. Statistical methods for psychology include the development and application of statistical theory and methods for modeling psychological data. These methods include psychometrics, factor analysis, experimental design, and Bayesian statistics.
Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, static analysis of code and artifacts and dynamic execution of the program with the intent of finding failures.
Software development is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development involves writing and maintaining the source code, but in a broader sense it includes all processes from the conception of the desired software through its final manifestation, typically in a planned and structured process that often overlaps with software engineering. Software development also includes research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
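A minimal sketch of the idea, assuming a hypothetical `checkout_total` function as the code under test: the test driver is separate software that executes the code and compares actual with predicted outcomes.

```python
# Minimal sketch of test automation: separate test code drives the code
# under test and compares actual with predicted outcomes.
# `checkout_total` is a hypothetical example, not from any real system.
import unittest

def checkout_total(prices, tax_rate):
    """Code under test: total a cart and apply a tax rate."""
    return round(sum(prices) * (1.0 + tax_rate), 2)

class CheckoutTests(unittest.TestCase):
    def test_total_with_tax(self):
        # predicted outcome computed independently of the implementation
        self.assertEqual(checkout_total([10.00, 5.50], 0.08), 16.74)

    def test_empty_cart(self):
        self.assertEqual(checkout_total([], 0.08), 0.0)

if __name__ == "__main__":
    unittest.main()  # the runner automates execution and comparison
```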
In the context of software engineering, software quality refers to two related but distinct notions: functional quality, which reflects how well the software complies with or conforms to a given design based on its functional requirements or specifications, and structural quality, which refers to how well the software meets non-functional requirements, such as robustness or maintainability, that support the delivery of its functional requirements.
Failure rate is the frequency with which an engineered system or component fails, expressed in failures per unit of time. It is usually denoted by the Greek letter λ (lambda) and is often used in reliability engineering.
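For the common special case of a constant failure rate, the standard textbook relationships between failure rate, reliability over a mission time t, and MTBF are:

```latex
% Constant failure rate (exponential model)
\lambda(t) = \lambda, \qquad
R(t) = e^{-\lambda t}, \qquad
\mathrm{MTBF} = \frac{1}{\lambda}
```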
Prognostics is an engineering discipline focused on predicting the time at which a system or a component will no longer perform its intended function. This lack of performance is most often a failure beyond which the system can no longer be used to meet desired performance. The predicted time then becomes the remaining useful life (RUL), which is an important concept in decision making for contingency mitigation. Prognostics predicts the future performance of a component by assessing the extent of deviation or degradation of a system from its expected normal operating conditions. The science of prognostics is based on the analysis of failure modes, detection of early signs of wear and aging, and fault conditions. An effective prognostics solution is implemented when there is sound knowledge of the failure mechanisms that are likely to cause the degradations leading to eventual failures in the system. It is therefore necessary to have initial information on the possible failures in a product. Such knowledge is important to identify the system parameters that are to be monitored. A potential use for prognostics is in condition-based maintenance. The discipline that links studies of failure mechanisms to system lifecycle management is often referred to as prognostics and health management (PHM), sometimes also system health management (SHM) or—in transportation applications—vehicle health management (VHM) or engine health management (EHM). Technical approaches to building models in prognostics can be categorized broadly into data-driven approaches, model-based approaches, and hybrid approaches.
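A minimal sketch of the data-driven approach, assuming invented condition-monitoring readings and an invented failure threshold: fit a degradation trend and extrapolate to the threshold to estimate remaining useful life.

```python
# Sketch of a data-driven prognostic: fit a linear degradation trend to
# hypothetical condition-monitoring data and extrapolate to a failure
# threshold to estimate remaining useful life (RUL). Real PHM systems
# use far richer models; the data and threshold here are invented.
import numpy as np

hours = np.array([0, 100, 200, 300, 400, 500], dtype=float)
wear = np.array([0.02, 0.09, 0.15, 0.23, 0.30, 0.37])  # degradation metric
FAILURE_THRESHOLD = 0.80  # hypothetical level at which function is lost

slope, intercept = np.polyfit(hours, wear, 1)  # linear degradation model
hours_at_failure = (FAILURE_THRESHOLD - intercept) / slope
rul = hours_at_failure - hours[-1]

print(f"projected failure at {hours_at_failure:.0f} h; RUL ~ {rul:.0f} h")
```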
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
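The standard steady-state relationship between these notions expresses availability in terms of mean time between failures (MTBF) and mean time to repair (MTTR):

```latex
% Steady-state (inherent) availability
A = \frac{\text{uptime}}{\text{uptime} + \text{downtime}}
  = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}
```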
Video quality is a characteristic of a video passed through a video transmission or processing system that describes perceived video degradation. Video processing systems may introduce some amount of distortion or artifacts in the video signal that negatively impact the user's perception of the system. For many stakeholders in video production and distribution, ensuring video quality is an important task.
Software assurance (SwA) is a critical process in software development that ensures the reliability, safety, and security of software products. It involves a variety of activities, including requirements analysis, design reviews, code inspections, testing, and formal verification. One crucial component of software assurance is secure coding practices, which follow industry-accepted standards and best practices, such as those outlined by the Software Engineering Institute (SEI) in their CERT Secure Coding Standards (SCS).
A hard disk drive failure occurs when a hard disk drive malfunctions and the stored information cannot be accessed with a properly configured computer.
In computer science, fault injection is a testing technique for understanding how computing systems behave when stressed in unusual ways. This can be achieved using physical- or software-based means, or using a hybrid approach. Widely studied physical fault injections include the application of high voltages, extreme temperatures and electromagnetic pulses on electronic components, such as computer memory and central processing units. By exposing components to conditions beyond their intended operating limits, computing systems can be coerced into mis-executing instructions and corrupting critical data.
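A minimal sketch of software-based fault injection, assuming a hypothetical `fetch_record` function: a decorator randomly replaces calls with a raised error so that callers' error-handling paths are exercised.

```python
# Sketch of software-based fault injection: a wrapper that randomly
# raises an error in place of the wrapped call, exercising the caller's
# error handling. `fetch_record` is a hypothetical function under test.
import functools
import random

def inject_faults(rate, exc=IOError):
    """Decorator that fails a call with probability `rate`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < rate:
                raise exc(f"injected fault in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_faults(rate=0.3)
def fetch_record(key):
    return {"key": key, "value": 42}  # stand-in for a real lookup

# Exercise the caller's error handling under injected faults.
for attempt in range(5):
    try:
        print(fetch_record("alpha"))
    except IOError as err:
        print(f"handled: {err}")
```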
Stress testing is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation. Stress testing is particularly important for "mission critical" software, but is used for all types of software. Stress tests commonly put a greater emphasis on robustness, availability, and error handling under a heavy load than on what would be considered correct behavior under normal circumstances.
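A minimal sketch, assuming a hypothetical `handle_request` function: submit far more concurrent work than normal operation would and count failures rather than aborting, so robustness under load can be assessed.

```python
# Sketch of a stress test: hammer a function with many concurrent
# requests and check that it degrades gracefully rather than crashing.
# `handle_request` is a hypothetical stand-in for the system under test.
from concurrent.futures import ThreadPoolExecutor, as_completed

def handle_request(i):
    return sum(range(10_000))  # placeholder work

errors = 0
with ThreadPoolExecutor(max_workers=64) as pool:  # well beyond normal load
    futures = [pool.submit(handle_request, i) for i in range(5_000)]
    for f in as_completed(futures):
        try:
            f.result()
        except Exception:
            errors += 1  # count failures instead of aborting the run

print(f"completed with {errors} errors out of 5000 requests")
```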
In software engineering, software aging is the tendency for software to fail or cause a system failure after running continuously for a certain time, or because of ongoing changes in systems surrounding the software. Software aging has several causes, including the inability of old software to adapt to changing needs or changing technology platforms, and the tendency of software patches to introduce further errors. As the software gets older it becomes less well suited to its purpose and will eventually stop functioning as it should. Rebooting or reinstalling the software can act as a short-term fix. A proactive fault-management method to deal with software aging is software rejuvenation, which can be classified as an environment diversity technique, usually implemented through software rejuvenation agents (SRA).
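A minimal sketch of a rejuvenation agent, assuming a hypothetical `worker.py` long-running process and an arbitrary daily schedule: the agent restarts the worker on a timer to clear accumulated aging effects such as leaked memory or fragmented state.

```python
# Sketch of a simple software rejuvenation agent: restart a worker
# process on a fixed schedule. The command and interval are invented.
import subprocess

REJUVENATION_INTERVAL = 24 * 3600       # hypothetical: restart daily
WORKER_CMD = ["python", "worker.py"]    # hypothetical long-running worker

while True:
    proc = subprocess.Popen(WORKER_CMD)
    try:
        proc.wait(timeout=REJUVENATION_INTERVAL)
        # worker exited on its own; the loop restarts it
    except subprocess.TimeoutExpired:
        proc.terminate()  # scheduled rejuvenation: stop and restart
        proc.wait()
```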
Psychometric software is software that is used for psychometric analysis of data from tests, questionnaires, or inventories reflecting latent psychoeducational variables. While some psychometric analyses can be performed with standard statistical software like SPSS, most analyses require specialized tools.
In computer programming and software development, debugging is the process of finding and resolving bugs within computer programs, software, or systems.
Accelerated life testing is the process of testing a product by subjecting it to conditions in excess of its normal service parameters in an effort to uncover faults and potential modes of failure in a short amount of time. By analyzing the product's response to such tests, engineers can make predictions about the service life and maintenance intervals of a product.
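For temperature stress, a widely used way to translate accelerated test time into service time is the Arrhenius acceleration factor, where E_a is the activation energy of the failure mechanism, k is Boltzmann's constant, and T_use and T_stress are absolute temperatures:

```latex
% Arrhenius acceleration factor for temperature-accelerated life testing
AF = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\text{use}}}
      - \frac{1}{T_{\text{stress}}}\right)\right]
```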
Physics of failure is a technique under the practice of reliability design that leverages the knowledge and understanding of the processes and mechanisms that induce failure to predict reliability and improve product performance.
Software reliability testing is a field of software testing that relates to testing a software system's ability to function, given environmental conditions, for a particular amount of time. Software reliability testing helps discover many problems in the software design and functionality.
Design of robust and reliable networks and network services relies on an understanding of the traffic characteristics of the network. Throughout history, different models of network traffic have been developed and used for evaluating existing and proposed networks and services.