Robustness testing

Robustness testing is any quality assurance methodology focused on testing the robustness of software. Robustness testing has also been used to describe the process of verifying the robustness (i.e. correctness) of test cases in a test process. ANSI and IEEE have defined robustness as the degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions. [1]
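
To make the definition concrete, the following minimal sketch (in Python, with a hypothetical parse_port function and hand-picked invalid inputs, none of which come from the cited standard) shows what a basic robustness test can look like: invalid inputs should be rejected gracefully rather than trigger unexpected failures.

    # Minimal sketch: robustness-testing a hypothetical parse_port() function by
    # feeding it invalid and boundary inputs. Graceful rejection (ValueError) is
    # the desired outcome; any other exception or silent acceptance is a defect.
    def parse_port(text):
        """Parse a TCP/UDP port number from text; raise ValueError on bad input."""
        port = int(text)                      # raises ValueError for non-numeric text
        if not 0 < port < 65536:
            raise ValueError(f"port out of range: {port}")
        return port

    INVALID_INPUTS = ["", "abc", "-1", "0", "65536", "99999999999999", None]

    def test_robustness():
        for raw in INVALID_INPUTS:
            try:
                parse_port(raw)
            except ValueError:
                continue                      # graceful rejection: the desired outcome
            except Exception as exc:          # e.g. None triggers TypeError: a robustness defect
                print(f"UNEXPECTED {type(exc).__name__} for input {raw!r}")
            else:
                print(f"ACCEPTED invalid input {raw!r}")

    if __name__ == "__main__":
        test_robustness()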

History

The term "robustness testing" was first used by the Ballista project at Carnegie Mellon University. They performed testing of operating systems for dependability based on the data types of POSIX API, producing complete system crashes in some systems. [2] The term was also used by OUSPG and VTT researchers taking part in the PROTOS project in the context of software security testing. [3] Eventually the term fuzzing (which security people use for mostly non-intelligent and random robustness testing) extended to also cover model-based robustness testing.

Methods

Fault injection

Fault injection is a testing method that can be used to check the robustness of systems. During the process, test engineers inject faults into the system and observe how resilient it is to them. [4] Test engineers can also develop efficient methods that guide fault injection toward the critical faults in a system. [5] [6]
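
A minimal software-based fault-injection sketch (all names, fault types, and rates are hypothetical and purely illustrative) might look as follows: a dependency is wrapped so that it occasionally times out or returns corrupted data, and the caller's resilience is observed.

    # Minimal software fault-injection sketch (all names hypothetical): a dependency
    # is wrapped so that injected faults (timeouts, missing data) occur with a given
    # probability, and the calling system's resilience is observed.
    import random

    def fetch_record(key):                    # stand-in for a real dependency
        return {"key": key, "value": key * 2}

    def inject_faults(func, fault_rate=0.3, seed=1234):
        rng = random.Random(seed)             # seeded so the fault pattern is repeatable
        def faulty(*args, **kwargs):
            roll = rng.random()
            if roll < fault_rate / 2:
                raise TimeoutError("injected timeout")   # fault: dependency hangs
            if roll < fault_rate:
                return None                              # fault: missing/corrupted reply
            return func(*args, **kwargs)
        return faulty

    def system_under_test(fetch):
        """Caller whose robustness is observed: it must survive faulty fetches."""
        served = failed = 0
        for key in range(1000):
            try:
                record = fetch(key)
                if record is None:
                    raise ValueError("empty record")
                served += 1
            except (TimeoutError, ValueError):
                failed += 1                   # degraded, but the system did not crash
        return served, failed

    if __name__ == "__main__":
        served, failed = system_under_test(inject_faults(fetch_record))
        print(f"served={served} degraded={failed}")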

Related Research Articles

Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function.

In computer science, formal methods are mathematically rigorous techniques for the specification, development, and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.

Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect to integrated circuits (ICs).

Computer-aided engineering (CAE) is the broad usage of computer software to aid in engineering analysis tasks. It includes finite element analysis (FEA), computational fluid dynamics (CFD), multibody dynamics (MBD), durability and optimization. It is included with computer-aided design (CAD) and computer-aided manufacturing (CAM) in the collective abbreviation "CAx".

Process engineering is the understanding and application of the fundamental principles and laws of nature that allow humans to transform raw material and energy into products that are useful to society, at an industrial level. By taking advantage of the driving forces of nature such as pressure, temperature and concentration gradients, as well as the law of conservation of mass, process engineers can develop methods to synthesize and purify large quantities of desired chemical products. Process engineering focuses on the design, operation, control, optimization and intensification of chemical, physical, and biological processes. Process engineering encompasses a vast range of industries, such as agriculture, automotive, biotechnical, chemical, food, material development, mining, nuclear, petrochemical, pharmaceutical, and software development. The application of systematic computer-based methods to process engineering is "process systems engineering".

In systems engineering, dependability is a measure of a system's availability, reliability, maintainability, and in some cases, other characteristics such as durability, safety and security. In real-time computing, dependability is the ability to provide services that can be trusted within a time-period. The service guarantees must hold even when the system is subject to attacks or natural failures.

In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification is: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" On the other hand, software validation is: "Was X what we should have built? Does X meet the high-level requirements?"

System testing is testing conducted on a complete integrated system to evaluate the system's compliance with its specified requirements.

A Byzantine fault is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, in order to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.

Fault tolerance is the property that enables a system to continue operating properly in the event of one or more faults within some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability to maintain functionality when portions of a system break down is referred to as graceful degradation.
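
As a toy illustration of graceful degradation (all names hypothetical), a component that cannot reach its live data source can fall back to a stale cached value instead of failing the whole request:

    # Toy illustration of graceful degradation (hypothetical names): when the live
    # price lookup fails, the system falls back to a stale cached value and labels
    # it as such, instead of failing the whole request.
    _cache = {"EUR/USD": 1.08}                # last known good values

    def live_price(pair):
        raise ConnectionError("price feed unreachable")   # simulate a failed component

    def quote(pair):
        try:
            return live_price(pair), "live"
        except ConnectionError:
            if pair in _cache:
                return _cache[pair], "stale"  # degraded but still useful answer
            raise                             # no fallback available: propagate the error

    if __name__ == "__main__":
        print(quote("EUR/USD"))               # -> (1.08, 'stale')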

A system architecture is the conceptual model that defines the structure, behavior, and other views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about the structures and behaviors of the system.

Fault coverage refers to the percentage of some type of fault that can be detected during the test of any engineered system. High fault coverage is particularly valuable during manufacturing test, and techniques such as Design For Test (DFT) and automatic test pattern generation are used to increase it.

In computer science, fault injection is a testing technique for understanding how computing systems behave when stressed in unusual ways. This can be achieved using physical- or software-based means, or using a hybrid approach. Widely studied physical fault injections include the application of high voltages, extreme temperatures and electromagnetic pulses on electronic components, such as computer memory and central processing units. By exposing components to conditions beyond their intended operating limits, computing systems can be coerced into mis-executing instructions and corrupting critical data.

Stress testing is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation. Stress testing is particularly important for "mission critical" software, but is used for all types of software. Stress tests commonly put a greater emphasis on robustness, availability, and error handling under a heavy load than on what would be considered correct behavior under normal circumstances.
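
A simple stress-test sketch (hypothetical handler and simulated overload; not a production load-testing tool) hammers a component with many concurrent requests and records whether it keeps failing gracefully under load:

    # Simple stress-test sketch (hypothetical handler, simulated overload): many
    # worker threads hammer the handler and the run records whether it keeps
    # failing gracefully under load instead of hanging or corrupting shared state.
    import queue
    import threading

    def handler(n):                           # stand-in for the component under stress
        if n % 97 == 0:
            raise RuntimeError("resource exhausted")   # simulated overload error
        return n * n

    def stress(workers=50, requests=5000):
        jobs = queue.Queue()
        results = {"ok": 0, "error": 0}
        lock = threading.Lock()
        for n in range(requests):
            jobs.put(n)

        def worker():
            while True:
                try:
                    n = jobs.get_nowait()
                except queue.Empty:
                    return
                try:
                    handler(n)
                    key = "ok"
                except RuntimeError:
                    key = "error"             # graceful failure under load
                with lock:
                    results[key] += 1

        threads = [threading.Thread(target=worker) for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results

    if __name__ == "__main__":
        print(stress())                       # {'ok': 4948, 'error': 52}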

Verification and validation are independent procedures that are used together for checking that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose. These are critical components of a quality management system such as ISO 9000. The words "verification" and "validation" are sometimes preceded with "independent", indicating that the verification and validation is to be performed by a disinterested third party. "Independent verification and validation" can be abbreviated as "IV&V".

The Oulu University Secure Programming Group (OUSPG) is a research group at the University of Oulu that studies, evaluates and develops methods of implementing and testing application and system software in order to prevent, discover and eliminate implementation level security vulnerabilities in a pro-active fashion. The focus is on implementation level security issues and software security testing.

In computer science, robustness is the ability of a computer system to cope with errors during execution and with erroneous input. Robustness can encompass many areas of computer science, such as robust programming, robust machine learning, and Robust Security Network. Techniques such as fuzz testing are essential to showing robustness, since this type of testing involves invalid or unexpected inputs. Alternatively, fault injection can be used to test robustness. Various commercial products perform robustness testing of software.
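
A tiny fuzzing loop in the spirit described here (the parser and its accepted exception types are hypothetical) feeds random byte strings to a target and flags only unexpected exception types as robustness failures:

    # Tiny fuzzing loop in the spirit described above (hypothetical parser): random
    # byte strings are fed to the parser and only unexpected exception types are
    # counted as robustness failures; documented rejections are acceptable.
    import random

    def parse_header(data):                   # hypothetical parser under test
        if not data.startswith(b"HDR"):
            raise ValueError("bad magic")
        return data[3:].decode("ascii")       # may raise UnicodeDecodeError

    def fuzz(iterations=10_000, seed=7):
        rng = random.Random(seed)
        crashes = 0
        for _ in range(iterations):
            data = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
            try:
                parse_header(data)
            except (ValueError, UnicodeDecodeError):
                pass                          # documented, graceful rejections
            except Exception as exc:
                crashes += 1
                print(f"unexpected {type(exc).__name__} on input {data!r}")
        print(f"{crashes} unexpected failures in {iterations} runs")

    if __name__ == "__main__":
        fuzz()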

Chaos engineering is the discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production.

Reliability verification or reliability testing is a method for evaluating the reliability of a product in all of its intended environments, such as expected use, transportation, or storage, over its specified lifespan. The product is exposed to natural or artificial environmental conditions so that its performance under the conditions of actual use, transportation, and storage can be evaluated, and so that the degree and mechanism of influence of environmental factors can be analyzed and studied. Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature changes in the climatic environment, accelerating the product's reaction to its use environment and verifying whether it reaches the expected quality established in research and development, design, and manufacturing.

References

  1. "Standard Glossary of Software Engineering Terminology (ANSI)". The Institute of Electrical and Electronics Engineers Inc. 1991.
  2. Kropp, Koopman, Siewiorek. 1998. Automated Robustness Testing of Off-the-Shelf Software Components. Proceedings of FTCS'98. http://www.ece.cmu.edu/~koopman/ballista/ftcs98/ftcs98.pdf
  3. Kaksonen, Rauli. 2001. A Functional Method for Assessing Protocol Implementation Security (Licentiate thesis). Espoo. Technical Research Centre of Finland, VTT Publications 448. 128 p. + app. 15 p. ISBN 951-38-5873-1 (soft back ed.), ISBN 951-38-5874-X (on-line ed.). https://www.ee.oulu.fi/research/ouspg/PROTOS_VTT2001-functional
  4. Moradi, Mehrdad; Van Acker, Bert; Vanherpen, Ken; Denil, Joachim (2019). Chamberlain, Roger; Taha, Walid; Törngren, Martin (eds.). "Model-Implemented Hybrid Fault Injection for Simulink (Tool Demonstrations)". Cyber Physical Systems. Model-Based Design. Lecture Notes in Computer Science. Cham: Springer International Publishing. 11615: 71–90. doi:10.1007/978-3-030-23703-5_4. ISBN 978-3-030-23703-5. S2CID 195769468.
  5. "Optimizing fault injection in FMI co-simulation through sensitivity partitioning | Proceedings of the 2019 Summer Simulation Conference". dl.acm.org. Retrieved 2020-06-15.
  6. Moradi, Mehrdad, Bentley James Oakes, Mustafa Saraoglu, Andrey Morozov, Klaus Janschek, and Joachim Denil. "Exploring Fault Parameter Space Using Reinforcement Learning-based Fault Injection." (2020).