In computer science, robustness is the ability of a computer system to cope with errors during execution [1] [2] and to cope with erroneous input. [2] Robustness can encompass many areas of computer science, such as robust programming, robust machine learning, and Robust Security Network. Techniques such as fuzz testing are essential to showing robustness, since this type of testing involves invalid or unexpected inputs. Alternatively, fault injection can be used to test robustness. Various commercial products perform robustness testing of software. [3]
In general, building robust systems that encompass every point of possible failure is difficult because of the vast quantity of possible inputs and input combinations. [4] Since testing all inputs and input combinations would require too much time, developers cannot run through all cases exhaustively. Instead, they try to generalize such cases. [5] For example, imagine inputting some integer values. Some selected inputs might consist of a negative number, zero, and a positive number. When using these numbers to test software in this way, the developer generalizes the set of all integers into three representative cases. This is a more efficient and manageable method, but it is more prone to failure. Generalizing test cases is just one technique for dealing with failure, specifically failure due to invalid user input. Systems may also fail for other reasons, such as disconnecting from a network.
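As a concrete sketch of this kind of generalization (the function and values below are invented for illustration), a developer might test one representative from each class rather than every possible integer:

```python
def classify(n: int) -> str:
    """Toy function under test: labels an integer's sign."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"

# Instead of testing every possible integer, generalize the input
# space into three equivalence classes and test one representative
# from each. This is efficient but, as noted above, can miss
# failures that only occur for untested values.
representatives = {-7: "negative", 0: "zero", 42: "positive"}

for value, expected in representatives.items():
    assert classify(value) == expected, f"failed for {value}"
print("All representative cases passed.")
```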
Regardless, complex systems should still handle any errors encountered gracefully. There are many examples of such successful systems. Some of the most robust systems are evolvable and can be easily adapted to new situations. [4]
Programs and software are tools focused on a very specific task, and thus are not generalized and flexible. [4] However, observations of systems such as the internet or biological systems demonstrate adaptation to their environments. One of the ways biological systems adapt to environments is through redundancy. [4] Many organs in humans are redundant; the kidney is one such example. Humans generally need only one kidney, but having a second kidney allows room for failure. The same principle can be applied to software, though there are some challenges. When applying the principle of redundancy to computer science, blindly adding code is not advisable: it introduces more errors, makes the system more complex, and renders it harder to understand. [6] Code that does not reinforce the existing code is unwanted. The new code must instead possess equivalent functionality, so that if a function is broken, another providing the same function can replace it, using manual or automated software diversity. To do so, the new code must know how and when to accommodate the failure point. [4] This means more logic must be added to the system. But as a system gains more logic and components and increases in size, it becomes more complex. Thus, making a system more redundant also makes it more complex, and developers must consider balancing redundancy with complexity.
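A minimal sketch of redundancy through software diversity, assuming two independently written implementations of the same function (both invented here): if the primary implementation fails at a known failure point, a functionally equivalent backup takes over.

```python
def mean_primary(xs):
    # Primary implementation; fails on empty input.
    return sum(xs) / len(xs)

def mean_backup(xs):
    # Diverse, independently written equivalent: an incremental mean
    # that defines the empty case instead of dividing by zero.
    avg = 0.0
    for i, x in enumerate(xs, start=1):
        avg += (x - avg) / i
    return avg

def robust_mean(xs):
    # The extra logic that knows how and when to accommodate the
    # failure point -- the added complexity the text describes.
    try:
        return mean_primary(xs)
    except ZeroDivisionError:
        return mean_backup(xs)

print(robust_mean([1, 2, 3]))  # 2.0 via the primary path
print(robust_mean([]))         # 0.0 via the redundant path
```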
Currently, computer science practices do not focus on building robust systems. [4] Rather, they tend to focus on scalability and efficiency. One of the main reasons robustness is not a focus today is that it is hard to achieve in a general way. [4]
Robust programming is a style of programming that focuses on handling unexpected termination and unexpected actions. [7] It requires code to handle these terminations and actions gracefully by displaying accurate and unambiguous error messages. These error messages allow the user to more easily debug the program.
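A small sketch in this style (the function and its limits are invented for illustration): rather than crashing or printing a vague message, the code reports exactly what was wrong with the input so the user can more easily debug it.

```python
def parse_age(text: str) -> int:
    """Parse an age, failing with an accurate, unambiguous message."""
    try:
        age = int(text)
    except ValueError:
        raise ValueError(
            f"expected a whole number for age, got {text!r}"
        ) from None
    if not 0 <= age <= 150:
        raise ValueError(f"age must be between 0 and 150, got {age}")
    return age

for raw in ["42", "forty-two", "-3"]:
    try:
        print(parse_age(raw))
    except ValueError as err:
        # The error message states precisely what was wrong.
        print(f"invalid input: {err}")
```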
Robust machine learning typically refers to the robustness of machine learning algorithms. For a machine learning algorithm to be considered robust, either the testing error has to be consistent with the training error, or the performance has to be stable after adding some noise to the dataset. [8] Recently, in step with their rise in popularity, there has been increasing interest in the robustness of neural networks, particularly because of their vulnerability to adversarial attacks. [9]
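As a rough illustration of the second criterion, stability under noise, the sketch below assumes scikit-learn and NumPy are available: a classifier is scored on clean test data and again after Gaussian noise is added, and a robust model should show only a small gap between the two figures.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on the clean test set.
clean_acc = model.score(X_test, y_test)

# Accuracy after perturbing the inputs with Gaussian noise; a robust
# model's performance should stay close to the clean figure.
rng = np.random.default_rng(0)
noisy_acc = model.score(X_test + rng.normal(0, 0.1, X_test.shape), y_test)

print(f"clean: {clean_acc:.3f}, noisy: {noisy_acc:.3f}")
```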
Robust network design is the study of network design in the face of variable or uncertain demands. [10] In this sense, robustness in network design is as broad as robustness in software design, because of the vast range of possible changes and inputs.
There exist algorithms that tolerate errors in the input. [11]
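One simple illustration of such tolerance (chosen here as an example, not taken from the cited work) is estimating the center of a dataset with the median rather than the mean, so that a single corrupted input value does not distort the result:

```python
import statistics

readings = [10.1, 9.9, 10.0, 10.2, 9.8]
corrupted = readings + [9999.0]  # one erroneous input value

# The mean is dragged far off by the single bad value...
print(statistics.mean(corrupted))    # ~1674.8
# ...while the median tolerates it and stays near the true center.
print(statistics.median(corrupted))  # 10.05
```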
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Software testing is the act of checking whether software satisfies expectations.
A software bug is a defect in computer software that causes a program to produce an incorrect or unexpected result, or to behave in unintended ways.
Defensive programming is a form of defensive design intended to develop programs that are capable of detecting potential security abnormalities and making predetermined responses. It ensures the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety, or security is needed.
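A minimal illustration of the style (the function and its limits are invented): each precondition is checked at the boundary, with a predetermined response rather than silent propagation of bad data.

```python
def withdraw(balance: float, amount: float) -> float:
    """Defensively guard each precondition before acting."""
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        # Predetermined response: refuse, rather than going negative.
        raise ValueError("insufficient funds")
    return balance - amount

print(withdraw(100.0, 30.0))  # 70.0
```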
Unit testing, a.k.a. component or module testing, is a form of software testing by which isolated source code is tested to validate expected behavior.
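A minimal sketch using Python's built-in unittest module, with an invented function under test:

```python
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test isolates one expected behavior of the unit.
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```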
Software development is the process used to create software. Programming and maintaining the source code is the central step of this process, but it also includes conceiving the project, evaluating its feasibility, analyzing business requirements, software design, testing, and release. Software engineering, in addition to development, also includes project management, employee management, and other overhead functions. Software development may be sequential, in which each step is completed before the next begins, but iterative development methods, where multiple steps can be executed at once and earlier steps can be revisited, have also been devised to improve flexibility, efficiency, and scheduling.
In-circuit emulation (ICE) is the use of a hardware device or in-circuit emulator to debug the software of an embedded system. It operates by using a processor with the additional ability to support debugging operations, as well as to carry out the main function of the system. Particularly for older systems with limited processors, this usually involved replacing the processor temporarily with a hardware emulator: a more powerful although more expensive version. It historically took the form of a bond-out processor, which has many internal signals brought out for the purpose of debugging. These signals provide information about the state of the processor.
A game programmer is a software engineer, programmer, or computer scientist who primarily develops codebases for video games or related software, such as game development tools. Game programming has many specialized disciplines, all of which fall under the umbrella term of "game programmer". A game programmer should not be confused with a game designer, who works on game design.
Test-driven development (TDD) is a way of writing code that involves writing an automated unit-level test case that fails, then writing just enough code to make the test pass, then refactoring both the test code and the production code, then repeating with another new test case.
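A compressed illustration of one such cycle (names invented): the test in step 1 is written first and fails because the function does not yet exist; step 2 writes just enough code to pass it.

```python
import unittest

# Step 2: just enough production code to make the test pass.
# (Before this function existed, the test below failed -- step 1.)
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Step 1: the failing test, written before the implementation.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 3 would refactor both the test code and the production code,
# then the cycle repeats with another new test case.
if __name__ == "__main__":
    unittest.main()
```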
A race condition or race hazard is the condition of an electronic, software, or other system where the system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to unexpected or inconsistent results. It becomes a bug when one or more of the possible behaviors is undesirable.
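The classic software example, sketched below: two threads increment a shared counter without synchronization, so the final value depends on how their reads and writes happen to interleave.

```python
import threading

counter = 0

def increment(n):
    global counter
    for _ in range(n):
        # Read-modify-write is not atomic: the other thread may
        # interleave between the read and the write, losing updates.
        counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# May print less than 200000: the race makes the result
# timing-dependent rather than deterministic.
print(counter)
```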
Game programming, a subset of game development, is the software development of video games. Game programming requires substantial skill in software engineering and computer programming in a given language, as well as specialization in one or more of the following areas: simulation, computer graphics, artificial intelligence, physics, audio programming, and input. For multiplayer games, knowledge of network programming is required. In some genres, e.g. fighting games, advanced network programming is often demanded, as the netcode and its properties are considered by players and critics to be some of the most important metrics of the game's quality. For massively multiplayer online games (MMOGs), even further knowledge of database programming and advanced networking programming are required. Though often engaged in by professional game programmers, there is a thriving scene of independent developers who lack a relationship with a publishing company.
Hard coding is the software development practice of embedding data directly into the source code of a program or other executable object, as opposed to obtaining the data from external sources or generating it at runtime.
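A small contrast, with invented names and values: the first version embeds the data in the source, while the second obtains it from an external source at runtime.

```python
import os

# Hard-coded: the server address is embedded in the source, so
# changing it requires editing and redistributing the program.
SERVER_URL = "https://example.com/api"

# Externalized alternative: read the value at runtime, with a
# fallback default. (Environment variable name invented here.)
server_url = os.environ.get("APP_SERVER_URL", "https://example.com/api")
print(server_url)
```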
Fault tolerance is the ability of a system to maintain proper operation in the event of failures or faults in one or more of its components. In a fault-tolerant system, any decrease in operating quality is proportional to the severity of the failure; in a naively designed system, even a small failure can lead to total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability to maintain functionality when portions of a system break down is referred to as graceful degradation.
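A toy sketch of graceful degradation (all names invented): when the live data source fails, the system falls back to a stale cached value instead of breaking down entirely.

```python
last_known = {"temperature": 21.5}  # stale but usable cache

def read_sensor():
    # Stand-in for a component that has failed.
    raise IOError("sensor offline")

def current_temperature():
    try:
        value = read_sensor()
        last_known["temperature"] = value
        return value, "live"
    except IOError:
        # Graceful degradation: reduced quality (stale data)
        # proportional to the failure, rather than total breakdown.
        return last_known["temperature"], "cached"

print(current_temperature())  # (21.5, 'cached')
```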
Coding best practices or programming best practices are a set of informal, sometimes personal, rules that many software developers in computer programming follow to improve software quality. Many computer programs are required to remain robust and reliable for long periods of time, so any rules need to facilitate both initial development and subsequent maintenance of source code by people other than the original authors.
In computer science, fault injection is a testing technique for understanding how computing systems behave when stressed in unusual ways. This can be achieved using physical- or software-based means, or using a hybrid approach. Widely studied physical fault injections include the application of high voltages, extreme temperatures and electromagnetic pulses on electronic components, such as computer memory and central processing units. By exposing components to conditions beyond their intended operating limits, computing systems can be coerced into mis-executing instructions and corrupting critical data.
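On the software side, a minimal sketch of the technique (names invented): a wrapper probabilistically injects faults into an otherwise healthy function so that callers' error handling can be exercised.

```python
import random

def inject_faults(func, failure_rate=0.3, seed=0):
    """Wrap func so a fraction of calls raise an injected fault."""
    rng = random.Random(seed)
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise IOError("injected fault")
        return func(*args, **kwargs)
    return wrapper

flaky_fetch = inject_faults(lambda key: f"value-for-{key}")

results = {"ok": 0, "fault": 0}
for i in range(1000):
    try:
        flaky_fetch(i)
        results["ok"] += 1
    except IOError:
        results["fault"] += 1
print(results)  # roughly 70% ok, 30% injected faults
```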
Stress testing is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation. Stress testing is particularly important for "mission critical" software, but is used for all types of software. Stress tests commonly put a greater emphasis on robustness, availability, and error handling under a heavy load, than on what would be considered correct behavior under normal circumstances.
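A minimal load-oriented sketch (the target function is a stand-in invented here): many concurrent callers drive an operation well beyond normal use, and failures are counted rather than assumed away.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for the operation under test; a real stress test
    # would target an actual service or subsystem.
    if i % 997 == 0:
        raise RuntimeError("resource exhausted")
    return i * i

ok = errors = 0
with ThreadPoolExecutor(max_workers=50) as pool:
    futures = [pool.submit(handle_request, i) for i in range(10_000)]
    for future in futures:
        try:
            future.result()
            ok += 1
        except RuntimeError:
            errors += 1

# Emphasis is on error handling under heavy load, not normal behavior.
print(f"ok={ok}, errors={errors}")
```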
In engineering, debugging is the process of finding the root cause of bugs and devising workarounds and possible fixes for them.
Chaos engineering is the discipline of experimenting on a system in order to build confidence in the system's capability to withstand turbulent conditions in production.
This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to Software Quality Assurance (more widely and colloquially known as Quality Assurance) and the general application of the test method.
This glossary of computer science is a list of definitions of terms and concepts used in computer science, its sub-disciplines, and related fields, including terms relevant to software, data science, and computer programming.