A Fagan inspection is a structured process for finding defects in documents (such as source code or formal specifications) during various phases of the software development process. It is named after Michael Fagan, who is credited with the invention of formal software inspections.
Fagan inspection defines[citation needed] a process as an activity with pre-specified entry and exit criteria. For every process with specified entry and exit criteria, a Fagan inspection can be used to validate whether the output of the process complies with its exit criteria. The evaluation itself is carried out as a group review of the process output.[citation needed]
Examples of activities for which Fagan inspection can be used are:
The software development process is a typical application of Fagan inspection. Because the cost of remedying a defect can be 10 to 100 times lower when it is found in an early phase than when it is fixed in the maintenance phase,[1] it is essential to find defects as close to their point of insertion as possible. This is done by inspecting the output of each operation and comparing it to the output requirements, or exit criteria, of that operation.
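As a rough worked example of this cost difference, the following sketch compares the total cost of fixing 100 defects when most are caught early with the cost when all are left to maintenance; the per-phase cost multipliers are assumed purely for illustration and are not taken from the cited source.

```python
# Illustrative only: assumed relative costs of fixing one defect, by the phase
# in which it is found (the exact multipliers are hypothetical, not from [1]).
RELATIVE_FIX_COST = {"design": 1, "coding": 5, "testing": 20, "maintenance": 100}

def total_fix_cost(defects_found_per_phase: dict) -> int:
    """Sum the relative cost of fixing each defect in the phase where it was found."""
    return sum(RELATIVE_FIX_COST[phase] * count
               for phase, count in defects_found_per_phase.items())

# 100 defects: finding 80 of them at design time instead of in maintenance
early = total_fix_cost({"design": 80, "maintenance": 20})  # 80*1 + 20*100 = 2,080
late  = total_fix_cost({"maintenance": 100})               # 100*100      = 10,000
print(early, late)  # the early-detection scenario costs roughly a fifth as much
```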
Entry criteria are the criteria or requirements which must be met to enter a specific process. [2] For example, for Fagan inspections the high- and low-level documents must comply with specific entry criteria before they can be used for a formal inspection process.
Exit criteria are the criteria or requirements which must be met to complete a specific process. For example, for Fagan inspections the low-level document must comply with specific exit criteria (as specified in the high-level document) before the development process can be taken to the next phase.
The exit criteria are specified in a high-level document, which is then used as the standard against which the operation result (the low-level document) is compared during the inspection. Any failure of the low-level document to satisfy the requirements specified in the high-level document is called a defect[2] and can be further categorized as major or minor. Minor defects do not threaten the correct functioning of the software; they are typically small errors such as spelling mistakes or unaesthetic placement of controls in a graphical user interface.
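A minimal sketch of this comparison, assuming the exit criteria can be expressed as simple checks applied to the text of the low-level document; the criteria, the checks, and the major/minor split below are invented for illustration.

```python
# Hypothetical sketch: checking a low-level work product against exit
# criteria taken from a high-level document. The criteria, the checks and
# the major/minor split are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Criterion:
    description: str
    major: bool                   # True if a violation threatens correct functioning
    check: Callable[[str], bool]  # predicate applied to the low-level document

def inspect_document(document: str, exit_criteria: List[Criterion]) -> Dict[str, List[str]]:
    """Return the violated criteria, grouped into major and minor defects."""
    defects = {"major": [], "minor": []}
    for criterion in exit_criteria:
        if not criterion.check(document):
            defects["major" if criterion.major else "minor"].append(criterion.description)
    return defects

criteria = [
    Criterion("describes the input validation required by the spec", True,
              lambda doc: "input validation" in doc),
    Criterion("uses the agreed spelling 'e-mail'", False,
              lambda doc: "email" not in doc),
]
print(inspect_document("Module parses email addresses.", criteria))
# {'major': ['describes the input validation required by the spec'],
#  'minor': ["uses the agreed spelling 'e-mail'"]}
```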
A typical Fagan inspection consists of the following operations:[2] planning, overview, preparation, inspection meeting, rework, and follow-up.
In the follow-up phase of a Fagan inspection, defects fixed in the rework phase should be verified. The moderator is usually responsible for verifying rework. Sometimes fixed work can be accepted without being verified, such as when the defect was trivial. In non-trivial cases, a full re-inspection is performed by the inspection team (not only the moderator). In this phase all the participants agree that the defects are addressed adequately.
If verification fails, the work product goes back to the rework process.
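The resulting control flow, including the loop back to rework when verification fails, can be sketched as follows, assuming the operations listed above; the verification predicate merely stands in for the moderator's (or the team's) judgement during follow-up.

```python
# Minimal sketch of the inspection flow described above, assuming the
# operations planning, overview, preparation, inspection meeting, rework
# and follow-up. The predicate stands in for the follow-up verification.
def run_inspection(rework_is_adequate) -> list:
    """Return the sequence of phases executed, repeating rework until follow-up passes."""
    trace = ["planning", "overview", "preparation", "inspection meeting"]
    while True:
        trace += ["rework", "follow-up"]
        if rework_is_adequate():
            return trace

# Example: the first follow-up rejects the rework, the second accepts it.
outcomes = iter([False, True])
print(run_inspection(lambda: next(outcomes)))
# ['planning', 'overview', 'preparation', 'inspection meeting',
#  'rework', 'follow-up', 'rework', 'follow-up']
```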
The inspection process is normally performed by members of the same team that is implementing the project. The participants fulfill different roles within the inspection process: [3] [4]
By using inspections, the number of errors in the final product can decrease significantly, yielding a higher-quality product. Over time the team can even learn to avoid errors, as the inspection sessions give insight into the most frequently made design and coding errors and thus help prevent them at the root of their occurrence. These insights can be exploited further by continuously improving the inspection process.[2]
Together with the qualitative benefits mentioned above, major cost savings can be achieved, as the avoidance and earlier detection of errors reduce the resources needed for debugging in later phases of the project.
In practice, very positive results have been reported by large corporations such as IBM,[citation needed] indicating that 80% to 90% of defects can be found and resource savings of up to 25% can be achieved.[2]
Although the Fagan inspection method has proved to be very effective,[citation needed] improvements have been suggested by multiple researchers. M. Genuchten,[5] for example, has researched the use of an Electronic Meeting System (EMS) to improve the productivity of the meetings, with positive results.[6]
Other researchers propose the use of software that keeps a database of detected errors and automatically scans program code for these common errors.[7] This, again, should result in improved productivity.
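A minimal sketch of such a tool, assuming the "database" of common errors can be represented as a list of regular-expression patterns collected from earlier inspections; the patterns shown are invented examples.

```python
# Illustrative sketch: scan source files for previously recorded common
# errors. The "database" is a hard-coded list of regex patterns; a real
# tool would persist patterns collected from past inspections.
import re
import sys

COMMON_ERRORS = [
    (re.compile(r"==\s*None"), "comparison to None with ==; use 'is None'"),
    (re.compile(r"except\s*:"), "bare except clause hides unexpected errors"),
]

def scan(path: str) -> list:
    """Return (line number, message) pairs for every matched pattern."""
    findings = []
    with open(path, encoding="utf-8") as source:
        for lineno, line in enumerate(source, start=1):
            for pattern, message in COMMON_ERRORS:
                if pattern.search(line):
                    findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        for lineno, message in scan(filename):
            print(f"{filename}:{lineno}: {message}")
```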
Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to:
Inspection in software engineering refers to peer review of any work product by trained individuals who look for defects using a well-defined process. An inspection might also be referred to as a Fagan inspection after Michael Fagan, the creator of a very popular software inspection process.
An inspection is, most generally, an organized examination or formal evaluation exercise. In engineering activities inspection involves the measurements, tests, and gauges applied to certain characteristics in regard to an object or activity. The results are usually compared to specified requirements and standards for determining whether the item or activity is in line with these targets, often with a Standard Inspection Procedure in place to ensure consistent checking. Inspections are usually non-destructive.
In software project management, software testing, and software engineering, verification and validation (V&V) is the process of checking that a software system meets specifications and requirements so that it fulfills its intended purpose. It may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development lifecycle. In simple terms, software verification is: "Assuming we should build X, does our software achieve its goals without any bugs or gaps?" On the other hand, software validation is: "Was X what we should have built? Does X meet the high-level requirements?"
White-box testing is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality. In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determine the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven, that is, driven exclusively by agreed specifications of how each component of software is required to behave, white-box test techniques can accomplish assessment for unimplemented or missing requirements.
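For illustration, the following sketch shows white-box test design in the sense described above: test inputs are chosen by reading the code so that every branch of a small, invented function is exercised.

```python
# Hypothetical example of white-box test design: inputs are chosen by
# looking at the code's branches so that every path is exercised.
import unittest

def classify_triangle(a: int, b: int, c: int) -> str:
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class ClassifyTriangleTests(unittest.TestCase):
    # One test per path through the function above.
    def test_non_positive_side(self):
        self.assertEqual(classify_triangle(0, 1, 1), "invalid")

    def test_all_sides_equal(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_two_sides_equal(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_no_sides_equal(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()
```

Because the function never checks the triangle inequality, no test derived purely from its paths would reveal that omission, which illustrates how white-box testing can miss unimplemented parts of the specification.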
Exit criteria are the criteria or requirements which must be met to complete a specific task or process as used in some fields of business or science, such as software engineering.
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
DO-178B, Software Considerations in Airborne Systems and Equipment Certification is a guideline dealing with the safety of safety-critical software used in certain airborne systems. It was jointly developed by the safety-critical working group RTCA SC-167 of the Radio Technical Commission for Aeronautics (RTCA) and WG-12 of the European Organisation for Civil Aviation Equipment (EUROCAE). RTCA published the document as RTCA/DO-178B, while EUROCAE published the document as ED-12B. Although technically a guideline, it was a de facto standard for developing avionics software systems until it was replaced in 2012 by DO-178C.
Environmental stress screening (ESS) refers to the process of exposing a newly manufactured or repaired product or component to stresses such as thermal cycling and vibration in order to force latent defects to manifest themselves by permanent or catastrophic failure during the screening process. The surviving population, upon completion of screening, can be assumed to have a higher reliability than a similar unscreened population.
Extreme programming (XP) is an agile software development methodology used to implement software systems. This article details the practices used in this methodology. Extreme programming has 12 practices, grouped into four areas, derived from the best practices of software engineering.
In software development, peer review is a type of software review in which a work product is examined by the author's colleagues in order to evaluate its technical content and quality.
A software review is "a process or meeting during which a software product is examined by project personnel, managers, users, customers, user representatives, or other interested parties for comment or approval".
Software security assurance is a process that helps design and implement software that protects the data and resources contained in and controlled by that software. Software is itself a resource and thus must be afforded appropriate security.
In software development, the V-model represents a development process that may be considered an extension of the waterfall model, and is an example of the more general V-model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction, respectively.
Reverse semantic traceability (RST) is a quality control method for verification improvement that helps to ensure high quality of artifacts by backward translation at each stage of the software development process.
In engineering, technical peer review is a well-defined review process for finding and correcting defects, conducted by a team of peers with assigned roles. Technical peer reviews are carried out by peers representing the areas of the life cycle affected by the material being reviewed. They are held within development phases, between milestone reviews, on completed products, or on completed portions of products. A technical peer review may also be called an engineering peer review, a product peer review, a peer review/inspection or an inspection.
DO-178C, Software Considerations in Airborne Systems and Equipment Certification is the primary document by which certification authorities such as the FAA, EASA and Transport Canada approve all commercial software-based aerospace systems. The document is published by RTCA, Incorporated, in a joint effort with EUROCAE, and replaces DO-178B. The new document, designated DO-178C/ED-12C, was completed in November 2011, approved by the RTCA in December 2011, and became available for sale and use in January 2012.
Software construction is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging. It is linked to all the other software engineering disciplines, most strongly to software design and software testing.
Informal methods of validation and verification are some of the more frequently used in modeling and simulation. They are called informal because they are more qualitative than quantitative. While many methods of validation or verification rely on numerical results, informal methods tend to rely on the opinions of experts to draw a conclusion. While numerical results are not the primary focus, this does not mean that the numerical results are completely ignored. There are several reasons why an informal method might be chosen. In some cases, informal methods offer the convenience of quick testing to see if a model can be validated. In other instances, informal methods are simply the best available option. Informal methods are not less effective than formal methods and should be performed with the same discipline and structure that one would expect in "formal" methods. When executed properly, solid conclusions can be made.
This article discusses a set of tactics useful in software testing. It is intended as a comprehensive list of tactical approaches to Software Quality Assurance (more widely and colloquially known as Quality Assurance) and the general application of the test method.
Ron Radice, High Quality, Low Cost Software Inspections, Paradoxicon Publishing (September 21, 2001)