Software quality

In the context of software engineering, software quality refers to two related but distinct notions: functional quality, which reflects how well the software complies with or conforms to a given design based on functional requirements or specifications, and structural quality, which refers to how well it meets the non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability.

Many aspects of structural quality can be evaluated only statically, through the analysis of the software's inner structure, its source code (see Software metrics),[3] at the unit level and at the system level (sometimes referred to as end-to-end testing[4]), which is in effect how its architecture adheres to sound principles of software architecture outlined in a paper on the topic by the Object Management Group (OMG).[5]

However, some structural qualities, such as usability, can be assessed only dynamically (users or others acting on their behalf interact with the software or, at least, some prototype or partial implementation; even interaction with a cardboard mock-up represents a dynamic test, because such a version can be considered a prototype). Other aspects, such as reliability, might involve not only the software but also the underlying hardware, and can therefore be assessed both statically and dynamically (e.g., through stress testing).[citation needed]

Functional quality is typically assessed dynamically but it is also possible to use static tests (such as software reviews).[ citation needed ]

Historically, the structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126 and the subsequent ISO/IEC 25000 standard. [6] Based on these models (see Models), the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: [7] Reliability, Efficiency, Security, Maintainability and (adequate) Size. [8] [9] [10]

Software quality measurement quantifies to what extent a software program or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or a quantitative scoring scheme, or a mix of both, combined with a weighting system reflecting the priorities. This view of software quality as positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that under specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of its rating based on aggregated measurements. Such programming errors found at the system level represent up to 90 percent of production issues, while unit-level programming errors, even if far more numerous, account for less than 10 percent of production issues (see also Ninety–ninety rule). As a consequence, code quality without the context of the whole system, as W. Edwards Deming described it, has limited value.[citation needed]
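
As an illustration of such a scheme, the following minimal sketch (in Python) combines per-characteristic scores with a weighting system and applies a cap when critical programming errors are present; the weights, scores, and cap are assumptions for illustration, not values defined by CISQ.

    # Minimal sketch of an aggregated software quality score.
    # Weights, scores, and the critical-error cap are illustrative only.

    CHARACTERISTICS = ["reliability", "efficiency", "security", "maintainability", "size"]

    def aggregate_quality(scores, weights, critical_errors=0):
        """Weighted average of per-characteristic scores (0-100).

        If any critical programming errors are present, the aggregate is
        capped, reflecting that such errors can make a system unsuitable
        for use regardless of its aggregated rating.
        """
        total_weight = sum(weights[c] for c in CHARACTERISTICS)
        weighted = sum(scores[c] * weights[c] for c in CHARACTERISTICS) / total_weight
        if critical_errors > 0:
            return min(weighted, 25.0)  # arbitrary cap for illustration
        return weighted

    scores  = {"reliability": 80, "efficiency": 70, "security": 90,
               "maintainability": 60, "size": 75}
    weights = {"reliability": 3, "efficiency": 2, "security": 3,
               "maintainability": 2, "size": 1}

    print(aggregate_quality(scores, weights, critical_errors=0))  # ~76.8
    print(aggregate_quality(scores, weights, critical_errors=2))  # 25.0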

To view, explore, analyze, and communicate software quality measurements, concepts and techniques of information visualization provide visual, interactive means useful, in particular, if several software quality measures have to be related to each other or to components of a software or system. For example, software maps represent a specialized approach that "can express and combine information about software development, software quality, and system dynamics". [11]

Software quality also plays a role in the release phase of a software project. Specifically, the quality and establishment of release processes (including patch processes)[12][13] and configuration management[14] are important parts of an overall software engineering process.[15][16][17]

Motivation

Software quality is motivated by at least two main perspectives:

Definitions

ISO

For ISO, software quality is the "capability of a software product to conform to requirements",[35][36] while for others it can be synonymous with customer or value creation[37][38] or even defect level.[39] Software quality measurements can be split into three parts: process quality; product quality, which includes internal and external properties; and, lastly, quality in use, which is the effect of the software.[40]

ASQ

ASQ uses the following definition: software quality describes the desirable attributes of software products. There are two main approaches: defect management and quality attributes.[41]

NIST

Software Assurance (SA) covers both the property and the process to achieve it: [42]

PMI

The Project Management Institute's Software Extension to the PMBOK Guide does not define "software quality" itself, but rather Software Quality Assurance (SQA) as "a continuous process that audits other software processes to ensure that those processes are being followed (includes for example a software quality management plan)", and Software Quality Control (SQC) as "taking care of applying methods, tools, techniques to ensure satisfaction of the work products towards quality requirements for a software under development or modification".[43]

Other general and historic

The first definition of quality that history remembers is from Shewhart, at the beginning of the 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality."[44]

Kitchenham and Pfleeger, further reporting the teachings of David Garvin, identify five different perspectives on quality: [45] [46]

The problem inherent in attempts to define the quality of a product, almost any product, were stated by the master Walter A. Shewhart. The difficulty in defining quality is to translate future needs of the user into measurable characteristics, so that a product can be designed and turned out to give satisfaction at a price that the user will pay. This is not easy, and as soon as one feels fairly successful in the endeavor, he finds that the needs of the consumer have changed, competitors have moved in, etc. [50]

Quality is a customer determination, not an engineer's determination, not a marketing determination, nor a general management determination. It is based on the customer's actual experience with the product or service, measured against his or her requirements -- stated or unstated, conscious or merely sensed, technically operational or entirely subjective -- and always representing a moving target in a competitive market. [51]

The word quality has multiple meanings. Two of these meanings dominate the use of the word: 1. Quality consists of those product features which meet the need of customers and thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies. Nevertheless, in a handbook such as this it is convenient to standardize on a short definition of the word quality as "fitness for use". [52]

Tom DeMarco has proposed that "a product's quality is a function of how much it changes the world for the better."[ citation needed ] This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality.

Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." [53] [54] This definition stresses that quality is inherently subjective—different people will experience the quality of the same software differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?".

Other meanings and controversies

One of the challenges in defining quality is that "everyone feels they understand it" [55] and other definitions of software quality could be based on extending the various descriptions of the concept of quality used in business.

Software quality is also often conflated with Quality Assurance, Problem Resolution Management,[56] Quality Control,[57] or DevOps. It overlaps with these areas (see also the PMI definitions above), but is distinct in that it does not focus solely on testing but also on processes, management, improvements, assessments, etc.[57]

Measurement

Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed through testing (see the main article Software testing).[58] However, testing is not enough: according to one study, individual programmers are less than 50% efficient at finding bugs in their own software, and most forms of testing are only 35% efficient, which makes it difficult to determine software quality.[59]

Introduction

Relationship between software desirable characteristics (right) and measurable attributes (left)

Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In either case, for each desirable characteristic there is a set of measurable attributes whose presence in a piece of software or system tends to be correlated and associated with that characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the software quality definition above.
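
For illustration, a minimal sketch of measuring that attribute in Python, treating imports of platform-specific modules as the target-dependent statements; the module list is a hypothetical stand-in for a real portability rule set.

    import ast

    # Hypothetical set of platform-dependent modules used as a proxy for
    # "target-dependent statements"; a real portability model would use a
    # much richer rule set.
    PLATFORM_SPECIFIC = {"winreg", "msvcrt", "posix", "fcntl", "termios"}

    def count_target_dependent_statements(source: str) -> int:
        """Count import statements that bind the code to a specific platform."""
        tree = ast.parse(source)
        count = 0
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                count += sum(alias.name.split(".")[0] in PLATFORM_SPECIFIC
                             for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                count += node.module.split(".")[0] in PLATFORM_SPECIFIC
        return count

    print(count_target_dependent_statements("import os\nimport winreg\n"))  # 1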

The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions.

The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the 5 characteristics that matter for the user (right) or owner of the business system depends on measurable attributes (left):

Correlations between programming errors and production defects show that basic code errors account for 92 percent of the total errors in the source code. These numerous code-level issues eventually account for only 10 percent of the defects in production. Bad software engineering practices at the architecture level account for only 8 percent of total defects, but consume over half the effort spent on fixing problems, and lead to 90 percent of the serious reliability, security, and efficiency issues in production.[60][61]

Code-based analysis

Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions,[62] tokens,[63] control structures (complexity), and objects.[64]
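
A minimal sketch of this kind of element counting, using Python's standard-library ast module; the selection of node types counted as control structures is an illustrative assumption.

    import ast

    def count_structural_elements(source: str) -> dict:
        """Count simple structural elements obtained by parsing source code."""
        tree = ast.parse(source)
        counts = {"statements": 0, "control_structures": 0, "classes": 0, "functions": 0}
        for node in ast.walk(tree):
            if isinstance(node, ast.stmt):
                counts["statements"] += 1
            if isinstance(node, (ast.If, ast.For, ast.While, ast.Try, ast.With)):
                counts["control_structures"] += 1
            if isinstance(node, ast.ClassDef):
                counts["classes"] += 1
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                counts["functions"] += 1
        return counts

    sample = """
    def f(x):
        if x > 0:
            return x
        return -x
    """
    print(count_structural_elements(sample))
    # {'statements': 4, 'control_structures': 1, 'classes': 0, 'functions': 1}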

Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach or a mix of both to provide an aggregate view [using for example weighted average(s) that reflect relative importance between the factors being measured].

This view of software quality on a linear continuum has to be supplemented by the identification of discrete Critical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems[65] that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known catalogue of such weaknesses is the Common Weakness Enumeration,[66] a repository of vulnerabilities in the source code that expose applications to security breaches.
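
A minimal sketch of a discrete check of this kind in Python; the two rules shown (use of eval on untrusted input and a bare except clause) are illustrative and only loosely inspired by entries in such catalogues.

    import ast

    def find_critical_violations(source: str) -> list:
        """Flag constructs treated here as 'critical programming errors'."""
        tree = ast.parse(source)
        violations = []
        for node in ast.walk(tree):
            # Rule 1: call to eval(), loosely inspired by "eval injection" weaknesses.
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id == "eval"):
                violations.append((node.lineno, "use of eval()"))
            # Rule 2: bare 'except:' that silently swallows all errors.
            if isinstance(node, ast.ExceptHandler) and node.type is None:
                violations.append((node.lineno, "bare except clause"))
        return sorted(violations)

    bad = "try:\n    eval(user_input)\nexcept:\n    pass\n"
    print(find_critical_violations(bad))
    # [(2, 'use of eval()'), (3, 'bare except clause')]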

The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application, all of which must be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978)[67] and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.

Structural quality analysis and measurement is performed through the analysis of the source code, the architecture, software framework, database schema in relationship to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools which are mostly concerned with implementation considerations and are crucial during debugging and testing activities.

Reliability

The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation.

Assessing reliability requires checks of at least the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Coding Practices
  • Complexity of algorithms
  • Complexity of programming practices
  • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
  • Component or pattern re-use ratio
  • Dirty programming
  • Error & Exception handling (for all layers - GUI, Logic & Data)
  • Multi-layer design compliance
  • Resource bounds management
  • Software avoids patterns that will lead to unexpected behaviors
  • Software manages data integrity and consistency
  • Transaction complexity level

Depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software.
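
As an illustration of one such check, here for the "Error & Exception handling" item in the list above, the following minimal Python sketch flags functions that make "risky" calls without any exception handling; the set of risky calls and the rule itself are illustrative assumptions.

    import ast

    RISKY_CALLS = {"open", "connect", "send", "recv"}  # illustrative I/O-style calls

    def functions_missing_error_handling(source: str) -> list:
        """List functions that make 'risky' calls but contain no try/except."""
        tree = ast.parse(source)
        flagged = []
        for func in ast.walk(tree):
            if not isinstance(func, (ast.FunctionDef, ast.AsyncFunctionDef)):
                continue
            has_try = any(isinstance(n, ast.Try) for n in ast.walk(func))
            risky = any(isinstance(n, ast.Call)
                        and isinstance(n.func, ast.Name)
                        and n.func.id in RISKY_CALLS
                        for n in ast.walk(func))
            if risky and not has_try:
                flagged.append(func.name)
        return flagged

    src = "def load(path):\n    return open(path).read()\n"
    print(functions_missing_error_handling(src))  # ['load']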

Efficiency

As with Reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data.
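
A minimal sketch of one such static check in Python, flagging deeply nested loops as a coarse proxy for potential algorithmic bottlenecks; this is an illustrative rule, not one defined by a standard.

    import ast

    def find_nested_loops(source: str, depth_threshold: int = 2) -> list:
        """Report line numbers of loops nested at or beyond a given depth."""
        tree = ast.parse(source)
        findings = []

        def visit(node, depth):
            if isinstance(node, (ast.For, ast.While)):
                depth += 1
                if depth >= depth_threshold:
                    findings.append(node.lineno)
            for child in ast.iter_child_nodes(node):
                visit(child, depth)

        visit(tree, 0)
        return findings

    src = "for i in items:\n    for j in items:\n        work(i, j)\n"
    print(find_nested_loops(src))  # [2]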

Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes:

Security

Software quality includes software security.[69] Many security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting.[70][71] These are well documented in lists maintained by CWE[72] and by the Computer Emergency Response Team (CERT) at Carnegie Mellon University's Software Engineering Institute (SEI).[68]
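
As a small illustration of the coding-practice side, the following Python sketch using the standard-library sqlite3 module contrasts a query built by string formatting, which is open to SQL injection, with a parameterized query; the table, column, and input values are hypothetical.

    import sqlite3

    # A throwaway in-memory database with one hypothetical table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_supplied = "alice' OR '1'='1"   # attacker-controlled input

    # Vulnerable: the input is concatenated into the SQL text, so the
    # injected OR clause bypasses the name check (classic SQL injection).
    vulnerable = "SELECT role FROM users WHERE name = '%s'" % user_supplied
    print(conn.execute(vulnerable).fetchall())   # [('admin',)]

    # Safer: a parameterized query keeps data separate from the SQL text.
    safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_supplied,))
    print(safe.fetchall())                       # [] -- no user has that literal name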

Assessing security requires at least checking the following software engineering best practices and technical attributes:

Maintainability

Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations of best practices in documentation, complexity avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code and unorganized and difficult-to-read code.[78]
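
A minimal sketch of checks for such minor violations in Python; the two rules (missing docstrings and overlong definitions) and the length threshold are assumptions for illustration.

    import ast

    def maintainability_violations(source: str, max_length: int = 30) -> list:
        """Collect small maintainability findings: missing docstrings, long definitions."""
        tree = ast.parse(source)
        findings = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                if ast.get_docstring(node) is None:
                    findings.append((node.lineno, f"{node.name}: missing docstring"))
                length = node.end_lineno - node.lineno + 1
                if length > max_length:
                    findings.append((node.lineno, f"{node.name}: {length} lines long"))
        return findings

    src = "def f(x):\n    return x + 1\n"
    print(maintainability_violations(src))  # [(1, 'f: missing docstring')]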

Assessing maintainability requires checking the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Architecture, Programs and Code documentation embedded in source code
  • Code readability
  • Code smells
  • Complexity level of transactions
  • Complexity of algorithms
  • Complexity of programming practices
  • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
  • Component or pattern re-use ratio
  • Controlled level of dynamic coding
  • Coupling ratio
  • Dirty programming
  • Documentation
  • Hardware, OS, middleware, software components and database independence
  • Multi-layer design compliance
  • Portability
  • Programming Practices (code level)
  • Reduced duplicate code and functions
  • Source code file organization cleanliness

Maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent,[79][80] and often have their origin in developers' inability, lack of time and goals, their carelessness, and discrepancies between the cost of creating and the benefits of documentation and, in particular, maintainable source code.[81]

Size

Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files etc. There are essentially two types of software sizes to be measured, the technical size (footprint) and the functional size:

The function point analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life-cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries.
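
A minimal sketch of an unadjusted function point count in Python; the weight table uses the commonly published IFPUG values for the five component types at low/average/high complexity, while the example inventory is hypothetical.

    # Unadjusted function point (UFP) weights, as commonly published for the
    # five IFPUG component types at low / average / high complexity.
    FP_WEIGHTS = {
        "EI":  {"low": 3, "average": 4, "high": 6},    # external inputs
        "EO":  {"low": 4, "average": 5, "high": 7},    # external outputs
        "EQ":  {"low": 3, "average": 4, "high": 6},    # external inquiries
        "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
        "EIF": {"low": 5, "average": 7, "high": 10},   # external interface files
    }

    def unadjusted_function_points(inventory) -> int:
        """Sum weighted counts; inventory maps (type, complexity) -> count."""
        return sum(FP_WEIGHTS[ftype][complexity] * count
                   for (ftype, complexity), count in inventory.items())

    # Hypothetical inventory for a small application.
    inventory = {
        ("EI", "average"): 5,
        ("EO", "low"): 3,
        ("EQ", "average"): 2,
        ("ILF", "high"): 1,
        ("EIF", "low"): 1,
    }
    print(unadjusted_function_points(inventory))  # 60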

Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. However, Function Points has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" by which services are delivered and performance is measured.

One common limitation of the Function Point methodology is that it is a manual process; it can therefore be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality, focused on introducing a computable metrics standard for automating the measurement of software size, while IFPUG keeps promoting a manual approach, as most of its activity relies on FP counter certifications.

CISQ defines sizing as estimating the size of software to support cost estimating, progress tracking, or other related software project management activities. Two standards are used: Automated Function Points to measure the functional size of software, and Automated Enhancement Points to measure the size of both functional and non-functional code in one measure.[82]

Identifying critical programming errors

Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest business disruption risk, whether immediate or long-term.[83]

These are quite often technology-related and depend heavily on the context, business objectives, and risks. Some may not consider respect for naming conventions critical, while others – those preparing the ground for a knowledge transfer, for example – will consider it absolutely critical.

Critical Programming Errors can also be classified per CISQ Characteristics. Basic example below:

Operationalized quality models

Newer proposals for quality models such as Squale and Quamoco [84] propagate a direct integration of the definition of quality attributes and measurement. By breaking down quality attributes or even defining additional layers, the complex, abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. Those quality models have been applied in industrial contexts but have not received widespread adoption.

Trivia

See also

Further reading

References

Notes

  1. "Learning from history: The case of Software Requirements Engineering – Requirements Engineering Magazine". Learning from history: The case of Software Requirements Engineering – Requirements Engineering Magazine. Retrieved 2021-02-25.
  2. Pressman, Roger S. (2005). Software Engineering: A Practitioner's Approach (Sixth International ed.). McGraw-Hill Education. p. 388. ISBN   0071267824.
  3. "About the Automated Source Code Quality Measures Specification Version 1.0". www.omg.org. Retrieved 2021-02-25.
  4. "How to Perform End-to-End Testing". smartbear.com. Retrieved 2021-02-25.
  5. "How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations" (PDF). Archived (PDF) from the original on 2013-12-28. Retrieved 2013-10-18.
  6. "ISO/IEC 25010:2011". ISO. Retrieved 2021-02-23.
  7. Armour, Phillip G. (2012-06-01). "A measure of control". Communications of the ACM. 55 (6): 26–28. doi:10.1145/2184319.2184329. ISSN 0001-0782. S2CID 6059054.
  8. Voas, J. (November 2011). "Software's secret sauce: the "-ilities" [software quality]". IEEE Software. 21 (6): 14–15. doi:10.1109/MS.2004.54. ISSN   1937-4194.
  9. "Code Quality Standards | CISQ - Consortium for Information & Software Quality". www.it-cisq.org. Retrieved 2021-02-25.
  10. "Software Sizing Standards | CISQ - Consortium for Information & Software Quality". www.it-cisq.org. Retrieved 2021-02-25.
  11. J. Bohnet, J. Döllner Archived 2014-04-27 at the Wayback Machine , "Monitoring Code Quality and Development Activity by Software Maps". Proceedings of the IEEE ACM ICSE Workshop on Managing Technical Debt, pp. 9-16, 2011.
  12. "IIA - Global Technology Audit Guide: IT Change Management: Critical for Organizational Success". na.theiia.org. Retrieved 2021-02-26.
  13. Boursier, Jérôme (2018-01-11). "Meltdown and Spectre fallout: patching problems persist". Malwarebytes Labs. Retrieved 2021-02-26.
  14. "Best practices for software updates - Configuration Manager". docs.microsoft.com. Retrieved 2021-02-26.
  15. Wright, Hyrum K. (2009-08-25). "Release engineering processes, models, and metrics". Proceedings of the doctoral symposium for ESEC/FSE on Doctoral symposium. ESEC/FSE Doctoral Symposium '09. Amsterdam, the Netherlands: Association for Computing Machinery. pp. 27–28. doi:10.1145/1595782.1595793. ISBN   978-1-60558-731-8. S2CID   10483918.
  16. van der Hoek, André; Hall, Richard S.; Heimbigner, Dennis; Wolf, Alexander L. (November 1997). "Software release management". ACM SIGSOFT Software Engineering Notes. 22 (6): 159–175. doi: 10.1145/267896.267909 . ISSN   0163-5948.
  17. Sutton, Mike; Moore, Tym (2008-07-30). "7 Ways to Improve Your Software Release Management". CIO. Retrieved 2021-02-26.
  18. Clark, Mitchell (2021-02-24). "iRobot says it'll be a few weeks until it can clean up its latest Roomba software update mess". The Verge. Retrieved 2021-02-25.
  19. "Top 25 Software Errors". www.sans.org. Retrieved 2021-02-25.
  20. "'Turn it Off and On Again Every 149 Hours' Is a Concerning Remedy for a $300 Million Airbus Plane's Software Bug". Gizmodo. 30 July 2019. Retrieved 2021-02-25.
  21. "MISRA C, Toyota and the Death of Task X" . Retrieved 2021-02-25.
  22. "An Update on Toyota and Unintended Acceleration « Barr Code". embeddedgurus.com. Retrieved 2021-02-25.
  23. Medical Devices: The Therac-25* Archived 2008-02-16 at the Wayback Machine , Nancy Leveson, University of Washington
  24. Embedded Software Archived 2010-07-05 at the Wayback Machine , Edward A. Lee, To appear in Advances in Computers (Marvin Victor Zelkowitz, editor), Vol. 56, Academic Press, London, 2002, Revised from UCB ERL Memorandum M01/26 University of California, Berkeley, CA 94720, USA, November 1, 2001
  25. "Aircraft Certification Software and Airborne Electronic Hardware". Archived from the original on 4 October 2014. Retrieved 28 September 2014.
  26. "The Cost of Poor Software Quality in the US: A 2020 Report | CISQ - Consortium for Information & Software Quality". www.it-cisq.org. Retrieved 2021-02-25.
  27. "What is Waste? | Agile Alliance". Agile Alliance |. 20 April 2016. Retrieved 2021-02-25.
  28. Matteson, Scott (January 26, 2018). "Report: Software failure caused $1.7 trillion in financial losses in 2017". TechRepublic. Retrieved 2021-02-25.
  29. Cohane, Ryan (2017-11-16). "Financial Cost of Software Bugs". Medium. Retrieved 2021-02-25.
  30. Eloff, Jan; Bella, Madeleine Bihina (2018), "Software Failures: An Overview", Software Failure Investigation, Cham: Springer International Publishing, pp. 7–24, doi:10.1007/978-3-319-61334-5_2, ISBN   978-3-319-61333-8 , retrieved 2021-02-25
  31. "Poor software quality cost businesses $2 trillion last year and put security at risk". CIO Dive. Retrieved 2021-02-26.
  32. "Synopsys-Sponsored CISQ Research Estimates Cost of Poor Software Quality in the US $2.08 Trillion in 2020". finance.yahoo.com. Retrieved 2021-02-26.
  33. "What Does a Data Breach Cost in 2020?". Digital Guardian. 2020-08-06. Retrieved 2021-03-08.
  34. "Cost of a Data Breach Report 2020 | IBM". www.ibm.com. 2020. Retrieved 2021-03-08.
  35. "ISO - ISO 9000 family — Quality management". ISO. Retrieved 2021-02-24.
  36. "ISO/IEC/IEEE 24765:2017". ISO. Retrieved 2021-02-24.
  37. 1 2 "Mastering automotive software". www.mckinsey.com. Retrieved 2021-02-25.
  38. "ISO/IEC 25010:2011". ISO. Retrieved 2021-02-24.
  39. Wallace, D.R. (2002). "Practical software reliability modeling". Proceedings 26th Annual NASA Goddard Software Engineering Workshop. Greenbelt, MD, USA: IEEE Comput. Soc. pp. 147–155. doi:10.1109/SEW.2001.992668. ISBN   978-0-7695-1456-7. S2CID   57382117.
  40. "ISO/IEC 25023:2016". ISO. Retrieved 2023-11-06.
  41. "What is Software Quality? | ASQ". asq.org. Retrieved 2021-02-24.
  42. "SAMATE - Software Assurance Metrics And Tool Evaluation project main page". NIST. 3 February 2021. Retrieved 2021-02-26.
  43. Software extension to the PMBOK guide. Project Management Institute (5th ed.). Newtown Square, Pennsylvania. 2013. ISBN   978-1-62825-041-1. OCLC   959513383.{{cite book}}: CS1 maint: location missing publisher (link) CS1 maint: others (link)
  44. Shewhart, Walter A. (2015). Economic Control of Quality of Manufactured Product. [Place of publication not identified]: Martino Fine Books. ISBN 978-1-61427-811-5. OCLC 1108913766.
  45. Kitchenham, B.; Pfleeger, S. L. (January 1996). "Software quality: the elusive target [special issues section]". IEEE Software. 13 (1): 12–21. doi:10.1109/52.476281. ISSN   1937-4194.
  46. Garvin, David A. (1988). Managing quality : the strategic and competitive edge. New York: Free Press. ISBN   0-02-911380-6. OCLC   16005388.
  47. B. Kitchenham and S. Pfleeger, "Software quality: the elusive target", IEEE Software, vol. 13, no. 1, pp. 12–21, 1996.
  48. Kan, Stephen H. (2003). Metrics and models in software quality engineering (2nd ed.). Boston: Addison-Wesley. ISBN   0-201-72915-6. OCLC   50149641.
  49. International Organization for Standardization, "ISO/IEC 9001: Quality management systems -- Requirements," 1999.
  50. W. E. Deming, "Out of the crisis: quality, productivity and competitive position". Cambridge University Press, 1988.
  51. A. V. Feigenbaum, "Total Quality Control", McGraw-Hill, 1983.
  52. J.M. Juran, "Juran's Quality Control Handbook", McGraw-Hill, 1988.
  53. Weinberg, Gerald M. (1991). Quality software management: Volume 1, Systems Thinking. New York, N.Y.: Dorset House. ISBN   0-932633-22-6. OCLC   23870230.
  54. Weinberg, Gerald M. (1993). Quality software management: Volume 2, First-Order Measurement. New York, N.Y.: Dorset House. ISBN   0-932633-22-6. OCLC   23870230.
  55. Crosby, P., Quality is Free, McGraw-Hill, 1979
  56. "SUP.9 – Problem Resolution Management - Kugler Maag Cie". www.kuglermaag.com. Retrieved 2021-02-25.
  57. Hoipt (2019-11-29). "Organizations often use the terms 'Quality Assurance' (QA) vs 'Quality Control' (QC)…". Medium. Retrieved 2021-02-25.
  58. Wallace, D.; Watson, A. H.; Mccabe, T. J. (1996-08-01). "Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric". NIST.
  59. Bellairs, Richard. "What Is Code Quality? And How to Improve Code Quality". Perforce Software. Retrieved 2021-02-28.
  60. "OMG Whitepaper | CISQ - Consortium for Information & Software Quality". www.it-cisq.org. Retrieved 2021-02-26.
  61. "How to Deliver Resilient, Secure, Efficient and Agile IT Systems in Line with CISQ Recommendations - Whitepaper | Object Management Group" (PDF). Archived (PDF) from the original on 2013-12-28. Retrieved 2013-10-18.
  62. "Software Size Measurement: A Framework for Counting Source Statements". resources.sei.cmu.edu. 31 August 1992. Retrieved 2021-02-24.
  63. Halstead, Maurice H. (1977). Elements of Software Science (Operating and programming systems series). USA: Elsevier Science Inc. ISBN   978-0-444-00205-1.
  64. Chidamber, S. R.; Kemerer, C. F. (June 1994). "A metrics suite for object oriented design". IEEE Transactions on Software Engineering. 20 (6): 476–493. doi:10.1109/32.295895. hdl: 1721.1/48424 . ISSN   1939-3520. S2CID   9493847.
  65. Nygard, Michael (2007). Release It!. an O'Reilly Media Company Safari (1st ed.). ISBN   978-0978739218. OCLC   1102387436.
  66. "CWE - Common Weakness Enumeration". cwe.mitre.org. Archived from the original on 2016-05-10. Retrieved 2016-05-20.
  67. Boehm, B., Brown, J.R., Kaspar, H., Lipow, M., MacLeod, G.J., & Merritt, M.J. (1978). Characteristics of Software Quality. North-Holland.
  68. 1 2 3 "SEI CERT Coding Standards - CERT Secure Coding - Confluence". wiki.sei.cmu.edu. Retrieved 2021-02-24.
  69. "Code quality and code security: How are they related? | Synopsys". Software Integrity Blog. 2019-05-24. Retrieved 2021-03-09.
  70. "Cost of a Data Breach Report 2020 | IBM". www.ibm.com. 2020. Retrieved 2021-03-09.
  71. "Key Takeaways from the 2020 Cost of a Data Breach Report". Bluefin. 2020-08-27. Retrieved 2021-03-09.
  72. "CWE - Common Weakness Enumeration". Cwe.mitre.org. Archived from the original on 2013-10-14. Retrieved 2013-10-18.
  73. Security in Development: The IBM Secure Engineering Framework | IBM Redbooks. 2016-09-30.
  74. Enterprise Security Architecture Using IBM Tivoli Security Solutions | IBM Redbooks. 2016-09-30.
  75. "Secure Architecture Design Definitions | CISA". us-cert.cisa.gov. Retrieved 2021-03-09.
  76. "OWASP Foundation | Open Source Foundation for Application Security". owasp.org. Retrieved 2021-02-24.
  77. "CWE's Top 25". Sans.org. Retrieved 2013-10-18.
  78. IfSQ Level-2 A Foundation-Level Standard for Computer Program Source Code Archived 2011-10-27 at the Wayback Machine , Second Edition August 2008, Graham Bolton, Stuart Johnston, IfSQ, Institute for Software Quality.
  79. Fowler, Martin (October 14, 2009). "TechnicalDebtQuadrant". Archived from the original on February 2, 2013. Retrieved February 4, 2013.
  80. "Code quality: a concern for businesses, bottom lines, and empathetic programmers". Stack Overflow. 2021-10-18. Retrieved 2023-12-05.
  81. Prause, Christian; Durdik, Zoya (June 3, 2012). "Architectural design and documentation: Waste in agile development?". 2012 International Conference on Software and System Process (ICSSP). IEEE Computer Society. pp. 130–134. doi:10.1109/ICSSP.2012.6225956. ISBN   978-1-4673-2352-9. S2CID   15216552.
  82. "Software Sizing Standards | CISQ - Consortium for Information & Software Quality". www.it-cisq.org. Retrieved 2021-01-28.
  83. "Why Software fails". IEEE Spectrum: Technology, Engineering, and Science News. 2 September 2005. Retrieved 2021-03-20.
  84. Wagner, Stefan; Goeb, Andreas; Heinemann, Lars; Kläs, Michael; Lampasona, Constanza; Lochmann, Klaus; Mayr, Alois; Plösch, Reinhold; Seidl, Andreas (2015). "Operationalised product quality models and assessment: The Quamoco approach" (PDF). Information and Software Technology. 62: 101–123. arXiv: 1611.09230 . doi:10.1016/j.infsof.2015.02.009. S2CID   10992384.
  85. Ebert, Christof (2010). Software Measurement: Establish - Extract - Evaluate - Execute. Springer. ISBN   9783642090806. OCLC   941931829.
  86. 1 2 "Managing the Unmanageable: More Rules of Thumb". www.managingtheunmanageable.net. Retrieved 2021-02-26.
  87. Suryanarayana, Girish (2015). "Software Process versus Design Quality: Tug of War?". IEEE Software. 32 (4): 7–11. doi: 10.1109/MS.2015.87 . S2CID   9226051.
  88. "Software Quality Professional | ASQ". asq.org. Retrieved 2021-01-28.
  89. "Software Quality Journal". Springer. Retrieved 2021-01-28.

Bibliography