Software quality

In the context of software engineering, software quality refers to two related but distinct notions: functional quality, which reflects how well the software complies with or conforms to the functional requirements or specifications it was built against, and structural quality, which refers to how well the software meets the non-functional requirements that support the delivery of those functions, such as reliability, security, and maintainability.

Many aspects of structural quality can be evaluated only statically, through analysis of the software's inner structure (its source code) at the unit level, the technology level and the system level, which is in effect how its architecture adheres to sound principles of software architecture outlined in a paper on the topic by OMG. [2] But some structural qualities, such as usability, can be assessed only dynamically: users, or others acting on their behalf, interact with the software or at least with a prototype or partial implementation; even interaction with a mock version made of cardboard represents a dynamic test, because such a version can be considered a prototype. Other aspects, such as reliability, might involve not only the software but also the underlying hardware, and can therefore be assessed both statically and dynamically (for example, through stress testing).

Functional quality is typically assessed dynamically but it is also possible to use static tests (such as software reviews).

Historically, the structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO 25000:2005 [3] quality model, also known as SQuaRE. [4] Based on these models, the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: Reliability, Efficiency, Security, Maintainability and (adequate) Size.

Software quality measurement quantifies to what extent a software program or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or a quantitative scoring scheme, or a mix of both, combined with a weighting system reflecting the priorities. This view of software quality as positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that under specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of its rating based on aggregated measurements. Such programming errors found at the system level represent up to 90% of production issues, whilst programming errors at the unit level, even if far more numerous, account for less than 10% of production issues. As a consequence, code quality without the context of the whole system, as W. Edwards Deming described it, has limited value.

To view, explore, analyze, and communicate software quality measurements, concepts and techniques of information visualization provide visual, interactive means that are especially useful when several software quality measures have to be related to each other or to components of a software system. For example, software maps represent a specialized approach that "can express and combine information about software development, software quality, and system dynamics". [5]

Motivation

"A science is as mature as its measurement tools," (Louis Pasteur in Ebert & Dumke , p. 91). Measuring software quality is motivated by at least two reasons:

However, the distinction between measuring and improving software quality in an embedded system (with emphasis on risk management) and software quality in business software (with emphasis on cost and maintainability management) is becoming somewhat irrelevant. Embedded systems now often include a user interface, and their designers are as much concerned with issues affecting usability and user productivity as their counterparts who focus on business applications. The latter are in turn looking at ERP or CRM systems as a corporate nervous system whose uptime and performance are vital to the well-being of the enterprise. This convergence is most visible in mobile computing: a user who accesses an ERP application on their smartphone depends on the quality of software across all types of software layers.

Both types of software now use multi-layered technology stacks and complex architectures, so software quality analysis and measurement have to be managed in a comprehensive and consistent manner, decoupled from the software's ultimate purpose or use. In both cases, engineers and management need to be able to make rational decisions based on measurement and fact-based analysis, in adherence to the precept "In God (we) trust. All others bring data" (a saying (mis-)attributed to W. Edwards Deming and others).

Definitions

There are many different definitions of quality. For some it is the "capability of a software product to conform to requirements" (ISO/IEC 9001, [10] as commented in [11]), while for others it can be synonymous with "customer value" (Highsmith, 2002) or even defect level.

The first definition of quality history remembers is from Shewhart at the beginning of the 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality." (Shewhart [12])

Kitchenham, Pfleeger, and Garvin's five perspectives on quality

Kitchenham and Pfleeger, [13] further reporting the teachings of David Garvin, [14] identify five different perspectives on quality:

  • The transcendental perspective deals with the metaphysical aspect of quality: quality is something that can be recognized but not defined.
  • The user perspective is concerned with the appropriateness of the product for a given context of use.
  • The manufacturing perspective sees quality as conformance to requirements.
  • The product perspective ties quality to inherent characteristics of the product.
  • The value-based perspective sees quality as depending on how much a customer is willing to pay for it.

Software quality according to Deming

The problems inherent in attempts to define the quality of a product, almost any product, were stated by the master Walter A. Shewhart. The difficulty in defining quality is to translate future needs of the user into measurable characteristics, so that a product can be designed and turned out to give satisfaction at a price that the user will pay. This is not easy, and as soon as one feels fairly successful in the endeavor, he finds that the needs of the consumer have changed, competitors have moved in, etc. [16]

Software quality according to Feigenbaum

Quality is a customer determination, not an engineer's determination, not a marketing determination, nor a general management determination. It is based on the customer's actual experience with the product or service, measured against his or her requirements -- stated or unstated, conscious or merely sensed, technically operational or entirely subjective -- and always representing a moving target in a competitive market. [17]

Software quality according to Juran

The word quality has multiple meanings. Two of these meanings dominate the use of the word: 1. Quality consists of those product features which meet the need of customers and thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies. Nevertheless, in a handbook such as this it is convenient to standardize on a short definition of the word quality as "fitness for use". [18]

CISQ's quality model

Even though "quality is a perceptual, conditional and somewhat subjective attribute and may be understood differently by different people" (as noted in the article on quality in business), software structural quality characteristics have been clearly defined by the Consortium for IT Software Quality (CISQ). Under the guidance of Bill Curtis, co-author of the Capability Maturity Model framework and CISQ's first Director; and Capers Jones, CISQ's Distinguished Advisor, CISQ has defined five major desirable characteristics of a piece of software needed to provide business value. [19] In the House of Quality model, these are "Whats" that need to be achieved:

Reliability
An attribute of resiliency and structural solidity. Reliability measures the level of risk and the likelihood of potential application failures. It also measures the defects injected due to modifications made to the software (its "stability" as termed by ISO). The goal for checking and monitoring Reliability is to reduce and prevent application downtime, application outages and errors that directly affect users, and enhance the image of IT and its impact on a company's business performance.
Efficiency
The source code and software architecture attributes are the elements that ensure high performance once the application is in run-time mode. Efficiency is especially important for applications in high execution speed environments such as algorithmic or transactional processing where performance and scalability are paramount. An analysis of source code efficiency and scalability provides a clear picture of the latent business risks and the harm they can cause to customer satisfaction due to response-time degradation.
Security
A measure of the likelihood of potential security breaches due to poor coding practices and architecture. This quantifies the risk of encountering critical vulnerabilities that damage the business. [20]
Maintainability
Maintainability includes the notion of adaptability, portability and transferability (from one development team to another). Measuring and monitoring maintainability is a must for mission-critical applications where change is driven by tight time-to-market schedules and where it is important for IT to remain responsive to business-driven changes. It is also essential to keep maintenance costs under control.
Size
While not a quality attribute per se, the sizing of source code is a software characteristic that obviously impacts maintainability. Combined with the above quality characteristics, software size can be used to assess the amount of work produced and to be done by teams, as well as their productivity through correlation with time-sheet data, and other SDLC-related metrics.

Software functional quality is defined as conformance to explicitly stated functional requirements, identified for example using Voice of the Customer analysis (part of the Design for Six Sigma toolkit) and/or documented through use cases, and as the level of satisfaction experienced by end-users. The latter is referred to as usability and is concerned with how intuitive and responsive the user interface is, how easily simple and complex operations can be performed, and how useful error messages are. Typically, software testing practices and tools ensure that a piece of software behaves in compliance with the original design, planned user experience and desired testability, i.e. a piece of software's disposition to support acceptance criteria.
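
As a hedged illustration of checking conformance to a stated functional requirement, the sketch below encodes a hypothetical acceptance criterion as an automated test. The requirement, the shipping_cost function and the threshold are invented for the example and are not taken from any standard.

    import unittest

    # Hypothetical functional requirement: "orders above 100.00 ship for free,
    # otherwise a flat 4.95 shipping fee applies."
    def shipping_cost(order_total: float) -> float:
        return 0.0 if order_total > 100.00 else 4.95

    class ShippingAcceptanceTest(unittest.TestCase):
        """Each test is one acceptance criterion derived from the stated requirement."""

        def test_free_shipping_above_threshold(self):
            self.assertEqual(shipping_cost(150.00), 0.0)

        def test_flat_rate_at_or_below_threshold(self):
            self.assertEqual(shipping_cost(100.00), 4.95)

    if __name__ == "__main__":
        unittest.main()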

The dual structural/functional dimension of software quality is consistent with the model proposed in Steve McConnell's Code Complete, which divides software characteristics into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not. [21]

Alternative approaches

One of the challenges in defining quality is that "everyone feels they understand it" [22] and other definitions of software quality could be based on extending the various descriptions of the concept of quality used in business.

Dr. Tom DeMarco has proposed that "a product's quality is a function of how much it changes the world for the better." [23] This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality.

Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." [24] [25] This definition stresses that quality is inherently subjective—different people will experience the quality of the same software differently. One strength of this definition is the questions it invites software teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?".

Measurement

Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed through testing [see main article: Software testing].

Introduction

Figure: Relationship between software desirable characteristics (right) and measurable attributes (left).

Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means, or a mix of both. In both cases, for each desirable characteristic there is a set of measurable attributes whose presence in a piece of software or system tends to be correlated and associated with this characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using the Quality Function Deployment approach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the software quality definition above.
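
For instance, the number of target-dependent statements mentioned above could be approximated by a simple static scan. The sketch below is a minimal illustration, assuming Python sources and a hypothetical list of platform-specific patterns; a real analyzer would use a proper parser and a vetted rule set.

    import re
    from pathlib import Path

    # Hypothetical patterns treated as "target-dependent" constructs.
    TARGET_DEPENDENT_PATTERNS = [
        r"\bwinreg\b",        # Windows-only registry access
        r"\bos\.system\(",    # shells out to a platform-specific command line
        r"C:\\\\",            # hard-coded Windows path
        r"/dev/",             # hard-coded Unix device path
    ]

    def count_target_dependent_statements(root: str) -> int:
        """Count occurrences of target-dependent constructs under a source tree."""
        total = 0
        for path in Path(root).rglob("*.py"):
            text = path.read_text(errors="ignore")
            total += sum(len(re.findall(p, text)) for p in TARGET_DEPENDENT_PATTERNS)
        return total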

The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126-3 and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions.

The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the five characteristics that matter for the user or owner of the business system (right) depends on measurable attributes (left).

Correlations between programming errors and production defects show that basic code errors account for 92% of the total errors in the source code. These numerous code-level issues ultimately account for only 10% of the defects in production. Bad software engineering practices at the architecture level account for only 8% of total defects, but consume over half the effort spent on fixing problems, and lead to 90% of the serious reliability, security, and efficiency issues in production. [26]

Code-based analysis

Many of the existing software measures count structural elements of the application that result from parsing the source code, such as individual instructions (Park, 1992), [27] tokens (Halstead, 1977), [28] control structures (McCabe, 1976), and objects (Chidamber & Kemerer, 1994). [29]
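
As a sketch of such structural counting, the snippet below approximates McCabe's cyclomatic complexity for Python code by counting decision points with the standard ast module. The set of node types counted is a simplifying assumption, not the full graph-theoretic McCabe definition.

    import ast

    # Node types counted as decision points (a simplified convention).
    DECISION_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

    def cyclomatic_complexity(source: str) -> int:
        """Approximate McCabe complexity as one plus the number of decision points."""
        complexity = 1
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, DECISION_NODES):
                complexity += 1
            elif isinstance(node, ast.BoolOp):
                # each extra operand of 'and'/'or' adds one more path
                complexity += len(node.values) - 1
        return complexity

    sample = "def f(x):\n    if x > 0 and x < 10:\n        return 1\n    return 0\n"
    print(cyclomatic_complexity(sample))  # prints 3 under this counting scheme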

Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach or a mix of both to provide an aggregate view [using for example weighted average(s) that reflect relative importance between the factors being measured].

This view of software quality on a linear continuum has to be supplemented by the identification of discrete Critical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems (Nygard, 2007) [30] that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is the Common Weakness Enumeration, [31] a repository of vulnerabilities in the source code that make applications exposed to security breaches.
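
A minimal sketch of combining an aggregated, weighted score with a critical-error override is given below. The scores, weights and acceptance threshold are hypothetical and would in practice come from measurement tooling and business priorities.

    # Hypothetical per-characteristic scores on a 0-100 scale.
    scores = {"reliability": 82, "efficiency": 74, "security": 91, "maintainability": 65}

    # Hypothetical weights reflecting business priorities (they sum to 1.0).
    weights = {"reliability": 0.35, "efficiency": 0.15, "security": 0.30, "maintainability": 0.20}

    critical_violations = 0   # number of Critical Programming Errors found by analysis
    THRESHOLD = 70            # hypothetical minimum acceptable aggregate score

    aggregate = sum(scores[c] * weights[c] for c in scores)

    # A single critical violation makes the system unsuitable regardless of the aggregate.
    fit_for_use = critical_violations == 0 and aggregate >= THRESHOLD
    print(f"aggregate = {aggregate:.1f}, fit for use: {fit_for_use}")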

The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application, and all of them must be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978) [32] and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application.

Structural quality analysis and measurement are performed through the analysis of the source code, the architecture, the software framework, and the database schema, in relation to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed by development tools, which is mostly concerned with implementation considerations and is crucial during debugging and testing activities.

Reliability

The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation.

Assessing reliability requires checks of at least the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Coding Practices
  • Complexity of algorithms
  • Complexity of programming practices
  • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
  • Component or pattern re-use ratio
  • Dirty programming
  • Error & Exception handling (for all layers - GUI, Logic & Data)
  • Multi-layer design compliance
  • Resource bounds management
  • Software avoids patterns that will lead to unexpected behaviors
  • Software manages data integrity and consistency
  • Transaction complexity level

Depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software.
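
As a hedged sketch of such a custom check, the snippet below addresses one of the practices listed above (error and exception handling) by flagging exception handlers that silently swallow all failures. It assumes Python sources and uses only the standard ast module.

    import ast

    def find_silent_exception_handlers(source: str, filename: str = "<memory>"):
        """Flag broad 'except' blocks whose body does nothing but pass."""
        issues = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.ExceptHandler):
                body_only_passes = all(isinstance(stmt, ast.Pass) for stmt in node.body)
                too_broad = node.type is None or (
                    isinstance(node.type, ast.Name) and node.type.id == "Exception"
                )
                if body_only_passes and too_broad:
                    issues.append(f"{filename}:{node.lineno}: exception silently swallowed")
        return issues

    sample = "try:\n    risky()\nexcept Exception:\n    pass\n"
    print(find_silent_exception_handlers(sample))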

Efficiency

As with Reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data.
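
A small, hedged illustration of detecting one such static attribute is sketched below: it flags calls that look like database queries issued inside loops (the so-called N+1 pattern). It assumes Python sources, and the list of query-like method names is a hypothetical heuristic, not an established rule.

    import ast

    QUERY_LIKE_METHODS = {"execute", "executemany", "query"}  # hypothetical heuristic

    def queries_inside_loops(source: str):
        """Report call sites that look like per-iteration database queries."""
        findings = []
        for outer in ast.walk(ast.parse(source)):
            if isinstance(outer, (ast.For, ast.While)):
                for node in ast.walk(outer):
                    if (isinstance(node, ast.Call)
                            and isinstance(node.func, ast.Attribute)
                            and node.func.attr in QUERY_LIKE_METHODS):
                        findings.append(f"line {node.lineno}: query call inside a loop")
        # nested loops may report the same call twice, so de-duplicate
        return sorted(set(findings))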

Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes:

Security

Most security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting. These are well documented in lists maintained by CWE, [33] and by the CERT Coordination Center at Carnegie Mellon University's Software Engineering Institute (SEI).
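
A minimal sketch of the SQL injection weakness mentioned above (CWE-89) and its usual remedy is shown below, using Python's built-in sqlite3 module; the table and data are invented for the example.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "nobody' OR '1'='1"   # attacker-controlled value

    # Vulnerable: string concatenation lets the input rewrite the query (CWE-89).
    vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())   # leaks a row despite the bogus name

    # Safer: a parameterized query treats the input strictly as data.
    print(conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall())  # []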

Assessing security requires at least checking the following software engineering best practices and technical attributes:

Maintainability

Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations of best practices in documentation, complexity avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code and unorganized and difficult-to-read code. [35]

Assessing maintainability requires checking the following software engineering best practices and technical attributes:

  • Application Architecture Practices
  • Architecture, Programs and Code documentation embedded in source code
  • Code readability
  • Complexity level of transactions
  • Complexity of algorithms
  • Complexity of programming practices
  • Compliance with Object-Oriented and Structured Programming best practices (when applicable)
  • Component or pattern re-use ratio
  • Controlled level of dynamic coding
  • Coupling ratio
  • Dirty programming
  • Documentation
  • Hardware, OS, middleware, software components and database independence
  • Multi-layer design compliance
  • Portability
  • Programming Practices (code level)
  • Reduced duplicate code and functions
  • Source code file organization cleanliness
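
One of the attributes above, reduced duplicate code, can be estimated mechanically. The sketch below is a rough, hedged illustration that hashes fixed-size windows of lines to spot identical fragments across Python files; the window size and line normalization are arbitrary choices, and real clone detectors work at the token or syntax-tree level.

    import hashlib
    from pathlib import Path

    WINDOW = 6  # number of consecutive lines compared at a time (arbitrary)

    def duplicated_windows(root: str, pattern: str = "*.py"):
        """Return pairs of locations whose N-line windows are textually identical."""
        seen, duplicates = {}, []
        for path in Path(root).rglob(pattern):
            lines = [line.strip() for line in path.read_text(errors="ignore").splitlines()]
            for i in range(len(lines) - WINDOW + 1):
                chunk = "\n".join(lines[i:i + WINDOW])
                if not chunk.strip():
                    continue  # skip windows that are entirely blank
                digest = hashlib.sha1(chunk.encode()).hexdigest()
                if digest in seen:
                    duplicates.append((seen[digest], (str(path), i + 1)))
                else:
                    seen[digest] = (str(path), i + 1)
        return duplicates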

Maintainability is closely related to Ward Cunningham's concept of technical debt, which is an expression of the costs resulting from a lack of maintainability. Reasons why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent, [36] and often have their origin in developers' lack of ability or time, unclear goals, carelessness, and discrepancies between the cost of creating and the benefit of having documentation and, in particular, maintainable source code. [37]

Size

Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files, etc. There are essentially two types of software size to be measured: the technical size (footprint) and the functional size.

The function point analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life-cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries.
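
The arithmetic behind an unadjusted function point count can be sketched as below. The weights are the commonly cited IFPUG average values and the counts are hypothetical; a real count distinguishes low, average and high complexity per item and applies a value adjustment factor.

    # Commonly cited IFPUG average weights per function type (illustrative only).
    AVERAGE_WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }

    # Hypothetical counts taken from a requirements inventory.
    counts = {
        "external_inputs": 12,
        "external_outputs": 8,
        "external_inquiries": 5,
        "internal_logical_files": 6,
        "external_interface_files": 3,
    }

    unadjusted_fp = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)
    print(f"Unadjusted function points: {unadjusted_fp}")  # 189 with these figures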

Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. However, Function Points has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" by which services are delivered and performance is measured.

One common limitation of the Function Point methodology is that it is a manual process and therefore can be labor-intensive and costly in large-scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality, focused on introducing a computable metrics standard for automating the measurement of software size, while IFPUG keeps promoting a manual approach, as most of its activity relies on FP counter certifications.

CISQ announced the availability of its first metric standard, Automated Function Points, to the CISQ membership, in CISQ Technical. These recommendations have been developed in OMG's Request for Comment format and submitted to OMG's process for standardization. [citation needed]

Identifying critical programming errors

Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest immediate or long-term business disruption risk.

These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions of minor importance, while others – those preparing the ground for a knowledge transfer, for example – will consider it absolutely critical.

Critical Programming Errors can also be classified per CISQ Characteristics. Basic example below:
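
As a hedged illustration only (these are typical patterns drawn from the sections above, not CISQ's official rule catalogue), such a classification could look like this:

    # Hypothetical examples of Critical Programming Errors grouped by CISQ characteristic.
    CRITICAL_ERROR_EXAMPLES = {
        "Reliability": [
            "exception handler that silently swallows failures",
            "resource acquired but never released on an error path",
        ],
        "Efficiency": [
            "database query executed inside a tight loop",
        ],
        "Security": [
            "SQL statement built by string concatenation (injection risk)",
            "unvalidated input echoed back to a web page (cross-site scripting)",
        ],
        "Maintainability": [
            "large copy-pasted code blocks",
            "method with excessive cyclomatic complexity",
        ],
    }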

Operationalized quality models

Newer proposals for quality models such as Squale and Quamoco [38] advocate a direct integration of the definition of quality attributes and their measurement. By breaking down quality attributes, or even defining additional layers, complex and abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. These quality models have been applied in industrial contexts but have not received widespread adoption.

See also

Further reading

References

Notes

  1. Pressman, Roger S. (2005). Software Engineering: A Practitioner's Approach (Sixth International ed.). McGraw-Hill Education. p. 388. ISBN 0071267824.
  2. "How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations" (PDF). Archived (PDF) from the original on 2013-12-28. Retrieved 2013-10-18.
  3. "ISO 25000:2005" (PDF). Archived (PDF) from the original on 2013-04-14. Retrieved 2013-10-18.
  4. "ISO/IEC 25010:2011". ISO. Archived from the original on 14 March 2016. Retrieved 14 March 2016.
  5. J. Bohnet, J. Döllner Archived 2014-04-27 at the Wayback Machine , "Monitoring Code Quality and Development Activity by Software Maps". Proceedings of the IEEE ACM ICSE Workshop on Managing Technical Debt, pp. 9-16, 2011.
  6. Medical Devices: The Therac-25* Archived 2008-02-16 at the Wayback Machine , Nancy Leveson, University of Washington
  7. Embedded Software Archived 2010-07-05 at the Wayback Machine , Edward A. Lee, To appear in Advances in Computers (M. Zelkowitz, editor), Vol. 56, Academic Press, London, 2002, Revised from UCB ERL Memorandum M01/26 University of California, Berkeley, CA 94720, USA, November 1, 2001
  8. "Aircraft Certification Software and Airborne Electronic Hardware". Archived from the original on 4 October 2014. Retrieved 28 September 2014.
  9. Improving Quality Through Better Requirements (Slideshow) Archived 2012-03-26 at the Wayback Machine , Dr. Ralph R. Young, 24/01/2004, Northrop Grumman Information Technology
  10. International Organization for Standardization, "ISO/IEC 9001: Quality management systems -- Requirements," 1999.
  11. International Organization for Standardization, "ISO/IEC 24765: Systems and software engineering – Vocabulary," 2010.
  12. W. A. Shewhart, Economic control of quality of manufactured product. Van Nostrand, 1931.
  13. B. Kitchenham and S. Pfleeger, "Software quality: the elusive target", IEEE Software, vol. 13, no. 1, pp. 12–21, 1996.
  14. D. A. Garvin, Managing Quality - the strategic and competitive edge. New York, NY: Free Press [u.a.], 1988.
  15. S. H. Kan, "Metrics and Models in Software Quality Engineering", 2nd ed. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc., 2002.
  16. W. E. Deming, "Out of the crisis: quality, productivity and competitive position". Cambridge University Press, 1988.
  17. A. V. Feigenbaum, "Total Quality Control", McGraw-Hill, 1983.
  18. J.M. Juran, "Juran's Quality Control Handbook", McGraw-Hill, 1988.
  19. McGraw Gary (2004), Software security, 11-17
  20. McConnell, Steve (1993), Code Complete (First ed.), Microsoft Press.
  21. Crosby, P., Quality is Free, McGraw-Hill, 1979
  22. DeMarco, T., Management Can Make Quality (Im)possible, Cutter IT Summit, Boston, April 1999
  23. Weinberg, Gerald M. (1992), Quality Software Management: Volume 1, Systems Thinking, New York, NY: Dorset House Publishing, p. 7
  24. Weinberg, Gerald M. (1993), Quality Software Management: Volume 2, First-Order Measurement, New York, NY: Dorset House Publishing, p. 108
  25. "Archived copy" (PDF). Archived (PDF) from the original on 2013-12-28. Retrieved 2013-10-18.CS1 maint: Archived copy as title (link)
  26. Park, R.E. (1992). Software Size Measurement: A Framework for Counting Source Statements. (CMU/SEI-92-TR-020). Software Engineering Institute, Carnegie Mellon University
  27. Halstead, M.E. (1977). Elements of Software Science. Elsevier North-Holland.
  28. Chidamber, S. & C. Kemerer. C. (1994). A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, 20 (6), 476-493
  29. Nygard, M.T. (2007). Release It! Design and Deploy Production Ready Software. The Pragmatic Programmers.
  30. "CWE - Common Weakness Enumeration". cwe.mitre.org. Archived from the original on 2016-05-10. Retrieved 2016-05-20.
  31. Boehm, B., Brown, J.R., Kaspar, H., Lipow, M., MacLeod, G.J., & Merritt, M.J. (1978). Characteristics of Software Quality. North-Holland.
  32. "CWE - Common Weakness Enumeration". Cwe.mitre.org. Archived from the original on 2013-10-14. Retrieved 2013-10-18.
  33. "CWE's Top 25". Sans.org. Retrieved 2013-10-18.
  34. IfSQ Level-2 A Foundation-Level Standard for Computer Program Source Code Archived 2011-10-27 at the Wayback Machine , Second Edition August 2008, Graham Bolton, Stuart Johnston, IfSQ, Institute for Software Quality.
  35. Fowler, Martin (October 14, 2009). "TechnicalDebtQuadrant". Archived from the original on February 2, 2013. Retrieved February 4, 2013.
  36. Prause, Christian; Durdik, Zoya (June 3, 2012). "Architectural design and documentation: Waste in agile development?". IEEE Computer Society. Retrieved February 4, 2013.
  37. Wagner, Stefan; Goeb, Andreas; Heinemann, Lars; Kläs, Michael; Lampasona, Constanza; Lochmann, Klaus; Mayr, Alois; Plösch, Reinhold; Seidl, Andreas (2015). "Operationalised product quality models and assessment: The Quamoco approach" (PDF). Information and Software Technology. 62: 101–123. doi:10.1016/j.infsof.2015.02.009.
  38. Suryanarayana, Girish (2015). "Software Process versus Design Quality: Tug of War?". IEEE Software. 32 (4): 7–11. doi:10.1109/MS.2015.87.

Bibliography