Software metric

In software engineering and development, a software metric is a standard of measure of a degree to which a software system or process possesses some property. [1] [2] Although a metric is not strictly a measurement (metrics are functions, while measurements are the numbers obtained by applying those functions), the two terms are often used as synonyms. Since quantitative measurements are essential in all sciences, computer science practitioners and theoreticians continue working to bring similar approaches to software development. The goal is to obtain objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignments.

Common software measurements

Common software measurements include source lines of code (SLOC), cyclomatic complexity, function points, Halstead complexity measures, code coverage, comment density, defect density (bugs per line of code), and coupling and cohesion.

Limitations

As software development is a complex process, with high variance in both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to detailed design. Another source of difficulty and debate is determining which metrics matter and what they mean. [8] [9] The practical utility of software measurements has therefore been limited to domains such as scheduling, software sizing, programming complexity, development effort estimation, and software quality.

A specific measurement may target one or more of the above aspects, or the balance between them, for example as an indicator of team motivation or project performance.

Additionally, metrics vary between static and dynamic analysis of program code, as well as for object-oriented software systems. [10] [11]

Acceptance and public opinion

Some software development practitioners point out that simplistic measurements can cause more harm than good. [12] Others have noted that metrics have become an integral part of the software development process. [8] The impact of measurement on programmer psychology has raised concerns about harmful effects on performance due to stress, performance anxiety, and attempts to cheat the metrics, while others find that it has a positive impact on how developers value their own work and that it keeps them from being undervalued. Some argue that the definitions of many measurement methodologies are imprecise, and consequently it is often unclear how tools for computing them arrive at a particular result, [13] while others argue that imperfect quantification is better than none ("You can't control what you can't measure."). [14] Evidence shows that software metrics are widely used by government agencies, the US military, NASA, [15] IT consultants, academic institutions, [16] and commercial and academic development estimation software.

Related Research Articles

Source lines of code (SLOC), also known as lines of code (LOC), is a software metric used to measure the size of a computer program by counting the number of lines in the text of the program's source code. SLOC is typically used to predict the amount of effort that will be required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced.
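
As a rough illustration of how such a size metric can be computed, the following is a minimal sketch that counts non-blank, non-comment lines in a Python source file. It does not follow any particular tool's counting rules, which vary (for example, physical versus logical SLOC), and the file path used is a placeholder.

    # Minimal sketch: count "physical" source lines of code in a Python file,
    # ignoring blank lines and full-line comments. Real SLOC counters apply
    # language-aware rules (logical vs. physical lines, block comments, etc.).
    def count_sloc(path):
        sloc = 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                stripped = line.strip()
                if stripped and not stripped.startswith("#"):
                    sloc += 1
        return sloc

    if __name__ == "__main__":
        print(count_sloc("example.py"))  # "example.py" is a placeholder path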

Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976.
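For a control-flow graph with E edges, N nodes, and P connected components, McCabe's measure is M = E - N + 2P. A minimal sketch of that calculation follows; the control-flow graph shown is an invented example (a single function with one if/else branch), not taken from any cited source.

    # Minimal sketch of McCabe's formula M = E - N + 2P for a control-flow graph.
    # The example graph models a single function with one if/else branch.
    def cyclomatic_complexity(edges, nodes, components=1):
        return len(edges) - len(nodes) + 2 * components

    # Invented example: entry, condition, then-branch, else-branch, exit
    nodes = ["entry", "cond", "then", "else", "exit"]
    edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
             ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2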

The Personal Software Process (PSP) is a structured software development process that is designed to help software engineers better understand and improve their performance by bringing discipline to the way they develop software and tracking their predicted and actual development of the code. It clearly shows developers how to manage the quality of their products, how to make a sound plan, and how to make commitments. It also offers them the data to justify their plans. They can evaluate their work and suggest improvement direction by analyzing and reviewing development time, defects, and size data. The PSP was created by Watts Humphrey to apply the underlying principles of the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) to the software development practices of a single developer. It claims to give software engineers the process skills necessary to work on a team software process (TSP) team.

In the context of software engineering, software quality refers to two related but distinct notions: functional quality, i.e. how well the software complies with or conforms to its functional requirements or specifications, and structural quality, i.e. how well it meets the non-functional requirements that support delivery of the functional requirements, such as robustness or maintainability.

This is an alphabetical list of articles pertaining specifically to software engineering.

Programming complexity is a term that includes software properties that affect internal interactions. Several commentators distinguish between the terms "complex" and "complicated". Complicated implies being difficult to understand, but ultimately knowable. Complex, by contrast, describes the interactions between entities. As the number of entities increases, the number of interactions between them increases exponentially, making it impossible to know and understand them all. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, thus increasing the risk of introducing defects when changing the software. In more extreme cases, it can make modifying the software virtually impossible.

Software assurance (SwA) is a critical process in software development that ensures the reliability, safety, and security of software products. It involves a variety of activities, including requirements analysis, design reviews, code inspections, testing, and formal verification. One crucial component of software assurance is secure coding practices, which follow industry-accepted standards and best practices, such as those outlined by the Software Engineering Institute (SEI) in their CERT Secure Coding Standards (SCS).

SEER for Software (SEER-SEM) is a project management application used to estimate resources required for software development.

The function point is a "unit of measurement" to express the amount of business functionality an information system provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost of a single unit is calculated from past projects.
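As a simplified illustration of the idea, an unadjusted function point count is a weighted sum over counted function types. The sketch below uses illustrative "average complexity" weights and an invented system; real counts follow the rules of a counting standard such as the IFPUG manual.

    # Minimal sketch: an unadjusted function point count as a weighted sum of
    # counted function types. The weights are illustrative "average complexity"
    # values; actual counting rules come from a standard such as IFPUG.
    WEIGHTS = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }

    def unadjusted_function_points(counts):
        return sum(WEIGHTS[kind] * n for kind, n in counts.items())

    # Hypothetical system: 10 inputs, 8 outputs, 6 inquiries, 4 files, 2 interfaces
    counts = {
        "external_inputs": 10,
        "external_outputs": 8,
        "external_inquiries": 6,
        "internal_logical_files": 4,
        "external_interface_files": 2,
    }
    print(unadjusted_function_points(counts))  # 40 + 40 + 24 + 40 + 14 = 158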

GQM, the initialism for goal, question, metric, is an established goal-oriented approach to software metrics to improve and measure software quality.
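A minimal sketch of how a GQM tree might be represented as plain data follows; the goal, questions, and metrics below are invented examples chosen only to show the goal-question-metric structure.

    # Minimal sketch of a GQM (goal, question, metric) tree as plain data.
    # The goal, questions, and metrics are invented for illustration.
    gqm = {
        "goal": "Improve the maintainability of the payment module",
        "questions": [
            {
                "question": "How complex is the current code?",
                "metrics": ["cyclomatic complexity", "Halstead volume"],
            },
            {
                "question": "How often do changes introduce defects?",
                "metrics": ["defects per change", "corrective commit probability"],
            },
        ],
    }

    for q in gqm["questions"]:
        print(q["question"], "->", ", ".join(q["metrics"]))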

Software sizing or software size estimation is an activity in software engineering that is used to determine or estimate the size of a software application or component in order to be able to carry out other software project management activities. Size is an inherent characteristic of a piece of software, just as weight is an inherent characteristic of a tangible material.

Halstead complexity measures are software metrics introduced by Maurice Howard Halstead in 1977 as part of his treatise on establishing an empirical science of software development. Halstead made the observation that metrics of the software should reflect the implementation or expression of algorithms in different languages, but be independent of their execution on a specific platform. These metrics are therefore computed statically from the code.
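The core Halstead measures are derived from counts of distinct and total operators and operands: with n1 distinct operators, n2 distinct operands, N1 total operators and N2 total operands, the program vocabulary is n = n1 + n2, the program length is N = N1 + N2, the volume is V = N * log2(n), the difficulty is D = (n1 / 2) * (N2 / n2), and the effort is E = D * V. A minimal sketch of those formulas follows; the token counts used are invented, since real tools first tokenize the source code.

    # Minimal sketch of the core Halstead measures from operator/operand counts.
    # The counts below are invented; real tools tokenize the source code first.
    import math

    def halstead(n1, n2, N1, N2):
        vocabulary = n1 + n2
        length = N1 + N2
        volume = length * math.log2(vocabulary)
        difficulty = (n1 / 2) * (N2 / n2)
        effort = difficulty * volume
        return {"vocabulary": vocabulary, "length": length,
                "volume": volume, "difficulty": difficulty, "effort": effort}

    print(halstead(n1=10, n2=7, N1=33, N2=21))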

In software development, effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.
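One classic family of effort estimation models expresses effort as a power law of estimated size; for example, the basic COCOMO model estimates effort in person-months as a * KLOC^b, with the coefficients taken from tables for the project class. The sketch below uses the published basic-COCOMO coefficients for an "organic" project (a = 2.4, b = 1.05) and an invented project size; it is only an illustration, as modern models add many cost drivers on top of this.

    # Minimal sketch of a basic COCOMO-style effort estimate:
    # effort (person-months) = a * KLOC^b, with coefficients for an
    # "organic" project (a = 2.4, b = 1.05). Invented 32 KLOC project size.
    def basic_cocomo_effort(kloc, a=2.4, b=1.05):
        return a * (kloc ** b)

    print(round(basic_cocomo_effort(32), 1))  # estimate for a 32 KLOC project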

A Software Defect Indicator is a pattern that can be found in source code that is strongly correlated with a software defect, an error or omission in the source code of a computer program that may cause it to malfunction. When inspecting the source code of computer programs, it is not always possible to identify defects directly, but there are often patterns, sometimes called anti-patterns, indicating that defects are present.
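As a minimal sketch of the idea, a tool might flag suspicious patterns in source text; the specific indicator used below, an exception handler that silently swallows errors, is a commonly cited anti-pattern chosen only for illustration, and real tools parse the code rather than matching raw text.

    # Minimal sketch: flag a commonly cited anti-pattern (an "except" handler
    # whose body is just "pass", silently swallowing errors) as a possible
    # defect indicator. Real tools parse the code instead of matching text.
    import re

    SILENT_EXCEPT = re.compile(r"except[^\n]*:\s*\n\s*pass\b")

    def find_silent_excepts(source):
        return [m.start() for m in SILENT_EXCEPT.finditer(source)]

    sample = "try:\n    risky()\nexcept Exception:\n    pass\n"
    print(find_silent_excepts(sample))  # offsets of each suspicious handler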

NDepend is a static analysis tool for .NET managed code. The tool offers a large number of features, from dependency visualization to Quality Gates and Smart Technical Debt Estimation. For this reason, the community refers to it as the "Swiss Army Knife" for .NET developers.

Weighted Micro Function Points (WMFP) is a modern software sizing algorithm and a successor to established methods such as COCOMO, COSYSMO, the maintainability index, cyclomatic complexity, function points, and Halstead complexity. It is claimed to produce more accurate results than traditional software sizing methodologies while requiring less configuration and knowledge from the end user, as most of the estimation is based on automatic measurements of existing source code.

Software construction is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging. It is linked to all the other software engineering disciplines, most strongly to software design and software testing.

In software engineering, basis path testing, or structured testing, is a white box method for designing test cases. The method analyzes the control-flow graph of a program to find a set of linearly independent paths of execution. The method normally uses McCabe cyclomatic complexity to determine the number of linearly independent paths and then generates test cases for each path thus obtained. Basis path testing guarantees complete branch coverage, but achieves that without covering all possible paths of the control-flow graph – the latter is usually too costly. Basis path testing has been widely used and studied.
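As a minimal illustration, a function whose control-flow graph has cyclomatic complexity 2 has two linearly independent paths, so basis path testing would require at least one test case per path. The function and test cases below are invented examples.

    # Minimal sketch: a function with cyclomatic complexity 2 (one if/else)
    # and one test case per basis path. Function and tests are invented.
    def classify(balance):
        if balance < 0:
            return "overdrawn"
        else:
            return "ok"

    # Basis path 1: the "true" branch of the condition
    assert classify(-5) == "overdrawn"
    # Basis path 2: the "false" branch of the condition
    assert classify(10) == "ok"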

Parasoft C/C++test is an integrated set of tools for testing C and C++ source code that software developers use to analyze, test, find defects, and measure the quality and security of their applications. It supports software development practices that are part of development testing, including static code analysis, dynamic code analysis, unit test case generation and execution, code coverage analysis, regression testing, runtime error detection, requirements traceability, and code review. It is a commercial tool that runs on Linux, Windows, and Solaris, and also supports on-target embedded testing and cross compilers.

Bill Curtis is a software engineer best known for leading the development of the Capability Maturity Model and the People CMM in the Software Engineering Institute at Carnegie Mellon University, and for championing the spread of software process improvement and software measurement globally. In 2007 he was elected a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) for his contributions to software process improvement and measurement. He was named to the 2022 class of ACM Fellows, "for contributions to software process, software measurement, and human factors in software engineering".

References

  1. Fenton, Norman E.; Bieman, James (2014). Software Metrics: A Rigorous and Practical Approach (3rd ed.). Boca Raton, FL. ISBN   978-1-4398-3823-5. OCLC   834978252.
  2. Timóteo, Aline Lopes; Álvaro, Alexandre; Almeida, Eduardo Santana de; Meira, Silvio Romero de Lemos. Software Metrics: A Survey. CiteSeerX   10.1.1.544.2164 .
  3. "Descriptive Information (DI) Metric Thresholds". Land Software Engineering Centre. Archived from the original on 6 July 2011. Retrieved 19 October 2010.
  4. Gill, G. K.; Kemerer, C. F. (December 1991). "Cyclomatic complexity density and software maintenance productivity". IEEE Transactions on Software Engineering. 17 (12): 1284–1288. doi:10.1109/32.106988. ISSN   1939-3520.
  5. "maintainability - Does it make sense to compute cyclomatic complexity/lines of code ratio?". Software Engineering Stack Exchange. Retrieved 2021-03-01.
  6. "OMG Adopts Automated Function Point Specification". Omg.org. 2013-01-17. Retrieved 2013-05-19.
  7. Amit, Idan; Feitelson, Dror G. (2020-07-21). "The Corrective Commit Probability Code Quality Metric". arXiv: 2007.10912 [cs.SE].
  8. Binstock, Andrew (March 2010). "Integration Watch: Using metrics effectively". SD Times. BZ Media. Retrieved 19 October 2010.
  9. Kolawa, Adam (7 August 2008). "When, Why, and How: Code Analysis". The Code Project. Retrieved 14 February 2021.
  10. Gosain, Anjana; Sharma, Ganga (2015). "Dynamic Software Metrics for Object Oriented Software: A Review". In Mandal, J. K.; Satapathy, Suresh Chandra; Kumar Sanyal, Manas; Sarkar, Partha Pratim; Mukhopadhyay, Anirban (eds.). Information Systems Design and Intelligent Applications. Advances in Intelligent Systems and Computing. Vol. 340. New Delhi: Springer India. pp. 579–589. doi:10.1007/978-81-322-2247-7_59. ISBN   978-81-322-2247-7.
  11. S, Parvinder Singh; Singh, Gurdev. Dynamic Metrics for Polymorphism in Object Oriented Systems. CiteSeerX   10.1.1.193.4307 .
  12. Kaner, Dr. Cem (2004), Software Engineer Metrics: What do they measure and how do we know?, CiteSeerX   10.1.1.1.2542
  13. Lincke, Rüdiger; Lundberg, Jonas; Löwe, Welf (2008), "Comparing software metrics tools" (PDF), International Symposium on Software Testing and Analysis 2008, pp. 131–142
  14. DeMarco, Tom (1982). Controlling Software Projects: Management, Measurement and Estimation. Yourdon Press. ISBN   0-13-171711-1.
  15. "NASA Metrics Planning and Reporting Working Group (MPARWG)". Earthdata.nasa.gov. Archived from the original on 2011-10-22. Retrieved 2013-05-19.
  16. "USC Center for Systems and Software Engineering". Sunset.usc.edu. Retrieved 2013-05-19.
  17. Savola, Reijo M. (2013-09-01). "Quality of security metrics and measurements". Computers & Security. 37: 78–90. doi:10.1016/j.cose.2013.05.002. ISSN   0167-4048.