Weighted Micro Function Points

Weighted Micro Function Points (WMFP) is a modern software sizing algorithm that succeeds established scientific methods such as COCOMO, COSYSMO, the maintainability index, cyclomatic complexity, function points, and Halstead complexity. It produces more accurate results than traditional software sizing methodologies, [1] while requiring less configuration and knowledge from the end user, as most of the estimation is based on automatic measurements of existing source code.

Whereas many earlier measurement methods use source lines of code (SLOC) to measure software size, WMFP uses a parser to understand the source code, breaking it down into micro functions and deriving several code complexity and volume metrics, which are then dynamically interpolated into a final effort score. In addition to compatibility with the waterfall software development life cycle methodology, WMFP is also compatible with newer methodologies, such as Six Sigma, the Boehm spiral, and Agile (AUP/Lean/XP/DSDM) methodologies, due to its differential analysis capability, made possible by its higher-precision measurement elements. [2]

Measured elements

The WMFP measured elements are several software metrics deduced from the source code by the WMFP algorithm analysis. Each is represented as a percentage of the whole unit's (project or file) effort and is translated into time. An illustrative sketch of how such metrics can be approximated follows the list below.

Flow complexity (FC) – Measures the complexity of a program's flow control path in a similar way to the traditional cyclomatic complexity, with higher accuracy by using weights and relations calculation.
Object vocabulary (OV) – Measures the quantity of unique information contained in the program's source code, similar to the traditional Halstead vocabulary with dynamic language compensation.
Object conjuration (OC) – Measures how frequently the information contained in the program's source code is used.
Arithmetic intricacy (AI) – Measures the complexity of arithmetic calculations across the program.
Data transfer (DT) – Measures the manipulation of data structures inside the program.
Code structure (CS) – Measures the amount of effort spent on the program structure, such as separating code into classes and functions.
Inline data (ID) – Measures the amount of effort spent on embedding hard-coded data.
Comments (CM) – Measures the amount of effort spent on writing program comments.
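
The WMFP analyzer and its exact measurement rules are proprietary, so a faithful implementation cannot be shown here. The Python sketch below only illustrates the general idea behind two of the elements above: it approximates flow complexity by counting branch keywords and object vocabulary by counting unique identifiers, then expresses each as a percentage of the unit total. The function names, keyword list, and percentage split are hypothetical illustrations, not the WMFP algorithm.

```python
# Illustrative only: crude stand-ins for two WMFP-style measured elements
# (flow complexity and object vocabulary) computed from raw source text.
# The real WMFP measurements are proprietary; everything here is a sketch
# of the general idea, not the actual algorithm.
import re

BRANCH_KEYWORDS = {"if", "elif", "else", "for", "while", "case", "switch", "catch"}

def rough_flow_complexity(source: str) -> int:
    """Count branch keywords as a stand-in for flow-control paths."""
    tokens = re.findall(r"[A-Za-z_]\w*", source)
    return sum(1 for t in tokens if t in BRANCH_KEYWORDS)

def rough_object_vocabulary(source: str) -> int:
    """Count unique identifiers as a stand-in for unique information."""
    tokens = re.findall(r"[A-Za-z_]\w*", source)
    return len(set(tokens) - BRANCH_KEYWORDS)

def element_percentages(source: str) -> dict:
    """Express each raw measurement as a percentage of the unit total."""
    raw = {
        "FC": rough_flow_complexity(source),
        "OV": rough_object_vocabulary(source),
    }
    total = sum(raw.values()) or 1
    return {name: 100.0 * value / total for name, value in raw.items()}

if __name__ == "__main__":
    sample = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
    print(element_percentages(sample))  # e.g. {'FC': 20.0, 'OV': 80.0}
```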

Calculation

The WMFP algorithm uses a three-stage process: function analysis, APPW transform, and result translation. A dynamic algorithm balances and sums the measured elements and produces a total effort score. The basic formula is:

\[
\sum_{i=1}^{N} (W_i M_i) \prod_{q=1}^{K} D_q
\]

where:

M_i = the source metric value measured by the WMFP analysis stage
W_i = the adjusted weight assigned to metric M_i by the APPW model
N = the count of metric types
i = the current metric type index (iteration)
D_q = the cost driver factor supplied by the user input
q = the current cost driver index (iteration)
K = the count of cost drivers
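
The APPW weights themselves are proprietary, but the aggregation step is simple. A minimal Python sketch of the formula above follows, using made-up metric values, weights, and cost drivers; only the structure of the formula is taken from this article.

```python
from math import prod

def wmfp_effort_score(metrics, weights, cost_drivers):
    """Sum the weighted metrics (W_i * M_i) and scale the result by the
    product of the user-supplied cost-driver factors (D_q)."""
    if len(metrics) != len(weights):
        raise ValueError("each metric M_i needs a matching weight W_i")
    weighted_sum = sum(w * m for w, m in zip(weights, metrics))
    return weighted_sum * prod(cost_drivers)

# Hypothetical values: three metric types (N = 3), two cost drivers (K = 2).
score = wmfp_effort_score(metrics=[12.0, 7.5, 3.2],
                          weights=[0.9, 1.1, 0.7],
                          cost_drivers=[1.05, 0.95])
print(score)
```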

This score is then transformed into time by applying a statistical model called average programmer profile weights (APPW), a proprietary successor to COCOMO II 2000 and COSYSMO. The resulting time in programmer work hours is then multiplied by a user-defined cost per hour of an average programmer to produce an average project cost, translated into the user's currency.
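
For example, with purely hypothetical figures: if the APPW transform yields 120 programmer work hours and the user specifies a rate of 50 per hour, the estimated average project cost is 120 × 50 = 6,000 in the user's currency.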

Downsides

The basic elements of WMFP, compared with traditional sizing models such as COCOMO, are complex enough that they cannot realistically be evaluated by hand, even on smaller projects, and require software to analyze the source code. As a result, WMFP can only be used for analogy-based cost predictions, not theoretical educated guesses.

References

  1. Capers Jones (October 2009). Software Engineering Best Practices, pp. 318–320.
  2. TickIT Quarterly (2009). Quarter 1, 2009, p. 13.