The Simple Function Point method


The Simple Function Point (SFP) method [1] is a lightweight functional size measurement method.


The Simple Function Point method was designed by Roberto Meli in 2010 to be compliant with the ISO14143-1 standard and compatible with the International Function Point Users Group (IFPUG) Function Point Analysis (FPA) method. The original method (SiFP) was first presented at a public conference in Rome (SMEF2011).

The method was subsequently described in a manual produced by the Simple Function Point Association: the Simple Function Point Functional Size Measurement Method Reference Manual, available under the Creative Commons Attribution-NoDerivatives 4.0 International Public License.

Adoption by IFPUG

In 2019, the Simple Function Point method was acquired by IFPUG, to provide its user community with a simplified Function Point counting method that makes functional size measurement easier yet reliable in the early stages of software projects. The short name became SFP. The Simple Function Point Practices Manual (SPM) was published by IFPUG in late 2021.

Basic concept

When the SFP method was proposed, the most widely used software functional size measurement method was IFPUG FPA. [2] However, IFPUG FPA had (and still has) a few shortcomings:

  - the measurement process is relatively time consuming and expensive, since every logical data file and transaction must be examined in detail;
  - the classification and weighting of data files and transactions are prone to subjective interpretation;
  - learning the method requires mastering several concepts and detailed counting rules.

To overcome at least some of these problems, the SFP method was defined to provide the following characteristics:

  - it is quicker and easier to apply than IFPUG FPA;
  - it is less prone to subjective interpretation;
  - it is easier to learn, especially for those who already know IFPUG FPA;
  - on average, it yields sizes very close to those obtained with IFPUG FPA.

The sought characteristics were achieved as follows:

IFPUG FPA requires that [2]

  1. logical data files and transactions are identified,
  2. logical data files are classified into Internal Logical Files (ILF) and External Interface Files (EIF),
  3. every transaction is classified as External Input (EI), External Output (EO), External Query (EQ),
  4. every ILF and EIF is weighted, based on its Record Element Types (RET) and Data Element Types (DET),
  5. every EI, EO and EQ is weighted, based on its File Types Referenced (FTR) and DET exchanged through the borders of the application being measured.

Of these activities, SFP requires only the first two, i.e., the identification of logical data files and transactions. Activities 4) and 5) are the most time consuming, since they require that every data file and transaction is examined in detail: skipping these phases makes the SFP method both quicker and easier to apply than IFPUG FPA. In addition, most of the subjective interpretation is due to activities 4) and 5), and partly also to activity 3): skipping these activities makes the SFP method also less prone to subjective interpretation.

The concepts used in the definition of SFP are a small subset of those used in the definition of IFPUG FPA, therefore learning SFP is easier than learning IFPUG FPA, and it is immediate for those who already know IFPUG FPA. In practice, only the concepts of logical data file and transaction have to be known.

Finally, the weights assigned to data files and transactions make the size expressed in SFP very close, on average, to the size expressed in IFPUG Function Points.

Definition

The logical data files are named Logical Files (LF) in the SFP method; similarly, transactions are named Elementary Processes (EP). Unlike in IFPUG FPA, there is no classification or weighting of the Base Functional Components (BFC, as defined in the ISO14143-1 standard).

The size of an EP is 4.6 SFP, while the size of an LF is 7.0 SFP. Therefore, the size expressed in SFP depends only on the number of logical files (#LF) and the number of elementary processes (#EP) belonging to the software application being measured: {\displaystyle Size_{[SFP]}=4.6\times \#EP+7.0\times \#LF}.
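Because no classification or weighting is involved, the measurement reduces to counting EPs and LFs and applying the two weights. The following minimal sketch (a hypothetical helper written for illustration, not an official IFPUG tool) shows the computation in Python:

```python
# Minimal sketch of the SFP size computation; identifiers are illustrative only.
EP_WEIGHT = 4.6  # SFP contributed by each Elementary Process (EP)
LF_WEIGHT = 7.0  # SFP contributed by each Logical File (LF)

def sfp_size(num_ep: int, num_lf: int) -> float:
    """Return the functional size in SFP, given the counts of EPs and LFs."""
    if num_ep < 0 or num_lf < 0:
        raise ValueError("counts must be non-negative")
    return EP_WEIGHT * num_ep + LF_WEIGHT * num_lf

# Example: an application with 120 elementary processes and 45 logical files.
print(sfp_size(120, 45))  # 867.0 SFP
```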

Empirical evaluation of the SFP method

Empirical studies have been carried out, aiming at:

  - evaluating the convertibility between SFP (or SiFP) and FPA measures;
  - verifying whether SFP-based measures support software development effort estimation as accurately as FPA-based measures.

Convertibility between SFP and FPA measures

Figure: Comparison of sizes expressed in Unadjusted Function Points (UFP) and Simple Function Points (SiFP) for an ISBSG dataset. The blue line represents perfect equivalence, {\displaystyle Size_{[SiFP]}=Size_{[UFP]}}.

In the original proposal of the SiFP method, a dataset from the ISBSG, including data from 768 projects, was used to evaluate the convertibility between UFP and SiFP measures. This study [1] showed that, on average, sizes expressed in SiFP and in UFP are approximately equivalent.

Another study [5] also used an ISBSG dataset, including data from 766 software applications, to evaluate the convertibility between UFP and SiFP measures. Via ordinary least squares regression, a close-to-unity linear relationship between the two measures was found.

Based on these empirical studies, [5] [1] it seems that one SiFP is approximately equivalent to one UFP (note that this equivalence holds only on average: in both studies an average relative error of around 12% was observed).

However, a third study [6] found a noticeably different conversion rate. This study used data from only 25 Web applications, so it is possible that the conversion rate is affected by the specific application type or by the relatively small size of the dataset.

In 2017, a study evaluated the convertibility between UFP and SiFP measures using seven different datasets. [7] Every dataset was characterized by its own conversion rate. Noticeably, for one dataset no statistically significant linear model could be found; instead, a non-linear model turned out to be statistically significant.

In conclusion, available evidence shows that one SiFP is approximately equivalent to one UFP, but the exact conversion rate depends on the data being considered, and the equivalence holds only on average.
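As an illustration of how such a conversion rate can be estimated, the sketch below fits a proportional model of UFP size versus SiFP size via least squares, in the spirit of the studies cited above; the data points are invented placeholders, not values from the ISBSG datasets:

```python
# Hedged sketch: estimating a SiFP-to-UFP conversion rate via least squares.
# The (sifp, ufp) pairs are made-up placeholders, not real ISBSG data.
import numpy as np

sifp = np.array([100.0, 250.0, 400.0, 620.0, 900.0])  # sizes in Simple Function Points
ufp  = np.array([ 98.0, 255.0, 395.0, 630.0, 880.0])  # sizes in Unadjusted Function Points

# Fit ufp = rate * sifp (regression through the origin): the slope is the conversion rate.
rate = np.sum(sifp * ufp) / np.sum(sifp * sifp)

# Mean relative error of the fitted conversion (the kind of error measure reported above).
mre = np.mean(np.abs(ufp - rate * sifp) / ufp)
print(f"conversion rate: {rate:.3f} UFP per SiFP, mean relative error: {mre:.1%}")
```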

Considering that the IFPUG SFP basic elements (EP, LF) are totally equivalent to the original SiFP elements (UGEP, UGDG), the previous results hold for the IFPUG SFP method as well.

Using SFP for software development effort estimation

Figure: Boxplots of relative effort estimation errors from UFP-based and SiFP-based models. Outliers are not shown.

IFPUG FPA is mainly used for estimating software development effort. Therefore, any alternative method that aims at measuring the functional size of software should support effort estimation with the same level of accuracy as IFPUG FPA. In other words, it is necessary to verify that effort estimates based on SFP are at least as good as the estimates based on UFP.

To perform this verification, an ISBSG dataset was analyzed, and models of effort vs. size were derived, using ordinary least squares regression, after log-log transformations. [5] The effort estimation errors were then compared. It turned out that the two models yielded extremely similar estimation accuracy.
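The following sketch shows this kind of analysis under simplifying assumptions: a small set of invented (size, effort) pairs is fitted by ordinary least squares after a log-log transformation, which corresponds to a power-law effort model; none of the numbers come from the ISBSG dataset:

```python
# Hedged sketch: effort-vs-size model via OLS after log-log transformation.
# Fitting log(effort) = a + b*log(size) is equivalent to effort = exp(a) * size**b.
# The data points are invented placeholders, not ISBSG values.
import numpy as np

size   = np.array([120.0, 300.0, 450.0, 800.0, 1200.0])      # functional size (UFP or SFP)
effort = np.array([900.0, 2500.0, 3600.0, 7000.0, 11000.0])  # development effort (person-hours)

# Ordinary least squares on the log-transformed data.
b, a = np.polyfit(np.log(size), np.log(effort), 1)

def estimate_effort(s: float) -> float:
    """Estimated effort for a project of functional size s."""
    return float(np.exp(a) * s ** b)

# Relative estimation errors, the kind of measure used to compare UFP- and SFP-based models.
rel_err = np.abs(effort - np.exp(a) * size ** b) / effort
print(f"effort ~ {np.exp(a):.2f} * size^{b:.2f}, mean relative error: {rel_err.mean():.1%}")
```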

A subsequent study analyzed a dataset containing data from 25 Web applications. [6] Ordinary least squares regression was used to derive UFP-based and SiFP-based effort models. In this case as well, no statistically significant differences in estimation accuracy were observed.


References

  1. Meli, Roberto (2011). "Simple function point: a new functional size measurement method fully compliant with IFPUG 4.x". Software Measurement European Forum. 2011.
  2. International Function Point Users Group (IFPUG) (2010). Function Point Counting Practices Manual, release 4.3.1.
  3. Jones, Capers (2008). "A new business model for function point metrics". Retrieved 1 February 2022.
  4. Total Metrics (2007). "Methods for Software Sizing – How to Decide which Method to Use" (PDF). Retrieved 1 February 2022.
  5. Lavazza, Luigi; Meli, Roberto (2014). "An Evaluation of Simple Function Point as a Replacement of IFPUG Function Point". 2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement. IEEE. pp. 196–206. doi:10.1109/iwsm.mensura.2014.28. S2CID 2702811.
  6. Ferrucci, Filomena; Gravino, Carmine; Lavazza, Luigi (2016-04-04). "Simple function points for effort estimation". Proceedings of the 31st Annual ACM Symposium on Applied Computing. New York, NY, USA: ACM. pp. 1428–1433. doi:10.1145/2851613.2851779. ISBN 9781450337397. S2CID 16199405.
  7. Abualkishik, Abedallah Zaid; Ferrucci, Filomena; Gravino, Carmine; Lavazza, Luigi; Liu, Geng; Meli, Roberto; Robiolo, Gabriela (2017). "A study on the statistical convertibility of IFPUG Function Point, COSMIC Function Point and Simple Function Point". Information and Software Technology. 86: 1–19. doi:10.1016/j.infsof.2017.02.005. ISSN 0950-5849.

The introduction to Simple Function Points (SFP) from IFPUG.