SEER-SEM

Developer(s): Galorath
Stable release: 8.4 / 2020
Operating system: Microsoft Windows
Type: Project management software
License: EULA
Website: SEER-SEM Homepage

SEER for Software (SEER-SEM) is a project management application used to estimate resources required for software development.


History

1966: The System Development Corporation developed an estimation model based on regressions. [1]

1980: A paper by Don Reifer and Dan Galorath prompted the building of the JPL Softcost model. This model, an early example of software estimation, provided automated estimation and performed risk analysis. Softcost was later made a commercial product by Reifer Consultants. [2]

1984: Computer Economics released JS-2, and Galorath designed System-3, both based on the Jensen model. [3]

The Jensen-inspired System-3, along with other modeling systems such as Barry Boehm's COCOMO and early work by Doty Associates, can be seen as a direct and indirect contributor to the software suite that Galorath would develop in the late 1980s.

In 1988, Galorath Incorporated began work on the initial version of SEER-SEM. [4]

Group of Models

SEER for Software (SEER-SEM) is composed of a group of models working together to provide estimates of effort, duration, staffing, and defects. These models can be briefly described by the questions they answer, beginning with how large the software is.

Software Sizing

Software size is a key input to any estimating model and across most software parametric models. Supported sizing metrics include source lines of code (SLOC), function points, function-based sizing (FBS), and a range of other measures. They are translated for internal use into effective size (S_e). S_e is a form of common currency within the model and enables new, reused, and even commercial off-the-shelf code to be mixed for an integrated analysis of the software development process. The generic calculation for S_e is:

S_e = NewSize + ExistingSize × (0.4 × Redesign + 0.25 × Reimpl + 0.35 × Retest)

where Redesign, Reimpl, and Retest are the fractions of the preexisting code that must be redesigned, re-implemented, and retested.

As indicated, S_e increases in direct proportion to the amount of new software being developed. S_e increases by a lesser amount as preexisting code is reused in a project. The extent of this increase is governed by the amount of rework (redesign, re-implementation, and retest) required to reuse the code.
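As an illustration of the relationship, here is a minimal sketch of the effective size calculation in Python. The function name and example figures are assumptions for illustration only, not Galorath's implementation.

```python
# Minimal sketch of the effective size (S_e) calculation described
# above. Names and example values are illustrative assumptions,
# not SEER-SEM's actual implementation.

def effective_size(new_size, existing_size, redesign, reimpl, retest):
    """S_e = NewSize + ExistingSize * (0.4*Redesign + 0.25*Reimpl + 0.35*Retest).

    redesign, reimpl, and retest are fractions (0.0 to 1.0) of the
    preexisting code that must be reworked in each way.
    """
    rework = 0.4 * redesign + 0.25 * reimpl + 0.35 * retest
    return new_size + existing_size * rework

# Example: 10,000 new SLOC plus 50,000 reused SLOC that needs 20%
# redesign, 10% re-implementation, and 30% retest.
print(effective_size(10_000, 50_000, 0.20, 0.10, 0.30))  # 20500.0
```

Note that the rework weights sum to 1.0, so fully reworked code (all three fractions at 1.0) counts the same as new code, while untouched reused code contributes nothing to S_e.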

Function-Based Sizing

While SLOC is an accepted way of measuring the absolute size of code from the developer's perspective, metrics such as function points capture software size functionally from the user's perspective. The function-based sizing (FBS) metric extends function points so that hidden parts of software such as complex algorithms can be sized more readily. FBS is translated directly into unadjusted function points (UFP).

In SEER-SEM, all size metrics are translated to S_e, including those entered using FBS. This is not a simple conversion, i.e., not a language-driven adjustment as is done with the much-derided backfiring method. Rather, the model incorporates factors, including phase at estimate, operating environment, application type, and application complexity. All these considerations significantly affect the mapping between functional size and S_e. After FBS is translated into function points, it is then converted into S_e as:

S_e = Lx × (AdjFactor × UFP)^(Entropy / 1.2)

where,

Lx is a language-dependent expansion factor;
AdjFactor is the outcome of calculations involving the factors mentioned above (phase at estimate, operating environment, application type, and application complexity);
Entropy ranges from 1.04 to 1.2, depending on the type of software being developed.
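A sketch of this conversion follows, assuming illustrative values for the expansion factor, adjustment factor, and entropy; none of these figures come from a calibrated SEER-SEM installation.

```python
# Hypothetical sketch of the UFP-to-S_e conversion above. The
# parameter values in the example are assumptions for illustration.

def ufp_to_effective_size(ufp, lx, adj_factor, entropy):
    """S_e = Lx * (AdjFactor * UFP)^(Entropy / 1.2).

    lx         -- language-dependent expansion factor
    adj_factor -- composite adjustment for phase at estimate,
                  operating environment, application type, and
                  application complexity
    entropy    -- 1.04 to 1.2, depending on the type of software
    """
    return lx * (adj_factor * ufp) ** (entropy / 1.2)

# Example: 300 unadjusted function points, an assumed expansion
# factor of 55, a neutral adjustment, and maximum entropy.
print(ufp_to_effective_size(300, 55, 1.0, 1.2))  # 16500.0
```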

Effort and Duration Calculations

A project's effort and duration are interrelated, as is reflected in their calculation within the model. Effort drives duration, notwithstanding productivity-related feedback between duration constraints and effort. The basic effort equation is:

K = D^0.4 × (S_e / C_te)^1.2

where,

K is effort;
S_e is effective size;
C_te is effective technology, a composite metric that captures factors relating to the efficiency or productivity with which development can be carried out (an extensive set of people, process, and product parameters feed into this rating, and a higher rating means more productive development);
D is staffing complexity, a rating of the project's inherent difficulty in terms of the rate at which staff can be added.

Once effort is obtained, duration is solved using the following equation:

t_d = D^-0.2 × (S_e / C_te)^0.4

The duration equation is derived from key formulaic relationships. Its 0.4 exponent indicates that as a project's size increases, duration also increases, though less than proportionally. This size-duration relationship is also used in component-level scheduling algorithms with task overlaps computed to fall within total estimated project duration.
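A sketch of both equations together is shown below, with invented inputs; real estimates depend on calibrated C_te and D ratings that are not reproduced here.

```python
# Sketch of the effort and duration equations above. The inputs
# are invented; SEER-SEM derives C_te and D from calibrated
# parameter ratings not shown here.

def effort(se, cte, d):
    """K = D^0.4 * (S_e / C_te)^1.2."""
    return d ** 0.4 * (se / cte) ** 1.2

def duration(se, cte, d):
    """t_d = D^-0.2 * (S_e / C_te)^0.4."""
    return d ** -0.2 * (se / cte) ** 0.4

se, cte, d = 20_500, 3_000, 12  # assumed example values
print(f"effort K     = {effort(se, cte, d):.1f}")
print(f"duration t_d = {duration(se, cte, d):.2f}")
```

Because duration scales with the 0.4 power of the size-to-technology ratio while effort scales with the 1.2 power, doubling effective size multiplies effort by about 2.3 but stretches duration by only about 1.3.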


References

  1. B. Mazel, "The Role of Computer Simulation in Corporate Management: An Overview", December 1975, p. 8.
  2. Dan Galorath, "Why SEER Got Started", August 18, 2008.
  3. Dan Galorath, "Why SEER Got Started", August 18, 2008.
  4. Galorath, D. & Evans, M. (2006). Software Sizing, Estimation, and Risk Management. ISBN 0-8493-3593-0, p. xxii.