Source lines of code

Source lines of code (SLOC), also known as lines of code (LOC), is a software metric used to measure the size of a computer program by counting the number of lines in the text of the program's source code. SLOC is typically used to predict the amount of effort that will be required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced.

Measurement methods

Many useful comparisons involve only the order of magnitude of lines of code in a project. Using lines of code to compare a 10,000-line project to a 100,000-line project is far more useful than comparing a 20,000-line project with a 21,000-line project. While it is debatable exactly how to measure lines of code, discrepancies of an order of magnitude can be clear indicators of software complexity or man-hours.

There are two major types of SLOC measures: physical SLOC (LOC) and logical SLOC (LLOC). Specific definitions of these two measures vary, but the most common definition of physical SLOC is a count of lines in the text of the program's source code excluding comment lines. [1]

Logical SLOC attempts to measure the number of executable "statements", but specific definitions are tied to specific computer languages (one simple logical SLOC measure for C-like programming languages is the number of statement-terminating semicolons). It is much easier to create tools that measure physical SLOC, and physical SLOC definitions are easier to explain. On the other hand, physical SLOC measures are more sensitive to logically irrelevant formatting and style conventions than logical SLOC. Unfortunately, SLOC measures are often stated without giving their definition, and logical SLOC can often be significantly different from physical SLOC.
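
As a rough illustration of the two counting styles, the following minimal C sketch (not any standardized counting tool) approximates physical SLOC by counting non-blank lines and logical SLOC by counting semicolons; the embedded sample string and both heuristics are simplifications chosen only for demonstration.

#include <stdio.h>

/* Crude logical SLOC heuristic for C-like code: count semicolons.
   Real counters must also handle string literals, comments and
   for(;;) headers, which this sketch deliberately ignores. */
static int logical_sloc(const char *src)
{
    int count = 0;
    for (const char *p = src; *p != '\0'; p++)
        if (*p == ';')
            count++;
    return count;
}

/* Crude physical SLOC heuristic: count lines that contain at least one
   non-whitespace character (comment-only lines are not filtered out). */
static int physical_sloc(const char *src)
{
    int count = 0, nonblank = 0;
    for (const char *p = src; *p != '\0'; p++) {
        if (*p == '\n') {
            count += nonblank;
            nonblank = 0;
        } else if (*p != ' ' && *p != '\t') {
            nonblank = 1;
        }
    }
    return count + nonblank; /* a last line without a newline still counts */
}

int main(void)
{
    const char *sample = "for(i=0;i<100;i++)printf(\"hello\");\n";
    /* Prints "physical: 1, logical (semicolons): 3"; the two semicolons
       inside the for(...) header show why a naive semicolon count can
       overstate the number of actual statements. */
    printf("physical: %d, logical (semicolons): %d\n",
           physical_sloc(sample), logical_sloc(sample));
    return 0;
}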

Consider this snippet of C code as an example of the ambiguity encountered when determining SLOC:

for(i=0;i<100;i++)printf("hello");/* How many lines of code is this? */

In this example we have one physical line of code (LOC), two logical lines of code (LLOC: the for statement and the printf statement), and one comment line.

Depending on the programmer and coding standards, the above "line" of code could be written on many separate lines:

/* Now how many lines of code is this? */
for (i = 0; i < 100; i++)
{
    printf("hello");
}

In this example we have four physical lines of code (LOC) (is placing braces work to be estimated?), the same two logical lines of code (LLOC), and one comment line.

Even the "logical" and "physical" SLOC values can have a large number of varying definitions. Robert E. Park (while at the Software Engineering Institute) and others developed a framework for defining SLOC values, to enable people to carefully explain and define the SLOC measure used in a project. For example, most software systems reuse code, and determining which (if any) reused code to include is important when reporting a measure.

Origins

At the time when SLOC was introduced as a metric, the most commonly used languages, such as FORTRAN and assembly language, were line-oriented languages. These languages were developed at a time when punched cards were the main form of data entry for programming. One punched card usually represented one line of code. It was one discrete object that was easily counted. It was the visible output of the programmer, so it made sense to managers to count lines of code as a measure of a programmer's productivity; such lines were even referred to as "card images". Today, the most commonly used computer languages allow a lot more leeway for formatting. Text lines are no longer limited to 80 or 96 columns, and one line of text no longer necessarily corresponds to one line of code.

Usage of SLOC measures

SLOC measures are somewhat controversial, particularly in the way that they are sometimes misused. Experiments have repeatedly confirmed that effort is highly correlated with SLOC [citation needed], that is, programs with larger SLOC values take more time to develop. Thus, SLOC can be effective in estimating effort. However, functionality is less well correlated with SLOC: skilled developers may be able to deliver the same functionality with far less code, so one program with fewer SLOC may exhibit more functionality than another similar program. Counting SLOC as a productivity measure has its caveats, since a developer can write only a few lines and yet be far more productive in terms of functionality than a developer who ends up creating more lines (and generally spending more effort). Good developers may merge multiple code modules into a single module, improving the system yet appearing to have negative productivity because they remove code. Furthermore, inexperienced developers often resort to code duplication, which is highly discouraged because it is more bug-prone and costly to maintain, yet it results in higher SLOC.

SLOC counting exhibits further accuracy issues when comparing programs written in different languages, unless adjustment factors are applied to normalize across languages. Different computer languages balance brevity and clarity in different ways; as an extreme example, most assembly languages would require hundreds of lines of code to perform the same task as a few characters in APL. The following example compares a "hello world" program written in BASIC, C, and COBOL (a language known for being particularly verbose).

BASIC (1 line of code, no whitespace):

PRINT "hello, world"

C (4 lines of code, excluding whitespace):

#include <stdio.h>

int main() {
    printf("hello, world\n");
}

COBOL (6 lines of code, excluding whitespace):

identification division.
program-id. hello.

procedure division.
display "hello, world"
goback.

end program hello.

Another increasingly common problem in comparing SLOC metrics is the difference between auto-generated and hand-written code. Modern software tools often have the capability to auto-generate enormous amounts of code with a few clicks of a mouse. For instance, graphical user interface builders automatically generate all the source code for graphical control elements simply by dragging an icon onto a workspace. The work involved in creating this code cannot reasonably be compared to the work necessary to write a device driver, for instance. By the same token, a hand-coded custom GUI class could easily be more demanding than a simple device driver; hence the shortcoming of this metric.

There are several cost, schedule, and effort estimation models which use SLOC as an input parameter, including the widely used Constructive Cost Model (COCOMO) series of models by Barry Boehm et al., PRICE Systems True S and Galorath's SEER-SEM. While these models have shown good predictive power, they are only as good as the estimates (particularly the SLOC estimates) fed to them. Many [2] have advocated the use of function points instead of SLOC as a measure of functionality, but since function points are highly correlated to SLOC (and cannot be automatically measured) this is not a universally held view.
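
As a sketch of how such models consume SLOC, the following C program applies the basic COCOMO organic-mode equations (effort in person-months = 2.4 * KSLOC^1.05, schedule in months = 2.5 * effort^0.38). The 32 KSLOC input is a made-up example, and real COCOMO use involves calibrated coefficients and cost drivers beyond this toy form.

#include <math.h>
#include <stdio.h>

/* Basic COCOMO, organic mode: effort and schedule estimated from size.
   The coefficients are the commonly quoted organic-mode values; other
   project classes (semi-detached, embedded) use different ones. */
int main(void)
{
    double ksloc = 32.0;                       /* hypothetical 32,000 SLOC */
    double effort = 2.4 * pow(ksloc, 1.05);    /* person-months            */
    double schedule = 2.5 * pow(effort, 0.38); /* calendar months          */

    printf("%.0f KSLOC -> ~%.0f person-months over ~%.0f months\n",
           ksloc, effort, schedule);
    return 0;
}

Because effort grows slightly faster than linearly with size in this model, any error in the SLOC estimate feeds directly into the effort and schedule figures it produces.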

Example

According to Vincent Maraia, [3] the SLOC values for various operating systems in Microsoft's Windows NT product line are as follows:

Year   Operating system      SLOC (million)
1993   Windows NT 3.1        4–5 [3]
1994   Windows NT 3.5        7–8 [3]
1996   Windows NT 4.0        11–12 [3]
2000   Windows 2000          more than 29 [3]
2001   Windows XP            45 [4] [5]
2003   Windows Server 2003   50 [3]

David A. Wheeler studied the Red Hat distribution of the Linux operating system, and reported that Red Hat Linux version 7.1 [6] (released April 2001) contained over 30 million physical SLOC. He also extrapolated that, had it been developed by conventional proprietary means, it would have required about 8,000 person-years of development effort and would have cost over $1 billion (in year 2000 U.S. dollars).

A similar study was later made of Debian GNU/Linux version 2.2 (also known as "Potato"); this operating system was originally released in August 2000. This study found that Debian GNU/Linux 2.2 included over 55 million SLOC, and if developed in a conventional proprietary way would have required 14,005 person-years and cost US$1.9 billion to develop. Later runs of the tools used reported that the following release of Debian contained 104 million SLOC, and, as of 2005, the newest release was expected to include over 213 million SLOC.

Year         Operating system       SLOC (million)
2000         Debian 2.2             55–59 [7] [8]
2002         Debian 3.0             104 [8]
2005         Debian 3.1             215 [8]
2007         Debian 4.0             283 [8]
2009         Debian 5.0             324 [8]
2012         Debian 7.0             419 [9]
2009         OpenSolaris            9.7
             FreeBSD                8.8
2005         Mac OS X 10.4          86 [10] [n 1]
1991         Linux kernel 0.01      0.010239
2001         Linux kernel 2.4.2     2.4 [6]
2003         Linux kernel 2.6.0     5.2
2009         Linux kernel 2.6.29    11.0
2009         Linux kernel 2.6.32    12.6 [11]
2010         Linux kernel 2.6.35    13.5 [12]
2012         Linux kernel 3.6       15.9 [13]
2015-06-30   Linux kernel pre-4.2   20.2 [14]

Utility

Advantages

  1. Scope for automation of counting: since a line of code is a physical entity, manual counting effort can be easily eliminated by automating the counting process. Small utilities may be developed for counting the LOC in a program (a minimal counter of this kind is sketched after this list). However, a logical code counting utility developed for a specific language cannot be used for other languages due to the syntactical and structural differences among languages. Physical LOC counters, however, have been produced which count dozens of languages.
  2. An intuitive metric: line of code serves as an intuitive metric for measuring the size of software because it can be seen and its effect can be visualized. Function points are said to be more of an objective metric that cannot be imagined as a physical entity; they exist only in the logical space. This way, LOC comes in handy to express the size of software among programmers with low levels of experience.
  3. Ubiquitous measure: LOC measures have been around since the earliest days of software. [15] As such, it is arguable that more LOC data is available than any other size measure.
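
As an illustration of how little machinery such a utility needs, the following C sketch (a minimal counter under simplifying assumptions, not any published tool) counts physical LOC in a file by skipping blank lines and lines that contain only a // comment; block comments, string literals, and other languages would require a real tokenizer.

#include <ctype.h>
#include <stdio.h>

/* Decide whether a line counts as physical LOC: it must contain a
   non-whitespace character and must not begin (after indentation)
   with a // line comment. */
static int is_counted(const char *line)
{
    while (*line != '\0' && isspace((unsigned char)*line))
        line++;
    if (*line == '\0')
        return 0;                        /* blank line        */
    if (line[0] == '/' && line[1] == '/')
        return 0;                        /* comment-only line */
    return 1;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.c\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (f == NULL) {
        perror(argv[1]);
        return 1;
    }
    char buf[4096];                      /* very long lines may be split */
    long loc = 0;
    while (fgets(buf, sizeof buf, f) != NULL)
        loc += is_counted(buf);
    fclose(f);
    printf("%ld physical LOC\n", loc);
    return 0;
}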

Disadvantages

  1. Lack of accountability: the lines-of-code measure suffers from some fundamental problems. Some [who?] think that it is not useful to measure the productivity of a project using only results from the coding phase, which usually accounts for only 30% to 35% of the overall effort. [citation needed]
  2. Lack of cohesion with functionality: experiments [by whom?] have repeatedly confirmed that while effort is highly correlated with LOC, functionality is less well correlated with LOC. That is, skilled developers may be able to develop the same functionality with far less code, so one program with less LOC may exhibit more functionality than another similar program. In particular, LOC is a poor productivity measure of individuals, because a developer who writes only a few lines may still be more productive than a developer creating more lines of code; moreover, good refactoring such as "extract method", applied to remove redundant code and keep the code base clean, will usually reduce the lines of code.
  3. Adverse impact on estimation: because of the problem described under point #1, estimates based on lines of code can easily go wrong.
  4. Developer's experience: the implementation of a specific piece of logic differs based on the level of experience of the developer. Hence, the number of lines of code differs from person to person. An experienced developer may implement certain functionality in fewer lines of code than a developer with relatively less experience, even though they use the same language.
  5. Difference in languages: consider two applications that provide the same functionality (screens, reports, databases), one written in C++ and the other written in a language like COBOL. The number of function points would be exactly the same, but the lines of code needed to develop the applications would certainly not be the same. As a consequence, the amount of effort required to develop each application would differ (hours per function point). Unlike lines of code, the number of function points remains constant across languages.
  6. Advent of GUI tools: with the advent of GUI-based programming languages and tools such as Visual Basic, programmers can write relatively little code and achieve high levels of functionality. For example, instead of writing a program to create a window and draw a button, a user with a GUI tool can use drag-and-drop and other mouse operations to place components on a workspace. Code that is automatically generated by a GUI tool is not usually taken into consideration when using LOC methods of measurement. This results in variation between languages; the same task that can be done in a single line of code (or no code at all) in one language may require several lines of code in another.
  7. Problems with multiple languages: in today's software scenario, software is often developed in more than one language. Very often, a number of languages are employed depending on the complexity and requirements. Tracking and reporting of productivity and defect rates poses a serious problem in this case, since defects cannot be attributed to a particular language after the system has been integrated. Function points stand out as the better measure of size in this case.
  8. Lack of counting standards: there is no standard definition of what a line of code is. Do comments count? Are data declarations included? What happens if a statement extends over several lines? – These are the questions that often arise. Though organizations like SEI and IEEE have published some guidelines in an attempt to standardize counting, it is difficult to put these into practice especially in the face of newer and newer languages being introduced every year.
  9. Psychology: a programmer whose productivity is being measured in lines of code will have an incentive to write unnecessarily verbose code. The more management focuses on lines of code, the more incentive the programmer has to expand their code with unneeded complexity. This is undesirable, since increased complexity can lead to increased cost of maintenance and increased effort required for bug fixing.

In the PBS documentary Triumph of the Nerds, Microsoft executive Steve Ballmer criticized the use of counting lines of code:

In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS/2, how much they did. How many K-LOCs did you do? And we kept trying to convince them – hey, if we have – a developer's got a good idea and he can get something done in 4K-LOCs instead of 20K-LOCs, should we make less money? Because he's made something smaller and faster, less K-LOC. K-LOCs, K-LOCs, that's the methodology. Ugh! Anyway, that always makes my back just crinkle up at the thought of the whole thing.

According to the Computer History Museum, Apple developer Bill Atkinson found problems with this practice in 1982:

When the Lisa team was pushing to finalize their software in 1982, project managers started requiring programmers to submit weekly forms reporting on the number of lines of code they had written. Bill Atkinson thought that was silly. For the week in which he had rewritten QuickDraw’s region calculation routines to be six times faster and 2000 lines shorter, he put “-2000” on the form. After a few more weeks the managers stopped asking him to fill out the form, and he gladly complied. [16] [17]

Notes

  1. Possibly including the whole iLife suite, not just the operating system and usually bundled applications.

References

  1. Vu Nguyen; Sophia Deeds-Rubin; Thomas Tan; Barry Boehm (2007), A SLOC Counting Standard (PDF), Center for Systems and Software Engineering, University of Southern California
  2. IFPUG "Quantifying the Benefits of Using Function Points"
  3. "How Many Lines of Code in Windows?". Knowing.NET. December 6, 2005. Retrieved 2010-08-30.
    This in turn cites Vincent Maraia's The Build Master as the source of the information.
  4. "How Many Lines of Code in Windows XP?". Microsoft. January 11, 2011. Archived from the original on 2022-02-26.
  5. "A history of Windows - Microsoft Windows". 2012-09-21. Archived from the original on 2012-09-21. Retrieved 2021-03-26.
  6. David A. Wheeler (2001-06-30). "More Than a Gigabuck: Estimating GNU/Linux's Size".
  7. González-Barahona, Jesús M.; Miguel A. Ortuño Pérez; Pedro de las Heras Quirós; José Centeno González; Vicente Matellán Olivera. "Counting potatoes: the size of Debian 2.2". debian.org. Archived from the original on 2008-05-03. Retrieved 2003-08-12.
  8. Robles, Gregorio. "Debian Counting". Archived from the original on 2013-03-14. Retrieved 2007-02-16.
  9. Debian 7.0 was released in May 2013. The number is an estimate published on 2012-02-13, using the code base which would become Debian 7.0, using the same software method as for the data published by David A. Wheeler. James Bromberger. "Debian Wheezy: US$19 Billion. Your price... FREE!". Archived from the original on 2014-02-23. Retrieved 2014-02-07.
  10. Jobs, Steve (August 2006). "Live from WWDC 2006: Steve Jobs Keynote". Retrieved 2007-02-16. 86 million lines of source code that was ported to run on an entirely new architecture with zero hiccups.
  11. Thorsten Leemhuis (2009-12-03). "What's new in Linux 2.6.32". Archived from the original on 2013-12-19. Retrieved 2009-12-24.
  12. Greg Kroah-Hartman; Jonathan Corbet; Amanda McPherson (April 2012). "Linux Kernel Development: How Fast it is Going, Who is Doing It, What They are Doing, and Who is Sponsoring It". The Linux Foundation. Retrieved 2012-04-10.
  13. Thorsten Leemhuis (2012-10-01). "Summary, Outlook, Statistics - The H Open: News and Features". Archived from the original on 2013-12-19.
  14. "Linux-Kernel durchbricht die 20-Millionen-Zeilen-Marke". 30 June 2015.
  15. IFPUG "a short history of lines of code (loc) metrics"
  16. "MacPaint and QuickDraw Source Code". CHM. 2010-07-18. Retrieved 2021-04-15.
  17. "Folklore.org: -2000 Lines Of Code". www.folklore.org. Retrieved 2021-04-15.
