Software diagnosis

Software diagnosis (also: software diagnostics) refers to concepts, techniques, and tools for obtaining findings, conclusions, and evaluations about software systems and their implementation, composition, behaviour, and evolution. It serves as a means to monitor, steer, observe, and optimize software development, software maintenance, and software re-engineering, in the sense of a business intelligence approach specific to software systems. It is generally based on the automatic extraction, analysis, and visualization of corresponding information sources of the software system, although it can also be performed manually.

Applications

Software diagnosis supports all branches of software engineering, in particular project management, quality management, and risk management, as well as implementation and testing. Its main strength is to support all stakeholders of software projects (in particular during software maintenance and for software re-engineering tasks [1]) and to provide effective communication means for software development projects. For example, software diagnosis facilitates "bridging an essential information gap between management and development, improve awareness, and serve as early risk detection instrument". [2] Software diagnosis includes assessment methods for "perfective maintenance" that, for example, apply "visual analysis techniques to combine multiple indicators for low maintainability, including code complexity and entanglement with other parts of the system, and recent changes applied to the code". [3]
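
As an illustration of combining such indicators, the following Python sketch computes a weighted risk score per module from three indicators (complexity, entanglement, and recent changes). It is a minimal sketch, not the method of the cited work; the indicator names, weights, and sample data are illustrative assumptions.

```python
# Minimal sketch: combine multiple maintainability indicators into one risk
# score per module. Indicator names, weights, and data are illustrative.

from dataclasses import dataclass


@dataclass
class ModuleIndicators:
    name: str
    complexity: float      # e.g. average cyclomatic complexity
    entanglement: float    # e.g. number of dependencies on other modules
    recent_changes: int    # e.g. commits touching the module in the last 90 days


def normalize(values):
    """Scale raw indicator values into the range [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0
    return [(v - lo) / span for v in values]


def risk_scores(modules, weights=(0.4, 0.3, 0.3)):
    """Combine normalized indicators into a weighted risk score per module."""
    complexity = normalize([m.complexity for m in modules])
    entanglement = normalize([m.entanglement for m in modules])
    changes = normalize([m.recent_changes for m in modules])
    w_c, w_e, w_r = weights
    return {
        m.name: w_c * c + w_e * e + w_r * r
        for m, c, e, r in zip(modules, complexity, entanglement, changes)
    }


if __name__ == "__main__":
    modules = [
        ModuleIndicators("billing", complexity=38.0, entanglement=12, recent_changes=25),
        ModuleIndicators("reporting", complexity=12.0, entanglement=3, recent_changes=2),
        ModuleIndicators("auth", complexity=20.0, entanglement=7, recent_changes=14),
    ]
    for name, score in sorted(risk_scores(modules).items(), key=lambda kv: -kv[1]):
        print(f"{name:10s} risk={score:.2f}")
```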

Characteristics

In contrast to the manifold approaches and techniques in software engineering, software diagnosis does not depend on programming languages, modeling techniques, software development processes, or the specific techniques used in the various stages of the software development process. Instead, software diagnosis aims at analyzing and evaluating the software system in its as-is state, based on system-generated information, to bypass any subjective or potentially outdated information sources (e.g., initial software models). For this, software diagnosis combines and relates sources of information that are typically not directly linked, for example:

  - source code metrics combined with development activity recorded in the project repository, for instance visualized in interactive software maps; [4]
  - static structure views of the code correlated with dynamic activity captured in execution traces (see the sketch below). [5]
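
A minimal sketch of the second combination, assuming a static call graph extracted from the source code and call counts recorded in an execution trace (both hard-coded here for illustration):

```python
# Minimal sketch of relating two normally separate information sources:
# a static structure view (declared call relationships) and a dynamic activity
# view (call counts observed in an execution trace). Function names, the trace
# format, and the data are illustrative assumptions.

from collections import Counter

# Static view: call edges extracted from the source code (caller -> callee).
static_calls = {
    ("OrderService.place", "PaymentGateway.charge"),
    ("OrderService.place", "Inventory.reserve"),
    ("OrderService.cancel", "PaymentGateway.refund"),
}

# Dynamic view: call edges observed while running the system.
trace_events = [
    ("OrderService.place", "PaymentGateway.charge"),
    ("OrderService.place", "PaymentGateway.charge"),
    ("OrderService.place", "Inventory.reserve"),
]
observed = Counter(trace_events)

# Relate the two views: how often was each declared call actually exercised?
for caller, callee in sorted(static_calls):
    count = observed.get((caller, callee), 0)
    status = "unused in trace" if count == 0 else f"{count} calls"
    print(f"{caller} -> {callee}: {status}")
```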

Principles

The core principle of software diagnosis is to automatically extract information from all available information sources of a given software project, such as the source code base, project repository, code metrics, execution traces, [6] test results, etc. To combine this information, software-specific data mining, analysis, and visualization techniques are applied. Its strength results, among other things, from integrating the decoupled information spaces found in a typical software project, for example development and developer activities (recorded in the repository), code and quality metrics (derived by analyzing the source code), and key performance indicators (KPIs).
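
The following Python sketch illustrates this principle under simplifying assumptions: it mines per-file commit counts from a local Git repository (using the standard `git log --name-only` command) and relates them to a crude size metric computed from the files themselves. The file filter, the hotspot ranking, and the metrics are illustrative choices, not a prescribed method.

```python
# Minimal sketch: integrate two decoupled information spaces, development
# activity mined from a Git repository (commits per file) and a simple code
# metric computed from the source itself (non-blank lines per file).
# Assumes it is run inside a local Git working copy.

import subprocess
from collections import Counter
from pathlib import Path


def commits_per_file(repo="."):
    """Count how many commits touched each file, via `git log --name-only`."""
    out = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip() for line in out.splitlines() if line.strip())


def lines_of_code(path):
    """Very rough size metric: number of non-blank lines in the file."""
    try:
        text = Path(path).read_text(errors="ignore")
    except OSError:
        return 0
    return sum(1 for line in text.splitlines() if line.strip())


if __name__ == "__main__":
    activity = commits_per_file()
    # Relate repository activity to a code metric: large files that also
    # change often are candidates for a closer look (hotspot analysis).
    hotspots = sorted(
        ((f, n, lines_of_code(f)) for f, n in activity.items() if f.endswith(".py")),
        key=lambda t: t[1] * t[2],
        reverse=True,
    )
    for path, commits, loc in hotspots[:10]:
        print(f"{path}: {commits} commits, {loc} LOC")
```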

Examples

Examples of software diagnosis tools include software maps and software metrics.

Criticism

Software diagnosis—in contrast to many approaches in software engineering—does not assume that developer capabilities, development methods, programming or modeling languages are right or wrong (or better or worse compared to each other). Instead, software diagnosis aims at giving insight into a given software system and its status, regardless of the methods, languages, or models used to create and maintain the system.

Related Research Articles

In computer science, static program analysis is the analysis of computer programs performed without executing them, in contrast with dynamic program analysis, which is performed on programs during their execution.
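
For example, a minimal static analysis can be written with Python's built-in ast module: it inspects the abstract syntax tree of a source snippet without executing it. The branch-point threshold and the sample code are illustrative.

```python
# Minimal sketch of static analysis: walk the abstract syntax tree of Python
# source code (without executing it) and count branch points per function.

import ast

SOURCE = """
def tidy(x):
    return x + 1

def messy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x -= 1
    while x > 10:
        x //= 2
    return x
"""

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try)

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
        flag = "  <- review" if branches > 3 else ""
        print(f"{node.name}: {branches} branch points{flag}")
```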

Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation.

In software engineering and development, a software metric is a standard of measure of a degree to which a software system or process possesses some property. Although a metric is not strictly a measurement, the two terms are often used as synonyms. Since quantitative measurements are essential in all sciences, there is a continuous effort by computer science practitioners and theoreticians to bring similar approaches to software development. The goal is obtaining objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignments.
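
As a small example of an objective, reproducible metric, the following Python sketch measures comment density, i.e. the share of non-blank lines that are comments. The metric definition and the sample input are illustrative.

```python
# Minimal sketch of a simple, reproducible software metric: comment density.

def comment_density(source: str) -> float:
    """Share of non-blank lines that are comments, in the range [0, 1]."""
    lines = [line.strip() for line in source.splitlines() if line.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for line in lines if line.startswith("#"))
    return comments / len(lines)


sample = """\
# Compute the running total of a list of amounts.
def total(amounts):
    # Guard against None entries coming from the parser.
    return sum(a for a in amounts if a is not None)
"""

print(f"comment density: {comment_density(sample):.2f}")  # 0.50 for this sample
```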

Software development is the process of conceiving, specifying, designing, programming, documenting, testing, and bug fixing involved in creating and maintaining applications, frameworks, or other software components. Software development involves writing and maintaining the source code, but in a broader sense, it includes all processes from the conception of the desired software through to the final manifestation of the software, typically in a planned and structured process. Software development also includes research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.

A programming tool or software development tool is a computer program that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object. The most basic tools are a source code editor and a compiler or interpreter, which are used ubiquitously and continuously. Other tools are used more or less depending on the language, development methodology, and individual engineer, often used for a discrete task, like a debugger or profiler. Tools may be discrete programs, executed separately – often from the command line – or may be parts of a single large program, called an integrated development environment (IDE). In many cases, particularly for simpler use, simple ad hoc techniques are used instead of a tool, such as print debugging instead of using a debugger, manual timing instead of a profiler, or tracking bugs in a text file or spreadsheet instead of a bug tracking system.

Software maintenance in software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes.

In the context of software engineering, software quality refers to two related but distinct notions: functional quality, i.e. how well the software complies with its functional requirements or specification, and structural quality, i.e. how well it meets the non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability.

In software engineering, profiling is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering.
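
A minimal profiling example using Python's built-in cProfile module is shown below; the workload is an illustrative toy computation.

```python
# Minimal sketch of dynamic analysis by profiling: measure how often functions
# are called and how much time they take, using Python's built-in cProfile.

import cProfile


def slow_square(n):
    # Deliberately wasteful so it shows up in the profile.
    return sum(n for _ in range(n))


def busy():
    return sum(slow_square(i) for i in range(2000))


if __name__ == "__main__":
    # Prints call counts and cumulative time per function, most expensive first.
    cProfile.run("busy()", sort="cumulative")
```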

Software visualization or software visualisation refers to the visualization of information of and related to software systems—either the architecture of their source code or metrics of their runtime behavior—and their development process by means of static, interactive, or animated 2-D or 3-D visual representations of their structure, execution, behavior, and evolution.

Application discovery and understanding (ADU) is the process of automatically analyzing artifacts of a software application and determining metadata structures associated with the application in the form of lists of data elements and business rules. The relationships discovered between this application and a central metadata registry are then stored in the metadata registry itself.

Quality engineering is the discipline of engineering concerned with the principles and practice of product and service quality assurance and control. In software development, it is the management, development, operation and maintenance of IT systems and enterprise architectures with a high quality standard.

Requirements traceability is a sub-discipline of requirements management within software development and systems engineering. Traceability as a general term is defined by the IEEE Systems and Software Engineering Vocabulary as (1) the degree to which a relationship can be established between two or more products of the development process, especially products having a predecessor-successor or primary-subordinate relationship to one another; (2) the identification and documentation of derivation paths (upward) and allocation or flowdown paths (downward) of work products in the work product hierarchy; (3) the degree to which each element in a software development product establishes its reason for existing; and (4) discernible association among two or more logical entities, such as requirements, system elements, verifications, or tasks.

Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large scale software engineering problems because of their computational complexity or their assumptions on the problem structure. Researchers and practitioners use metaheuristic search techniques, which impose few assumptions on the problem structure, to find near-optimal or "good-enough" solutions.
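
The following Python sketch illustrates the idea on a toy instance of regression test prioritization: simulated annealing searches for a test order that detects known faults as early as possible. The test names, fault matrix, objective, and annealing parameters are illustrative assumptions.

```python
# Minimal sketch of search-based software engineering: order regression tests
# with simulated annealing so faults tend to be detected early.

import math
import random

# Which faults each test is known (historically) to detect. Illustrative data.
DETECTS = {
    "test_login": {1},
    "test_checkout": {2, 3},
    "test_search": set(),
    "test_profile": {3},
    "test_export": {4},
}
FAULTS = {1, 2, 3, 4}


def cost(order):
    """Sum of positions at which each fault is first detected (lower is better)."""
    total = 0
    for fault in FAULTS:
        for pos, test in enumerate(order, start=1):
            if fault in DETECTS[test]:
                total += pos
                break
        else:
            total += len(order) + 1  # fault never detected by any test
    return total


def anneal(tests, steps=5000, temp=5.0, cooling=0.999):
    order = list(tests)
    best = list(order)
    for _ in range(steps):
        # Neighbour move: swap two test positions.
        i, j = random.sample(range(len(order)), 2)
        candidate = list(order)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(order)
        # Accept improvements always, worsenings with a temperature-dependent
        # probability, then cool down.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            order = candidate
            if cost(order) < cost(best):
                best = list(order)
        temp *= cooling
    return best


random.seed(0)
print(anneal(list(DETECTS)))  # tests detecting many faults tend to come first
```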

In computer programming and software development, debugging is the process of finding and resolving bugs within computer programs, software, or systems.

Software archaeology or source code archeology is the study of poorly documented or undocumented legacy software implementations, as part of software maintenance. Software archaeology, named by analogy with archaeology, includes the reverse engineering of software modules, and the application of a variety of tools and processes for extracting and understanding program structure and recovering design information. Software archaeology may reveal dysfunctional team processes which have produced poorly designed or even unused software modules, and in some cases deliberately obfuscatory code may be found. The term has been in use for decades.

Parasoft C/C++test is an integrated set of tools for testing C and C++ source code that software developers use to analyze, test, find defects, and measure the quality and security of their applications. It supports software development practices that are part of development testing, including static code analysis, dynamic code analysis, unit test case generation and execution, code coverage analysis, regression testing, runtime error detection, requirements traceability, and code review. It is a commercial tool that runs on Linux, Windows, and Solaris platforms and also supports on-target embedded testing and cross compilers.

Software analytics is the analytics specific to the domain of software systems taking into account source code, static and dynamic characteristics as well as related processes of their development and evolution. It aims at describing, monitoring, predicting, and improving the efficiency and effectiveness of software engineering throughout the software lifecycle, in particular during software development and software maintenance. The data collection is typically done by mining software repositories, but can also be achieved by collecting user actions or production data.

KPI driven code analysis is a method of analyzing software source code and source code related IT systems to gain insight into business critical aspects of the development of a software system, such as team performance, time to market, risk management, and failure prediction.

A software map represents static, dynamic, and evolutionary information of software systems and their software development processes by means of 2D or 3D map-oriented information visualization. It constitutes a fundamental concept and tool in software visualization, software analytics, and software diagnosis. Its primary applications include risk analysis for and monitoring of code quality, team activity, or software development progress and, generally, improving effectiveness of software engineering with respect to all related artifacts, processes, and stakeholders throughout the software engineering process and software maintenance.
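
The mapping behind a software map can be sketched in a few lines of Python: per-file metrics are assigned to visual variables of a map cell (footprint, height, color). The data and mapping rules below are illustrative assumptions; actual tools render such cells as interactive, often tree-map-based, 2-D or 3-D visualizations.

```python
# Minimal sketch of the mapping behind a software map: per-file metrics are
# assigned to visual variables of a map cell (footprint from size, height from
# complexity, color from recent change activity). Data and rules illustrative.

FILES = {
    "core/engine.py": {"loc": 1200, "complexity": 85, "recent_commits": 14},
    "ui/widgets.py": {"loc": 400, "complexity": 20, "recent_commits": 2},
    "util/strings.py": {"loc": 150, "complexity": 5, "recent_commits": 0},
}


def to_map_cell(path, metrics):
    """Translate raw metrics into the visual variables of one map cell."""
    return {
        "file": path,
        "footprint": metrics["loc"],        # cell area
        "height": metrics["complexity"],    # extrusion height
        "color": ("red" if metrics["recent_commits"] > 10
                  else "yellow" if metrics["recent_commits"] > 0
                  else "gray"),             # change activity
    }


for path, metrics in FILES.items():
    cell = to_map_cell(path, metrics)
    print(f"{cell['file']:18s} footprint={cell['footprint']:5d} "
          f"height={cell['height']:3d} color={cell['color']}")
```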

Software Intelligence is insight into the inner workings and structural condition of software assets, produced by software designed to analyze database structures, software frameworks, and source code in order to better understand and control complex software systems in Information Technology environments. Similarly to Business Intelligence (BI), Software Intelligence is produced by a set of software tools and techniques for mining data and software's inner structure. The results are produced automatically and feed a knowledge base containing technical documentation, which is made available to business and software stakeholders so they can make informed decisions, measure the efficiency of software development organizations, communicate about software health, and prevent software catastrophes.

References

  1. Beck, M.; Trümper, J.; Döllner, J. (2011). "A visual analysis and design tool for planning software reengineerings". 2011 6th International Workshop on Visualizing Software for Understanding and Analysis (VISSOFT). IEEE Computer Society. pp. 1–8. doi:10.1109/VISSOF.2011.6069458. ISBN 978-1-4577-0822-0. S2CID 16326080.
  2. Bohnet, J.; Döllner, J. (2011). "Monitoring Code Quality and Development Activity by Software Maps". Proceedings of the IEEE ACM ICSE Workshop on Managing Technical Debt. Association for Computing Machinery. pp. 9–16. doi:10.1145/1985362.1985365. ISBN 9781450305860. S2CID 17258620.
  3. Trümper, J.; Beck, M.; Döllner, J. (2012). "A Visual Analysis Approach to Support Perfective Software Maintenance". 2012 16th International Conference on Information Visualisation. IEEE Computer Society. pp. 308–315. doi:10.1109/IV.2012.59. ISBN 978-1-4673-2260-7. S2CID 5988716.
  4. Limberger, D.; Wasty, B.; Trümper, J.; Döllner, J. (2013). "Interactive software maps for web-based source code analysis". Proceedings of the 18th International Conference on 3D Web Technology. pp. 91–98. doi:10.1145/2466533.2466550. ISBN 9781450321334. S2CID 3040005.
  5. Trümper, Jonas; Telea, Alexandru; Döllner, Jürgen (2012). "ViewFusion: Correlating Structure and Activity Views for Execution Traces". Theory and Practice of Computer Graphics. The Eurographics Association. pp. 45–52. doi:10.2312/LocalChapterEvents/TPCG/TPCG12/045-052. ISBN 978-3-905673-93-7.
  6. Bohnet, J. (2010). Visualization of Execution Traces and its Application to Software Maintenance (PhD). Hasso-Plattner-Institut, University of Potsdam.