| Developer(s) | CQSE GmbH, Competence Center Software Maintenance (Technical University of Munich) |
|---|---|
| Stable release | |
| Written in | Java |
| Operating system | Cross-platform |
| Type | Software quality analysis |
| License | Apache License 2.0 [1] |
| Website | www |
The Continuous Quality Assessment Toolkit (ConQAT) is a configurable software quality analysis engine. ConQAT is based on a pipes-and-filters architecture that enables flexible, complex analysis configurations using a graphical configuration language. This architecture differs from that of other analysis tools, which usually have a fixed data model and hard-wired analysis logic.
ConQAT's underlying pipes-and-filters architecture manifests in its analysis configurations, so-called ConQAT blocks. These blocks contain a network of ConQAT processors or further blocks. This allows analyses to be configured and adapted to the context of the system under analysis with a high degree of flexibility. For example, different kinds of source code (manually written code, generated code, test code) can be treated in different ways. Furthermore, this architecture enables the reuse of blocks and processors in different contexts: for example, graph metrics can be calculated using the same blocks for the dependency or control-flow graph of a program or for a revision graph from a version management system.
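To make the pipes-and-filters idea concrete, the following sketch composes processors into a reusable block. The `Processor` interface and all names here are hypothetical illustrations of the architectural style, not ConQAT's actual API.

```java
import java.util.List;
import java.util.function.Function;

// Illustrative pipes-and-filters analysis pipeline; the Processor
// interface and the names below are hypothetical, not ConQAT's real API.
public class PipelineSketch {

    // A processor consumes one intermediate result and produces the next.
    interface Processor<I, O> extends Function<I, O> {}

    // A "block" is just a composition of processors, reusable as a unit.
    static <A, B, C> Processor<A, C> block(Processor<A, B> first, Processor<B, C> second) {
        return a -> second.apply(first.apply(a));
    }

    public static void main(String[] args) {
        // Scope: read "source files" (here: hard-coded lines).
        Processor<Void, List<String>> scope =
                v -> List.of("int x = 1;", "// comment", "int y = x + 1;");

        // Filter: drop comment lines. A configuration that treats generated
        // or test code differently would swap in a different filter here.
        Processor<List<String>, List<String>> filter =
                lines -> lines.stream().filter(l -> !l.startsWith("//")).toList();

        // Sink: compute a toy metric, lines of code.
        Processor<List<String>, Integer> locMetric = List::size;

        // Wire the network: scope -> filter -> metric.
        Processor<Void, Integer> analysis = block(block(scope, filter), locMetric);
        System.out.println("LOC = " + analysis.apply(null)); // prints: LOC = 2
    }
}
```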
ConQAT analyses are usually executed from the command line in batch mode. Besides its application in software quality audits, it is also often integrated into a system's nightly build. ConQAT implements processors (so-called scopes) that read data from different sources, such as source code or binary code files, as well as from issue trackers or version management systems. For languages such as Java, C#, C/C++, and ABAP, lexer processors and other preprocessing operations are available. ConQAT implements algorithms for redundancy detection and architecture analysis in processors and blocks. Furthermore, it integrates established tools such as FindBugs and FxCop using processors that read their output formats. Although ConQAT supports different output formats (e.g. XML), the analysis results are usually presented as generated HTML files. Visualizations include various diagrams and treemaps.
ConQAT was developed in 2007 at the Technische Universität München and became known through several scientific publications on its architecture as well as on its analysis techniques for redundancy detection (clone detection) and architecture conformance analysis. [2] [3] [4] [5] Since 2009, ConQAT has been maintained and developed as an open-source project in a partnership between TU Munich and CQSE GmbH.
ConQAT is a discontinued product; its end-of-life was announced in 2018. [6]
In computer programming and software design, code refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior. Refactoring is intended to improve the design, structure, and/or implementation of the software while preserving its functionality. Potential advantages of refactoring include improved code readability and reduced complexity; these can improve the source code's maintainability and create a simpler, cleaner, or more expressive internal architecture or object model to improve extensibility. Another potential goal for refactoring is improved performance; software engineers face an ongoing challenge to write programs that perform faster or use less memory.
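As a minimal illustration (not drawn from any particular codebase), the following extract-method refactoring removes a duplicated computation while leaving external behavior unchanged:

```java
// Before: the tax rule is written out twice.
class InvoiceBefore {
    double totalWithTax(double net) {
        return net + net * 0.19;          // tax rule inlined here...
    }
    double discountedTotal(double net) {
        return (net + net * 0.19) * 0.9;  // ...and duplicated here
    }
}

// After "extract method": same behavior, one named definition.
class InvoiceAfter {
    private double addTax(double net) {   // extracted method: one place to change
        return net + net * 0.19;
    }
    double totalWithTax(double net) {
        return addTax(net);
    }
    double discountedTotal(double net) {
        return addTax(net) * 0.9;
    }
}
```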
Software engineering is the systematic application of engineering approaches to the development of software.
Code review is a software quality assurance activity in which one or several people check a program, mainly by viewing and reading parts of its source code, either after implementation or as an interruption of implementation. At least one of the persons must not be the code's author. The persons performing the check, excluding the author, are called "reviewers".
James Reginald Cordy is a Canadian computer scientist and educator who is Professor Emeritus in the School of Computing at Queen's University. As a researcher he is most recently active in the fields of source code analysis and manipulation, software reverse and re-engineering, and pattern analysis and machine intelligence. He has a long record of previous work in programming languages, compiler technology, and software architecture.
Requirements engineering (RE) is the process of defining, documenting, and maintaining requirements in the engineering design process. It is a common role in systems engineering and software engineering.
A documentation generator is a programming tool that generates software documentation intended for programmers or end users, or both, from a set of source code files, and in some cases, binary files. Some generators, such as Javadoc, can use special comments to drive the generation. Doxygen is an example of a generator that can use all of these methods.
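For illustration, a Javadoc special comment might look as follows; running the standard `javadoc` tool over the file generates HTML API documentation from the tags (the class and method here are hypothetical examples):

```java
public class MathUtil {
    /**
     * Computes the greatest common divisor of two non-negative integers.
     * <p>Javadoc reads these special comments and generates HTML API
     * documentation from them; the {@code @param} and {@code @return}
     * tags below drive that generation.</p>
     *
     * @param a the first operand, must be non-negative
     * @param b the second operand, must be non-negative
     * @return the greatest common divisor of {@code a} and {@code b}
     */
    static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    public static void main(String[] args) {
        System.out.println(gcd(12, 18)); // prints: 6
    }
}
```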
Software visualization or software visualisation refers to the visualization of information of and related to software systems—either the architecture of their source code or metrics of their runtime behavior—and their development process by means of static, interactive or animated 2-D or 3-D visual representations of their structure, execution, behavior, and evolution.
A call graph is a control-flow graph, which represents calling relationships between subroutines in a computer program. Each node represents a procedure and each edge (f, g) indicates that procedure f calls procedure g. Thus, a cycle in the graph indicates recursive procedure calls.
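A call graph can be represented as an adjacency map, with recursion showing up as a cycle. The following sketch (using hypothetical procedure names) detects such a cycle by depth-first search:

```java
import java.util.*;

// A call graph as an adjacency map: an edge (f, g) means "f calls g".
// A cycle (here: parse -> expr -> parse) indicates recursive calls.
public class CallGraphSketch {
    public static void main(String[] args) {
        Map<String, List<String>> calls = Map.of(
                "main",   List.of("parse"),
                "parse",  List.of("expr"),
                "expr",   List.of("parse", "number"), // back edge: recursion
                "number", List.of());
        System.out.println("recursive? " + hasCycle(calls)); // prints: recursive? true
    }

    static boolean hasCycle(Map<String, List<String>> g) {
        Set<String> done = new HashSet<>(), onPath = new HashSet<>();
        for (String node : g.keySet())
            if (dfs(node, g, done, onPath)) return true;
        return false;
    }

    // Depth-first search; a node revisited while still on the current
    // path means a back edge, i.e. a cycle.
    static boolean dfs(String n, Map<String, List<String>> g,
                       Set<String> done, Set<String> onPath) {
        if (onPath.contains(n)) return true;  // back edge found: cycle
        if (!done.add(n)) return false;       // already fully explored
        onPath.add(n);
        for (String m : g.getOrDefault(n, List.of()))
            if (dfs(m, g, done, onPath)) return true;
        onPath.remove(n);
        return false;
    }
}
```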
Duplicate code is a computer programming term for a sequence of source code that occurs more than once, either within a program or across different programs owned or maintained by the same entity. Duplicate code is generally considered undesirable for a number of reasons. A minimum requirement is usually applied to the quantity of code that must appear in a sequence for it to be considered duplicate rather than coincidentally similar. Sequences of duplicate code are sometimes known as code clones or just clones; the automated process of finding duplications in source code is called clone detection.
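The following is a minimal sketch of line-based clone detection: it normalizes lines and reports every window of a minimum length that occurs more than once. Real clone detectors, including ConQAT's, work on token streams and handle far more variation; the threshold and input here are purely illustrative.

```java
import java.util.*;

// Naive clone detection: hash every window of MIN_LINES consecutive,
// whitespace-normalized lines and report windows seen more than once.
public class CloneSketch {
    static final int MIN_LINES = 3; // cf. the "minimum requirement" above

    public static void main(String[] args) {
        List<String> code = List.of(
                "open();", "read();", "close();", "log();",
                "open();", "read();", "close();", "exit();");

        Map<String, List<Integer>> windows = new HashMap<>();
        for (int i = 0; i + MIN_LINES <= code.size(); i++) {
            // Normalize: strip whitespace so formatting differences don't matter.
            String key = String.join("\n",
                    code.subList(i, i + MIN_LINES)).replaceAll("\\s+", "");
            windows.computeIfAbsent(key, k -> new ArrayList<>()).add(i);
        }
        windows.values().stream()
                .filter(starts -> starts.size() > 1)
                .forEach(starts -> System.out.println("clone at lines " + starts));
        // Prints: clone at lines [0, 4]
    }
}
```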
Rigi is an interactive graph editor tool for software reverse engineering using the white-box method, i.e. it requires source code; thus it is mainly aimed at program comprehension. Rigi is distributed by its main author, Hausi A. Müller, and the Rigi research group at the University of Victoria.
The Bauhaus project is a software research project collaboration among the University of Stuttgart, the University of Bremen, and a commercial spin-off company Axivion formerly called Bauhaus Software Technologies. The Bauhaus project serves the fields of software maintenance and software reengineering.
The Architecture Design and Assessment System (ADAS) was a set of software programs offered by the Research Triangle Institute from the mid-1980s until the early 1990s.
In software development, a feature model is a compact representation of all the products of the Software Product Line (SPL) in terms of "features". Feature models are visually represented by means of feature diagrams. Feature models are widely used during the whole product line development process and are commonly used as input to produce other assets such as documents, architecture definition, or pieces of code.
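A feature model can be sketched as a tree of mandatory and optional features; the following toy example (with hypothetical feature names) checks whether a selected configuration is valid. Alternative/or groups and cross-tree constraints, which real feature models also support, are omitted here.

```java
import java.util.*;

// A toy feature model: each feature has mandatory and optional children.
// isValid checks that a configuration includes the root and every
// mandatory child of each selected feature.
public class FeatureModelSketch {
    record Feature(String name, List<Feature> mandatory, List<Feature> optional) {}

    public static void main(String[] args) {
        Feature gps   = new Feature("GPS", List.of(), List.of());
        Feature cam   = new Feature("Camera", List.of(), List.of());
        Feature calls = new Feature("Calls", List.of(), List.of());
        Feature phone = new Feature("Phone", List.of(calls), List.of(gps, cam));

        System.out.println(isValid(phone, Set.of("Phone", "Calls", "GPS"))); // true
        System.out.println(isValid(phone, Set.of("Phone", "GPS")));          // false: Calls is mandatory
    }

    static boolean isValid(Feature f, Set<String> selected) {
        if (!selected.contains(f.name())) return false;
        for (Feature m : f.mandatory())
            if (!isValid(m, selected)) return false;   // mandatory child missing
        for (Feature o : f.optional())
            if (selected.contains(o.name()) && !isValid(o, selected)) return false;
        return true;
    }
}
```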
Change impact analysis (IA) is defined by Bohner and Arnold as "identifying the potential consequences of a change, or estimating what needs to be modified to accomplish a change", and they focus on IA in terms of scoping changes within the details of a design. In contrast, Pfleeger and Atlee focus on the risks associated with changes and state that IA is "the evaluation of the many risks associated with the change, including estimates of the effects on resources, effort, and schedule". Both the design details and the risks associated with modifications are critical to performing IA within change management processes. A colloquial term sometimes mentioned in this context is dependency hell.
The mining software repositories (MSR) field analyzes the rich data available in software repositories, such as version control repositories, mailing list archives, bug tracking systems, issue tracking systems, etc. to uncover interesting and actionable information about software systems, projects and software engineering.
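As a minimal MSR example, counting how often each file appears in the commit history gives a simple "hotspot" signal. The sketch below assumes the file list has already been extracted, e.g. from `git log --name-only` output; in a real setting it would be read from the command rather than hard-coded.

```java
import java.util.*;

// Mining a version-control repository, minimally: count how often each
// file appears across commits; frequently changed files are "hotspots".
public class HotspotSketch {
    public static void main(String[] args) {
        // Stand-in for `git log --name-only` output: one changed path per line.
        List<String> changedPaths = List.of(
                "src/Parser.java", "src/Lexer.java",
                "src/Parser.java",
                "src/Parser.java", "docs/README.md");

        Map<String, Integer> changeCounts = new HashMap<>();
        for (String path : changedPaths)
            changeCounts.merge(path, 1, Integer::sum);

        changeCounts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .forEach(e -> System.out.println(e.getValue() + "x " + e.getKey()));
        // Prints src/Parser.java first: the most frequently changed file.
    }
}
```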
Software analytics is the analytics specific to the domain of software systems, taking into account source code, static and dynamic characteristics, as well as related processes of their development and evolution. It aims at describing, monitoring, predicting, and improving the efficiency and effectiveness of software engineering throughout the software lifecycle, in particular during software development and software maintenance. The data collection is typically done by mining software repositories, but can also be achieved by collecting user actions or production data. One avenue for using the collected data is to augment the integrated development environments (IDEs) with data-driven features.
A software map represents static, dynamic, and evolutionary information of software systems and their software development processes by means of 2D or 3D map-oriented information visualization. It constitutes a fundamental concept and tool in software visualization, software analytics, and software diagnosis. Its primary applications include risk analysis for and monitoring of code quality, team activity, or software development progress and, generally, improving effectiveness of software engineering with respect to all related artifacts, processes, and stakeholders throughout the software engineering process and software maintenance.
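Treemaps, which ConQAT also uses to present results, are a common form of software map. The following sketch implements slice-and-dice, the simplest treemap layout: each rectangle is split along alternating axes in proportion to subtree size. The module names and lines-of-code sizes are hypothetical.

```java
import java.util.*;

// Slice-and-dice treemap layout: split the rectangle along one axis in
// proportion to subtree size, alternating the axis per tree depth.
// A rectangle's area then shows, e.g., a module's share of total code size.
public class TreemapSketch {
    record Node(String name, double size, List<Node> children) {}

    public static void main(String[] args) {
        Node root = new Node("system", 0, List.of(
                new Node("core", 600, List.of()),
                new Node("ui",   300, List.of()),
                new Node("util", 100, List.of())));
        layout(root, 0, 0, 100, 100, true);
    }

    static double totalSize(Node n) {
        return n.children().isEmpty() ? n.size()
                : n.children().stream().mapToDouble(TreemapSketch::totalSize).sum();
    }

    static void layout(Node n, double x, double y, double w, double h, boolean horizontal) {
        System.out.printf("%s: x=%.1f y=%.1f w=%.1f h=%.1f%n", n.name(), x, y, w, h);
        double total = totalSize(n), offset = 0;
        for (Node c : n.children()) {
            double share = totalSize(c) / total;   // child's proportion of the area
            if (horizontal) {                      // slice along the x axis...
                layout(c, x + offset, y, w * share, h, false);
                offset += w * share;
            } else {                               // ...or the y axis, alternating
                layout(c, x, y + offset, w, h * share, true);
                offset += h * share;
            }
        }
    }
}
```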
Software diagnosis refers to concepts, techniques, and tools that allow for obtaining findings, conclusions, and evaluations about software systems and their implementation, composition, behavior, and evolution. It serves as a means to monitor, steer, observe, and optimize software development, software maintenance, and software re-engineering in the sense of a business intelligence approach specific to software systems. It is generally based on the automatic extraction, analysis, and visualization of corresponding information sources of the software system, but can also be performed manually.
Automatic bug-fixing is the automatic repair of software bugs without the intervention of a human programmer. It is also commonly referred to as automatic patch generation, automatic bug repair, or automatic program repair. The typical goal of such techniques is to automatically generate correct patches to eliminate bugs in software programs without causing software regression.
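The generate-and-validate approach common to many repair techniques can be sketched in miniature: mutate a candidate fault location and keep the first variant that passes the test suite. Everything below (the operator-level "program", the two tests) is a deliberately tiny illustration of the search loop, not a real repair tool.

```java
import java.util.List;
import java.util.function.IntBinaryOperator;

// Generate-and-validate repair in miniature: the "program" is one binary
// operator, the test suite is two input/output checks, and candidate
// patches are alternative operators tried until every test passes.
public class RepairSketch {
    // Test suite encoding the spec (intended behavior: addition).
    static boolean passesTests(IntBinaryOperator op) {
        return op.applyAsInt(2, 3) == 5 && op.applyAsInt(0, 7) == 7;
    }

    public static void main(String[] args) {
        IntBinaryOperator buggy = (a, b) -> a - b;        // fails the tests
        System.out.println("buggy passes: " + passesTests(buggy)); // false

        // Candidate patch space: swap in other operators at the fault location.
        List<IntBinaryOperator> candidates =
                List.of((a, b) -> a * b, (a, b) -> a + b, (a, b) -> a % b);
        candidates.stream()
                .filter(RepairSketch::passesTests)
                .findFirst()
                .ifPresent(p -> System.out.println("patch validates: 2+3=" + p.applyAsInt(2, 3)));
    }
}
```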
Static application security testing (SAST) is used to secure software by reviewing its source code to identify sources of vulnerabilities. Although static analysis of source code has existed as long as computers have, the technique spread to security in the late 1990s, with the first public discussion of SQL injection in 1998, when Web applications integrated new technologies like JavaScript and Flash.
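To illustrate the idea on the SQL-injection example, a toy scanner might flag SQL built by string concatenation. Real SAST tools parse the code and track tainted data flow; the regex below is only an illustration and will mis-classify edge cases.

```java
import java.util.List;
import java.util.regex.Pattern;

// Toy static scanner for one vulnerability pattern: SQL built by string
// concatenation with a variable, a classic SQL-injection source.
public class SastSketch {
    // Matches e.g.: "SELECT ..." + userInput
    static final Pattern CONCAT_SQL =
            Pattern.compile("\"\\s*(SELECT|INSERT|UPDATE|DELETE)[^\"]*\"\\s*\\+\\s*\\w+",
                    Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        List<String> source = List.of(
                "String q = \"SELECT * FROM users WHERE name = '\" + userName + \"'\";",
                "stmt.executeQuery(q);",
                "PreparedStatement p = con.prepareStatement(\"SELECT * FROM users WHERE name = ?\");");
        for (int i = 0; i < source.size(); i++)
            if (CONCAT_SQL.matcher(source.get(i)).find())
                System.out.println("possible SQL injection at line " + (i + 1));
        // Flags line 1; the parameterized query on line 3 is not flagged.
    }
}
```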