Automated code review

Automated code review software checks source code for compliance with a predefined set of rules or best practices.

Overview

The use of analytical methods to inspect and review source code in order to detect bugs or security issues has long been a standard development practice in both open-source and commercial software.[1] This process can be carried out manually or in an automated fashion.[2][3] With automation, software tools assist with the code review and inspection process: the review tool typically displays a list of warnings (violations of programming standards) and may also provide an automated or programmer-assisted way to correct the issues found. This makes large codebases easier to master and contributes to the practice of Software Intelligence. The process is commonly called "linting", after Lint, one of the first static code analysis tools.

Some static code analysis tools can be used to help with automated code review. Although they do not compare favorably to manual reviews, they can be run faster and more efficiently.[citation needed] These tools also encapsulate the deep knowledge of underlying rules and semantics required to perform this type of analysis, so that a human code reviewer does not need the same level of expertise as an expert human auditor.[2] Many integrated development environments (IDEs) also provide basic automated code review functionality. For example, the Eclipse[4] and Microsoft Visual Studio[5] IDEs support a variety of plugins that facilitate code review.

Alongside static code analysis tools, there are also tools that analyze and visualize software structures to help humans understand them better. Such systems are geared more toward analysis, since they typically do not contain a predefined set of rules to check software against. Some of these tools (e.g. Imagix 4D, ReSharper, SonarJ, Sotoarc, Structure101, ACTool[6]) allow one to define a target architecture and enforce that its constraints are not violated by the actual software implementation.
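The idea of enforcing a target architecture can be sketched simply. The layer names and dependency rules below are hypothetical, and real tools such as Structure101 extract the dependency pairs from the code itself rather than taking them as input; the sketch only shows the conformance check:

```python
# Sketch of an architecture-conformance check against a hypothetical layered
# target architecture: "ui" may depend on "service", "service" on "data",
# and no other dependencies between layers are allowed.
ALLOWED = {
    "ui": {"service"},
    "service": {"data"},
    "data": set(),
}

def check_architecture(dependencies: list[tuple[str, str]]) -> list[str]:
    """Return a violation message for each disallowed (from, to) dependency."""
    return [
        f"illegal dependency: {src} -> {dst}"
        for src, dst in dependencies
        if dst not in ALLOWED.get(src, set())
    ]

# The second pair points "upward" from the data layer and violates the rules.
deps = [("ui", "service"), ("data", "service")]
print(check_architecture(deps))  # → ["illegal dependency: data -> service"]
```

Checks like this can run in continuous integration, so that an architectural violation fails the build in the same way a lint warning or test failure would.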

Recent research has also explored the use of large language models (LLMs) as components in automated code review workflows. General-purpose code models trained on open-source code have been evaluated in a “zero-shot” setting, where the model is asked to propose fixes for security vulnerabilities directly from source code and associated diagnostics. These studies report that LLMs can repair some simple or synthetic vulnerabilities, but that their performance degrades on complex, real-world bugs, with generated patches often being incomplete or functionally incorrect. As a result, current work treats LLMs as potential assistants that can suggest candidate patches to be validated by traditional analysis tools and human reviewers, rather than as reliable standalone code review systems. [7]

Automated code review tools

See also

References

  1. McIntosh, Shane; Kamei, Yasutaka; Adams, Bram; Hassan, Ahmed E. (2014). "The impact of code review coverage and code review participation on software quality: A case study of the Qt, VTK, and ITK projects". Proceedings of the 11th Working Conference on Mining Software Repositories. doi:10.1145/2597073.2597076.
  2. Gomes, Ivo; Morgado, Pedro; Gomes, Tiago; Moreira, Rodrigo (2009). "An overview of the Static Code Analysis approach in Software Development" (PDF). Universidade do Porto. Retrieved 3 October 2010.
  3. "Tricorder: Building a Program Analysis Ecosystem". 2015.
  4. "Collaborative Code Review Tool Development". www.eclipse.org. Archived from the original on 1 April 2010. Retrieved 13 October 2010.
  5. "Code Review Plug-in for Visual Studio 2008, ReviewPal". www.codeproject.com. 4 November 2009. Retrieved 13 October 2010.
  6. Architecture Consistency plugin for Eclipse
  7. Pearce, Hammond; Tan, Benjamin; Ahmad, Baleegh; Karri, Ramesh; Dolan-Gavitt, Brendan (May 2023). "Examining Zero-Shot Vulnerability Repair with Large Language Models". 2023 IEEE Symposium on Security and Privacy (SP). IEEE. pp. 2339–2356. arXiv:2112.02125. doi:10.1109/SP46215.2023.10179324.