Automated code review

Automated code review refers to the use of software tools and techniques to assist or fully automate the process of reviewing source code for defects, style issues, security vulnerabilities, and maintainability concerns. Such tools are widely used in modern software engineering, particularly within continuous integration (CI) and continuous delivery (CD) pipelines. [1]

Overview

The use of analytical methods to inspect and review source code to detect bugs or security issues has been a standard development practice in both open source and commercial software domains. [2] This process can be performed manually or in an automated fashion. [3] [4] With automation, software tools assist with the code review and inspection process: the review tool typically displays a list of warnings (violations of programming standards) and may also provide an automated or programmer-assisted way to correct the issues it finds. This makes software easier to maintain and contributes to the Software Intelligence practice. The process is commonly called "linting", after Lint, one of the first static code analysis tools.
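The lint-style warning report described above can be illustrated with a minimal sketch. The rules, warning codes, and messages below are invented for illustration; real linters apply far larger rule sets over parsed code rather than raw text.

```python
import re

# A minimal, illustrative "lint" pass: each rule pairs a regex with a
# warning message, mimicking how a review tool reports violations of
# programming standards. Rule codes (W001, ...) are hypothetical.
RULES = [
    (re.compile(r"\t"), "W001: tab character (use spaces)"),
    (re.compile(r"==\s*None"), "W002: compare to None with 'is'"),
    (re.compile(r".{80}"), "W003: line longer than 79 characters"),
]

def lint(source: str) -> list[str]:
    """Return a list of warnings, one per rule violation per line."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                warnings.append(f"line {lineno}: {message}")
    return warnings

for warning in lint("if x == None:\n\treturn x"):
    print(warning)
```

A real tool would additionally offer automated or programmer-assisted fixes for each warning, as the text notes.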

Some static code analysis tools can be used to help with automated code review. They do not match the thoroughness of manual reviews, but they can be run faster and more efficiently.[ citation needed ] These tools also encapsulate the deep knowledge of underlying rules and semantics required to perform this type of analysis, so the human code reviewer does not need the same level of expertise as an expert human auditor. [3] Many integrated development environments (IDEs) also provide basic automated code review functionality. For example, the Eclipse [5] and Microsoft Visual Studio [6] IDEs support a variety of plugins that facilitate code review.
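The "encapsulated knowledge" point can be made concrete with a small semantic check over a program's syntax tree rather than its raw text. This sketch uses Python's standard `ast` module to flag bare `except:` handlers, a rule a reviewer would otherwise need to know and spot by hand; the example rule is chosen for illustration, not drawn from any particular tool.

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare 'except:' handlers, which silently
    swallow all exceptions, including KeyboardInterrupt."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # [4]
```

Because the check operates on parsed structure, it cannot be fooled by formatting or comments the way a text search could, which is the kind of expertise such tools package up for non-expert reviewers.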

Alongside static code analysis tools, there are also tools that analyze and visualize software structures to help humans understand them better. Such systems are geared more toward analysis, because they typically do not contain a predefined set of rules to check software against. Some of these tools (e.g. Imagix 4D, Resharper, SonarJ, Sotoarc, Structure101, ACTool [7] ) allow one to define a target architecture and enforce that its constraints are not violated by the actual software implementation.
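An architecture-constraint check of the kind these tools perform can be sketched as follows. The three-layer scheme (`ui` → `service` → `data`) and the rule table are hypothetical, and only plain `import` statements are inspected for brevity; real tools handle full dependency graphs.

```python
import ast

# Hypothetical target architecture: each layer may only import from the
# layers listed for it. Layer names are illustrative, not from any tool.
ALLOWED = {
    "ui": {"service"},
    "service": {"data"},
    "data": set(),
}

def violations(module_layer: str, source: str) -> list[str]:
    """Report imports that break the target architecture's layering rules."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                top = alias.name.split(".")[0]
                if (top in ALLOWED and top != module_layer
                        and top not in ALLOWED[module_layer]):
                    found.append(f"{module_layer} may not import {top}")
    return found

# A 'data' module importing from 'ui' inverts the dependency direction:
print(violations("data", "import ui.widgets"))
```

The check fails the build whenever the implemented dependencies drift from the declared architecture, which is the enforcement behavior the text attributes to tools such as Structure101 or SonarJ.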

Recent research has also explored the use of large language models (LLMs) as components in automated code review workflows. General-purpose code models trained on open-source code have been evaluated in a "zero-shot" setting, where the model is asked to propose fixes for security vulnerabilities directly from source code and associated diagnostics. These studies report that LLMs can repair some simple or synthetic vulnerabilities, but that their performance degrades on complex, real-world bugs, with generated patches often being incomplete or functionally incorrect. As a result, current work treats LLMs as potential assistants that can suggest candidate patches to be validated by traditional analysis tools and human reviewers, rather than as reliable standalone code review systems. [8]
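The assistant-plus-validation workflow described above can be sketched as a filtering pipeline. Everything here is illustrative: the LLM is stubbed out with hard-coded candidates, and the "automated checks" are reduced to compiling the patch and running one regression test.

```python
# Sketch of the workflow: an LLM (stubbed below) proposes candidate
# patches, and only patches that survive automated checks are forwarded
# to a human reviewer. All names and patches are hypothetical.

def propose_patches(buggy_source: str) -> list[str]:
    """Stand-in for an LLM call; returns candidate replacement sources."""
    return [
        "def divide(a, b):\n    return a / b",                 # still buggy
        "def divide(a, b):\n    return a / b if b else 0.0",   # guarded
    ]

def passes_checks(source: str) -> bool:
    """Automated validation: the patch must compile and pass a test."""
    scope = {}
    try:
        exec(source, scope)                    # illustrative only
        return scope["divide"](1, 0) == 0.0    # regression test
    except Exception:
        return False

candidates = propose_patches("def divide(a, b):\n    return a / b")
vetted = [p for p in candidates if passes_checks(p)]
print(len(vetted))  # number of patches that reach the human reviewer
```

The vetting step reflects the finding cited above: because generated patches are often incomplete or functionally incorrect, they are treated as candidates to be validated, not as trusted fixes.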

Applications

Automated code review is used in:

  - Continuous integration (CI) and continuous delivery (CD) pipelines [9] [11]
  - Static application security testing (SAST) [10]

References

  1. Davila, Nicole; Nunes, Ingrid (1 July 2021). "A systematic literature review and taxonomy of modern code review". Journal of Systems and Software. 177: 110951. doi:10.1016/j.jss.2021.110951. ISSN 0164-1212.
  2. McIntosh, Shane; Kamei, Yasutaka; Adams, Bram; Hassan, Ahmed E. (2014). "The impact of code review coverage and code review participation on software quality: A case study of the qt, vtk, and itk projects". Proceedings of the 11th Working Conference on Mining Software Repositories. doi:10.1145/2597073.2597076.
  3. Gomes, Ivo; Morgado, Pedro; Gomes, Tiago; Moreira, Rodrigo (2009). "An overview of the Static Code Analysis approach in Software Development" (PDF). Universidade do Porto. Retrieved 3 October 2010.
  4. "Tricorder: Building a Program Analysis Ecosystem". 2015.
  5. "Collaborative Code Review Tool Development". www.eclipse.org. Archived from the original on 1 April 2010. Retrieved 13 October 2010.
  6. "Code Review Plug-in for Visual Studio 2008, ReviewPal". www.codeproject.com. 4 November 2009. Retrieved 13 October 2010.
  7. Architecture Consistency plugin for Eclipse
  8. Pearce, Hammond; Tan, Benjamin; Ahmad, Baleegh; Karri, Ramesh; Dolan-Gavitt, Brendan (May 2023). "Examining Zero-Shot Vulnerability Repair with Large Language Models". 2023 IEEE Symposium on Security and Privacy (SP). IEEE. pp. 2339–2356. arXiv:2112.02125. doi:10.1109/SP46215.2023.10179324.
  9. Hilton, Michael; Tunnell, Timothy; Huang, Kai; Marinov, Darko; Dig, Danny (25 August 2016). "Usage, costs, and benefits of continuous integration in open-source projects". Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. ASE '16. New York, NY, USA: Association for Computing Machinery: 426–437. doi:10.1145/2970276.2970358. ISBN 978-1-4503-3845-5.
  10. "Static application security testing (SAST) | GitLab Docs". docs.gitlab.com. Retrieved 2 December 2025.
  11. Hilton, Michael; Nelson, Nicholas; Tunnell, Timothy; Marinov, Darko; Dig, Danny (21 August 2017). "Trade-offs in continuous integration: assurance, security, and flexibility". Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. ESEC/FSE 2017. New York, NY, USA: Association for Computing Machinery: 197–207. doi:10.1145/3106237.3106270. ISBN 978-1-4503-5105-8.