Static application security testing

Static application security testing (SAST) is used to secure software by reviewing its source code to identify sources of vulnerabilities. Although the process of statically analyzing source code has existed as long as computers have existed, the technique spread to security in the late 1990s, with the first public discussion of SQL injection in 1998, when Web applications integrated new technologies like JavaScript and Flash.

Unlike dynamic application security testing (DAST) tools, which perform black-box testing of application functionality, SAST tools focus on the code content of the application (white-box testing). A SAST tool scans the source code of applications and their components to identify potential security vulnerabilities in the software and its architecture. Static analysis tools can detect an estimated 50% of existing security vulnerabilities. [1]

In the software development life cycle (SDLC), SAST is performed early in the development process and at code level, and also when all pieces of code and components are put together in a consistent testing environment. SAST is also used for software quality assurance, [2] even if the many resulting false positives impede its adoption by developers. [3]

SAST tools are integrated into the development process to help development teams, whose primary focus is developing and delivering software that meets the requested specifications. [4] SAST tools, like other security tools, focus on reducing the risk that applications will suffer downtime or that private information stored in applications will be compromised.

For the year 2018, the Privacy Rights Clearinghouse database [5] shows that more than 612 million records were compromised by hacking.

Overview

Three main techniques are used to test the security of applications before their release: static application security testing (SAST), dynamic application security testing (DAST), and interactive application security testing (IAST), a combination of the two. [6]

Static analysis tools examine the text of a program syntactically. They look for a fixed set of patterns or rules in the source code. Theoretically, they can also examine a compiled form of the software. This technique relies on instrumentation of the code to do the mapping between compiled components and source code components to identify issues. Static analysis can be done manually as a code review or auditing of the code for different purposes, including security, but it is time-consuming. [7]
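
To make the pattern-matching idea concrete, the following is a minimal, illustrative sketch of a rule-based scanner in Python; the rules, file handling and output format are invented for the example and do not correspond to any particular SAST product.

```python
import re
import sys

# Illustrative rules only: each pairs a regular expression with a finding message.
RULES = [
    (re.compile(r"\beval\s*\("), "use of eval() on potentially untrusted input"),
    (re.compile(r"\bpickle\.loads\s*\("), "deserialization of untrusted data"),
    (re.compile(r"\bos\.system\s*\("), "shell command built from program data"),
]

def scan_source(path):
    """Return (line number, message) pairs for every rule that matches a line."""
    findings = []
    with open(path, encoding="utf-8") as handle:
        for lineno, line in enumerate(handle, start=1):
            for pattern, message in RULES:
                if pattern.search(line):
                    findings.append((lineno, message))
    return findings

if __name__ == "__main__":
    for lineno, message in scan_source(sys.argv[1]):
        print(f"{sys.argv[1]}:{lineno}: {message}")
```

Real tools go far beyond such line-by-line pattern matching, but the structure of rules applied to program text is the same.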

The precision of a SAST tool is determined by its scope of analysis and the specific techniques used to identify vulnerabilities. Different levels of analysis include:

- function level: sequences of instructions
- file or class level: an extensible program-code template for object creation
- application level: a program or group of programs that interact

The scope of the analysis determines its accuracy and its capacity to detect vulnerabilities using contextual information. [8] Unlike DAST, SAST tools give developers real-time feedback and help them fix flaws before the code is passed to the next stage of development.

At the function level, a common technique is the construction of an abstract syntax tree to analyze the flow of data within the function. [9]
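
As a hedged illustration of this technique, the sketch below uses Python's built-in ast module to build an abstract syntax tree and follow, in a deliberately naive way, how data from a function parameter flows toward a sensitive call within the same function; the checker, the sample source and the single os.system sink are invented for the example.

```python
import ast

SAMPLE = """
def run(cmd_from_user):
    import os
    sanitized = quote(cmd_from_user)
    os.system(cmd_from_user)   # parameter reaches the sink directly: reported
    os.system(sanitized)       # value passed through a call: not propagated here
"""

class NaiveFunctionTaintChecker(ast.NodeVisitor):
    """Flags calls to os.system whose argument is, or was directly assigned
    from, a parameter of the enclosing function. Real tools model far more."""

    def visit_FunctionDef(self, node):
        tainted = {arg.arg for arg in node.args.args}  # treat parameters as untrusted
        for stmt in ast.walk(node):
            # Propagate taint through simple assignments of the form x = tainted_name.
            if isinstance(stmt, ast.Assign) and isinstance(stmt.value, ast.Name):
                if stmt.value.id in tainted:
                    tainted.update(t.id for t in stmt.targets if isinstance(t, ast.Name))
            # Report calls to *.system(...) whose argument is a tainted name.
            if isinstance(stmt, ast.Call) and isinstance(stmt.func, ast.Attribute):
                if stmt.func.attr == "system":
                    for arg in stmt.args:
                        if isinstance(arg, ast.Name) and arg.id in tainted:
                            print(f"line {stmt.lineno}: tainted data reaches os.system()")

NaiveFunctionTaintChecker().visit(ast.parse(SAMPLE))
```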

Since the late 1990s, the need to adapt to business challenges has transformed software development with componentization, [10] enforced by processes and the organization of development teams. [11] Following the flow of data between all the components of an application or group of applications makes it possible to validate that the required calls to dedicated sanitization procedures are made and that tainted data is handled properly in specific pieces of code. [12] [13]
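
For illustration, the sketch below shows the kind of data-flow property such analysis checks: a value crossing into a SQL sink, first unsafely concatenated and then through a bound parameter, the dedicated procedure a taint-tracking analyzer would accept as sanitization. The table and function names are invented for the example.

```python
import sqlite3

def lookup_user_unsafe(conn, username):
    # Tainted data (username) is concatenated straight into the query:
    # a tool following the data flow would flag this as SQL injection.
    cursor = conn.execute("SELECT id FROM users WHERE name = '" + username + "'")
    return cursor.fetchone()

def lookup_user_safe(conn, username):
    # The tainted value only reaches the sink through a bound parameter,
    # which the analyzer recognizes as the required sanitization step.
    cursor = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cursor.fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(lookup_user_safe(conn, "alice"))      # (1,)
    print(lookup_user_unsafe(conn, "alice"))    # same result, but injectable
```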

The rise of web applications entailed testing them: Verizon's 2016 Data Breach Investigations Report found that 40% of all data breaches exploited web application vulnerabilities. [14] As well as external security validations, there is a rising focus on internal threats. The Clearswift Insider Threat Index (CITI) reported that 92% of respondents to a 2015 survey said they had experienced IT or security incidents in the previous 12 months, and that 74% of these breaches originated with insiders. [15] [16] Lee Hadlington categorized internal threats into three categories: malicious, accidental, and unintentional. The explosive growth of mobile applications also implies securing applications earlier in the development process to reduce malicious code development. [17]

SAST strengths

The earlier a vulnerability is fixed in the SDLC, the cheaper it is to fix: costs to fix in development are 10 times lower than in testing, and 100 times lower than in production. [18] SAST tools run automatically, either at the code level or the application level, and do not require interaction. When integrated into a CI/CD context, SAST tools can be used to automatically stop the integration process if critical vulnerabilities are identified. [19]
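
A minimal sketch of such a gate is shown below, assuming a hypothetical findings.json report in which each finding carries a severity field; the report format and field names are assumptions for the example, not the output of any specific SAST tool or CI system.

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: exit non-zero (failing the pipeline) when a SAST
report contains blocking findings. Assumes the report is a JSON list of
finding objects, each with "severity", "rule" and "location" fields."""
import json
import sys

def gate(report_path, blocking_severities=("critical", "high")):
    with open(report_path, encoding="utf-8") as handle:
        findings = json.load(handle)
    blocking = [f for f in findings
                if f.get("severity", "").lower() in blocking_severities]
    for finding in blocking:
        print(f"BLOCKING: {finding.get('rule', '?')} at {finding.get('location', '?')}")
    return 1 if blocking else 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json"))
```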

Because the tool scans the entire source code, it can cover 100% of it, while dynamic application security testing covers only the executed behaviour, possibly missing part of the application [6] or insecure settings in configuration files.

SAST tools can offer extended functionalities such as quality and architectural testing. There is a direct correlation between quality and security: poor-quality software is also poorly secured software. [20]

SAST weaknesses

Even though developers are positive about the usage of SAST tools, there are different challenges to their adoption by developers. [4] The usability of the output generated by these tools limits how much developers can make use of them: research shows that, despite the lengthy output these tools produce, it may lack usability. [21]

With agile processes in software development, early integration of SAST surfaces many bugs, as developers using this framework focus first on features and delivery. [22]

Scanning many lines of code with SAST tools may result in hundreds or thousands of vulnerability warnings for a single application. It can generate many false positives, increasing investigation time and reducing trust in such tools. This is particularly the case when the context of the vulnerability cannot be caught by the tool. [3]
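
The hedged sketch below illustrates one such case: a context-insensitive rule that flags any string concatenation passed to a shell would report the call even though an allow-list restricts the value to two fixed literals. The generate_report command and the rule wording are invented for the example.

```python
import subprocess

ALLOWED_KINDS = {"daily", "weekly"}  # strict allow-list of report kinds

def run_report(kind):
    if kind not in ALLOWED_KINDS:
        raise ValueError("unknown report kind")
    # A rule such as "string concatenation reaches shell=True" flags the next
    # line as command injection. The allow-list check above restricts kind to
    # two fixed literals, so the warning is a false positive -- but seeing that
    # requires contextual reasoning many tools do not perform.
    subprocess.run("generate_report --kind " + kind, shell=True, check=True)
```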

References

  1. Okun, V.; Guthrie, W. F.; Gaucher, H.; Black, P. E. (October 2007). "Effect of static analysis tools on software security: preliminary investigation" (PDF). Proceedings of the 2007 ACM Workshop on Quality of Protection. ACM: 1–5. doi:10.1145/1314257.1314260. S2CID 6663970.
  2. Ayewah, N.; Hovemeyer, D.; Morgenthaler, J.D.; Penix, J.; Pugh, W. (September 2008). "Using static analysis to find bugs". IEEE Software. IEEE. 25 (5): 22–29. doi:10.1109/MS.2008.130. S2CID 20646690.
  3. Johnson, Brittany; Song, Yoonki; Murphy-Hill, Emerson; Bowdidge, Robert (May 2013). "Why don't software developers use static analysis tools to find bugs?". ICSE '13 Proceedings of the 2013 International Conference on Software Engineering: 672–681. ISBN 978-1-4673-3076-3.
  4. Oyetoyan, Tosin Daniel; Milosheska, Bisera; Grini, Mari (May 2018). "Myths and Facts About Static Application Security Testing Tools: An Action Research at Telenor Digital". International Conference on Agile Software Development. Springer: 86–103.
  5. "Data Breaches | Privacy Rights Clearinghouse". privacyrights.org.
  6. Parizi, R. M.; Qian, K.; Shahriar, H.; Wu, F.; Tao, L. (July 2018). "Benchmark Requirements for Assessing Software Security Vulnerability Testing Tools". 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC). IEEE. pp. 825–826. doi:10.1109/COMPSAC.2018.00139. ISBN 978-1-5386-2666-5. S2CID 52055661.
  7. Chess, B.; McGraw, G. (December 2004). "Static analysis for security". IEEE Security & Privacy. IEEE. 2 (6): 76–79. doi:10.1109/MSP.2004.111.
  8. Chess, B.; McGraw, G. (October 2004). "Risk Analysis in Software Design". IEEE Security & Privacy. IEEE. 2 (4): 76–84. doi:10.1109/MSP.2004.55.
  9. Yamaguchi, Fabian; Lottmann, Markus; Rieck, Konrad (December 2012). "Generalized vulnerability extrapolation using abstract syntax trees". Proceedings of the 28th Annual Computer Security Applications Conference. Vol. 2. IEEE. pp. 359–368. doi:10.1145/2420950.2421003. ISBN 9781450313124. S2CID 8970125.
  10. Booch, Grady; Kozaczynski, Wojtek (September 1998). "Component-Based Software Engineering". IEEE Software. 15 (5): 34–36. doi:10.1109/MS.1998.714621. S2CID 33646593.
  11. Mezo, Peter; Jain, Radhika (December 2006). "Agile Software Development: Adaptive Systems Principles and Best Practices". Information Systems Management. 23 (3): 19–30. doi:10.1201/1078.10580530/46108.23.3.20060601/93704.3. S2CID 5087532.
  12. Livshits, V.B.; Lam, M.S. (May 2006). "Finding Security Vulnerabilities in Java Applications with Static Analysis". USENIX Security Symposium. 14: 18.
  13. Jovanovic, N.; Kruegel, C.; Kirda, E. (May 2006). "Pixy: A static analysis tool for detecting Web application vulnerabilities". 2006 IEEE Symposium on Security and Privacy (S&P'06). IEEE. pp. 359–368. doi:10.1109/SP.2006.29. ISBN 0-7695-2574-1. S2CID 1042585.
  14. "2016 Data Breach Investigations Report" (PDF). Verizon. 2016. Retrieved 8 January 2016.
  15. "Clearswift report: 40 percent of firms expect a data breach in the Next Year". Endeavor Business Media. 20 November 2015. Retrieved 8 January 2024.
  16. "The Ticking Time Bomb: 40% of Firms Expect an Insider Data Breach in the Next 12 Months". Fortra. 18 November 2015. Retrieved 8 January 2024.
  17. Xianyong, Meng; Qian, Kai; Lo, Dan; Bhattacharya, Prabir; Wu, Fan (June 2018). "Secure Mobile Software Development with Vulnerability Detectors in Static Code Analysis". 2018 International Symposium on Networks, Computers and Communications (ISNCC). pp. 1–4. doi:10.1109/ISNCC.2018.8531071. ISBN 978-1-5386-3779-1. S2CID 53288239.
  18. Hossain, Shahadat (October 2018). "Rework and Reuse Effects in Software Economy". Global Journal of Computer Science and Technology. 18 (C4): 35–50.
  19. Okun, V.; Guthrie, W. F.; Gaucher, H.; Black, P. E. (October 2007). "Effect of static analysis tools on software security: preliminary investigation" (PDF). Proceedings of the 2007 ACM Workshop on Quality of Protection. ACM: 1–5. doi:10.1145/1314257.1314260. S2CID 6663970.
  20. Siavvas, M.; Tsoukalas, D.; Janković, M.; Kehagias, D.; Chatzigeorgiou, A.; Tzovaras, D.; Aničić, N.; Gelenbe, E. (August 2019). "An Empirical Evaluation of the Relationship between Technical Debt and Software Security". In Konjović, Z.; Zdravković, M.; Trajanović, M. (eds.). International Conference on Information Society and Technology 2019 Proceedings (Data set). Vol. 1. pp. 199–203. doi:10.5281/zenodo.3374712.
  21. Tahaei, Mohammad; Vaniea, Kami; Beznosov, Konstantin (Kosta); Wolters, Maria K (6 May 2021). Security Notifications in Static Analysis Tools: Developers' Attitudes, Comprehension, and Ability to Act on Them. pp. 1–17. doi:10.1145/3411764.3445616. ISBN 9781450380966. S2CID 233987670.
  22. Arreaza, Gustavo Jose Nieves (June 2019). "Methodology for Developing Secure Apps in the Clouds. (MDSAC) for IEEECS Confererences". 2019 6th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud) / 2019 5th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE. pp. 102–106. doi:10.1109/CSCloud/EdgeCom.2019.00-11. ISBN 978-1-7281-1661-7. S2CID 203655645.