Vulnerability (computing)

Vulnerabilities are flaws in a computer system that weaken the overall security of the system.

Despite intentions to achieve complete correctness, virtually all hardware and software contains bugs where the system does not behave as expected. If the bug could enable an attacker to compromise the confidentiality, integrity, or availability of system resources, it is called a vulnerability. Insecure software development practices as well as design factors such as complexity can increase the burden of vulnerabilities. Different types of vulnerabilities are most common in different components, such as hardware, operating systems, and applications.

Vulnerability management is a process that includes identifying systems and prioritizing which are most important, scanning for vulnerabilities, and taking action to secure the system. Vulnerability management typically is a combination of remediation (fixing the vulnerability), mitigation (increasing the difficulty or reducing the danger of exploits), and accepting risks that are not economical or practical to eliminate. Vulnerabilities can be scored for risk according to the Common Vulnerability Scoring System or other systems, and added to vulnerability databases. As of 2023, there are more than 20 million vulnerabilities catalogued in the Common Vulnerabilities and Exposures (CVE) database.

A vulnerability is initiated when it is introduced into hardware or software. It becomes active and exploitable when the software or hardware containing the vulnerability is running. The vulnerability may be discovered by the vendor or a third party. Disclosing the vulnerability (as a patch or otherwise) is associated with an increased risk of compromise because attackers often move faster than patches are rolled out. Regardless of whether a patch is ever released to remediate the vulnerability, its lifecycle will eventually end when the system, or older versions of it, fall out of use.

Causes

Despite developers' goal of delivering a product that works entirely as intended, virtually all software and hardware contains bugs. [1] If a bug creates a security risk, it is called a vulnerability. [2] [3] [4] Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched remain liable to exploitation. [5] Vulnerabilities vary in their ability to be exploited by malicious actors, [2] and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. [6] Although some vulnerabilities can only be used for denial of service attacks, more dangerous ones allow the attacker to inject and run their own code (called malware), without the user being aware of it. [2] Only a minority of vulnerabilities allow for privilege escalation, which is necessary for more severe attacks. [7] Without a vulnerability, the exploit cannot gain access. [8] It is also possible for malware to be installed directly, without an exploit, if the attacker uses social engineering or implants the malware in legitimate software that is downloaded deliberately. [9]

Design factors

Fundamental design factors that can increase the burden of vulnerabilities include:

Development factors

Some software development practices can affect the risk of vulnerabilities being introduced to a code base. Lack of knowledge about secure software development, or excessive pressure to deliver features quickly, can allow avoidable vulnerabilities to enter production code, especially if security is not prioritized by the company culture. The more complex the system is, the easier it is for vulnerabilities to go undetected. Some vulnerabilities are deliberately planted, which could be for any reason from a disgruntled employee selling access to hackers, to sophisticated state-sponsored schemes to introduce vulnerabilities to software. [14] Inadequate code reviews can lead to missed bugs, but there are also static code analysis tools that can be used as part of code reviews and may find some vulnerabilities. [15]
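
As an illustration of the kind of pattern such static analysis tools look for, the following sketch (an assumption for illustration, not any particular tool) uses Python's `ast` module to flag `execute()` calls whose query string is built dynamically, a pattern associated with SQL injection:

```python
import ast

# F-strings and "+"/"%"-concatenated strings passed straight to execute()
UNSAFE_ARG_NODES = (ast.JoinedStr, ast.BinOp)

def find_unsafe_queries(source: str) -> list[int]:
    """Return line numbers where .execute() receives a dynamically built
    string instead of a parameterized query (a classic SQL-injection sign)."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], UNSAFE_ARG_NODES)):
            findings.append(node.lineno)
    return findings

snippet = '''
def lookup(cursor, user_id):
    cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")     # flagged
    cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))  # safe
'''
print(find_unsafe_queries(snippet))  # [3]
```

Real analyzers apply hundreds of such rules and track data flow across functions; this single-rule sketch only shows the principle of matching suspicious syntax.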

DevOps, a development workflow that emphasizes automated testing and deployment to speed up the deployment of new features, often requires that many developers be granted access to change configurations, which can lead to deliberate or inadvertent inclusion of vulnerabilities. [16] Compartmentalizing dependencies, which is often part of DevOps workflows, can reduce the attack surface by paring down dependencies to only what is necessary. [17] If software as a service is used, rather than the organization's own hardware and software, the organization is dependent on the cloud services provider to prevent vulnerabilities. [18]

National Vulnerability Database classification

The National Vulnerability Database classifies vulnerabilities into eight root causes that may be overlapping, including: [19]

  1. Input validation (including buffer overflow and boundary condition) vulnerabilities occur when input checking is not sufficient to prevent the attacker from injecting malicious code. [20]
  2. Access control vulnerabilities enable an attacker to access a system that they are not authorized to access, or engage in privilege escalation. [20]
  3. When the system fails to handle an exceptional or unanticipated condition correctly, an attacker can exploit the situation to gain access. [21]
  4. A configuration vulnerability arises when configuration settings create risks to system security, leading to faults such as unpatched software or file system permissions that do not sufficiently restrict access. [21]
  5. A race condition—when timing or other external factors change the outcome and lead to inconsistent or unpredictable results—can cause a vulnerability. [21]
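
The first category above, insufficient input validation, can be illustrated with a sketch of a path traversal flaw, where unchecked user input escapes the intended directory (the function names and paths are illustrative):

```python
import os.path

def unsafe_path(base: str, user_input: str) -> str:
    # Insufficient input validation: user input is joined directly,
    # so "../" sequences can escape the intended directory.
    return os.path.join(base, user_input)

def safe_path(base: str, user_input: str) -> str:
    # Normalize the candidate path, then verify it is still inside base.
    candidate = os.path.normpath(os.path.join(base, user_input))
    if os.path.commonpath([os.path.abspath(base),
                           os.path.abspath(candidate)]) != os.path.abspath(base):
        raise ValueError("path escapes base directory")
    return candidate

print(unsafe_path("/srv/files", "../../etc/passwd"))  # /srv/files/../../etc/passwd
print(safe_path("/srv/files", "reports/q1.txt"))      # /srv/files/reports/q1.txt
```

The unsafe version happily produces a path that resolves outside `/srv/files`; the validated version rejects it before any file access occurs.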

Vulnerabilities by component

Hardware

Deliberate security bugs can be introduced during or after manufacturing and cause the integrated circuit not to behave as expected under certain specific circumstances. Testing for security bugs in hardware is quite difficult due to limited time and the complexity of twenty-first century chips, [22] while the globalization of design and manufacturing has increased the opportunity for these bugs to be introduced by malicious actors. [23]

Operating system

Although operating system vulnerabilities vary depending on the operating system in use, a common problem is privilege escalation bugs that enable the attacker to gain more access than they should be allowed. Open-source operating systems such as Linux and Android have freely accessible source code and allow anyone to contribute, which could enable the introduction of vulnerabilities. However, the same vulnerabilities also occur in proprietary operating systems such as Microsoft Windows and Apple operating systems. [24] All reputable vendors of operating systems provide patches regularly. [25]

Client–server applications

Client–server applications are downloaded onto the end user's computers and are typically updated less frequently than web applications. Unlike web applications, they interact directly with a user's operating system. Common vulnerabilities in these applications include: [26]

Web applications

Web applications run on many websites. Because they are inherently less secure than other applications, they are a leading source of data breaches and other security incidents. [27] [28] Common types of vulnerabilities found in these applications include:

Management

There is little evidence about the effectiveness and cost-effectiveness of different cyberattack prevention measures. [31] Although estimating the risk of an attack is not straightforward, the mean time to breach and expected cost can be considered to determine the priority for remediating or mitigating an identified vulnerability and whether it is cost effective to do so. [32] Although attention to security can reduce the risk of attack, achieving perfect security for a complex system is impossible, and many security measures have unacceptable cost or usability downsides. [33] For example, reducing the complexity and functionality of the system is effective at reducing the attack surface. [34]

Successful vulnerability management usually involves a combination of remediation (closing a vulnerability), mitigation (increasing the difficulty, and reducing the consequences, of exploits), and accepting some residual risk. Often a defense in depth strategy is used for multiple barriers to attack. [35] Some organizations scan for only the highest-risk vulnerabilities as this enables prioritization in the context of lacking the resources to fix every vulnerability. [36] Increasing expenses is likely to have diminishing returns. [32]
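
The trade-off described above, weighing expected loss against remediation cost under a limited budget, can be sketched as a toy model (all names and figures are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    annual_breach_probability: float  # rough estimate, 0..1
    breach_cost: float                # expected cost if exploited
    fix_cost: float                   # cost to remediate

    @property
    def expected_loss(self) -> float:
        return self.annual_breach_probability * self.breach_cost

def prioritize(vulns: list[Vuln], budget: float) -> tuple[list[str], list[str]]:
    """Greedy sketch: fix vulnerabilities in order of expected loss while
    the fix is cheaper than the loss it averts and fits the budget;
    everything else is accepted as residual risk."""
    remediate, accept = [], []
    for v in sorted(vulns, key=lambda v: v.expected_loss, reverse=True):
        if v.fix_cost <= budget and v.fix_cost < v.expected_loss:
            remediate.append(v.name)
            budget -= v.fix_cost
        else:
            accept.append(v.name)
    return remediate, accept

vulns = [
    Vuln("SQL injection in portal", 0.40, 500_000, 20_000),
    Vuln("outdated TLS version",    0.05, 100_000, 30_000),
    Vuln("weak admin password",     0.30, 200_000,  1_000),
]
print(prioritize(vulns, budget=25_000))
```

Under these invented figures, the two highest-expected-loss items are fixed and the third is accepted, matching the pattern of diminishing returns noted above.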

Remediation

Remediation fixes vulnerabilities, for example by downloading a software patch. [37] Software vulnerability scanners are typically unable to detect zero-day vulnerabilities, but are more effective at finding known vulnerabilities based on a database; these systems can advise fixes, such as a patch. [38] [39] However, they have limitations including false positives. [37]
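
How database-driven scanning works can be sketched as a toy lookup (the package names and CVE identifiers below are invented for illustration):

```python
# Toy known-vulnerability database: exact (package, version) matches only.
# Entries are invented; real databases record version ranges and metadata.
KNOWN_VULNS = {
    ("examplelib", "1.2.0"): "CVE-XXXX-0001 (hypothetical)",
    ("demoserver", "2.0.1"): "CVE-XXXX-0002 (hypothetical)",
}

def scan(installed: dict[str, str]) -> list[str]:
    """Report known vulnerabilities for installed packages.
    Zero-days are invisible to this approach by definition."""
    return [
        f"{name} {version}: {KNOWN_VULNS[(name, version)]}"
        for name, version in installed.items()
        if (name, version) in KNOWN_VULNS
    ]

print(scan({"examplelib": "1.2.0", "demoserver": "2.1.0"}))
```

Only the exact match is reported; the patched `demoserver 2.1.0` is silent, which is also where false positives and negatives creep into real scanners that must reason about version ranges and backported fixes.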

Vulnerabilities can only be exploited when they are active: the software in which they are embedded is actively running on the system. [40] Before the code containing the vulnerability is configured to run on the system, it is considered a carrier. [41] Dormant vulnerabilities are able to run, but are not currently running. Software containing dormant and carrier vulnerabilities can sometimes be uninstalled or disabled, removing the risk. [42] Active vulnerabilities, if distinguished from the other types, can be prioritized for patching. [40]
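
The active/dormant/carrier distinction can be expressed as a small prioritization sketch (an illustrative encoding, not a standard taxonomy):

```python
from enum import Enum

class VulnState(Enum):
    ACTIVE = 3   # vulnerable code is currently running
    DORMANT = 2  # installed and able to run, but not running now
    CARRIER = 1  # present, but not configured to run on the system

def patch_order(vulns: dict[str, VulnState]) -> list[str]:
    # Patch active vulnerabilities first; carriers can often simply be
    # uninstalled or disabled instead of patched.
    return sorted(vulns, key=lambda name: vulns[name].value, reverse=True)

inventory = {
    "legacy FTP daemon": VulnState.CARRIER,
    "web server":        VulnState.ACTIVE,
    "PDF reader":        VulnState.DORMANT,
}
print(patch_order(inventory))  # ['web server', 'PDF reader', 'legacy FTP daemon']
```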

Mitigation

Vulnerability mitigation consists of measures that do not close the vulnerability, but make it more difficult to exploit or reduce the consequences of an attack. [43] Reducing the attack surface, particularly for parts of the system with root (administrator) access, and closing off opportunities for exploits to engage in privilege escalation, is a common strategy for reducing the harm that a cyberattack can cause. [37] If a patch for third-party software is unavailable, it may be possible to temporarily disable the software. [44]

Testing

A penetration test attempts to enter the system via an exploit to see if the system is insecure. [45] If a penetration test fails to breach the system, it does not necessarily mean that the system is secure. [46] Some penetration tests can be conducted with automated software that tests against existing exploits for known vulnerabilities. [47] Other penetration tests are conducted by trained hackers. Many companies prefer to contract out this work as it simulates an outsider attack. [46]

Vulnerability lifecycle

[Figure: vulnerability timeline]

The vulnerability lifecycle begins when vulnerabilities are introduced into hardware or software. [48] Vulnerabilities may be detected by the software vendor or by a third party. In the latter case, it is considered most ethical to immediately disclose the vulnerability to the vendor so it can be fixed. [49] Government or intelligence agencies buy vulnerabilities that have not been publicly disclosed and may use them in an attack, stockpile them, or notify the vendor. [50] As of 2013, the Five Eyes (United States, United Kingdom, Canada, Australia, and New Zealand) captured the plurality of the market and other significant purchasers included Russia, India, Brazil, Malaysia, Singapore, North Korea, and Iran. [51] Organized criminal groups also buy vulnerabilities, although they typically prefer exploit kits. [52]

Even vulnerabilities that are publicly known or patched are often exploitable for an extended period. [53] [54] Security patches can take months to develop, [55] or may never be developed. [54] A patch can have negative effects on the functionality of software [54] and users may need to test the patch to confirm functionality and compatibility. [56] Larger organizations may fail to identify and patch all dependencies, while smaller enterprises and personal users may not install patches. [54] Research suggests that risk of cyberattack increases if the vulnerability is made publicly known or a patch is released. [57] Cybercriminals can reverse engineer the patch to find the underlying vulnerability and develop exploits, [58] often faster than users install the patch. [57]

Vulnerabilities become deprecated when the software or vulnerable versions fall out of use. [49] This can take an extended period of time; in particular, industrial software may not be feasible to replace even if the manufacturer stops supporting it. [59]

Assessment, disclosure, and inventory

Assessment

A commonly used scale for assessing the severity of vulnerabilities is the open-source specification Common Vulnerability Scoring System (CVSS). CVSS evaluates how feasible it is to exploit the vulnerability and to compromise data confidentiality, integrity, and availability. It also considers how the vulnerability could be used and how complex an exploit would need to be. The amount of access needed for exploitation and whether it could take place without user interaction are also factored into the overall score. [60] [61]
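
How these factors combine can be sketched with the CVSS v3.1 base-score arithmetic for the unchanged-scope case, using the metric weights published in the specification; temporal and environmental metrics, and changed scope, are omitted from this sketch:

```python
# Metric weights from the CVSS v3.1 specification (unchanged scope).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},  # attack vector
    "AC": {"L": 0.77, "H": 0.44},                       # attack complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},            # privileges required
    "UI": {"N": 0.85, "R": 0.62},                       # user interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},            # impact levels
}

def roundup(x: float) -> float:
    # CVSS-specified rounding up to one decimal place.
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av: str, ac: str, pr: str, ui: str,
               c: str, i: str, a: str) -> float:
    iss = 1 - ((1 - WEIGHTS["CIA"][c])
               * (1 - WEIGHTS["CIA"][i])
               * (1 - WEIGHTS["CIA"][a]))
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Network-exploitable, low complexity, no privileges or user interaction,
# high impact on confidentiality, integrity, and availability:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8 ("critical")
```

Easier exploitation (network access, no privileges, no user interaction) raises the exploitability term, and broader compromise of confidentiality, integrity, and availability raises the impact term, which is how the factors described above feed into the 0 to 10 score.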

Disclosure

Someone who discovers a vulnerability may disclose it immediately (full disclosure) or wait until a patch has been developed (responsible disclosure, or coordinated disclosure). The former approach is praised for its transparency, but the drawback is that the risk of attack is likely to be increased after disclosure with no patch available. [62] Some vendors pay bug bounties to those who report vulnerabilities to them. [63] [64] Not all companies respond positively to disclosures, as they can cause legal liability and operational overhead. [65] There is no law requiring disclosure of vulnerabilities. [66] If a vulnerability is discovered by a third party that does not disclose to the vendor or the public, it is called a zero-day vulnerability, often considered the most dangerous type because fewer defenses exist. [67]

Vulnerability inventory

The most commonly used vulnerability dataset is Common Vulnerabilities and Exposures (CVE), maintained by Mitre Corporation. [68] As of 2023, it has over 20 million entries. [38] This information is shared with other databases, including the United States' National Vulnerability Database, [68] where each vulnerability is given a risk score using the Common Vulnerability Scoring System (CVSS), the Common Platform Enumeration (CPE) scheme, and the Common Weakness Enumeration.[citation needed] CVE and other databases typically do not track vulnerabilities in software as a service products. [38] Submitting a CVE is voluntary for companies that discover a vulnerability. [66]

Liability

The software vendor is usually not legally liable for the cost if a vulnerability is used in an attack, which creates an incentive to make cheaper but less secure software. [69] Some companies are covered by rules, such as the PCI DSS industry standard and the HIPAA and Sarbanes-Oxley laws, that place requirements on vulnerability management. [70]

References

  1. Ablon & Bogart 2017, p. 1.
  2. Ablon & Bogart 2017, p. 2.
  3. Daswani & Elbayadi 2021, p. 25.
  4. Seaman 2020, pp. 47–48.
  5. Daswani & Elbayadi 2021, pp. 26–27.
  6. Haber & Hibbert 2018, pp. 5–6.
  7. Haber & Hibbert 2018, p. 6.
  8. Haber & Hibbert 2018, p. 10.
  9. Haber & Hibbert 2018, pp. 13–14.
  10. Kakareka, Almantas (2009). "23". In Vacca, John (ed.). Computer and Information Security Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 393. ISBN 978-0-12-374354-1.
  11. Krsul, Ivan (April 15, 1997). Technical Report CSD-TR-97-026. The COAST Laboratory Department of Computer Sciences, Purdue University. CiteSeerX 10.1.1.26.5435.
  12. Linkov & Kott 2019, p. 2.
  13. Haber & Hibbert 2018, p. 155.
  14. Strout 2023, p. 17.
  15. Haber & Hibbert 2018, p. 143.
  16. Haber & Hibbert 2018, p. 141.
  17. Haber & Hibbert 2018, p. 142.
  18. Haber & Hibbert 2018, pp. 135–137.
  19. Garg & Baliyan 2023, pp. 17–18.
  20. Garg & Baliyan 2023, p. 17.
  21. Garg & Baliyan 2023, p. 18.
  22. Salmani 2018, p. 1.
  23. Salmani 2018, p. 11.
  24. Garg & Baliyan 2023, pp. 20–25.
  25. Sharp 2024, p. 271.
  26. Strout 2023, p. 15.
  27. Strout 2023, p. 13.
  28. Haber & Hibbert 2018, p. 129.
  29. Strout 2023, p. 14.
  30. Strout 2023, pp. 14–15.
  31. Agrafiotis et al. 2018, p. 2.
  32. Haber & Hibbert 2018, pp. 97–98.
  33. Tjoa et al. 2024, p. 63.
  34. Tjoa et al. 2024, pp. 68, 70.
  35. Magnusson 2020, p. 34.
  36. Haber & Hibbert 2018, pp. 166–167.
  37. Haber & Hibbert 2018, p. 11.
  38. Strout 2023, p. 8.
  39. Haber & Hibbert 2018, pp. 12–13.
  40. Haber & Hibbert 2018, p. 84.
  41. Haber & Hibbert 2018, p. 85.
  42. Haber & Hibbert 2018, pp. 84–85.
  43. Magnusson 2020, p. 32.
  44. Magnusson 2020, p. 33.
  45. Haber & Hibbert 2018, p. 93.
  46. Haber & Hibbert 2018, p. 96.
  47. Haber & Hibbert 2018, p. 94.
  48. Strout 2023, p. 16.
  49. Strout 2023, p. 18.
  50. Libicki, Ablon & Webb 2015, p. 44.
  51. Perlroth 2021, p. 145.
  52. Libicki, Ablon & Webb 2015, pp. 44, 46.
  53. Ablon & Bogart 2017, p. 8.
  54. Sood & Enbody 2014, p. 42.
  55. Strout 2023, p. 26.
  56. Libicki, Ablon & Webb 2015, p. 50.
  57. Libicki, Ablon & Webb 2015, pp. 49–50.
  58. Strout 2023, p. 28.
  59. Strout 2023, p. 19.
  60. Strout 2023, pp. 5–6.
  61. Haber & Hibbert 2018, pp. 73–74.
  62. "Ask an Ethicist: Vulnerability Disclosure". Association for Computing Machinery's Committee on Professional Ethics. 17 July 2018. Retrieved 3 May 2024.
  63. O'Harrow 2013, p. 18.
  64. Libicki, Ablon & Webb 2015, p. 45.
  65. Strout 2023, p. 36.
  66. Haber & Hibbert 2018, p. 110.
  67. Strout 2023, p. 22.
  68. Strout 2023, p. 6.
  69. Sloan & Warner 2019, pp. 104–105.
  70. Haber & Hibbert 2018, p. 111.

Sources