Buffer overflow

Visualization of a software buffer overflow. Data is written into A, but is too large to fit within A, so it overflows into B.

In programming and information security, a buffer overflow or buffer overrun is an anomaly whereby a program writes data to a buffer beyond the buffer's allocated memory, overwriting adjacent memory locations.

Buffers are areas of memory set aside to hold data, often while moving it from one section of a program to another, or between programs. Buffer overflows can often be triggered by malformed inputs; if one assumes all inputs will be smaller than a certain size and the buffer is created to be that size, then an anomalous transaction that produces more data could cause it to write past the end of the buffer. If this overwrites adjacent data or executable code, this may result in erratic program behavior, including memory access errors, incorrect results, and crashes.

Exploiting the behavior of a buffer overflow is a well-known security exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer. Buffers are widespread in operating system (OS) code, so it is possible to make attacks that perform privilege escalation and gain unlimited access to the computer's resources. The famed Morris worm in 1988 used this as one of its attack techniques.

Programming languages commonly associated with buffer overflows include C and C++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to an array (the built-in buffer type) is within the boundaries of that array. Bounds checking can prevent buffer overflows, but requires additional code and processing time. Modern operating systems use a variety of techniques to combat malicious buffer overflows, notably by randomizing the layout of memory, or deliberately leaving space between buffers and looking for actions that write into those areas ("canaries").

Technical description

A buffer overflow occurs when data written to a buffer also corrupts data values in memory addresses adjacent to the destination buffer due to insufficient bounds checking. [1]:41 This can occur when copying data from one buffer to another without first checking that the data fits within the destination buffer.

Example

In the following example expressed in C, a program has two variables which are adjacent in memory: an 8-byte-long string buffer, A, and a two-byte big-endian integer, B.

char A[8] = "";
unsigned short B = 1979;

Initially, A contains nothing but zero bytes, and B contains the number 1979.

variable name   A                          B
value           [null string]              1979
hex value       00 00 00 00 00 00 00 00    07 BB

Now, the program attempts to store the null-terminated string "excessive" with ASCII encoding in the A buffer.

strcpy(A, "excessive");

"excessive" is 9 characters long and encodes to 10 bytes including the null terminator, but A can take only 8 bytes. By failing to check the length of the string, it also overwrites the value of B:

variable name   A                                  B
value           'e' 'x' 'c' 'e' 's' 's' 'i' 'v'    25856
hex value       65 78 63 65 73 73 69 76            65 00

B's value has now been inadvertently replaced by a number formed from part of the character string. In this example, "e" (0x65) followed by a zero byte gives 0x6500, which is 25856.

Writing data past the end of allocated memory can sometimes be detected by the operating system to generate a segmentation fault error that terminates the process.

To prevent the buffer overflow from happening in this example, the call to strcpy could be replaced with strlcpy, which takes the maximum capacity of A (including a null-termination character) as an additional parameter and ensures that no more than this amount of data is written to A:

strlcpy(A, "excessive", sizeof(A));

When available, the strlcpy library function is preferred over strncpy, which does not null-terminate the destination buffer if the source string's length is greater than or equal to the size of the buffer (the third argument passed to the function). In that case A would not be null-terminated and could not be treated as a valid C-style string.
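
strlcpy originates from OpenBSD and is not part of the ISO C standard, so it is not available on every platform. The following is a minimal sketch, not the library's implementation, of a bounded copy that always null-terminates its destination and behaves like the call above; the name bounded_copy is made up for illustration.

#include <stdio.h>
#include <string.h>

/* Sketch of a bounded copy that always null-terminates the destination.
   This is an illustrative stand-in for strlcpy, not the library's code. */
static size_t bounded_copy(char *dst, const char *src, size_t dst_size)
{
    size_t src_len = strlen(src);
    if (dst_size > 0) {
        size_t copy_len = (src_len < dst_size - 1) ? src_len : dst_size - 1;
        memcpy(dst, src, copy_len);
        dst[copy_len] = '\0';            /* guarantee termination */
    }
    return src_len;                      /* like strlcpy, return the source length */
}

int main(void)
{
    char A[8] = "";
    unsigned short B = 1979;

    bounded_copy(A, "excessive", sizeof(A));   /* A ends up holding "excessi" */
    printf("A=\"%s\" B=%hu\n", A, B);          /* B is left untouched: 1979 */
    return 0;
}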

Exploitation

The techniques to exploit a buffer overflow vulnerability vary by architecture, operating system, and memory region. For example, exploitation on the heap (used for dynamically allocated memory) differs markedly from exploitation on the call stack. In general, heap exploitation depends on the heap manager used on the target system, while stack exploitation depends on the calling convention used by the architecture and compiler.

Stack-based exploitation

There are several ways in which one can manipulate a program by exploiting stack-based buffer overflows:

- by overwriting a local variable located near the vulnerable buffer on the stack, in order to change the behavior of the program;
- by overwriting the return address in the stack frame, so that when the function returns, execution resumes at code chosen by the attacker, typically the attacker's shellcode;
- by overwriting a function pointer or exception handler that is subsequently executed; or
- by overwriting a local variable or pointer belonging to a different stack frame, which will later be used by the function that owns that frame.

The attacker designs data to cause one of these exploits and then supplies this data as input to the vulnerable code, which copies it into a stack buffer. If the address of the user-supplied data used to effect the stack buffer overflow is unpredictable, exploiting a stack buffer overflow to cause remote code execution becomes much more difficult. One technique that can be used to exploit such a buffer overflow is called "trampolining". Here, an attacker will find a pointer to the vulnerable stack buffer and compute the location of their shellcode relative to that pointer. The attacker will then use the overwrite to jump to an instruction already in memory which will make a second jump, this time relative to the pointer. That second jump will branch execution into the shellcode. Suitable instructions are often present in large amounts of code. The Metasploit Project, for example, maintains a database of suitable opcodes, though it lists only those found in the Windows operating system. [4]
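
As a concrete illustration, the kind of code targeted by these techniques looks like the following sketch; the function name and buffer size are hypothetical, and the unbounded strcpy is what lets input longer than the buffer overwrite adjacent stack data such as the saved return address.

#include <string.h>

/* Hypothetical vulnerable routine: attacker-controlled input is copied into a
   fixed-size stack buffer with no bounds check, so input longer than 64 bytes
   spills into adjacent stack memory, including the saved return address. */
void parse_request(const char *attacker_input)
{
    char buffer[64];                 /* fixed-length buffer on the call stack */
    strcpy(buffer, attacker_input);  /* unbounded copy: the overflow happens here */
    /* ... process buffer ... */
}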

Heap-based exploitation

A buffer overflow occurring in the heap data area is referred to as a heap overflow and is exploitable in a manner different from that of stack-based overflows. Memory on the heap is dynamically allocated by the application at run-time and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc metadata) and uses the resulting pointer exchange to overwrite a program function pointer.
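
A simplified sketch of the pattern, using hypothetical names: an overflow in one heap allocation can, depending on how the allocator lays out memory, corrupt a function pointer stored in a neighboring allocation, so that control flow is redirected when the pointer is later called.

#include <stdio.h>
#include <stdlib.h>

/* Illustration only: real heap layout is allocator-dependent, so the two
   allocations below are not guaranteed to be adjacent. */
static void expected_handler(void) { puts("expected behavior"); }

struct handler {
    void (*callback)(void);          /* a program function pointer kept on the heap */
};

int main(void)
{
    char *name = malloc(16);                   /* heap buffer for user-supplied data */
    struct handler *h = malloc(sizeof *h);
    if (name == NULL || h == NULL)
        return 1;
    h->callback = expected_handler;

    /* An unbounded copy into the 16-byte buffer, e.g.
       strcpy(name, attacker_controlled_input), can run past its end and,
       depending on heap layout, overwrite neighboring heap data such as
       h->callback, redirecting execution when it is called below. */

    h->callback();
    free(name);
    free(h);
    return 0;
}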

Microsoft's GDI+ vulnerability in handling JPEGs is an example of the danger a heap overflow can present. [5]

Barriers to exploitation

Manipulation of the buffer, which occurs before it is read or executed, may lead to the failure of an exploitation attempt. These manipulations can mitigate the threat of exploitation, but may not make it impossible. Manipulations could include conversion to upper or lower case, removal of metacharacters and filtering out of non-alphanumeric strings. However, techniques exist to bypass these filters and manipulations, such as alphanumeric shellcode, polymorphic code, self-modifying code, and return-to-libc attacks. The same methods can be used to avoid detection by intrusion detection systems. In some cases, including where code is converted into Unicode, [6] the threat of the vulnerability has been misrepresented by the disclosers as only Denial of Service when in fact the remote execution of arbitrary code is possible.

Practicalities of exploitation

In real-world exploits there are a variety of challenges which need to be overcome for exploits to operate reliably. These factors include null bytes in addresses, variability in the location of shellcode, differences between environments, and various counter-measures in operation.

NOP sled technique

Illustration of a NOP-sled payload on the stack.

A NOP-sled is the oldest and most widely known technique for exploiting stack buffer overflows. [7] It solves the problem of finding the exact address of the buffer by effectively increasing the size of the target area. To do this, much larger sections of the stack are corrupted with the no-op machine instruction. At the end of the attacker-supplied data, after the no-op instructions, the attacker places an instruction to perform a relative jump to the top of the buffer where the shellcode is located. This collection of no-ops is referred to as the "NOP-sled" because if the return address is overwritten with any address within the no-op region of the buffer, the execution will "slide" down the no-ops until it is redirected to the actual malicious code by the jump at the end. This technique requires the attacker to guess where on the stack the NOP-sled is instead of the comparatively small shellcode. [8]
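
The layout of such a payload can be sketched as follows; the buffer size, shellcode size, and guessed address are all made up, and the shellcode itself is left abstract.

#include <stdint.h>
#include <string.h>

/* Illustrative NOP-sled payload layout for 32-bit x86:
   [NOP bytes][shellcode][guessed return address]. All sizes and the address
   passed in are hypothetical. */
enum { PAYLOAD_SIZE = 256, SHELLCODE_SIZE = 32, ADDR_SIZE = 4 };

void build_nop_sled_payload(unsigned char payload[PAYLOAD_SIZE],
                            const unsigned char shellcode[SHELLCODE_SIZE],
                            uint32_t guessed_addr)
{
    size_t sled_len = PAYLOAD_SIZE - SHELLCODE_SIZE - ADDR_SIZE;

    memset(payload, 0x90, sled_len);                       /* 0x90 = x86 NOP */
    memcpy(payload + sled_len, shellcode, SHELLCODE_SIZE);
    /* x86 is little-endian, so copying the 32-bit value byte-for-byte stores it
       in the order the CPU expects; any address inside the sled will do. */
    memcpy(payload + sled_len + SHELLCODE_SIZE, &guessed_addr, ADDR_SIZE);
}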

Because of the popularity of this technique, many vendors of intrusion prevention systems will search for this pattern of no-op machine instructions in an attempt to detect shellcode in use. It is important to note that a NOP-sled does not necessarily contain only traditional no-op machine instructions. Any instruction that does not corrupt the machine state to a point where the shellcode will not run can be used in place of the hardware assisted no-op. As a result, it has become common practice for exploit writers to compose the no-op sled with randomly chosen instructions which will have no real effect on the shellcode execution. [9]

While this method greatly improves the chances that an attack will be successful, it is not without problems. Exploits using this technique still must rely on some amount of luck that they will guess offsets on the stack that are within the NOP-sled region. [10] An incorrect guess will usually result in the target program crashing and could alert the system administrator to the attacker's activities. Another problem is that the approach requires a much larger amount of memory in which to hold a NOP-sled large enough to be of any use. This can be a problem when the allocated size of the affected buffer is too small and the current depth of the stack is shallow (i.e., there is not much space from the end of the current stack frame to the start of the stack). Despite its problems, the NOP-sled is often the only method that will work for a given platform, environment, or situation, and as such it is still an important technique.

The jump to address stored in a register technique

The "jump to register" technique allows for reliable exploitation of stack buffer overflows without the need for extra room for a NOP-sled and without having to guess stack offsets. The strategy is to overwrite the return pointer with something that will cause the program to jump to a known pointer stored within a register which points to the controlled buffer and thus the shellcode. For example, if register A contains a pointer to the start of a buffer then any jump or call taking that register as an operand can be used to gain control of the flow of execution. [11]

An instruction from ntdll.dll to call the DbgPrint() routine contains the i386 machine opcode for jmp esp.

In practice a program may not intentionally contain instructions to jump to a particular register. The traditional solution is to find an unintentional instance of a suitable opcode at a fixed location somewhere within the program memory. The figure above contains an example of such an unintentional instance of the i386 jmp esp instruction. The opcode for this instruction is FF E4. [12] This two-byte sequence can be found at a one-byte offset from the start of the instruction call DbgPrint at address 0x7C941EED. If an attacker overwrites the program return address with this address, the program will first jump to 0x7C941EED, interpret the opcode FF E4 as the jmp esp instruction, and then jump to the top of the stack and execute the attacker's code. [13]
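
Searching a module for such an unintentional opcode sequence amounts to a simple byte scan, as in the following sketch; the function and parameter names are made up.

#include <stddef.h>

/* Sketch: scan a loaded module image for the two-byte x86 sequence FF E4
   ("jmp esp"), whether it appears as an intended instruction or inside the
   encoding of another one. */
const unsigned char *find_jmp_esp(const unsigned char *module, size_t size)
{
    for (size_t i = 0; i + 1 < size; i++) {
        if (module[i] == 0xFF && module[i + 1] == 0xE4)
            return module + i;   /* candidate address to place in the return slot */
    }
    return NULL;
}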

When this technique is possible the severity of the vulnerability increases considerably. This is because exploitation will work reliably enough to automate an attack with a virtual guarantee of success when it is run. For this reason, this is the technique most commonly used in Internet worms that exploit stack buffer overflow vulnerabilities. [14]

This method also allows shellcode to be placed after the overwritten return address on the Windows platform. Since executables are mostly based at address 0x00400000 and x86 is a little-endian architecture, the last byte of the return address must be a null, which terminates the buffer copy so that nothing is written beyond it. This limits the size of the shellcode to the size of the buffer, which may be overly restrictive. DLLs are located in high memory (above 0x01000000) and so have addresses containing no null bytes, so this method can remove null bytes (or other disallowed characters) from the overwritten return address. Used in this way, the method is often referred to as "DLL trampolining".

Protective countermeasures

Various techniques have been used to detect or prevent buffer overflows, with various tradeoffs. The following sections describe the choices and implementations available.

Choice of programming language

Assembly, C, and C++ are popular programming languages that are vulnerable to buffer overflow in part because they allow direct access to memory and are not strongly typed. [15] C provides no built-in protection against accessing or overwriting data in any part of memory. More specifically, it does not check that data written to a buffer is within the boundaries of that buffer. The standard C++ libraries provide many ways of safely buffering data, and C++'s Standard Template Library (STL) provides containers that can optionally perform bounds checking if the programmer explicitly calls for checks while accessing data. For example, a vector's member function at() performs a bounds check and throws an out_of_range exception if the bounds check fails. [16] However, C++ behaves just like C if the bounds check is not explicitly called. Techniques to avoid buffer overflows also exist for C.

Languages that are strongly typed and do not allow direct memory access, such as COBOL, Java, Python, and others, prevent buffer overflow in most cases. [15] Many programming languages other than C or C++ provide runtime checking and in some cases even compile-time checking which might send a warning or raise an exception, while C or C++ would overwrite data and continue to execute instructions until erroneous results are obtained, potentially causing the program to crash. Examples of such languages include Ada, Eiffel, Lisp, Modula-2, Smalltalk, OCaml and such C-derivatives as Cyclone, Rust and D. The Java and .NET Framework bytecode environments also require bounds checking on all arrays. Nearly every interpreted language will protect against buffer overflow, signaling a well-defined error condition. Languages that provide enough type information to do bounds checking often provide an option to enable or disable it. Static code analysis can remove many dynamic bound and type checks, but poor implementations and awkward cases can significantly decrease performance. Software engineers should carefully consider the tradeoffs of safety versus performance costs when deciding which language and compiler setting to use.

Use of safe libraries

The problem of buffer overflows is common in the C and C++ languages because they expose low-level representational details of buffers as containers for data types. Buffer overflows can be avoided by maintaining a high degree of correctness in code that performs buffer management. It has also long been recommended to avoid standard library functions that are not bounds checked, such as gets, scanf and strcpy. The Morris worm exploited a gets call in fingerd. [17]
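
For instance, a bounded read with fgets avoids the unbounded gets call of the kind the Morris worm abused; the buffer size here is arbitrary.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[128];

    /* fgets never writes more than sizeof line bytes, unlike gets. */
    if (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* drop the trailing newline, if present */
        printf("read: %s\n", line);
    }
    return 0;
}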

Well-written and tested abstract data type libraries that centralize and automatically perform buffer management, including bounds checking, can reduce the occurrence and impact of buffer overflows. The primary data types in languages in which buffer overflows are common are strings and arrays. Thus, libraries preventing buffer overflows in these data types can provide the vast majority of the necessary coverage. However, failure to use these safe libraries correctly can result in buffer overflows and other vulnerabilities, and naturally any bug in the library is also a potential vulnerability. "Safe" library implementations include "The Better String Library", [18] Vstr [19] and Erwin. [20] The OpenBSD operating system's C library provides the strlcpy and strlcat functions, but these are more limited than full safe library implementations.

In September 2007, Technical Report 24731, prepared by the C standards committee, was published. [21] It specifies a set of functions that are based on the standard C library's string and I/O functions, with additional buffer-size parameters. However, the efficacy of these functions for reducing buffer overflows is disputable; they require programmer intervention on a per-function-call basis that is equivalent to the intervention that could make the analogous older standard library functions buffer overflow safe. [22]
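
These interfaces were later folded into the optional Annex K of C11 (functions such as strcpy_s), which many toolchains do not provide. The following sketch shows the intended usage pattern, guarded by the feature-test macro; exact behavior on a constraint violation depends on the installed handler.

#define __STDC_WANT_LIB_EXT1__ 1    /* request the bounds-checked interfaces */
#include <stdio.h>
#include <string.h>

int main(void)
{
#ifdef __STDC_LIB_EXT1__
    char A[8];
    /* strcpy_s is given the destination size; a source that does not fit is a
       runtime-constraint violation rather than a silent overflow (what happens
       then, e.g. abort or an error return, depends on the installed handler). */
    if (strcpy_s(A, sizeof A, "excessive") != 0)
        puts("source string does not fit in A");
#else
    puts("bounds-checked interfaces (Annex K) are not available here");
#endif
    return 0;
}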

Buffer overflow protection

Buffer overflow protection is used to detect the most common buffer overflows by checking that the stack has not been altered when a function returns. If it has been altered, the program exits with a segmentation fault. Three such systems are Libsafe, [23] and the StackGuard [24] and ProPolice [25] gcc patches.
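
Conceptually, these systems place a guard value (a "canary") between a function's buffers and its saved control data and verify it before returning. The sketch below only imitates that idea in source code; real implementations, such as the gcc/clang -fstack-protector option, insert the check during compilation and pick the canary value at run time.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Conceptual imitation of a stack canary check; the guard value and the
   placement of the variable are illustrative, since the actual stack layout
   is compiler-dependent. */
static const unsigned long CANARY = 0x2f8a9c3bUL;   /* hypothetical guard value */

void guarded_copy(const char *input)
{
    unsigned long canary = CANARY;   /* guard placed near the buffer */
    char buffer[64];

    strcpy(buffer, input);           /* deliberately unbounded, for illustration */

    if (canary != CANARY) {          /* guard changed => overflow detected */
        fputs("stack smashing detected\n", stderr);
        abort();                     /* terminate instead of returning through
                                        a possibly corrupted return address */
    }
}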

Microsoft's implementation of Data Execution Prevention (DEP) mode explicitly protects the pointer to the Structured Exception Handler (SEH) from being overwritten. [26]

Stronger stack protection is possible by splitting the stack in two: one for data and one for function returns. This split is present in the Forth language, though it was not a security-based design decision. Regardless, this is not a complete solution to buffer overflows, as sensitive data other than the return address may still be overwritten.

This type of protection is also not entirely accurate because it does not detect all attacks. Systems like StackGuard are more centered around the behavior of the attacks, which makes them efficient and faster in comparison to range-check systems. [27]

Pointer protection

Buffer overflows work by manipulating pointers, including stored addresses. PointGuard was proposed as a compiler-extension to prevent attackers from reliably manipulating pointers and addresses. [28] The approach works by having the compiler add code to automatically XOR-encode pointers before and after they are used. Theoretically, because the attacker does not know what value will be used to encode and decode the pointer, one cannot predict what the pointer will point to if it is overwritten with a new value. PointGuard was never released, but Microsoft implemented a similar approach beginning in Windows XP SP2 and Windows Server 2003 SP1. [29] Rather than implement pointer protection as an automatic feature, Microsoft added an API routine that can be called. This allows for better performance (because it is not used all of the time), but places the burden on the programmer to know when its use is necessary.
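
A minimal sketch of the idea follows, with a deliberately simplified key; real implementations derive a per-process secret, and Windows exposes routines of this kind (EncodePointer/DecodePointer) for the programmer to call.

#include <stdint.h>
#include <stdlib.h>

/* Conceptual sketch of XOR pointer encoding: pointers are stored encoded and
   decoded only at the point of use, so overwriting the stored value without
   knowing the key gives the attacker no control over where the decoded
   pointer lands. The key handling below is simplified and not secure. */
static uintptr_t pointer_key;

static void *encode_ptr(void *p) { return (void *)((uintptr_t)p ^ pointer_key); }
static void *decode_ptr(void *p) { return (void *)((uintptr_t)p ^ pointer_key); }

int main(void)
{
    pointer_key = (uintptr_t)rand();   /* illustration only; not a secure key source */

    int value = 42;
    void *stored = encode_ptr(&value); /* this encoded form is what sits in memory */
    int *usable = decode_ptr(stored);  /* decoded immediately before use */
    return (*usable == 42) ? 0 : 1;
}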

Because XOR is linear, an attacker may be able to manipulate an encoded pointer by overwriting only the lower bytes of an address. This can allow an attack to succeed if the attacker can attempt the exploit multiple times or complete an attack by causing a pointer to point to one of several locations (such as any location within a NOP sled). [30] Microsoft added a random rotation to their encoding scheme to address this weakness to partial overwrites. [31]

Executable space protection

Executable space protection is an approach to buffer overflow protection that prevents execution of code on the stack or the heap. An attacker may use buffer overflows to insert arbitrary code into the memory of a program, but with executable space protection, any attempt to execute that code will cause an exception.

Some CPUs support a feature called NX ("No eXecute") or XD ("eXecute Disabled") bit, which in conjunction with software, can be used to mark pages of data (such as those containing the stack and the heap) as readable and writable but not executable.
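
On Linux and most other Unix-like systems this is exposed through page protection flags, as in the following sketch: memory obtained without PROT_EXEC can be read and written, but any attempt to execute code placed in it faults on NX-capable hardware.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Readable and writable, but deliberately not PROT_EXEC. */
    void *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memset(buf, 0x90, page);   /* data can be written freely... */
    /* ...but jumping into buf would raise a fault rather than run it. */
    munmap(buf, page);
    return 0;
}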

Some Unix operating systems (e.g. OpenBSD, macOS) ship with executable space protection (e.g. W^X). Optional packages include:

- PaX [32]
- Exec Shield [33]
- Openwall [34]

Newer variants of Microsoft Windows also support executable space protection, called Data Execution Prevention. [35] Proprietary add-ons include:

- BufferShield [36]
- NGSec Stack Defender [37]

Executable space protection does not generally protect against return-to-libc attacks, or any other attack that does not rely on the execution of the attacker's code. However, on 64-bit systems using ASLR, as described below, executable space protection makes it far more difficult to execute such attacks.

Address space layout randomization

Address space layout randomization (ASLR) is a computer security feature that involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space.

Randomization of the virtual memory addresses at which functions and variables can be found can make exploitation of a buffer overflow more difficult, but not impossible. It also forces the attacker to tailor the exploitation attempt to the individual system, which foils the attempts of internet worms. [38] A similar but less effective method is to rebase processes and libraries in the virtual address space.

Deep packet inspection

The use of deep packet inspection (DPI) can detect, at the network perimeter, very basic remote attempts to exploit buffer overflows by use of attack signatures and heuristics. This technique can block packets that have the signature of a known attack. It was formerly used in situations in which a long series of No-Operation instructions (known as a NOP-sled) was detected and the location of the exploit's payload was slightly variable.

Packet scanning is not an effective method since it can only prevent known attacks and there are many ways that a NOP-sled can be encoded. Shellcode used by attackers can be made alphanumeric, metamorphic, or self-modifying to evade detection by heuristic packet scanners and intrusion detection systems.

Testing

Checking for buffer overflows and patching the bugs that cause them helps prevent buffer overflows. One common automated technique for discovering them is fuzzing. [39] Edge case testing can also uncover buffer overflows, as can static analysis. [40] Once a potential buffer overflow is detected it should be patched. This makes the testing approach useful for software that is in development, but less useful for legacy software that is no longer maintained or supported.
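
A minimal fuzzing harness in the style of LLVM's libFuzzer looks like the sketch below (built with something like clang -g -fsanitize=fuzzer,address); the parse_record function and its bug are invented purely to show the kind of defect a fuzzer would surface.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical function under test, with a deliberate overflow. */
static void parse_record(const uint8_t *data, size_t size)
{
    char field[16];
    if (size > 1 && data[0] == 'R')
        memcpy(field, data + 1, size - 1);   /* bug: no bound against sizeof field */
    (void)field;
}

/* libFuzzer calls this entry point with generated inputs; AddressSanitizer
   reports the stack-buffer-overflow as soon as a long enough input is found. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);
    return 0;
}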

History

Buffer overflows were understood and partially publicly documented as early as 1972, when the Computer Security Technology Planning Study laid out the technique: "The code performing this function does not check the source and destination addresses properly, permitting portions of the monitor to be overlaid by the user. This can be used to inject code into the monitor that will permit the user to seize control of the machine." [41] Today, the monitor would be referred to as the kernel.

The earliest documented hostile exploitation of a buffer overflow was in 1988. It was one of several exploits used by the Morris worm to propagate itself over the Internet. The program exploited was a service on Unix called finger. [42] Later, in 1995, Thomas Lopatic independently rediscovered the buffer overflow and published his findings on the Bugtraq security mailing list. [43] A year later, in 1996, Elias Levy (also known as Aleph One) published in Phrack magazine the paper "Smashing the Stack for Fun and Profit", [44] a step-by-step introduction to exploiting stack-based buffer overflow vulnerabilities.

Since then, at least two major internet worms have exploited buffer overflows to compromise a large number of systems. In 2001, the Code Red worm exploited a buffer overflow in Microsoft's Internet Information Services (IIS) 5.0 [45] and in 2003 the SQL Slammer worm compromised machines running Microsoft SQL Server 2000. [46]

In 2003, buffer overflows present in licensed Xbox games were exploited to allow unlicensed software, including homebrew games, to run on the console without the need for hardware modifications known as modchips. [47] The PS2 Independence Exploit also used a buffer overflow to achieve the same for the PlayStation 2. The Twilight hack accomplished the same with the Wii, using a buffer overflow in The Legend of Zelda: Twilight Princess.

See also

Related Research Articles

In hacking, a shellcode is a small piece of code used as the payload in the exploitation of a software vulnerability. It is called "shellcode" because it typically starts a command shell from which the attacker can control the compromised machine, but any piece of code that performs a similar task can be called shellcode. Because the function of a payload is not limited to merely spawning a shell, some have suggested that the name shellcode is insufficient. However, attempts at replacing the term have not gained wide acceptance. Shellcode is commonly written in machine code.

A heap overflow, heap overrun, or heap smashing is a type of buffer overflow that occurs in the heap data area. Heap overflows are exploitable in a different manner to that of stack-based overflows. Memory on the heap is dynamically allocated at runtime and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage and uses the resulting pointer exchange to overwrite a program function pointer.

Uncontrolled format string is a type of code injection vulnerability discovered around 1989 that can be used in security exploits. Originally thought harmless, format string exploits can be used to crash a program or to execute harmful code. The problem stems from the use of unchecked user input as the format string parameter in certain C functions that perform formatting, such as printf . A malicious user may use the %s and %x format tokens, among others, to print data from the call stack or possibly other locations in memory. One may also write arbitrary data to arbitrary locations using the %n format token, which commands printf and similar functions to write the number of bytes formatted to an address stored on the stack.

In computer security, hardening is usually the process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions; in principle a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, the removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services.

Buffer overflow protection is any of various techniques used during software development to enhance the security of executable programs by detecting buffer overflows on stack-allocated variables, and preventing them from causing program misbehavior or from becoming serious security vulnerabilities. A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, which could lead to program crashes, incorrect operation, or security issues.

Exec Shield is a project started at Red Hat, Inc in late 2002 with the aim of reducing the risk of worm or other automated remote attacks on Linux systems. The first result of the project was a security patch for the Linux kernel that emulates an NX bit on x86 CPUs that lack a native NX implementation in hardware. While the Exec Shield project has had many other components, some people refer to this first patch as Exec Shield.

A "return-to-libc" attack is a computer security attack usually starting with a buffer overflow in which a subroutine return address on a call stack is replaced by an address of a subroutine that is already present in the process executable memory, bypassing the no-execute bit feature and ridding the attacker of the need to inject their own code. The first example of this attack in the wild was contributed by Alexander Peslyak on the Bugtraq mailing list in 1997.

Address space layout randomization (ASLR) is a computer security technique involved in preventing exploitation of memory corruption vulnerabilities. In order to prevent an attacker from reliably redirecting code execution to, for example, a particular exploited function in memory, ASLR randomly arranges the address space positions of key data areas of a process, including the base of the executable and the positions of the stack, heap and libraries.

Dangling pointers and wild pointers in computer programming are pointers that do not point to a valid object of the appropriate type. These are special cases of memory safety violations. More generally, dangling references and wild references are references that do not resolve to a valid destination.

Hacking: The Art of Exploitation (ISBN 1-59327-007-0) is a book by Jon "Smibbs" Erickson about computer security and network security. It was published by No Starch Press in 2003, with a second edition in 2008. All of the examples in the book were developed, compiled, and tested on Gentoo Linux. The accompanying CD provides a Linux environment inclusive of all tools and examples referenced in the book.

In computer security, a NOP slide, NOP sled or NOP ramp is a sequence of NOP (no-operation) instructions meant to "slide" the CPU's instruction execution flow to its final, desired destination whenever the program branches to a memory address anywhere on the slide.

In software, a stack buffer overflow or stack buffer overrun occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow. Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.

Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers. For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences. In contrast, C and C++ allow arbitrary pointer arithmetic with pointers implemented as direct memory addresses with no provision for bounds checking, and thus are potentially memory-unsafe.

Secure coding is the practice of developing computer software in such a way that guards against the accidental introduction of security vulnerabilities. Defects, bugs and logic flaws are consistently the primary cause of commonly exploited software vulnerabilities. Through the analysis of thousands of reported vulnerabilities, security professionals have discovered that most vulnerabilities stem from a relatively small number of common software programming errors. By identifying the insecure coding practices that lead to these errors and educating developers on secure alternatives, organizations can take proactive steps to help significantly reduce or eliminate vulnerabilities in software before deployment.

In computer security, heap spraying is a technique used in exploits to facilitate arbitrary code execution. The part of the source code of an exploit that implements this technique is called a heap spray. In general, code that sprays the heap attempts to put a certain sequence of bytes at a predetermined location in the memory of a target process by having it allocate (large) blocks on the process's heap and fill the bytes in these blocks with the right values.

Return-oriented programming (ROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security defenses such as executable space protection and code signing.

JIT spraying is a class of computer security exploit that circumvents the protection of address space layout randomization and data execution prevention by exploiting the behavior of just-in-time compilation. It has been used to exploit the PDF format and Adobe Flash.

Blind return oriented programming (BROP) is an exploit technique which can successfully create an exploit even if the attacker does not possess the target binary. BROP attacks shown by Bittau et al. have defeated address space layout randomization (ASLR) and stack canaries on 64-bit systems.

Control-flow integrity (CFI) is a general term for computer security techniques that prevent a wide variety of malware attacks from redirecting the flow of execution of a program.

Sigreturn-oriented programming (SROP) is a computer security exploit technique that allows an attacker to execute code in presence of security measures such as non-executable memory and code signing. It was presented for the first time at the 35th IEEE Symposium on Security and Privacy in 2014 where it won the best student paper award. This technique employs the same basic assumptions behind the return-oriented programming (ROP) technique: an attacker controlling the call stack, for example through a stack buffer overflow, is able to influence the control flow of the program through simple instruction sequences called gadgets. The attack works by pushing a forged sigcontext structure on the call stack, overwriting the original return address with the location of a gadget that allows the attacker to call the sigreturn system call. Often just a single gadget is needed to successfully put this attack into effect. This gadget may reside at a fixed location, making this attack simple and effective, with a setup generally simpler and more portable than the one needed by the plain return-oriented programming technique.

References

  1. R. Shirey (August 2007). Internet Security Glossary, Version 2. Network Working Group. doi:10.17487/RFC4949. RFC 4949. Informational.
  2. "CORE-2007-0219: OpenBSD's IPv6 mbufs remote kernel buffer overflow" . Retrieved 2007-05-15.
  3. "Modern Overflow Targets" (PDF). Archived (PDF) from the original on 2022-10-09. Retrieved 2013-07-05.
  4. "The Metasploit Opcode Database". Archived from the original on 12 May 2007. Retrieved 2007-05-15.
  5. "Microsoft Technet Security Bulletin MS04-028". Microsoft . Archived from the original on 2011-08-04. Retrieved 2007-05-15.
  6. "Creating Arbitrary Shellcode In Unicode Expanded Strings" (PDF). Archived from the original (PDF) on 2006-01-05. Retrieved 2007-05-15.
  7. Vangelis (2004-12-08). "Stack-based Overflow Exploit: Introduction to Classical and Advanced Overflow Technique". Wowhacker via Neworder. Archived from the original (text) on August 18, 2007.
  8. Balaban, Murat. "Buffer Overflows Demystified" (text). Enderunix.org.
  9. Akritidis, P.; Evangelos P. Markatos; M. Polychronakis; Kostas D. Anagnostakis (2005). "STRIDE: Polymorphic Sled Detection through Instruction Sequence Analysis." (PDF). Proceedings of the 20th IFIP International Information Security Conference (IFIP/SEC 2005). IFIP International Information Security Conference. Archived from the original (PDF) on 2012-09-01. Retrieved 2012-03-04.
  10. Klein, Christian (September 2004). "Buffer Overflow" (PDF). Archived from the original (PDF) on 2007-09-28.
  11. Shah, Saumil (2006). "Writing Metasploit Plugins: from vulnerability to exploit" (PDF). Hack In The Box. Kuala Lumpur. Retrieved 2012-03-04.
  12. Intel 64 and IA-32 Architectures Software Developer's Manual Volume 2A: Instruction Set Reference, A-M (PDF). Intel Corporation. May 2007. pp. 3–508. Archived from the original (PDF) on 2007-11-29.
  13. Alvarez, Sergio (2004-09-05). "Win32 Stack BufferOverFlow Real Life Vuln-Dev Process" (PDF). IT Security Consulting. Retrieved 2012-03-04.
  14. Ukai, Yuji; Soeder, Derek; Permeh, Ryan (2004). "Environment Dependencies in Windows Exploitation". BlackHat Japan. Japan: eEye Digital Security. Retrieved 2012-03-04.
  15. "Buffer Overflows". OWASP. https://www.owasp.org/index.php/Buffer_Overflows. Archived 2016-08-29 at the Wayback Machine.
  16. "vector::at - C++ Reference". Cplusplus.com. Retrieved 2014-03-27.
  17. "Archived copy". wiretap.area.com. Archived from the original on 5 May 2001. Retrieved 6 June 2022.{{cite web}}: CS1 maint: archived copy as title (link)
  18. "The Better String Library".
  19. "The Vstr Homepage". Archived from the original on 2017-03-05. Retrieved 2007-05-15.
  20. "The Erwin Homepage" . Retrieved 2007-05-15.
  21. International Organization for Standardization (2007). "Information technology – Programming languages, their environments and system software interfaces – Extensions to the C library – Part 1: Bounds-checking interfaces". ISO Online Browsing Platform.
  22. "CERT Secure Coding Initiative". Archived from the original on December 28, 2012. Retrieved 2007-07-30.
  23. "Libsafe at FSF.org" . Retrieved 2007-05-20.
  24. "StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks by Cowan et al" (PDF). Archived (PDF) from the original on 2022-10-09. Retrieved 2007-05-20.
  25. "ProPolice at X.ORG". Archived from the original on 12 February 2007. Retrieved 2007-05-20.
  26. "Bypassing Windows Hardware-enforced Data Execution Prevention". Archived from the original on 2007-04-30. Retrieved 2007-05-20.
  27. Lhee, Kyung-Suk; Chapin, Steve J. (2003-04-25). "Buffer overflow and format string overflow vulnerabilities". Software: Practice and Experience. 33 (5): 423–460. doi:10.1002/spe.515. ISSN   0038-0644.
  28. "12th USENIX Security Symposium – Technical Paper". www.usenix.org. Retrieved 3 April 2018.
  29. "Protecting against Pointer Subterfuge (Kinda!)". msdn.com. Archived from the original on 2010-05-02. Retrieved 3 April 2018.
  30. "USENIX - The Advanced Computing Systems Association" (PDF). www.usenix.org. Archived (PDF) from the original on 2022-10-09. Retrieved 3 April 2018.
  31. "Protecting against Pointer Subterfuge (Redux)". msdn.com. Archived from the original on 2009-12-19. Retrieved 3 April 2018.
  32. "PaX: Homepage of the PaX team" . Retrieved 2007-06-03.
  33. "KernelTrap.Org". Archived from the original on 2012-05-29. Retrieved 2007-06-03.
  34. "Openwall Linux kernel patch 2.4.34-ow1". Archived from the original on 2012-02-19. Retrieved 2007-06-03.
  35. "Microsoft Technet: Data Execution Prevention". Archived from the original on 2006-06-22. Retrieved 2006-06-30.
  36. "BufferShield: Prevention of Buffer Overflow Exploitation for Windows" . Retrieved 2007-06-03.
  37. "NGSec Stack Defender". Archived from the original on 2007-05-13. Retrieved 2007-06-03.
  38. "PaX at GRSecurity.net" . Retrieved 2007-06-03.
  39. "The Exploitant - Security info and tutorials" . Retrieved 2009-11-29.
  40. Larochelle, David; Evans, David (13 August 2001). "Statically Detecting Likely Buffer Overflow Vulnerabilities". USENIX Security Symposium. 32.
  41. "Computer Security Technology Planning Study" (PDF). p. 61. Archived from the original (PDF) on 2011-07-21. Retrieved 2007-11-02.
  42. ""A Tour of The Worm" by Donn Seeley, University of Utah". Archived from the original on 2007-05-20. Retrieved 2007-06-03.
  43. "Bugtraq security mailing list archive". Archived from the original on 2007-09-01. Retrieved 2007-06-03.
  44. ""Smashing the Stack for Fun and Profit" by Aleph One" . Retrieved 2012-09-05.
  45. "eEye Digital Security". Archived from the original on 2009-06-20. Retrieved 2007-06-03.
  46. "Microsoft Technet Security Bulletin MS02-039". Microsoft . Archived from the original on 2008-03-07. Retrieved 2007-06-03.
  47. "Hacker breaks Xbox protection without mod-chip". Archived from the original on 2007-09-27. Retrieved 2007-06-03.