NOP slide

In computer security, a NOP slide, NOP sled or NOP ramp is a sequence of NOP (no-operation) instructions meant to "slide" the CPU's instruction execution flow to its final, desired destination whenever the program branches to a memory address anywhere on the slide.

The technique is commonly used in software exploits, where it directs program execution when a branch instruction target is not known precisely. Other notable applications include defensive programming strategies such as EMC-aware programming.

While a NOP slide will function if it consists of a list of canonical NOP instructions, the presence of such code is suspicious and easy to detect automatically. For this reason, practical NOP slides are often composed of non-canonical NOP instructions (such as moving a register to itself or adding zero[1]), or of instructions that affect program state only inconsequentially, which makes them much more difficult to identify.
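For illustration, the following Python snippet collects a few 32-bit x86 encodings of such "effective NOPs". The register choices are arbitrary assumptions, and one caveat applies: adding zero with add updates the flags register, so the flags must also be dead at that point for the substitution to be safe.

    # Illustrative 32-bit x86 encodings that behave as effective NOPs, assuming
    # the named registers (and, for add, the flags) are not live inside the sled.
    EFFECTIVE_NOPS = {
        b"\x90":         "nop            ; the canonical single-byte NOP",
        b"\x89\xc0":     "mov eax, eax   ; a register moved to itself",
        b"\x87\xdb":     "xchg ebx, ebx  ; a register exchanged with itself",
        b"\x83\xc0\x00": "add eax, 0     ; adding zero (note: updates EFLAGS)",
        b"\x8d\x36":     "lea esi, [esi] ; reloads esi with its own value",
    }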

A NOP-sled is the oldest and most widely known technique for exploiting stack buffer overflows. [2] It solves the problem of finding the exact address of the buffer by effectively increasing the size of the target area. To do this, much larger sections of the stack are corrupted with the no-op machine instruction. At the end of the attacker-supplied data, after the no-op instructions, the attacker places an instruction to perform a relative jump to the top of the buffer where the shellcode is located. This collection of no-ops is referred to as the "NOP-sled" because if the return address is overwritten with any address within the no-op region of the buffer, execution will "slide" down the no-ops until it is redirected to the actual malicious code by the jump at the end. The attacker therefore only needs to guess an address somewhere within the large NOP-sled, rather than the exact address of the comparatively small shellcode. [3]
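A minimal sketch of this layout in Python follows; every size and address below is hypothetical and target-specific, and the stand-in shellcode is a harmless run of int3 breakpoints:

    import struct

    # Hypothetical, target-specific values.
    GUESS = 0xbffff3a0                    # guessed address somewhere inside the sled
    shellcode = b"\xcc" * 32              # stand-in payload (int3 breakpoints)
    sled = b"\x90" * 200                  # run of canonical NOPs filling the stack

    # jmp rel32 (opcode 0xE9) at the end, jumping back over the sled and the
    # shellcode to the buffer top; rel32 is relative to the next instruction.
    distance = len(shellcode) + len(sled) + 5        # 5 = length of the jmp itself
    jump_back = b"\xe9" + struct.pack("<i", -distance)

    payload = shellcode + sled + jump_back
    payload += struct.pack("<I", GUESS) * 4          # overwrite the saved return address

If the corrupted return address lands anywhere in the sled, execution slides down to the jump, which transfers control back to the shellcode at the top of the buffer.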

Because of the popularity of this technique, many vendors of intrusion prevention systems search for this pattern of no-op machine instructions in an attempt to detect shellcode in use. A NOP-sled does not necessarily contain only traditional no-op machine instructions; any instruction that does not corrupt the machine state to the point where the shellcode will not run can be used in place of the hardware-assisted no-op. As a result, it has become common practice for exploit writers to compose the no-op sled with randomly chosen instructions that have no real effect on the shellcode's execution. [4]
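A sled generator along these lines can be sketched as follows; the instruction pool is a small assumed sample (the encodings listed earlier), and the loop takes care never to truncate a multi-byte instruction at the end of the sled:

    import random

    # Assumed pool of effective NOPs (32-bit x86); see the encodings above.
    POOL = [b"\x90", b"\x89\xc0", b"\x87\xdb", b"\x83\xc0\x00", b"\x8d\x36"]

    def polymorphic_sled(length):
        """Return `length` bytes of randomly mixed effective NOPs."""
        sled = b""
        while len(sled) < length:
            fits = [i for i in POOL if len(sled) + len(i) <= length]
            sled += random.choice(fits)   # the 1-byte NOP always fits, so
        return sled                       # the loop is guaranteed to finish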

While this method greatly improves the chances that an attack will succeed, it is not without problems. Exploits using this technique still rely on some amount of luck to guess an offset on the stack that falls within the NOP-sled region. [5] An incorrect guess will usually crash the target program and could alert the system administrator to the attacker's activities. Another problem is that a NOP-sled large enough to be of any use requires a much larger amount of memory than the shellcode alone. This can be a problem when the allocated size of the affected buffer is too small and the current depth of the stack is shallow (i.e., there is not much space between the end of the current stack frame and the start of the stack). Despite its problems, the NOP-sled is often the only method that will work for a given platform, environment, or situation, and as such it remains an important technique.

The entropy of a NOP slide depends on the constraints placed on it. If it can be determined that certain registers are not in use (that is, they will be set to a known value before their next use), instructions that manipulate them arbitrarily may be used in the slide. Additionally, if the alignment of both the NOP slide and the instruction pointer is deterministic, multi-byte instructions can be used without regard to the results of unaligned execution. If the input providing the attack vector into which the NOP slide and payload are to be introduced is filtered (for example, accepting only printable characters), the field of possible instructions is limited. While instructions that are part of an architecture extension (such as SSE) may frequently be irrelevant to program state, they cannot be used in a NOP slide targeting a machine on which the extension is not supported.
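As a sketch of such constraint filtering: the single-byte x86 inc/dec opcodes 0x40 through 0x4F all fall within printable ASCII, unlike the canonical 0x90 NOP. The snippet below keeps only printable candidates and deliberately drops anything that modifies the stack pointer; the remaining register side effects are assumed tolerable under the "not in use" condition above.

    # Single-byte 32-bit x86 opcodes: 0x40+r is "inc r32", 0x48+r is "dec r32".
    REGS = ["eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"]
    CANDIDATES = {bytes([0x40 + i]): "inc " + r for i, r in enumerate(REGS)}
    CANDIDATES.update({bytes([0x48 + i]): "dec " + r for i, r in enumerate(REGS)})
    CANDIDATES[b"\x90"] = "nop"           # canonical NOP, shown for comparison

    printable_pool = {
        b: m for b, m in CANDIDATES.items()
        if 0x20 <= b[0] <= 0x7e           # printable ASCII only (drops 0x90)
        and "esp" not in m                # never touch the stack pointer
    }
    # A classic printable-only sled is b"A" * 200, i.e. a long run of "inc ecx".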

Detection

Many techniques exist to detect the presence of NOP slides in memory. For example, in 2005, Greek researchers found that a NOP slide can be detected by checking whether a memory image contains a lengthy sequence of bytes such that decoding from each starting offset within the sequence is valid and leads execution to the same place. [6]
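The core of that check can be sketched with a disassembler. The simplified version below uses the Capstone Python bindings and flags a byte window as a potential sled when decoding from every starting offset runs validly to the window's end; the real STRIDE detector also follows branches and handles window boundaries, which this sketch omits.

    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    md = Cs(CS_ARCH_X86, CS_MODE_32)

    def decodes_to_end(buf, start):
        """Linearly disassemble from `start`; True if decoding stays valid
        all the way to the end of the buffer."""
        pos = start
        while pos < len(buf):
            insn = next(md.disasm(buf[pos:], pos, count=1), None)
            if insn is None:              # invalid opcode: execution would fault
                return False
            pos += insn.size
        return True

    def looks_like_sled(buf, min_len=30):
        """Every offset of a sufficiently long window must decode to the end."""
        return len(buf) >= min_len and all(
            decodes_to_end(buf, i) for i in range(len(buf)))

    # Example: a run of canonical NOPs is trivially flagged.
    assert looks_like_sled(b"\x90" * 40)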

References

  1. corelanc0d3r (2011-12-31). "Exploit writing tutorial part 11: Heap Spraying Demystified". Corelan Team. Archived from the original on 2015-04-25. Retrieved 2014-01-15.
  2. Vangelis (2004-12-08). "Stack-based Overflow Exploit: Introduction to Classical and Advanced Overflow Technique". Wowhacker via Neworder. Archived from the original on 2007-08-18.
  3. Balaban, Murat. "Buffer Overflows Demystified". Enderunix.org.
  4. Akritidis, P.; Markatos, Evangelos P.; Polychronakis, M.; Anagnostakis, Kostas D. (2005). "STRIDE: Polymorphic Sled Detection through Instruction Sequence Analysis" (PDF). Proceedings of the 20th IFIP International Information Security Conference (IFIP/SEC 2005). Archived from the original (PDF) on 2012-09-01. Retrieved 2012-03-04.
  5. Klein, Christian (September 2004). "Buffer Overflow" (PDF). Archived from the original (PDF) on 2007-09-28.
  6. Akritidis, P.; Markatos, E. P.; Polychronakis, M.; Anagnostakis, K. (2005). "STRIDE: Polymorphic Sled Detection Through Instruction Sequence Analysis". Security and Privacy in the Age of Ubiquitous Computing. IFIP Advances in Information and Communication Technology. Vol. 181. pp. 375–391. doi:10.1007/0-387-25660-1_25. ISBN 978-0-387-25658-0.