Sigreturn-oriented programming


Sigreturn-oriented programming (SROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security measures such as non-executable memory and code signing. [1] It was presented for the first time at the 35th IEEE Symposium on Security and Privacy in 2014, where it won the best student paper award. [2] The technique rests on the same basic assumption as return-oriented programming (ROP): an attacker controlling the call stack, for example through a stack buffer overflow, is able to influence the control flow of the program through simple instruction sequences called gadgets. The attack works by pushing a forged sigcontext structure [3] onto the call stack and overwriting the original return address with the location of a gadget that allows the attacker to call the sigreturn [4] system call. [5] Often just a single gadget is needed to successfully mount this attack. This gadget may reside at a fixed location, making the attack simple and effective, with a setup generally simpler and more portable than the one needed by plain return-oriented programming. [1]


Sigreturn-oriented programming can be considered a weird machine since it allows code execution outside the original specification of the program. [1]

Background

Sigreturn-oriented programming (SROP) is a technique similar to return-oriented programming (ROP), since it employs code reuse to execute code outside the scope of the original control flow. In this sense, the adversary needs to be able to carry out a stack smashing attack, usually through a stack buffer overflow, to overwrite the return address contained inside the call stack.

Stack hopping exploits

If mechanisms such as data execution prevention are employed, the attacker cannot simply place shellcode on the stack and have the machine execute it by overwriting the return address: with such protections in place, the machine will not execute any code located in memory areas marked as writable and non-executable. The attacker therefore needs to reuse code already present in memory.

Most programs do not contain functions that will allow the attacker to directly carry out the desired action (e.g., obtain access to a shell), but the necessary instructions are often scattered around memory. [6]

Return-oriented programming requires these sequences of instructions, called gadgets, to end with a RET instruction. In this way, the attacker can write a sequence of addresses for these gadgets to the stack, and as soon as a RET instruction in one gadget is executed, the control flow will proceed to the next gadget in the list.
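For illustration, a minimal ROP payload is nothing more than an array of addresses written over the saved return address. The sketch below only shows this layout; every address in it is a hypothetical placeholder that would normally be recovered from the target binary.

    #include <stdint.h>
    #include <stdio.h>

    /* Layout sketch of a minimal ROP chain calling system("/bin/sh").
     * All addresses are hypothetical placeholders, not from a real binary. */
    int main(void)
    {
        uint64_t pop_rdi_ret = 0x0000000000401234; /* hypothetical "pop rdi; ret" gadget */
        uint64_t binsh_str   = 0x0000000000402000; /* hypothetical address of "/bin/sh"  */
        uint64_t system_addr = 0x0000000000401050; /* hypothetical address of system()   */

        /* What the attacker writes starting at the saved return address:
         * each RET instruction pops the next entry and jumps to it. */
        uint64_t chain[] = { pop_rdi_ret, binsh_str, system_addr };

        for (size_t i = 0; i < sizeof chain / sizeof chain[0]; i++)
            printf("stack[%zu] = 0x%016llx\n", i, (unsigned long long)chain[i]);
        return 0;
    }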

Signal handler mechanism

[Figure: stack contents while handling a signal on Linux x86-64, including the sigcontext structure]

This attack is made possible by how signals are handled in most POSIX-like systems. Whenever a signal is delivered, the kernel needs to context switch to the installed signal handler. To do so, the kernel saves the current execution context in a frame on the stack. [5] [6] The structure pushed onto the stack is an architecture-specific variant of the sigcontext structure, which holds various data comprising the contents of the registers at the moment of the context switch. When the execution of the signal handler is completed, the sigreturn() system call is called.
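The saved context is visible from user space: a handler installed with the SA_SIGINFO flag receives a pointer to the ucontext_t stored in this frame. The following minimal sketch assumes x86-64 Linux with glibc; returning from the handler is what triggers the sigreturn() call.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <ucontext.h>

    /* Print the instruction pointer the kernel saved in the signal frame. */
    static void handler(int sig, siginfo_t *info, void *ctx)
    {
        ucontext_t *uc = ctx;
        (void)info;
        printf("signal %d delivered, saved RIP = 0x%llx\n", sig,
               (unsigned long long)uc->uc_mcontext.gregs[REG_RIP]);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGUSR1, &sa, NULL);
        raise(SIGUSR1);   /* on return from the handler, glibc invokes sigreturn() */
        return 0;
    }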

Being able to call the sigreturn syscall therefore means being able to set the contents of all the registers at once, using a single gadget that can be found on most systems. [1]
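As an illustrative sketch only, the structure below is a simplified stand-in for the kernel's architecture-specific sigcontext layout (the real structure has more fields and a different order); an attacker fills it so that sigreturn loads exactly the registers needed for an execve("/bin/sh") system call. All addresses are hypothetical placeholders.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-in for the x86-64 sigcontext; purely illustrative. */
    struct fake_sigcontext {
        uint64_t rax;   /* system call number to load            */
        uint64_t rdi;   /* first argument                        */
        uint64_t rsi;   /* second argument                       */
        uint64_t rdx;   /* third argument                        */
        uint64_t rsp;   /* stack pointer after sigreturn         */
        uint64_t rip;   /* where execution resumes after sigreturn */
    };

    int main(void)
    {
        /* Hypothetical addresses an attacker would have to know or leak. */
        uint64_t binsh_addr     = 0x00007fffffffe000; /* "/bin/sh" string      */
        uint64_t syscall_gadget = 0x0000000000400f00; /* "syscall" instruction */

        struct fake_sigcontext frame = {
            .rax = 59,             /* __NR_execve on x86-64 Linux     */
            .rdi = binsh_addr,     /* pathname                        */
            .rsi = 0,              /* argv = NULL                     */
            .rdx = 0,              /* envp = NULL                     */
            .rsp = 0,              /* irrelevant once execve succeeds */
            .rip = syscall_gadget  /* sigreturn "returns" into a syscall */
        };

        /* In a real exploit this structure is written onto the victim's stack,
         * below a return address that points at a sigreturn gadget. */
        printf("forged frame: rax=%llu, rip=0x%llx\n",
               (unsigned long long)frame.rax, (unsigned long long)frame.rip);
        return 0;
    }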

Differences from ROP

There are several factors that characterize an SROP exploit and distinguish it from a classical return-oriented programming exploit. [7]

First, ROP depends on the available gadgets, which can differ considerably between binaries, making chains of gadgets non-portable. Address space layout randomization (ASLR) makes it hard to use gadgets without an information leak revealing their exact positions in memory.

Although Turing-complete ROP compilers exist, [8] it is usually non-trivial to create a ROP chain. [7]

SROP exploits are usually portable across different binaries with minimal or no effort and allow easily setting the contents of the registers, which could be non-trivial or infeasible for ROP exploits if the needed gadgets are not present. [6] Moreover, SROP requires a minimal number of gadgets and allows constructing effective shellcodes by chaining system calls, as sketched after the table below. These gadgets are always present in memory, and in some cases are always at fixed locations: [7]

List of gadgets for different systems:

OS                    ASLR   Gadget            Memory map   Fixed memory location
Linux i386            Yes    sigreturn         [vdso]       -
Linux < 3.11 ARM      No     sigreturn         [vectors]    0xffff0000
Linux < 3.3 x86-64    No     syscall & return  [vsyscall]   0xffffffffff600000
Linux ≥ 3.3 x86-64    Yes    syscall & return  Libc         -
Linux x86-64          Yes    sigreturn         Libc         -
FreeBSD 9.2 x86-64    No     sigreturn         -            0x7ffffffff000
Mac OS X x86-64       Yes    sigreturn         Libc         -
iOS ARM               Yes    sigreturn         Libsystem    -
iOS ARM               Yes    syscall & return  Libsystem    -
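As mentioned above, chaining system calls with SROP works by making each forged frame resume at a "syscall & return" gadget and by pointing the restored stack pointer at the next sigreturn gadget, so that every RET consumes one more forged frame. The sketch below only builds such a payload layout in a local buffer; the frame structure is a simplified stand-in and all addresses are hypothetical placeholders.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for a signal frame: only the fields relevant to
     * chaining are shown; the real sigcontext holds every register. */
    struct fake_frame {
        uint64_t rip;   /* resume at a "syscall; ret" gadget          */
        uint64_t rsp;   /* after the syscall, RET pops the next entry */
        uint64_t rax;   /* system call number for this step           */
    };

    int main(void)
    {
        uint64_t sigreturn_gadget   = 0x0000000000400e00; /* hypothetical */
        uint64_t syscall_ret_gadget = 0x0000000000400f00; /* hypothetical */
        uint64_t payload_addr       = 0x00007fffffffd000; /* hypothetical address
                                                             of the payload on the
                                                             victim's stack */

        /* Payload layout: [gadget][frame 1][gadget][frame 2] ... Each frame's
         * rsp points at the following gadget entry, so "syscall; ret" jumps
         * back into sigreturn, which then consumes the next forged frame. */
        unsigned char payload[2 * (sizeof(uint64_t) + sizeof(struct fake_frame))];
        size_t off = 0;

        for (int i = 0; i < 2; i++) {
            memcpy(payload + off, &sigreturn_gadget, sizeof sigreturn_gadget);
            off += sizeof sigreturn_gadget;

            struct fake_frame f = { 0 };
            f.rip = syscall_ret_gadget;
            f.rax = 60 + i;                        /* placeholder syscall numbers  */
            f.rsp = payload_addr + off + sizeof f; /* next sigreturn gadget entry  */
            memcpy(payload + off, &f, sizeof f);
            off += sizeof f;
        }
        printf("built a chained payload of %zu bytes\n", off);
        return 0;
    }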

Attacks

Linux

An example of the kind of gadget needed for SROP exploits can always be found in the VDSO memory area on x86-Linux systems:

__kernel_sigreturn proc near
                pop     eax
                mov     eax, 77h
                int     80h             ; LINUX - sys_sigreturn
                nop
                lea     esi, [esi+0]
__kernel_sigreturn endp

On some Linux kernel versions, ASLR can be effectively disabled by setting the stack size limit to unlimited, [9] which leaves the gadget in the vDSO at a predictable address.
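A sketch of how such a launcher could look on an affected kernel (see CVE-2016-3672 in the citation); the target program passed on the command line is an arbitrary choice for illustration.

    #include <sys/resource.h>
    #include <unistd.h>

    /* On affected kernels, an unlimited stack size limit selected a legacy
     * memory layout whose mmap base was not randomized, leaving mappings
     * such as the vDSO at predictable addresses. */
    int main(int argc, char **argv)
    {
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        setrlimit(RLIMIT_STACK, &rl);   /* relax the limit before exec */
        if (argc > 1)
            execv(argv[1], &argv[1]);   /* run the target with the inherited limit */
        return 0;
    }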

For Linux kernels prior to version 3.3, a suitable gadget can also be found inside the vsyscall page, a mechanism to accelerate access to certain system calls often used by legacy programs, which always resides at a fixed location.

Turing-completeness

It is possible to use gadgets to write into the contents of the stack frames, thereby constructing a self-modifying program. Using this technique, a simple virtual machine can be devised and used as the compilation target for a Turing-complete language. An example of such an approach can be found in Bosman's paper, which demonstrates the construction of an interpreter for a language similar to the Brainfuck programming language. The language provides a program counter PC, a memory pointer P, and a temporary register A used for 8-bit addition. This means that complex backdoors or obfuscated attacks can also be devised. [1]

Defenses and mitigations

A number of techniques exist to mitigate SROP attacks, relying on address space layout randomization, canaries and cookies, or shadow stacks.

Address space layout randomization

Address space layout randomization makes it harder to use suitable gadgets by making their locations unpredictable.

Signal cookies

A mitigation for SROP called signal cookies has been proposed. It consists of a way of verifying that the sigcontext structure has not been tampered with, by means of a random cookie XORed with the address of the stack location where it is stored. The sigreturn syscall then just needs to verify the cookie's existence at the expected location, effectively mitigating SROP with a minimal impact on performance. [1] [10]
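A conceptual sketch of the proposed check follows; this is not the actual kernel implementation, and the per-process secret and frame handling are assumptions made for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical per-process secret, chosen randomly at program start. */
    static uint64_t per_process_secret = 0x5ca1ab1edeadbeefULL;

    /* Cookie stored next to the sigcontext when the kernel delivers a signal. */
    static uint64_t make_cookie(const void *frame_addr)
    {
        return per_process_secret ^ (uint64_t)(uintptr_t)frame_addr;
    }

    /* What sigreturn would verify: a forged frame written by an attacker who
     * does not know the secret fails this comparison. */
    static int sigreturn_check(const void *frame_addr, uint64_t stored_cookie)
    {
        return stored_cookie == make_cookie(frame_addr);
    }

    int main(void)
    {
        uint64_t frame[32] = { 0 };              /* stand-in for a signal frame */
        uint64_t cookie = make_cookie(frame);    /* written at delivery time    */
        printf("genuine frame accepted: %d\n", sigreturn_check(frame, cookie));
        printf("forged frame accepted:  %d\n", sigreturn_check(frame, 0));
        return 0;
    }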

Vsyscall emulation

In Linux kernel versions greater than 3.3, the vsyscall interface is emulated, and any attempt to directly execute gadgets in the page will result in an exception. [11] [12]

RAP

Grsecurity is a set of patches for the Linux kernel to harden and improve system security. [13] It includes the so-called Return-Address Protection (RAP) to help protect from code reuse attacks. [14]

CET

In 2016, Intel announced Control-flow Enforcement Technology (CET), which helps mitigate and prevent stack-hopping exploits. CET works by implementing a shadow stack in RAM that contains only return addresses and is protected by the CPU's memory management unit. [15] [16]


Related Research Articles

<span class="mw-page-title-main">Buffer overflow</span> Anomaly in computer security and programming

In programming and information security, a buffer overflow or buffer overrun is an anomaly whereby a program writes data to a buffer beyond the buffer's allocated memory, overwriting adjacent memory locations.

<span class="mw-page-title-main">Shellcode</span> Small piece of code used as a payload to exploit a software vulnerability

In hacking, a shellcode is a small piece of code used as the payload in the exploitation of a software vulnerability. It is called "shellcode" because it typically starts a command shell from which the attacker can control the compromised machine, but any piece of code that performs a similar task can be called shellcode. Because the function of a payload is not limited to merely spawning a shell, some have suggested that the name shellcode is insufficient. However, attempts at replacing the term have not gained wide acceptance. Shellcode is commonly written in machine code.

A heap overflow, heap overrun, or heap smashing is a type of buffer overflow that occurs in the heap data area. Heap overflows are exploitable in a different manner to that of stack-based overflows. Memory on the heap is dynamically allocated at runtime and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage and uses the resulting pointer exchange to overwrite a program function pointer.

<span class="mw-page-title-main">Crash (computing)</span> When a computer program stops functioning properly and self-terminates

In computing, a crash, or system crash, occurs when a computer program such as a software application or an operating system stops functioning properly and exits. On some operating systems or individual applications, a crash reporting service will report the crash and any details relating to it, usually to the developer(s) of the application. If the program is a critical part of the operating system, the entire system may crash or hang, often resulting in a kernel panic or fatal system error.

Uncontrolled format string is a type of software vulnerability discovered around 1989 that can be used in security exploits. Originally thought harmless, format string exploits can be used to crash a program or to execute harmful code. The problem stems from the use of unchecked user input as the format string parameter in certain C functions that perform formatting, such as printf. A malicious user may use the %s and %x format tokens, among others, to print data from the call stack or possibly other locations in memory. One may also write arbitrary data to arbitrary locations using the %n format token, which commands printf and similar functions to write the number of bytes formatted to an address stored on the stack.

In computer security, hardening is usually the process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions; in principle a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, the removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services.

Buffer overflow protection is any of various techniques used during software development to enhance the security of executable programs by detecting buffer overflows on stack-allocated variables, and preventing them from causing program misbehavior or from becoming serious security vulnerabilities. A stack buffer overflow occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, which could lead to program crashes, incorrect operation, or security issues.

Exec Shield is a project started at Red Hat, Inc in late 2002 with the aim of reducing the risk of worm or other automated remote attacks on Linux systems. The first result of the project was a security patch for the Linux kernel that emulates an NX bit on x86 CPUs that lack a native NX implementation in hardware. While the Exec Shield project has had many other components, some people refer to this first patch as Exec Shield.

A "return-to-libc" attack is a computer security attack usually starting with a buffer overflow in which a subroutine return address on a call stack is replaced by an address of a subroutine that is already present in the process executable memory, bypassing the no-execute bit feature and ridding the attacker of the need to inject their own code. The first example of this attack in the wild was contributed by Alexander Peslyak on the Bugtraq mailing list in 1997.

Address space layout randomization (ASLR) is a computer security technique involved in preventing exploitation of memory corruption vulnerabilities. In order to prevent an attacker from reliably jumping to, for example, a particular exploited function in memory, ASLR randomly arranges the address space positions of key data areas of a process, including the base of the executable and the positions of the stack, heap and libraries.

Hacking: The Art of Exploitation

Hacking: The Art of Exploitation (ISBN 1-59327-007-0) is a book by Jon "Smibbs" Erickson about computer security and network security. It was published by No Starch Press in 2003, with a second edition in 2008. All of the examples in the book were developed, compiled, and tested on Gentoo Linux. The book also comes with a CD that contains a Linux environment with all the tools and examples used in the book.

In computer security, executable-space protection marks memory regions as non-executable, such that an attempt to execute machine code in these regions will cause an exception. It makes use of hardware features such as the NX bit, or in some cases software emulation of those features. However, technologies that emulate or supply an NX bit will usually impose a measurable overhead while using a hardware-supplied NX bit imposes no measurable overhead.

In computer security, a NOP slide, NOP sled or NOP ramp is a sequence of NOP (no-operation) instructions meant to "slide" the CPU's instruction execution flow to its final, desired destination whenever the program branches to a memory address anywhere on the slide.

In software, a stack buffer overflow or stack buffer overrun occurs when a program writes to a memory address on the program's call stack outside of the intended data structure, which is usually a fixed-length buffer. Stack buffer overflow bugs are caused when a program writes more data to a buffer located on the stack than what is actually allocated for that buffer. This almost always results in corruption of adjacent data on the stack, and in cases where the overflow was triggered by mistake, will often cause the program to crash or operate incorrectly. Stack buffer overflow is a type of the more general programming malfunction known as buffer overflow. Overfilling a buffer on the stack is more likely to derail program execution than overfilling a buffer on the heap because the stack contains the return addresses for all active function calls.

Return-oriented programming (ROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security defenses such as executable space protection and code signing.

Blind return oriented programming (BROP) is an exploit technique which can successfully create an exploit even if the attacker does not possess the target binary. BROP attacks shown by Bittau et al. have defeated address space layout randomization (ASLR) and stack canaries on 64-bit systems.

Control-flow integrity (CFI) is a general term for computer security techniques that prevent a wide variety of malware attacks from redirecting the flow of execution of a program.

<span class="mw-page-title-main">Kernel page-table isolation</span>

Kernel page-table isolation is a Linux kernel feature that mitigates the Meltdown security vulnerability and improves kernel hardening against attempts to bypass kernel address space layout randomization (KASLR). It works by better isolating user space and kernel space memory. KPTI was merged into Linux kernel version 4.15, and backported to Linux kernels 4.14.11, 4.9.75, and 4.4.110. Windows and macOS released similar updates. KPTI does not address the related Spectre vulnerability.

<span class="mw-page-title-main">Meltdown (security vulnerability)</span> Microprocessor security vulnerability

Meltdown is one of the two original transient execution CPU vulnerabilities. Meltdown affects Intel x86 microprocessors, IBM POWER processors, and some ARM-based microprocessors. It allows a rogue process to read all memory, even when it is not authorized to do so.

<span class="mw-page-title-main">Spectre (security vulnerability)</span> Processor security vulnerability

Spectre refers to one of the two original transient execution CPU vulnerabilities, which involve microarchitectural timing side-channel attacks. These affect modern microprocessors that perform branch prediction and other forms of speculation. On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers. For example, if the pattern of memory accesses performed by such speculative execution depends on private data, the resulting state of the data cache constitutes a side channel through which an attacker may be able to extract information about the private data using a timing attack.

References

  1. Bosman, Erik; Bos, Herbert (2014). "Framing Signals - A Return to Portable Shellcode" (PDF). 2014 IEEE Symposium on Security and Privacy. pp. 243–258. doi:10.1109/SP.2014.23. ISBN 978-1-4799-4686-0. S2CID 6153855. Retrieved 2016-06-16.
  2. "Award Papers of the 2014 IEEE Symposium on Security and Privacy". IEEE security. IEEE Computer Society's Technical Committee on Security and Privacy. Retrieved 2016-06-17.
  3. "Linux Cross Reference - sigcontext.h".
  4. "SIGRETURN(2) - Linux manual page".
  5. 1 2 "Playing with signals: An overview on Sigreturn Oriented Programming" . Retrieved 2016-06-21.
  6. 1 2 3 "Sigreturn-oriented programming and its mitigation" . Retrieved 2016-06-20.
  7. 1 2 3 Bosman, Erik; Bos, Herbert. "Framing Signals: a return to portable shellcode" (PDF).
  8. "ROPC — Turing complete ROP compiler (part 1)".
  9. "CVE-2016-3672 - Unlimiting the stack not longer disables ASLR" . Retrieved 2016-06-20.
  10. "Sigreturn-oriented programming and its mitigation" . Retrieved 2016-06-20.
  11. "On vsyscalls and the vDSO" . Retrieved 2016-06-20.
  12. "Hack.lu 2015 - Stackstuff 150: Why and how does vsyscall emulation work" . Retrieved 2016-06-20.
  13. "Linux Kernel Security (SELinux vs AppArmor vs Grsecurity)".
  14. "RAP: RIP ROP" (PDF). Retrieved 2016-06-20.
  15. "RIP ROP: Intel's cunning plot to kill stack-hopping exploits at CPU level". The Register . Retrieved 2016-06-20.
  16. "Control-Flow-Enforcement technology preview" (PDF).