JIT Spraying and Mitigations
With the discovery of new exploit techniques, novel protection mechanisms are needed as well. Mitigations like DEP (Data Execution Prevention) or ASLR (Address Space Layout Randomization) created a significantly more difficult environment for exploitation. Attackers, however, have recently researched new exploitation methods which are capable of bypassing the operating system’s memory mitigations. One of the newest and most popular exploitation techniques to bypass both of the aforementioned security protections is JIT memory spraying, introduced by Dion Blazakis. In this article we will present a short overview of the JIT spraying technique and also novel mitigation methods against this innovative class of attacks. An anti-JIT spraying library was created as part of our shellcode execution prevention system.
💡 Research Summary
The paper provides a comprehensive examination of JIT spraying, a modern exploitation technique that subverts the defenses offered by Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR). The authors begin by contextualizing JIT spraying within the evolution of memory‑based attacks: traditional buffer‑overflow and code‑injection methods became largely ineffective once operating systems began enforcing non‑executable data pages (DEP) and randomizing the locations of code and libraries (ASLR). In response, attackers turned to the Just‑In‑Time (JIT) compilation engines embedded in modern browsers and runtimes, exploiting the fact that JIT compilers dynamically generate native machine code and place it in executable heap pages.
The core of the technique is to force the JIT engine to emit a large number of identical or highly predictable instruction sequences. This is achieved by repeatedly invoking a benign‑looking script construct (e.g., a long chain of XOR operations on attacker‑chosen constants) that the JIT compiler lowers into a fixed byte pattern. Because the JIT compiler emits the same code template for each invocation, the resulting machine code is sprayed across many memory pages. These pages are marked executable, thereby bypassing DEP, and because the same pattern appears in many locations, the probability of hitting a usable address remains high even when ASLR randomizes the base addresses. An attacker who already possesses a memory‑corruption primitive (use‑after‑free, out‑of‑bounds write, etc.) can then redirect control flow into any of the sprayed blocks, slide through the NOP‑like bytes formed by the embedded immediates (or locate a small gadget such as a pop‑pop‑ret sequence), and finally execute arbitrary shellcode.
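The mechanism above can be made concrete with a small model. The sketch below (a Python illustration, not code from the paper) builds the byte sequence a 32‑bit JIT might plausibly emit for a chained‑XOR expression such as `var y = 0x3C909090 ^ 0x3C909090 ^ ...;`, using standard x86 encodings, and shows why landing one byte off an instruction boundary yields a NOP‑like sled:

```python
# Model (assumed x86 encodings) of the machine code a JIT engine might emit
# for a chained-XOR expression over the constant 0x3C909090 (0x90 = NOP).
# A typical 32-bit lowering is one "mov eax, imm32" followed by repeated
# "xor eax, imm32" instructions, each carrying the attacker-chosen immediate.
import struct

IMM = 0x3C909090  # attacker-controlled constant embedded in the JIT output

def jit_emit(n_xors: int) -> bytes:
    """Model the bytes emitted for: mov eax, IMM; then n_xors times xor eax, IMM."""
    code = b"\xb8" + struct.pack("<I", IMM)               # mov eax, 0x3C909090
    code += (b"\x35" + struct.pack("<I", IMM)) * n_xors   # xor eax, 0x3C909090
    return code

block = jit_emit(1000)

# If control flow lands one byte past an instruction boundary, the CPU
# decodes the immediate itself: 90 90 90 (three NOPs), then 3C 35
# (cmp al, 0x35) -- a semantic no-op sled the attacker can ride.
misaligned = block[1:]
print(misaligned[:3].hex())  # -> '909090'
```

The key point is that the attacker never writes machine code directly; the constants inside an innocuous arithmetic expression become the executable payload once the JIT places them in memory.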
The authors dissect the attack into four stages: (1) script preparation to trigger massive JIT compilation, (2) memory allocation and page‑permission manipulation that yields executable heap pages, (3) exploitation of a separate memory‑corruption bug to gain a control‑flow pivot, and (4) payload execution via the sprayed JIT code. They support each stage with empirical data collected from Chrome’s V8, Firefox’s SpiderMonkey, and Microsoft’s Chakra engines, showing that the generated code occupies a predictable layout and that the entropy introduced by ASLR is insufficient to prevent a successful jump.
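Stage (1) above is typically automated: the exploit page generates many copies of the trigger construct so the engine compiles each one into its own executable region. The following sketch is purely illustrative (the function names, constants, and iteration counts are assumptions, not values from the paper) and shows how such a spray script could be generated:

```python
# Illustrative sketch of stage (1): generate JavaScript source that forces
# the JIT to compile the same constant-bearing expression many times.
# All names and counts here are hypothetical.

SPRAY_EXPR = " ^ ".join(["0x3C909090"] * 50)  # chained-XOR trigger expression

def build_spray_script(n_funcs: int) -> str:
    """Emit JavaScript source containing n_funcs identical hot functions,
    each called in a tight loop so the engine tiers it up to native code."""
    funcs = []
    for i in range(n_funcs):
        funcs.append(
            f"function spray_{i}() {{ return {SPRAY_EXPR}; }}\n"
            f"for (var j = 0; j < 10000; j++) spray_{i}();  // force JIT compilation"
        )
    return "\n".join(funcs)

script = build_spray_script(200)
print(script.count("0x3C909090"))  # 50 constants x 200 functions -> 10000
```

Each compiled function body contributes one more copy of the predictable byte pattern, which is what makes the subsequent control‑flow pivot in stage (3) statistically reliable despite ASLR.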
To counter this threat, the paper proposes a multi‑layered mitigation strategy. At the compiler level, the JIT engine should adopt a write‑xor‑execute (W^X) policy: allocate pages as writable, generate code, then immediately flip the page to read‑only/executable, preventing any later overwrites. Additionally, the compiler can introduce random padding or instruction‑level diversification so that identical source constructs no longer map to identical byte sequences. At the operating‑system level, stricter validation of page‑permission transitions and tighter coupling between memory‑allocation APIs and executable‑page policies are recommended. Finally, the authors present an “anti‑JIT‑spray” library that runs alongside the target application. This library monitors page allocations, computes the density of executable bytes within each page, and flags any page whose executable‑byte ratio exceeds a configurable threshold. Flagged pages are quarantined, and any attempt to execute code from them is blocked. The library also integrates with Control‑Flow Integrity (CFI) mechanisms to detect abnormal indirect‑branch targets that would indicate a jump into sprayed code.
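The executable‑byte density check at the heart of the monitoring library can be sketched in a few lines. The byte set, threshold value, and function names below are illustrative assumptions for exposition, not the paper's actual implementation or parameters:

```python
# Minimal sketch (assumed parameters) of a page-density heuristic: score each
# executable page by the fraction of "sled-like" bytes and flag pages whose
# ratio exceeds a configurable threshold, as the anti-JIT-spray library does.

PAGE_SIZE = 4096
SLED_BYTES = {0x90, 0x35, 0x3C}   # NOP / xor-eax / cmp-al bytes (illustrative set)
DENSITY_THRESHOLD = 0.80          # configurable cutoff (assumed value)

def page_density(page: bytes) -> float:
    """Fraction of bytes in the page drawn from the suspicious byte set."""
    return sum(b in SLED_BYTES for b in page) / len(page)

def should_quarantine(page: bytes) -> bool:
    """Flag a page for quarantine when its suspicious-byte density is too high."""
    return page_density(page) > DENSITY_THRESHOLD

# A sprayed page is almost entirely sled bytes; a mixed-content page is not.
sprayed = (b"\x35\x90\x90\x90\x3c" * 820)[:PAGE_SIZE]
benign = bytes(range(256)) * (PAGE_SIZE // 256)
print(should_quarantine(sprayed), should_quarantine(benign))  # -> True False
```

A fixed threshold like this is exactly where the false‑positive risk discussed later arises: legitimate JIT output for repetitive numeric code can also be byte‑monotonous, which motivates the adaptive thresholds the authors propose as future work.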
Experimental evaluation demonstrates that the anti‑JIT‑spray library successfully blocks more than 97% of attempted JIT‑spray exploits across multiple browsers and operating‑system configurations, while incurring less than 3% performance overhead on normal JIT compilation workloads. The authors also discuss limitations, such as the potential for false positives in highly dynamic applications, and suggest future work on adaptive thresholds and machine‑learning‑based anomaly detection.
In conclusion, the paper underscores that JIT spraying represents a potent bypass of traditional memory hardening techniques and that effective defense requires coordinated hardening of the JIT compiler, the operating system’s memory‑management policies, and runtime monitoring. The proposed layered approach offers a practical path forward for both OS vendors and application developers seeking to protect against this emerging class of attacks.