
Instantly Obsoleting the Address-code Associations: A New Principle for Defending Advanced Code Reuse Attack (1507.02786v1)

Published 10 Jul 2015 in cs.CR

Abstract: Fine-grained Address Space Randomization has been considered an effective protection against code reuse attacks such as ROP/JOP. However, it employs only a one-time randomization, a limitation exploited by recent just-in-time ROP and side-channel ROP, which collect gadgets on the fly and dynamically compile them for malicious purposes. To defeat these advanced code reuse attacks, we propose a new defense principle: instantly obsoleting the address-code associations. We instantiate this principle with a novel technique called virtual space page table remapping, implemented in a system named CHAMELEON, which periodically re-randomizes the locations of code pages on the fly. A set of techniques is proposed to achieve this goal, including iterative instrumentation, which instruments a to-be-protected binary to produce a re-randomization-compatible binary; runtime virtual page shuffling; and function reordering and instruction rearranging optimizations. We have tested CHAMELEON with over a hundred binary programs. Our experiments show that CHAMELEON defeats all of our tested exploits, both by preventing the exploit from gathering sufficient gadgets and by blocking gadget execution. The re-randomization interval is a tunable parameter and can be set as short as 100ms, 10ms, or 1ms; at these settings, CHAMELEON introduces an average performance overhead of 11.1%, 12.1%, and 12.9%, respectively.
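The core idea, moving live code to a fresh virtual address so that any previously leaked code pointer goes stale, can be illustrated with a minimal user-space sketch. This is not CHAMELEON's implementation (the paper's system remaps pages in the page table of an instrumented binary); the byte stub, the `remap_code` helper, and the mmap-based relocation below are illustrative assumptions that show only the principle.

```c
/* Minimal sketch, assuming x86-64 Linux: periodically relocate a code
 * page to a new random virtual address, so an address leaked in an
 * earlier round no longer maps to code. Not the paper's page-table
 * remapping technique; a user-space approximation of the principle. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PAGE_SZ 4096

/* Position-independent stub we relocate: int f(int x) { return x + 1; }
 * System V x86-64: argument in edi, result in eax. */
static const uint8_t stub[] = {
    0x8d, 0x47, 0x01,   /* lea eax, [rdi+1] */
    0xc3                /* ret              */
};

typedef int (*fn_t)(int);

/* Copy the stub into a freshly mmap'd page (the kernel picks a new
 * address each time), make it executable, and unmap the old page so
 * the stale address-code association is instantly obsoleted. */
static void *remap_code(void *old_page) {
    void *page = mmap(NULL, PAGE_SZ, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return NULL;
    memcpy(page, stub, sizeof(stub));
    if (mprotect(page, PAGE_SZ, PROT_READ | PROT_EXEC) != 0)  /* W^X */
        return NULL;
    if (old_page)
        munmap(old_page, PAGE_SZ);  /* the leaked address now dies */
    return page;
}

int main(void) {
    void *page = NULL;
    for (int round = 0; round < 3; round++) {
        page = remap_code(page);
        fn_t f = (fn_t)page;
        printf("round %d: code at %p, f(41) = %d\n", round, page, f(41));
        usleep(100 * 1000);  /* 100ms, one of the paper's tested intervals */
    }
    return 0;
}
```

Each round prints a different code address while the function keeps working; a gadget address an attacker collected in round 0 points at unmapped memory by round 1, which is the sense in which re-randomization both starves gadget collection and blocks gadget execution.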

Authors (4)
  1. Ping Chen (123 papers)
  2. Jun Xu (397 papers)
  3. Jun Wang (990 papers)
  4. Peng Liu (372 papers)
Citations (2)