
SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance (2406.18118v3)

Published 26 Jun 2024 in cs.CR and cs.CL

Abstract: As the development of LLMs rapidly advances, securing these models effectively without compromising their utility has become a pivotal area of research. However, current defense strategies against jailbreak attacks (i.e., efforts to bypass security protocols) often suffer from limited adaptability, restricted general capability, and high cost. To address these challenges, we introduce SafeAligner, a methodology implemented at the decoding stage to fortify defenses against jailbreak attacks. We begin by developing two specialized models: the Sentinel Model, which is trained to foster safety, and the Intruder Model, designed to generate riskier responses. SafeAligner leverages the disparity in security levels between the responses from these models to differentiate between harmful and beneficial tokens, effectively guiding the safety alignment by altering the output token distribution of the target model. Extensive experiments show that SafeAligner can increase the likelihood of beneficial tokens, while reducing the occurrence of harmful ones, thereby ensuring secure alignment with minimal loss to generality.
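To make the decoding-stage mechanism concrete, here is a minimal sketch of response-disparity guidance under the assumption that it works like contrastive logit steering: the gap between the Sentinel and Intruder models' next-token distributions is used to shift the target model's logits. The function name `disparity_guided_logits`, the blending rule, and the strength parameter `alpha` are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of SafeAligner-style disparity guidance at decode time.
# The update rule and `alpha` are assumptions; the paper only states that the
# disparity between a safety-trained Sentinel and a risk-prone Intruder is
# used to alter the target model's output token distribution.
import torch
import torch.nn.functional as F

def disparity_guided_logits(target_logits, sentinel_logits, intruder_logits, alpha=1.0):
    """Shift the target model's next-token logits toward tokens the Sentinel
    favors and away from tokens the Intruder favors."""
    # Log-probability disparity highlights tokens whose likelihood differs
    # most between the safe and risky models.
    disparity = (F.log_softmax(sentinel_logits, dim=-1)
                 - F.log_softmax(intruder_logits, dim=-1))
    # alpha controls how strongly the safety signal overrides the target model.
    return target_logits + alpha * disparity

# Toy usage with random logits standing in for the three models' outputs.
vocab_size = 8
torch.manual_seed(0)
target = torch.randn(vocab_size)
sentinel = torch.randn(vocab_size)
intruder = torch.randn(vocab_size)
guided = disparity_guided_logits(target, sentinel, intruder, alpha=0.5)
print("guided next-token id:", torch.argmax(guided).item())
```

In this reading, tokens the Sentinel rates as safer gain probability mass while tokens the Intruder prefers lose it, which matches the abstract's claim of increasing beneficial tokens and suppressing harmful ones with minimal change to the target model elsewhere.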

Authors (13)
  1. Caishuang Huang (13 papers)
  2. Wanxu Zhao (3 papers)
  3. Rui Zheng (78 papers)
  4. Huijie Lv (3 papers)
  5. Shihan Dou (46 papers)
  6. Sixian Li (12 papers)
  7. Xiao Wang (507 papers)
  8. Enyu Zhou (12 papers)
  9. Junjie Ye (66 papers)
  10. Yuming Yang (14 papers)
  11. Tao Gui (127 papers)
  12. Qi Zhang (784 papers)
  13. Xuanjing Huang (287 papers)
Citations (4)