
What Makes and Breaks Safety Fine-tuning? A Mechanistic Study (2407.10264v3)

Published 14 Jul 2024 in cs.LG and cs.CL

Abstract: Safety fine-tuning helps align LLMs with human preferences for their safe deployment. To better understand the underlying factors that make models safe via safety fine-tuning, we design a synthetic data generation framework that captures salient aspects of an unsafe input by modeling the interaction between the task the model is asked to perform (e.g., "design") and the specific concept the task is to be performed upon (e.g., a "cycle" vs. a "bomb"). Using this, we investigate three well-known safety fine-tuning methods -- supervised safety fine-tuning, direct preference optimization, and unlearning -- and provide significant evidence that these methods minimally transform MLP weights so as to specifically align unsafe inputs into the null space of those weights. This yields a clustering of inputs based on whether the model deems them safe or not. Correspondingly, when an adversarial input (e.g., a jailbreak) is provided, its activations are closer to those of safe samples, leading the model to process such an input as if it were safe. We validate our findings, wherever possible, on real-world models -- specifically, Llama-2 7B and Llama-3 8B.
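The null-space claim in the abstract can be probed numerically. Below is a minimal NumPy sketch of one such probe: it compares the mean output norm ||Wx|| of an MLP projection before and after safety fine-tuning, separately for safe and unsafe activations. All matrices and activations here are random stand-ins, not the authors' code or data; in practice you would extract the corresponding weights and hidden states from a model such as Llama-2 7B.

```python
import numpy as np

# Hypothetical setup: W_base and W_safe are the same MLP projection matrix
# before and after safety fine-tuning; acts_safe / acts_unsafe are hidden
# activations for safe and unsafe prompts (one row per sample). All values
# here are random placeholders for illustration only.
rng = np.random.default_rng(0)
d_in, d_out, n = 64, 64, 100
W_base = rng.normal(size=(d_out, d_in))
W_safe = W_base + 0.01 * rng.normal(size=(d_out, d_in))  # stand-in for the fine-tuned weights
acts_safe = rng.normal(size=(n, d_in))
acts_unsafe = rng.normal(size=(n, d_in))

def mean_output_norm(W, acts):
    # Mean L2 norm of W @ x over a batch of activations (rows of `acts`).
    return np.linalg.norm(acts @ W.T, axis=1).mean()

for label, acts in [("safe", acts_safe), ("unsafe", acts_unsafe)]:
    before = mean_output_norm(W_base, acts)
    after = mean_output_norm(W_safe, acts)
    print(f"{label}: ||Wx|| base={before:.3f}  fine-tuned={after:.3f}")

# Under the paper's hypothesis, the unsafe ratio (after / before) should drop
# noticeably, since fine-tuning pushes unsafe inputs toward the null space of
# the updated weights, while the safe ratio stays close to 1.
```

With real model weights, a larger drop in ||Wx|| for unsafe inputs than for safe ones would be consistent with the paper's finding that the fine-tuning update aligns unsafe activations with the weights' null space.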

Authors (7)
  1. Samyak Jain (19 papers)
  2. Ekdeep Singh Lubana (33 papers)
  3. Tom Joy (6 papers)
  4. Philip H. S. Torr (219 papers)
  5. Amartya Sanyal (35 papers)
  6. Puneet K. Dokania (44 papers)
  7. Kemal Oksuz (14 papers)
Citations (5)