Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations (2406.11801v2)

Published 17 Jun 2024 in cs.CL

Abstract: Ensuring the safe alignment of LLMs with human values is critical as they become integral to applications like translation and question answering. Current alignment methods struggle with dynamic user intentions and complex objectives, making models vulnerable to generating harmful content. We propose Safety Arithmetic, a training-free framework enhancing LLM safety across different scenarios: Base models, supervised fine-tuned (SFT) models, and Edited models. Safety Arithmetic involves Harm Direction Removal to avoid harmful content and Safety Alignment to promote safe responses. Additionally, we present NoIntentEdit, a dataset highlighting edit instances that could compromise model safety if used unintentionally. Our experiments show that Safety Arithmetic significantly improves safety measures, reduces over-safety, and maintains model utility, outperforming existing methods in ensuring safe content generation.

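The abstract names two test-time stages: Harm Direction Removal, which steers model parameters, and Safety Alignment, which steers activations. The sketch below illustrates the general idea of each stage, not the paper's exact procedure: the models, the scaling coefficients (alpha, beta), the layer choice, and the way the safe direction is obtained are all illustrative assumptions.

```python
# Minimal, illustrative sketch of parameter and activation steering in the
# spirit of the two stages the abstract describes. All names and constants
# here are assumptions for illustration, not the paper's method.
import torch

def harm_direction_removal(target, base, harmful, alpha=0.5):
    """Stage 1 sketch: remove a 'harm direction' from the parameters.

    The harm vector is taken, task-arithmetic style, as the parameter
    difference between a harm-exhibiting model and its base; a scaled
    copy is then subtracted from the target model in place."""
    base_sd, harm_sd = base.state_dict(), harmful.state_dict()
    with torch.no_grad():
        for name, param in target.named_parameters():
            harm_vec = harm_sd[name] - base_sd[name]
            param.sub_(alpha * harm_vec)
    return target

def attach_safety_steering(model, layer_idx, safe_direction, beta=2.0):
    """Stage 2 sketch: steer activations toward safety at test time.

    A forward hook adds a scaled 'safe direction' (e.g. a mean difference
    of hidden states on safe vs. unsafe prompts) to one decoder layer's
    output. `model.model.layers` assumes a Llama-style module layout."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + beta * safe_direction.to(hidden.dtype)
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return model.model.layers[layer_idx].register_forward_hook(hook)
```

Under these assumptions, one would apply harm_direction_removal once before inference, attach the hook, and generate as usual; calling .remove() on the returned hook handle restores the unsteered model, which is what makes the whole scheme training-free.
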
Authors (4)
  1. Rima Hazra (21 papers)
  2. Sayan Layek (11 papers)
  3. Somnath Banerjee (22 papers)
  4. Soujanya Poria (138 papers)
Citations (1)