Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback (2503.17682v2)

Published 22 Mar 2025 in cs.LG and cs.AI

Abstract: Multimodal LLMs (MLLMs) are essential for building general-purpose AI assistants; however, they pose increasing safety risks. How can we ensure safety alignment of MLLMs to prevent undesired behaviors? Going further, it is critical to explore how to fine-tune MLLMs to preserve capabilities while meeting safety constraints. Fundamentally, this challenge can be formulated as a min-max optimization problem. However, existing datasets have not yet disentangled single preference signals into explicit safety constraints, hindering systematic investigation in this direction. Moreover, it remains an open question whether such constraints can be effectively incorporated into the optimization process for multi-modal models. In this work, we present Safe RLHF-V, the first multimodal safety alignment framework. The framework consists of: $\mathbf{(I)}$ BeaverTails-V, the first open-source dataset featuring dual preference annotations for helpfulness and safety, supplemented with multi-level safety labels (minor, moderate, severe); $\mathbf{(II)}$ Beaver-Guard-V, a multi-level guardrail system to proactively defend against unsafe queries and adversarial attacks. Applying the guard model over five rounds of filtering and regeneration enhances the precursor model's overall safety by an average of 40.9%. $\mathbf{(III)}$ Based on the dual preferences, we initiate the first exploration of multi-modal safety alignment within a constrained optimization framework. Experimental results demonstrate that Safe RLHF-V effectively improves both model helpfulness and safety. Specifically, Safe RLHF-V enhances model safety by 34.2% and helpfulness by 34.3%.
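
The abstract frames safety alignment as a constrained (min-max) optimization problem: maximize helpfulness subject to a bound on a safety cost. As a rough illustration of that idea only, not the paper's actual method or code, the sketch below runs a Lagrangian primal-dual loop on a scalar toy problem; the helpfulness and cost functions, learning rates, and cost budget are all hypothetical choices.

```python
# Toy primal-dual (Lagrangian) sketch of constrained alignment:
# maximize helpfulness subject to safety_cost <= COST_BUDGET.
# Everything here is an illustrative assumption, not the paper's models or API.
import numpy as np

COST_BUDGET = 0.1   # allowed expected safety cost (assumption)
LR_PRIMAL = 0.05    # step size for the toy "policy" variable
LR_DUAL = 0.2       # step size for the Lagrange multiplier

def helpfulness(p):
    # Concave toy helpfulness reward: being more permissive helps,
    # with diminishing returns.
    return np.sqrt(p)

def safety_cost(p):
    # Toy safety cost: grows with how permissive the policy is.
    return p

p = 0.5     # toy "policy" parameter (how permissive the model is)
lam = 0.0   # Lagrange multiplier for the safety constraint

for _ in range(2000):
    # Primal ascent on the Lagrangian
    # L(p, lam) = helpfulness(p) - lam * (safety_cost(p) - COST_BUDGET).
    grad_p = 0.5 / np.sqrt(p) - lam          # d/dp of the Lagrangian
    p = float(np.clip(p + LR_PRIMAL * grad_p, 1e-3, 1.0))

    # Dual ascent: raise lam while the constraint is violated,
    # lower it (never below zero) once the cost is within budget.
    lam = max(0.0, lam + LR_DUAL * (safety_cost(p) - COST_BUDGET))

print(f"safety cost:         {safety_cost(p):.3f} (budget {COST_BUDGET})")
print(f"helpfulness reward:  {helpfulness(p):.3f}")
print(f"Lagrange multiplier: {lam:.3f}")
```

In an RLHF-style setting the primal step would instead be a policy-gradient update against a learned reward minus lambda times a learned cost on sampled responses, with the multiplier adjusted from batch-level cost estimates; the toy above keeps only that primal-dual structure.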

Authors (16)
  1. Jiaming Ji (37 papers)
  2. Xinyu Chen (65 papers)
  3. Rui Pan (67 papers)
  4. Han Zhu (50 papers)
  5. Conghui Zhang (1 paper)
  6. Jiahao Li (80 papers)
  7. Donghai Hong (10 papers)
  8. Boyuan Chen (75 papers)
  9. Jiayi Zhou (24 papers)
  10. Kaile Wang (17 papers)
  11. Juntao Dai (21 papers)
  12. Chi-Min Chan (18 papers)
  13. Sirui Han (19 papers)
  14. Yike Guo (144 papers)
  15. Yaodong Yang (169 papers)
  16. Yida Tang (1 paper)