Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback (2503.17682v2)
Abstract: Multimodal LLMs (MLLMs) are essential for building general-purpose AI assistants, yet they pose increasing safety risks. How can we ensure the safety alignment of MLLMs to prevent undesired behaviors? Going further, it is critical to explore how to fine-tune MLLMs so that they preserve their capabilities while meeting safety constraints. Fundamentally, this challenge can be formulated as a min-max optimization problem. However, existing datasets have not yet disentangled single preference signals into explicit safety constraints, hindering systematic investigation in this direction. Moreover, it remains an open question whether such constraints can be effectively incorporated into the optimization process for multimodal models. In this work, we present Safe RLHF-V, the first multimodal safety alignment framework. The framework consists of: $\mathbf{(I)}$ BeaverTails-V, the first open-source dataset featuring dual preference annotations for helpfulness and safety, supplemented with multi-level safety labels (minor, moderate, severe); $\mathbf{(II)}$ Beaver-Guard-V, a multi-level guardrail system that proactively defends against unsafe queries and adversarial attacks; applying the guard model over five rounds of filtering and regeneration improves the precursor model's overall safety by an average of 40.9%; and $\mathbf{(III)}$ building on the dual preference data, the first exploration of multimodal safety alignment as a constrained optimization problem. Experimental results demonstrate that Safe RLHF-V effectively improves both model helpfulness and safety: it enhances model safety by 34.2% and helpfulness by 34.3%.
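The min-max formulation referenced in the abstract follows the standard Safe RLHF recipe of its text-only precursor; a minimal sketch, assuming a learned reward model $R_\phi$ fit on the helpfulness preferences, a cost model $C_\psi$ fit on the safety preferences, and a safety budget $d$ (all notation here is illustrative, not taken from the paper):

$$
\max_{\theta} \; \min_{\lambda \ge 0} \;\; \mathbb{E}_{(x, v) \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x, v)} \big[ R_\phi(y \mid x, v) \big] \;-\; \lambda \Big( \mathbb{E}_{(x, v) \sim \mathcal{D},\, y \sim \pi_\theta(\cdot \mid x, v)} \big[ C_\psi(y \mid x, v) \big] - d \Big)
$$

Here $\pi_\theta$ is the MLLM policy conditioned on a text prompt $x$ and an image $v$; the Lagrange multiplier $\lambda$ is adapted to keep the expected cost below $d$, trading helpfulness off against the safety constraint.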
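The five-round filtering-and-regeneration procedure attributed to Beaver-Guard-V can be read as a reject-and-retry loop around the precursor model; a minimal sketch in Python, where `policy_generate`, `guard_classify`, and the refusal fallback are hypothetical stand-ins for interfaces the abstract does not specify:

```python
# Hedged sketch of a guard-model filtering-and-regeneration loop.
# `policy_generate(prompt, image)` -> str and
# `guard_classify(prompt, image, response)` -> severity label are
# assumed interfaces, not the paper's actual API.

SEVERITY_LEVELS = ("safe", "minor", "moderate", "severe")

def guarded_generate(prompt, image, policy_generate, guard_classify, max_rounds=5):
    """Return a response, regenerating up to `max_rounds` times whenever
    the multi-level guard model flags the output as unsafe."""
    response = policy_generate(prompt, image)
    for _ in range(max_rounds):
        severity = guard_classify(prompt, image, response)  # one of SEVERITY_LEVELS
        if severity == "safe":
            return response
        # Flagged at any level (minor/moderate/severe): discard and retry.
        response = policy_generate(prompt, image)
    # All rounds exhausted: fall back to a refusal rather than an unsafe answer.
    return "I can't help with that request."
```

Under this reading, the reported 40.9% average safety gain comes purely from inference-time filtering, before any fine-tuning of the policy itself.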
- Jiaming Ji
- Xinyu Chen
- Rui Pan
- Han Zhu
- Conghui Zhang
- Jiahao Li
- Donghai Hong
- Boyuan Chen
- Jiayi Zhou
- Kaile Wang
- JunTao Dai
- Chi-Min Chan
- Sirui Han
- Yike Guo
- Yaodong Yang
- Yida Tang