CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration (2409.11365v2)

Published 17 Sep 2024 in cs.CL

Abstract: The deployment of multimodal LLMs (MLLMs) has demonstrated remarkable success in engaging in conversations involving visual inputs, thanks to the superior power of LLMs. These MLLMs are typically built upon LLMs, with an image encoder that processes images into the token embedding space of the LLM. However, the integration of the visual modality introduces a unique vulnerability: the MLLM becomes susceptible to malicious visual inputs and prone to generating sensitive or harmful responses, even though the underlying LLM has been trained on textual datasets to align with human values. In this paper, we first raise the question: ``Do MLLMs possess safety-awareness against malicious image inputs?" We find that after adding a principle specifying the safety requirement to the input of the MLLM, the model's safety awareness is boosted. This phenomenon verifies the existence of the MLLM's safety-awareness against image inputs; it is merely weakened by the modality gap. We then introduce a simple yet effective technique, termed CoCA, which amplifies the safety-awareness of the MLLM by calibrating its output distribution. Our proposed strategy helps the model reclaim its original safety-awareness without losing its original capabilities. We verify the effectiveness of our approach on both multimodal safety and understanding benchmarks.
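The calibration described in the abstract can be sketched as a contrastive adjustment of next-token logits: the model is scored once with the safety principle in the prompt and once without, and the difference between the two is scaled to amplify the safety signal. This is an illustrative sketch, not the authors' exact formulation; the function name `coca_calibrate` and the scaling parameter `alpha` are assumptions introduced here.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def coca_calibrate(logits_plain, logits_with_principle, alpha=1.0):
    """Shift the output distribution toward the behavior induced by the
    safety principle (illustrative sketch). `alpha` scales the influence:
    alpha=0 recovers the plain model, alpha=1 matches prompting with the
    principle, and alpha>1 amplifies the safety-awareness signal."""
    delta = logits_with_principle - logits_plain
    return logits_plain + alpha * delta

# Toy example with a two-token vocabulary; index 1 stands for a harmful token.
plain = np.array([0.0, 2.0])            # plain prompt favors the harmful token
with_principle = np.array([1.0, 0.5])   # the principle shifts mass to the safe token

calibrated = coca_calibrate(plain, with_principle, alpha=2.0)
probs = softmax(calibrated)             # amplified: safe token now dominates
```

With `alpha > 1`, the calibrated distribution leans further toward the principle-conditioned behavior than prompting alone, which matches the abstract's claim that the prompt-level safety signal exists but is weakened by the modality gap.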

Authors (8)
  1. Jiahui Gao (25 papers)
  2. Renjie Pi (37 papers)
  3. Tianyang Han (6 papers)
  4. Han Wu (124 papers)
  5. Lanqing Hong (72 papers)
  6. Lingpeng Kong (134 papers)
  7. Xin Jiang (242 papers)
  8. Zhenguo Li (195 papers)
Citations (1)