
Calibrated Self-Rewarding Vision Language Models (2405.14622v4)

Published 23 May 2024 in cs.LG, cs.CL, and cs.CV

Abstract: Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. In the reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR enhances performance and reduces hallucinations across ten benchmarks and tasks, achieving substantial improvements over existing methods by 7.62%. Our empirical results are further supported by rigorous theoretical analysis, under mild assumptions, verifying the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR shows compatibility with different vision-language models and the ability to incrementally improve performance through iterative fine-tuning. Our data and code are available at https://github.com/YiyangZhou/CSR.
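The CSR loop described in the abstract (generate candidates, score each with a reward that adds a visual constraint, keep a preference pair for fine-tuning) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the scoring functions, the linear combination, and the weight `alpha` are all assumptions standing in for the step-wise calibrated reward.

```python
# Hypothetical sketch of one CSR curation step: score candidate responses with
# a calibrated reward that mixes the model's own text confidence with a
# visual-alignment term, then keep the best/worst responses as a preference pair.
# All scores and the weighting scheme here are illustrative stand-ins.

def calibrated_reward(text_score, visual_score, alpha=0.5):
    """Blend self-reward with a visual constraint; alpha is an assumed weight."""
    return (1 - alpha) * text_score + alpha * visual_score

def curate_preference_pair(candidates):
    """candidates: list of (response, text_score, visual_score) tuples."""
    scored = [(resp, calibrated_reward(t, v)) for resp, t, v in candidates]
    scored.sort(key=lambda item: item[1], reverse=True)
    # Highest-reward response becomes "chosen", lowest becomes "rejected".
    return {"chosen": scored[0][0], "rejected": scored[-1][0]}

# Toy example with mock scores: the fluent-but-visually-wrong candidate loses
# once the visual term is weighted in.
pair = curate_preference_pair([
    ("a dog on grass", 0.90, 0.80),  # accurate description
    ("a cat indoors", 0.95, 0.20),   # plausible text, contradicts the image
    ("an animal", 0.50, 0.50),       # vague
])
print(pair["chosen"])    # "a dog on grass"
print(pair["rejected"])  # "an animal"
```

The curated `{"chosen", "rejected"}` pairs would then feed a preference-optimization fine-tuning step, and the whole process repeats iteratively.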

Authors (10)
  1. Yiyang Zhou (33 papers)
  2. Zhiyuan Fan (26 papers)
  3. Dongjie Cheng (4 papers)
  4. Sihan Yang (11 papers)
  5. Zhaorun Chen (28 papers)
  6. Chenhang Cui (14 papers)
  7. Xiyao Wang (26 papers)
  8. Yun Li (154 papers)
  9. Linjun Zhang (70 papers)
  10. Huaxiu Yao (103 papers)
Citations (12)