
Reinforced MLLM: A Survey on RL-Based Reasoning in Multimodal Large Language Models (2504.21277v2)

Published 30 Apr 2025 in cs.AI

Abstract: The application of reinforcement learning (RL) to enhance the reasoning capabilities of Multimodal LLMs (MLLMs) constitutes a rapidly advancing research area. While MLLMs extend LLMs to handle diverse modalities such as vision, audio, and video, enabling robust reasoning across multimodal inputs remains challenging. This paper provides a systematic review of recent advances in RL-based reasoning for MLLMs, covering key algorithmic designs, reward mechanism innovations, and practical applications. We highlight two main RL paradigms, value-model-free and value-model-based methods, and analyze how RL enhances reasoning abilities by optimizing reasoning trajectories and aligning multimodal information. Additionally, we provide an extensive overview of benchmark datasets, evaluation protocols, and current limitations, and propose future research directions to address challenges such as sparse rewards, inefficient cross-modal reasoning, and real-world deployment constraints. Our goal is to provide a comprehensive and structured guide to RL-based multimodal reasoning.
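The abstract's distinction between value-model-free and value-model-based RL comes down, in broad strokes, to how advantages are estimated during policy optimization. The sketch below is illustrative only, not code from the paper: it contrasts a GRPO-style group-normalized advantage, which needs no learned critic, with a PPO-style generalized advantage estimate that relies on a value model. Function names and the toy numbers are hypothetical.

import numpy as np

def value_model_free_advantages(group_rewards):
    # GRPO-style: normalize each sampled response's reward against the
    # mean/std of its sampling group; no learned value model is needed.
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

def value_model_based_advantages(rewards, values, gamma=0.99, lam=0.95):
    # PPO-style GAE: a learned critic supplies per-step value estimates
    # V(s_t); advantages are discounted sums of TD residuals.
    rewards = np.asarray(rewards, dtype=np.float64)
    values = np.asarray(values, dtype=np.float64)
    deltas = rewards + gamma * np.append(values[1:], 0.0) - values
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

# Toy usage: one prompt, four sampled responses scored by a verifiable reward.
print(value_model_free_advantages([1.0, 0.0, 0.0, 1.0]))
# Toy usage: a short trajectory with per-step rewards and critic value estimates.
print(value_model_based_advantages([0.0, 0.0, 1.0], [0.2, 0.4, 0.6]))

Either advantage estimate would then feed a clipped policy-gradient update; the trade-off the survey highlights is that dropping the value model simplifies training but shifts the burden onto reward design and group sampling.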
