
Describe-then-Reason: Improving Multimodal Mathematical Reasoning through Visual Comprehension Training (2404.14604v3)

Published 22 Apr 2024 in cs.CL

Abstract: Open-source multimodal large language models (MLLMs) excel in various tasks involving textual and visual inputs but still struggle with complex multimodal mathematical reasoning, lagging behind proprietary models like GPT-4V(ision) and Gemini-Pro. Although fine-tuning with intermediate steps (i.e., rationales) elicits some mathematical reasoning skills, the resulting models still fall short in visual comprehension due to inadequate visual-centric supervision, which leads to inaccurate interpretation of math figures. To address this issue, we propose a two-step training pipeline VCAR, which emphasizes the Visual Comprehension training in Addition to mathematical Reasoning learning. It first improves the visual comprehension ability of MLLMs through the visual description generation task, followed by another training step on generating rationales with the assistance of descriptions. Experimental results on two popular benchmarks demonstrate that VCAR substantially outperforms baseline methods solely relying on rationale supervision, especially on problems with high visual demands.
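The two-step pipeline in the abstract can be sketched schematically. This is a minimal illustration, not the paper's implementation: the data fields, function names, and the stand-in "model" are all hypothetical, and real training would fine-tune an MLLM on the two supervision signals rather than tag skills in a dictionary.

```python
# Hedged sketch of VCAR's two-step training order (illustrative only).
# Step 1 supervises figure-description generation (visual comprehension);
# step 2 supervises rationale generation conditioned on those descriptions.

def train_visual_description(model, examples):
    """Step 1: learn to describe each math figure in text."""
    for ex in examples:
        model["descriptions"][ex["figure"]] = "description of " + ex["figure"]
    return model

def train_rationale(model, examples):
    """Step 2: learn to generate rationales, assisted by the learned
    description of the figure (only available after step 1)."""
    for ex in examples:
        desc = model["descriptions"].get(ex["figure"], "")
        model["rationales"][ex["question"]] = (
            "using " + desc + ", reason toward the answer"
        )
    return model

examples = [{"figure": "triangle_diagram", "question": "find angle A"}]
model = {"descriptions": {}, "rationales": {}}
model = train_visual_description(model, examples)   # visual comprehension first
model = train_rationale(model, examples)            # then rationale learning
```

The point of the ordering is the abstract's central claim: rationale supervision alone under-trains visual comprehension, so description supervision is introduced as a separate, earlier training signal.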

Authors (5)
  1. Mengzhao Jia (12 papers)
  2. Zhihan Zhang (54 papers)
  3. Wenhao Yu (139 papers)
  4. Fangkai Jiao (19 papers)
  5. Meng Jiang (126 papers)
Citations (5)