
Visual Planning: Let's Think Only with Images (2505.11409v1)

Published 16 May 2025 in cs.LG, cs.AI, cs.CL, and cs.CV

Abstract: Recent advancements in LLMs and their multimodal extensions (MLLMs) have substantially enhanced machine reasoning across diverse tasks. However, these models predominantly rely on pure text as the medium for both expressing and structuring reasoning, even when visual information is present. In this work, we argue that language may not always be the most natural or effective modality for reasoning, particularly in tasks involving spatial and geometrical information. Motivated by this, we propose a new paradigm, Visual Planning, which enables planning through purely visual representations, independent of text. In this paradigm, planning is executed via sequences of images that encode step-by-step inference in the visual domain, akin to how humans sketch or visualize future actions. We introduce a novel reinforcement learning framework, Visual Planning via Reinforcement Learning (VPRL), empowered by GRPO for post-training large vision models, leading to substantial improvements in planning in a selection of representative visual navigation tasks, FrozenLake, Maze, and MiniBehavior. Our visual planning paradigm outperforms all other planning variants that conduct reasoning in the text-only space. Our results establish Visual Planning as a viable and promising alternative to language-based reasoning, opening new avenues for tasks that benefit from intuitive, image-based inference.

Summary

  • The paper introduces a novel paradigm for visual planning that performs reasoning solely with image sequences, bypassing text-based mediation.
  • The paper presents VPRL, a two-stage reinforcement learning framework that improves planning accuracy by over 20% compared to supervised methods.
  • The paper demonstrates robust performance across tasks like FrozenLake, Maze, and MiniBehavior, highlighting the benefits of direct visual reasoning.

This paper introduces a new paradigm called "Visual Planning," where reasoning and planning are performed entirely using sequences of images, without relying on textual mediation. The authors argue that language may not always be the most effective modality for reasoning, especially for tasks involving spatial and geometrical information. Traditional multimodal LLMs (MLLMs) often convert visual information into text before reasoning, which can create a modality gap and hinder performance in vision-centric tasks.

To address this, the paper proposes Visual Planning via Reinforcement Learning (VPRL), a novel two-stage reinforcement learning framework designed to train Large Vision Models (LVMs) for visual planning. LVMs are chosen because they are trained exclusively on images and video frames, eliminating potential confounding factors from language-based supervision.

Visual Planning Paradigm

The core idea is to generate a sequence of intermediate images $\mathcal{T} = (\hat{v}_1, \ldots, \hat{v}_n)$ that represent step-by-step visual states, leading from an initial visual state $v_0$ to a goal. Each subsequent image $\hat{v}_i$ is generated autoregressively by a generative vision model $\pi_\theta$:

$\hat{v}_i \sim \pi_\theta(v_i \mid v_0, \hat{v}_1, \ldots, \hat{v}_{i-1})$

This process is analogous to how humans might sketch or visualize steps to solve a problem.
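
As a concrete illustration, a visual plan can be rolled out autoregressively as sketched below. The interfaces (`step_model`, `is_goal`) are hypothetical stand-ins for the paper's LVM, which generates each next state as discrete visual tokens.

```python
# Minimal sketch of autoregressive visual planning. The step model and goal test are
# illustrative placeholders, not the paper's actual API.
from typing import Any, Callable, List

def rollout_visual_plan(
    v0: Any,                                   # initial visual state v_0 (image or token ids)
    step_model: Callable[[List[Any]], Any],    # pi_theta: samples the next state from the prefix
    is_goal: Callable[[Any], bool],            # checks whether a generated state reaches the goal
    max_steps: int = 8,
) -> List[Any]:
    """Generate (v_hat_1, ..., v_hat_n), mirroring
    v_hat_i ~ pi_theta(. | v_0, v_hat_1, ..., v_hat_{i-1})."""
    trajectory = [v0]
    for _ in range(max_steps):
        v_next = step_model(trajectory)        # condition on the full visual prefix
        trajectory.append(v_next)
        if is_goal(v_next):                    # stop once the goal state is visualized
            break
    return trajectory[1:]                      # the planned states, excluding the given v_0
```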

Visual Planning via Reinforcement Learning (VPRL)

VPRL is a two-stage training framework:

  1. Stage 1: Policy Initialization:
    • The LVM ($\pi_\theta$) is initialized by training it on random trajectories generated by random walks in the environment.
    • The goal is to enable the model to generate valid sequences of visual states and encourage exploration.
    • The model is trained to predict the next state $v_{i+1}^{(\ell)}$ given a prefix sequence $v_{\leq i}$ by minimizing the loss:

      $\mathcal{L}_{\text{VPFT}}(\theta) = -\mathbb{E}_{(v_{\leq i},\, v_{i+1}^{(\ell)})} \Bigl[ \log \pi_\theta\!\bigl( v^{(\ell)}_{i+1} \,\big|\, v_{\leq i} \bigr) \Bigr]$

    • This stage acts as a warm-up, focusing on visual coherence and generation quality.

  2. Stage 2: Reinforcement Learning for Visual Planning:
    • This stage uses the initialized model from Stage 1 and applies reinforcement learning, specifically Group Relative Policy Optimization (GRPO), to optimize for visual planning.
    • Given an input prefix $v_{\leq i}$, the behavior model $\pi_{\theta_{\text{old}}}$ samples a group of $G$ candidate next visual states $\{\hat{v}_{i+1}^{(1)}, \ldots, \hat{v}_{i+1}^{(G)}\}$.
    • Each candidate state $\hat{v}_{i+1}^{(k)}$ corresponds to a planned action.
    • A rule-based parsing function $\mathcal{P}(v_i, \hat{v}_{i+1}^{(k)})$ maps pairs of visual states to discrete actions (valid or invalid).
    • Candidates are scored using a composite reward function $r(v_i, \hat{v}_{i+1}^{(k)})$.
    • GRPO computes relative advantages $A^{(k)}$ for each candidate within the group.
    • The policy $\pi_\theta$ is updated by maximizing the GRPO objective:

      $\mathcal{J}_{\text{VPRL}}(\theta) = \mathbb{E} \left[ \frac{1}{G} \sum_{k=1}^G \min \left( \rho^{(k)} A^{(k)},\; \text{clip} \left( \rho^{(k)},\, 1-\epsilon,\, 1+\epsilon \right) A^{(k)} \right) - \beta\, D_{\text{KL}}\left( \pi_\theta \,\|\, \pi_{\text{ref}} \right) \right]$

      where $\rho^{(k)}$ is the importance sampling ratio. (A code sketch of this update follows the reward design below.)

    Reward Design:

    The reward function is crucial for guiding the visual planner.

      • A state-action parsing function $\mathcal{P}: \mathcal{V} \times \mathcal{V} \rightarrow \mathcal{A} \cup \mathcal{E}$ interprets the intended action leading from the current state $v_i$ to a generated candidate state $\hat{v}_{i+1}^{(k)}$, where $\mathcal{A}$ is the set of valid actions and $\mathcal{E}$ is the set of invalid transitions.
      • A progress map $D(v)$ estimates the remaining steps to the goal from state $v$.
      • Actions are categorized into:
        • $\mathcal{A}_{\mathrm{opt}}$: optimal actions that make progress towards the goal ($D(\hat{v}_{i+1}^{(k)}) < D(v_i)$).
        • $\mathcal{A}_{\mathrm{nopt}}$: non-optimal but valid actions ($D(\hat{v}_{i+1}^{(k)}) \geq D(v_i)$).
        • $\mathcal{E}_{\mathrm{inv}}$: invalid actions.
      • The progress reward function is:

      $r(v_i,\hat{v}_{i+1}^{(k)}) = \alpha_{\text{opt}}\cdot\mathbb{I}[\mathcal{P}(\cdot)\in\mathcal{A}_{\mathrm{opt}}] + \alpha_{\text{nopt}}\cdot\mathbb{I}[\mathcal{P}(\cdot)\in\mathcal{A}_{\mathrm{nopt}}] + \alpha_{\text{inv}}\cdot\mathbb{I}[\mathcal{P}(\cdot)\in\mathcal{E}_{\mathrm{inv}}]$

      • In the experiments, $\alpha_{\text{opt}} = 1$, $\alpha_{\text{nopt}} = 0$, and $\alpha_{\text{inv}} = -5$.
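
The sketch below ties the progress reward and the GRPO update together for a single step. It assumes the rule-based parser has already mapped each of the $G$ candidate next-state images to a category in {opt, nopt, inv} and that per-candidate log-probabilities are available from the current, behavior, and reference policies; the function names and clipping threshold are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of one VPRL/GRPO update step using the progress reward above.
import torch

ALPHA = {"opt": 1.0, "nopt": 0.0, "inv": -5.0}  # reward coefficients used in the paper

def progress_reward(categories):
    """Scalar reward per candidate, from its parsed action category."""
    return torch.tensor([ALPHA[c] for c in categories], dtype=torch.float32)

def vprl_grpo_loss(logp_new, logp_old, logp_ref, categories, eps=0.2, beta=0.001):
    """Negated clipped GRPO objective (so it can be minimized with a standard optimizer).

    logp_new, logp_old, logp_ref: shape (G,) log-probs of the sampled candidates under
    the current policy, the behavior policy, and the frozen reference policy.
    """
    rewards = progress_reward(categories)
    # Group-relative advantage: standardize rewards within the sampled group of G candidates.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    ratio = torch.exp(logp_new - logp_old)                 # importance ratio rho^(k)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    policy_term = torch.min(ratio * adv, clipped * adv).mean()

    # Per-sample KL estimator toward the reference policy, as commonly used with GRPO;
    # a simplification of the D_KL term in the objective above.
    kl_term = (torch.exp(logp_ref - logp_new) - (logp_ref - logp_new) - 1.0).mean()

    return -(policy_term - beta * kl_term)
```

In the paper's setting the group size is 10 and $\beta = 0.001$; the clipping threshold $\epsilon$ here is an assumed default.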

System Variants for Comparison

  1. Visual Planning via Fine-Tuning (VPFT): A supervised learning baseline that shares the architecture of VPRL Stage 1 but is trained on optimal planning trajectories instead of random walks.

  2. Supervised Fine-Tuning (SFT) in Text: A traditional approach where the model, given a visual input and a textual prompt, generates a textual sequence of actions. The loss is cross-entropy for action prediction:

    $\mathcal{L}_{\text{SFT}}(\theta) = -\mathbb{E}_{(v, t)} \left[ \sum_{l=1}^{L} \log \pi_\theta(t_{l} \mid t_{<l},\, v,\, p) \right]$
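
For comparison, a minimal sketch of this token-level objective is shown below; the interface is illustrative and assumes the logits are already conditioned on the image $v$, the prompt $p$, and the preceding tokens.

```python
# Hedged sketch of the text-based SFT loss: mean token-level cross-entropy over the
# target action sequence t_1..t_L (conditioning on v and p is assumed to happen
# inside the model that produced the logits).
import torch
import torch.nn.functional as F

def sft_text_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """logits: (L, vocab_size) next-token logits; target_ids: (L,) gold action tokens."""
    return F.cross_entropy(logits, target_ids)  # averages -log pi_theta(t_l | t_<l, v, p) over l
```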

Experiments and Results

  • Tasks: Three visual navigation environments were used:

    • FrozenLake: Navigate an agent to a destination on a grid, avoiding holes.
    • Maze: Navigate an agent from a start to a goal in a maze.
    • MiniBehavior: A more complex task involving picking up an object (printer) and dropping it at a target location (table).
  • Models:
    • LVM-3B: A 3-billion parameter Large Vision Model used for VPFT and VPRL.
    • Qwen 2.5-VL-Instruct-3B: Used for the SFT in text baseline.
    • Closed-Source Models: Gemini 2.0 Flash and Gemini 2.5 Pro were used as reference points for state-of-the-art multimodal reasoning.
  • Evaluation Metrics:
    • Exact Match (EM): Measures if the generated visual trajectory perfectly matches the shortest optimal path.
    • Progress Rate (PR): Measures the fraction of the optimal path length covered by consecutively correct steps from the start.
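
A hedged sketch of how these two metrics could be computed is given below. It operates on parsed action sequences, whereas the paper scores the generated visual states against the optimal path, so the representation here is a simplification.

```python
# Illustrative implementations of Exact Match (EM) and Progress Rate (PR) over parsed
# action sequences; the sequence representation is an assumption for this sketch.
from typing import Sequence

def exact_match(pred: Sequence[str], optimal: Sequence[str]) -> bool:
    """EM: the generated trajectory must reproduce the optimal (shortest) path exactly."""
    return list(pred) == list(optimal)

def progress_rate(pred: Sequence[str], optimal: Sequence[str]) -> float:
    """PR: fraction of the optimal path length covered by consecutively correct steps
    from the start, i.e. progress made before the first deviation."""
    correct = 0
    for p, o in zip(pred, optimal):
        if p != o:
            break
        correct += 1
    return correct / len(optimal) if optimal else 1.0
```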

Key Findings:

  1. Visual Planning Surpasses Textual Planning:
    • VPRL consistently achieved the best performance across all tasks.
    • VPFT (visual planning with SFT) outperformed SFT in text by an average of over 22% in EM.
    • This suggests that for visual-centric tasks, reasoning directly in the visual modality is more effective.
    • Inference-only MLLMs (even advanced ones like Gemini 2.5 Pro) struggled without task-specific fine-tuning.
  2. Gains from Reinforcement Learning:
    • VPRL significantly outperformed its supervised counterpart VPFT by more than 20% across all tasks.
    • VPRL Stage 1 (policy initialization) achieved near-random performance, while Stage 2 (RL optimization) led to the best results, highlighting RL's effectiveness in learning planning strategies beyond imitation.
  3. Robustness with Scaling Complexity:
    • As task complexity increased (e.g., larger grid sizes in FrozenLake), the performance of text-based reasoning models like Gemini 2.5 Pro dropped sharply.
    • Visual planners (VPFT and VPRL) maintained higher accuracy and showed more gradual performance degradation, with VPRL being the most robust.

Discussion and Analysis

  • Error Analysis: VPRL can still take non-optimal actions (detours) or invalid actions (violating environment constraints, e.g., walking through walls), but it is more flexible than VPFT. Visual planning also avoids the cascading errors seen in text-based systems that misinterpret visual information early on.
  • Random Policy Initialization: Initializing the policy with random trajectories (VPRL Stage 1) is crucial for exploration. VPFT, trained on optimal paths, has limited exploration (low entropy) and struggles if used directly for RL, as it yields near-zero advantages for GRPO. VPRL Stage 1 maintains high entropy with a low invalid action ratio.
  • VPRL Reduces Invalid Actions: VPRL significantly reduces the proportion of failed trajectories caused by invalid actions compared to VPFT (e.g., from 60-78% down to 25-37%).

Implementation Details

  • LVM Backbone: LVM-3B uses a VQGAN-based tokenizer to encode images into 256 discrete visual tokens.
  • State-Action Parsing for Reward: The rule-based parsing function P\mathcal{P} for reward calculation involves:
    • Converting images to grayscale and a coordinate-based representation.
    • Computing Intersection-over-Union (IoU) to find the agent's predicted position.
    • Inferring actions by comparing start and predicted positions against task rules.
    • Using pixel-wise Mean Squared Error (MSE) to detect invalid transitions like agent disappearance.
    • For MiniBehavior, IoU changes detect "pick" actions, and MSE changes in table regions detect "drop" actions.
  • Progress Map for Reward: Breadth-First Search (BFS) is used to calculate the optimal number of steps to the goal from each position, forming the progress map $D(v)$ (a BFS sketch follows this list).
  • Training:
    • Low-Rank Adaptation (LoRA) was applied for fine-tuning.
    • VPRL Stage 1 trained for 10 epochs on random trajectories.
    • VPRL Stage 2 trained for 10 epochs using GRPO with a group size of 10 candidate responses and a KL divergence penalty coefficient $\beta = 0.001$.
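
As referenced above, the progress map can be computed with a standard breadth-first search from the goal. The grid encoding below is an assumption for illustration; in the paper the layout is recovered from the parsed images.

```python
# Hedged sketch of the BFS-based progress map D(v): shortest number of valid moves from
# each free cell to the goal on a grid with blocked cells (e.g., holes or walls).
from collections import deque
from typing import Dict, Set, Tuple

Cell = Tuple[int, int]

def progress_map(size: int, blocked: Set[Cell], goal: Cell) -> Dict[Cell, int]:
    """Map each reachable cell to its shortest-path distance (in moves) to the goal."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # the four grid moves
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in dist):
                dist[nxt] = dist[(r, c)] + 1
                queue.append(nxt)
    return dist
```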

The paper concludes that Visual Planning is a viable and promising alternative to language-based reasoning for visually oriented tasks, opening new avenues for research in multimodal AI. The VPRL framework demonstrates significant improvements in planning performance and generalization.
