SeGPruner: Semantic-Geometric Visual Token Pruner for 3D Question Answering

Published 31 Mar 2026 in cs.CV | (2603.29437v1)

Abstract: Vision-Language Models (VLMs) have been widely adopted for 3D question answering (3D QA). In typical pipelines, visual tokens extracted from multiple viewpoints are concatenated with language tokens and jointly processed by an LLM for inference. However, aggregating multi-view observations inevitably introduces severe token redundancy, leading to an overly large visual token set that significantly hinders inference efficiency under constrained token budgets. Visual token pruning has emerged as a prevalent strategy to address this issue. Nevertheless, most existing pruners are primarily tailored to 2D inputs or rely on indirect geometric cues, which limits their ability to explicitly retain semantically critical objects and maintain sufficient spatial coverage for robust 3D reasoning. In this paper, we propose SeGPruner, a semantic-aware and geometry-guided token reduction framework for efficient 3D QA with multi-view images. Specifically, SeGPruner first preserves semantically salient tokens through an attention-based importance module (Saliency-aware Token Selector), ensuring that object-critical evidence is retained. It then complements these tokens with spatially diverse ones via a geometry-guided selector (Geometry-aware Token Diversifier), which jointly considers semantic relevance and 3D geometric distance. This cooperation between saliency preservation and geometry-guided diversification balances object-level evidence and global scene coverage under aggressive token reduction. Extensive experiments on ScanQA and OpenEQA demonstrate that SeGPruner substantially improves inference efficiency, reducing the visual token budget by 91% and inference latency by 86%, while maintaining competitive performance in 3D reasoning tasks.

Summary

  • The paper introduces a novel semantic–geometric token pruning method that reduces token count significantly while maintaining or improving 3D QA accuracy.
  • It employs a dual-stage process combining 3D-aware feature construction with saliency and geometry-based diversified token selection to optimize multi-view feature aggregation.
  • The approach achieves up to 86% latency reduction and retains only 23% of tokens on ScanQA, demonstrating robustness even under extreme token compression.

SeGPruner: Semantic–Geometric Visual Token Pruner for 3D Question Answering

Introduction and Contribution

SeGPruner introduces a paradigm for visual token selection in 3D Question Answering (3D QA) using off-the-shelf Vision-Language Models (VLMs) that process multi-view 2D images. Standard multi-view token aggregation produces massive redundancy, hindering efficient deployment under constrained token budgets. Existing pruning solutions either operate in pure 2D space or rely on weak geometric cues, failing both to explicitly preserve semantically critical tokens and to guarantee diverse spatial coverage. SeGPruner addresses these limitations by enforcing a semantic- and geometry-aware selection process, improving reasoning fidelity while delivering significant token and latency reductions.

Methodology

SeGPruner is an inference-time module inserted between the visual encoder and the LLM, decomposed into three stages:

  1. 3D-Aware Feature Construction: Visual tokens are back-projected into a unified 3D coordinate frame using calibrated depth maps and camera poses for all input views, supporting spatial comparisons of tokens across viewpoints.
  2. Saliency-aware Token Selector: High-importance tokens are scored using column-averaged attention maps from the visual encoder, ensuring that object-centric and context-relevant tokens are preserved. The top-$k$ tokens by attention score are selected as salient.
  3. Geometry-aware Token Diversifier: After salient token selection, the remaining tokens are sampled to maximize 3D spatial coverage while penalizing semantic redundancy. A semantic–spatial fusion metric combines normalized Euclidean distances in 3D with feature similarities, and tokens are selected iteratively in the manner of farthest point sampling (FPS) (Figure 1).

    Figure 1: SeGPruner's architecture interleaves saliency importance and geometry-aware diversification between multi-view visual encoding and LLM processing, yielding compact tokens for robust cross-modal reasoning.
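The saliency stage of the pipeline above can be sketched as follows. The function name, array shapes, and the assumption that the encoder exposes an (N, N) self-attention map over its N visual tokens are illustrative, not the paper's implementation:

```python
import numpy as np

def select_salient_tokens(attn, k):
    """Return indices of the k most attended-to visual tokens.

    attn is a hypothetical (N, N) self-attention map from the visual
    encoder's last layer; averaging column j over all rows yields a
    saliency score for token j (column-averaged attention).
    """
    scores = attn.mean(axis=0)              # column-averaged attention per token
    return np.argsort(scores)[::-1][:k]     # top-k indices, most salient first
```

In practice the attention map could also be averaged over heads and layers; the column average shown here is the simplest variant consistent with the description above.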

Formally, for each visual patch, its back-projected world coordinate is

$$\mathbf{c}_i = \frac{1}{|\Omega_i|} \sum_{p \in \Omega_i} \Pi\left(d(p), K^{-1}, T\right)$$
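This averaging can be sketched as below, with the unprojection Π written out explicitly. The patch pixel set Ω_i, depth map d, intrinsics K, and camera-to-world pose T are assumed inputs; shapes and names are illustrative:

```python
import numpy as np

def backproject_patch(pixels, depth, K, T):
    """Average world coordinate c_i of one visual patch.

    pixels: (M, 2) integer (u, v) coordinates of the patch's pixel set Omega_i
    depth:  (H, W) calibrated depth map d
    K:      (3, 3) camera intrinsics
    T:      (4, 4) camera-to-world pose
    """
    K_inv = np.linalg.inv(K)
    points = []
    for u, v in pixels:
        cam = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))  # unproject to camera frame
        world = T @ np.append(cam, 1.0)                       # transform to world frame
        points.append(world[:3])
    return np.mean(points, axis=0)                            # patch centroid c_i
```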

Tokens are aggregated across all views for downstream importance and diversity selection. Distance for diversification is

$$d^{\mathrm{geo}}_{rj} = \lambda\,\frac{\|\mathbf{c}_r - \mathbf{c}_j\|_2}{d_x} + (1-\lambda)\,(1 - s_{rj})$$

with $s_{rj}$ the cosine similarity between token features, $d_x$ a distance normalizer, and $\lambda$ the fusion coefficient.
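A minimal sketch of a greedy farthest-point loop under this fused distance follows. Here the normalizer $d_x$ is assumed to be the scene diameter, and the starting set is the salient tokens; the paper's exact normalization and initialization may differ:

```python
import numpy as np

def diversify_tokens(coords, feats, salient_idx, m, lam=0.5):
    """Greedily pick m tokens maximizing the fused semantic-geometric distance
    d = lam * ||c_r - c_j|| / d_x + (1 - lam) * (1 - cos(f_r, f_j))
    to the already-selected set (FPS-style), seeded with the salient tokens.
    """
    n = coords.shape[0]
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    d_x = np.linalg.norm(coords.max(0) - coords.min(0)) + 1e-8  # assumed: scene diameter

    def fused(r, j):
        geo = np.linalg.norm(coords[r] - coords[j]) / d_x
        sem = 1.0 - feats[r] @ feats[j]          # 1 - cosine similarity
        return lam * geo + (1.0 - lam) * sem

    # distance of each candidate to its nearest already-selected token
    min_d = np.array([min(fused(j, s) for s in salient_idx) for j in range(n)])
    min_d[list(salient_idx)] = -np.inf           # never re-pick seeds
    picked = []
    for _ in range(m):
        r = int(np.argmax(min_d))                # farthest remaining token
        picked.append(r)
        min_d[r] = -np.inf
        for j in range(n):                       # refresh nearest-selected distances
            if np.isfinite(min_d[j]):
                min_d[j] = min(min_d[j], fused(j, r))
    return picked
```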

The final reduced token set comprises the union of salient and spatially diverse tokens, concatenated and fed to the LLM for question answering.
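The union step can be illustrated with hypothetical index sets from the two selectors:

```python
import numpy as np

# Hypothetical indices produced by the two selectors (for illustration only).
salient = np.array([3, 7, 12])    # from the Saliency-aware Token Selector
diverse = np.array([7, 21, 40])   # from the Geometry-aware Token Diversifier

# The reduced token set is their deduplicated union; tokens[keep] would then
# be concatenated with the language tokens and passed to the LLM.
keep = np.unique(np.concatenate([salient, diverse]))
```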

Experimental Results

SeGPruner is evaluated with LLaVA-OneVision-7B on ScanQA and OpenEQA, de facto benchmarks for open-vocabulary 3D QA and embodied multi-modal reasoning. The module is strictly training-free and preserves the backbone's weights.

Efficiency Gains:

  • On ScanQA, only 23% of original visual tokens are retained, and inference latency is reduced by 86%.
  • Even more aggressive pruning (down to 9% retention) delivers 95.3% of the base accuracy, demonstrating robustness to extreme token budgets.

Accuracy Preservation:

  • SeGPruner often matches or slightly surpasses the base model's EM@1 score at moderate retention, due to the removal of redundant or misleading tokens.
  • At 23% retention, SeGPruner outperforms both DTC (3D-only) and VisPruner (2D attention-based), highlighting the complementary benefit of saliency and spatial diversity.
  • On OpenEQA, SeGPruner gives higher LLM-Match scores compared to both baselines under aggressive reduction.

These results indicate that neither purely attention-based nor purely geometric reduction matches their principled integration in SeGPruner, especially under tight inference constraints.

Figure 2: 2D-only pruning leads to background redundancy, while geometric token selection guarantees object-centric and spatially even coverage in 3D QA settings.

Ablation and Qualitative Analysis

Ablations reveal that removing either the Saliency-aware Token Selector or the Geometry-aware Token Diversifier reduces accuracy, and uniform/semantic-only baselines degrade rapidly when retention is harsh. The two modules provide complementary error correction—saliency for object importance, diversity for structural coverage.

Qualitative outputs (see ablation studies and qualitative figures in the paper) demonstrate that SeGPruner preserves both key objects and fine-grained spatial/structural details, preventing the information loss typical of uniform or 2D-only approaches. This is further validated by point cloud visualizations, where SeGPruner's retained tokens reconstruct scene geometry and object completeness more faithfully than the baselines.

Practical and Theoretical Implications

Practically, SeGPruner enables the deployment of multi-view VLMs for 3D QA in latency or memory-constrained environments (e.g., robotics, AR/VR, or mobile), by reducing token counts without architectural retraining or external supervision. Theoretically, its results support the hypothesis that joint semantic and spatial modeling is indispensable for robust 3D scene understanding, even when only multi-view 2D images are available.

Methodologically, SeGPruner's separation of saliency preservation and spatial diversification defines an extensible template for future inference-time token selection research. The fusion metric can be further generalized for dynamic balancing or augmented with external geometric priors.

Future Research Directions

  • Extension to dynamic scenes and temporal reasoning by incorporating 4D space-time token representations.
  • Adaptive tuning of the semantic–geometric trade-off coefficient ($\lambda$) via downstream feedback or reinforcement signals.
  • Integration with more advanced or structurally aware visual encoders (e.g., spatial transformers with explicit 3D positional encoding).
  • Exploration of hierarchical token pruning for scalable long-context multi-modal models in real-time applications.

Conclusion

SeGPruner operationalizes a semantic–geometry aware token reduction module that achieves substantial inference acceleration for multi-view 3D QA with negligible loss, or even improvement, in accuracy. Its principled decomposition into salient object-centric and geometry-aware diverse token selection sets a new standard for plug-and-play VLM compression in spatial reasoning tasks, with strong implications for efficient embodied AI.

