HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models (2407.08706v1)

Published 11 Jul 2024 in cs.CV

Abstract: High-resolution inputs enable Large Vision-Language Models (LVLMs) to discern finer visual details, enhancing their comprehension capabilities. To reduce the training and computation costs caused by high-resolution input, one promising direction is to use sliding windows to slice the input into uniform patches, each matching the input size of the well-trained vision encoder. Although efficient, this slicing strategy leads to the fragmentation of the original input, i.e., the continuity of contextual information and spatial geometry is lost across patches, adversely affecting performance in cross-patch context perception and position-specific tasks. To overcome these shortcomings, we introduce HiRes-LLaVA, a novel framework designed to efficiently process high-resolution input of any size without altering the original contextual and geometric information. HiRes-LLaVA comprises two innovative components: (i) a SliceRestore adapter that reconstructs sliced patches into their original form, efficiently extracting both global and local features via down-up-sampling and convolution layers, and (ii) a Self-Mining Sampler that compresses the vision tokens based on themselves, preserving the original context and positional information while reducing training overhead. To assess the ability to handle context fragmentation, we construct a new benchmark, EntityGrid-QA, consisting of edge-related and position-related tasks. Our comprehensive experiments demonstrate the superiority of HiRes-LLaVA on both existing public benchmarks and EntityGrid-QA, particularly on document-oriented tasks, establishing new standards for handling high-resolution inputs.
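The abstract describes two mechanisms: reassembling sliced patch features into the original spatial layout so cross-patch context and geometry are restored, and compressing vision tokens with queries derived from the tokens themselves. The sketch below is a minimal PyTorch illustration of both ideas; the module names SliceRestoreAdapterSketch and SelfMiningSamplerSketch, the tensor shapes, the down-sampling factor, and the pooling-based query construction are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the two ideas named in the abstract (assumed shapes/ops):
# (1) stitch per-slice features back into the full map, fuse global context
#     (down/up-sampling) with local detail (convolution), then re-slice;
# (2) compress vision tokens via attention whose queries are pooled from the
#     tokens themselves.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SliceRestoreAdapterSketch(nn.Module):
    """Reassemble sliced patch features, fuse global + local context, re-slice."""

    def __init__(self, dim: int, down: int = 4):
        super().__init__()
        self.down = down                                        # assumed down-sampling factor
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # local branch (depthwise conv)
        self.global_mix = nn.Conv2d(dim, dim, 1)                # global branch mixing

    def forward(self, slice_feats: torch.Tensor, grid_hw: tuple) -> torch.Tensor:
        # slice_feats: (num_slices, dim, h, w); grid_hw: slices per (row, col)
        n, c, h, w = slice_feats.shape
        gh, gw = grid_hw
        assert n == gh * gw, "number of slices must match the slicing grid"
        # Restore the original spatial layout by stitching slices together.
        full = slice_feats.view(gh, gw, c, h, w).permute(2, 0, 3, 1, 4)
        full = full.reshape(1, c, gh * h, gw * w)
        # Global branch: down-sample, mix, up-sample back to the full resolution.
        g = F.avg_pool2d(full, self.down)
        g = F.interpolate(self.global_mix(g), size=full.shape[-2:],
                          mode="bilinear", align_corners=False)
        # Fuse global context and local detail on the restored map.
        fused = full + g + self.local(full)
        # Re-slice so downstream modules keep the same per-slice interface.
        out = fused.reshape(c, gh, h, gw, w).permute(1, 3, 0, 2, 4)
        return out.reshape(n, c, h, w)


class SelfMiningSamplerSketch(nn.Module):
    """Compress vision tokens with queries pooled from the tokens themselves."""

    def __init__(self, dim: int, num_out: int = 64, heads: int = 8):
        super().__init__()
        self.num_out = num_out
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) -> (batch, num_out, dim)
        queries = F.adaptive_avg_pool1d(tokens.transpose(1, 2),
                                        self.num_out).transpose(1, 2)
        compressed, _ = self.attn(queries, tokens, tokens)
        return compressed


if __name__ == "__main__":
    feats = torch.randn(4, 256, 24, 24)                 # 2x2 grid of slice features
    restored = SliceRestoreAdapterSketch(256)(feats, (2, 2))
    pooled = SelfMiningSamplerSketch(256)(restored.flatten(2).transpose(1, 2))
    print(restored.shape, pooled.shape)                 # (4, 256, 24, 24), (4, 64, 256)
```

Note the design choice the abstract highlights: fusion happens on the restored full-resolution map (so cross-patch continuity is recovered) and the result is re-sliced, leaving the rest of the vision pipeline unchanged; likewise, the sampler's queries come from the tokens themselves rather than from learned external queries.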

Authors (10)
  1. Runhui Huang (18 papers)
  2. Xinpeng Ding (21 papers)
  3. Chunwei Wang (13 papers)
  4. Jianhua Han (49 papers)
  5. Yulong Liu (48 papers)
  6. Hengshuang Zhao (117 papers)
  7. Hang Xu (204 papers)
  8. Lu Hou (50 papers)
  9. Wei Zhang (1489 papers)
  10. Xiaodan Liang (318 papers)
Citations (6)