Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models (2312.06109v1)

Published 11 Dec 2023 in cs.CV

Abstract: Modern Large Vision-Language Models (LVLMs) share the same vision vocabulary, CLIP, which can cover most common vision tasks. However, for special vision tasks that need dense and fine-grained perception, e.g., document-level OCR or chart understanding, especially in non-English scenarios, the CLIP-style vocabulary may tokenize visual knowledge inefficiently and can even suffer from out-of-vocabulary problems. Accordingly, we propose Vary, an efficient and effective method to scale up the vision vocabulary of LVLMs. Vary naturally divides into two folds: the generation and the integration of a new vision vocabulary. In the first phase, we devise a vocabulary network together with a tiny decoder-only transformer to produce the desired vocabulary via autoregression. In the second, we scale up the vanilla vision vocabulary by merging the new vocabulary with the original one (CLIP), enabling LVLMs to quickly acquire new features. Compared with the popular BLIP-2, MiniGPT4, and LLaVA, Vary maintains its vanilla capabilities while gaining stronger fine-grained perception and understanding. Specifically, Vary supports new document-parsing features (OCR and markdown conversion) while achieving 78.2% ANLS on DocVQA and 36.2% on MMVet. Our code will be publicly available on the homepage.
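The abstract describes a two-phase design: a new vocabulary network is first trained via an autoregressive head, then its features are merged with the frozen CLIP vocabulary at the LLM's input. Below is a minimal PyTorch sketch of the integration step only. All module and parameter names (`VaryVisionInput`, `clip_proj`, `new_proj`, the dimensions) are hypothetical, and concatenating along the token axis is one plausible merge operator, not necessarily the authors' exact implementation.

```python
# Minimal sketch of merging a new vision vocabulary with CLIP before the LLM.
# Hypothetical names and shapes; not the authors' released code.
import torch
import torch.nn as nn

class VaryVisionInput(nn.Module):
    def __init__(self, clip_encoder: nn.Module, new_vocab_encoder: nn.Module,
                 clip_dim: int = 1024, new_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.clip_encoder = clip_encoder            # original vision vocabulary (frozen)
        self.new_vocab_encoder = new_vocab_encoder  # new vocabulary network
        # Separate linear projections map each vocabulary into the LLM's embedding space.
        self.clip_proj = nn.Linear(clip_dim, llm_dim)
        self.new_proj = nn.Linear(new_dim, llm_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        clip_tokens = self.clip_proj(self.clip_encoder(image))     # (B, N1, llm_dim)
        new_tokens = self.new_proj(self.new_vocab_encoder(image))  # (B, N2, llm_dim)
        # Concatenate along the token axis: the LLM sees the scaled-up
        # vocabulary as one longer sequence of visual tokens.
        return torch.cat([clip_tokens, new_tokens], dim=1)

# Usage with stand-in encoders, each returning (batch, 256, 1024) features:
class DummyEncoder(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.randn(x.shape[0], 256, 1024)

fuser = VaryVisionInput(DummyEncoder(), DummyEncoder())
tokens = fuser(torch.randn(2, 3, 1024, 1024))  # -> shape (2, 512, 4096)
```

Keeping the two encoders behind separate projections lets the new vocabulary be trained without disturbing the CLIP branch, which is consistent with the abstract's claim that Vary preserves its vanilla capabilities.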

Authors (9)
  1. Haoran Wei (55 papers)
  2. Lingyu Kong (13 papers)
  3. Jinyue Chen (5 papers)
  4. Liang Zhao (353 papers)
  5. Zheng Ge (60 papers)
  6. Jinrong Yang (27 papers)
  7. Jianjian Sun (23 papers)
  8. Chunrui Han (21 papers)
  9. Xiangyu Zhang (328 papers)
Citations (61)