
Are Bigger Encoders Always Better in Vision Large Models? (2408.00620v1)

Published 1 Aug 2024 in cs.CV and cs.CL

Abstract: In recent years, multimodal LLMs (MLLMs) have shown strong potential in real-world applications. They are developing rapidly thanks to their remarkable ability to comprehend multimodal information and their inherently powerful cognitive and reasoning capabilities. Among MLLMs, vision LLMs (VLMs) stand out for their ability to understand visual information. However, the scaling trend of VLMs under the current mainstream paradigm has not been extensively studied, and whether training even larger models yields better performance remains unclear. To address this issue, we conducted experiments on the pretraining stage of MLLMs, varying both encoder sizes and LLM backbone sizes. Our findings indicate that merely increasing the size of the encoder does not necessarily enhance the performance of VLMs. We also analyzed the effects of LLM backbone parameter size and data quality on pretraining outcomes, and explored the differences in scaling laws between LLMs and VLMs.
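
For context, the "mainstream paradigm" the abstract refers to pairs a vision encoder with an LLM backbone through a projection layer, so that encoder size and backbone size can be scaled independently. Below is a minimal, illustrative PyTorch sketch of that architecture; it is not the authors' code, and all module names and sizes (ToyVisionEncoder, vision_dim, hidden_dim, etc.) are toy placeholders, not the paper's settings.

```python
# Minimal sketch of the vision-encoder -> projector -> LLM paradigm.
# All sizes are toy placeholders; the paper's experiments vary the
# encoder and the LLM backbone independently at much larger scales.
import torch
import torch.nn as nn

class ToyVisionEncoder(nn.Module):
    """Stand-in for a ViT-style encoder; emits a sequence of patch features."""
    def __init__(self, num_patches=16, vision_dim=256):
        super().__init__()
        self.num_patches = num_patches
        self.patch_embed = nn.Linear(3 * 14 * 14, vision_dim)  # fake patchify

    def forward(self, images):
        b = images.shape[0]
        patches = images.reshape(b, self.num_patches, -1)
        return self.patch_embed(patches)  # (B, num_patches, vision_dim)

class ToyLLM(nn.Module):
    """Stand-in for a decoder-only LLM backbone."""
    def __init__(self, hidden_dim=512, vocab_size=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(hidden_dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, embeds):
        return self.lm_head(self.blocks(embeds))  # (B, T, vocab_size)

class ToyVLM(nn.Module):
    """Image tokens are projected into the LLM's embedding space and
    prepended to the text tokens; only the wiring matters here."""
    def __init__(self, vision_dim=256, hidden_dim=512, vocab_size=1000):
        super().__init__()
        self.vision = ToyVisionEncoder(vision_dim=vision_dim)
        self.projector = nn.Linear(vision_dim, hidden_dim)  # the vision-language bridge
        self.text_embed = nn.Embedding(vocab_size, hidden_dim)
        self.llm = ToyLLM(hidden_dim=hidden_dim, vocab_size=vocab_size)

    def forward(self, images, text_ids):
        vis_tokens = self.projector(self.vision(images))  # (B, P, hidden_dim)
        txt_tokens = self.text_embed(text_ids)            # (B, T, hidden_dim)
        seq = torch.cat([vis_tokens, txt_tokens], dim=1)  # image tokens first
        return self.llm(seq)

model = ToyVLM()
logits = model(torch.randn(2, 16 * 3 * 14 * 14), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 24, 1000])
```

In this framing, "scaling the encoder" means growing vision_dim and encoder depth, while "scaling the backbone" means growing hidden_dim and LLM layers; the paper's central finding is that growing the former alone does not necessarily improve the VLM.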

Authors (4)
  1. Bozhou Li (5 papers)
  2. Hao Liang (137 papers)
  3. Zimo Meng (2 papers)
  4. Wentao Zhang (261 papers)
Citations (2)