Scaling Capability in Token Space: An Analysis of Large Vision Language Model (2412.18387v2)
Abstract: Scaling behavior has been widely validated in neural language models with respect to the number of parameters and the size of training data. An important open question is whether a similar scaling capability exists with respect to the number of vision tokens in large vision-language models. This study fills that gap by investigating the relationship between the number of vision tokens and the performance of vision-language models. Our theoretical analysis and empirical evaluations demonstrate that the model exhibits scalable performance $S(N_l)$ with respect to the number of vision tokens $N_l$, characterized by the relationship $S(N_l) \approx (c/N_l)^{\alpha}$. Furthermore, we investigate the impact of a fusion mechanism that integrates the user's question with the vision tokens. The results reveal two key findings. First, the scaling capability remains intact when the fusion mechanism is incorporated. Second, the fusion mechanism enhances model performance, particularly when the user's question is task-specific and relevant. The analysis, conducted on fifteen diverse benchmarks spanning a broad range of tasks and domains, validates the effectiveness of the proposed approach.
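To make the reported relationship concrete, below is a minimal sketch (not from the paper) of how a law of the form $S(N_l) \approx (c/N_l)^{\alpha}$ can be fit to measurements: taking logarithms gives $\log S = \alpha(\log c - \log N_l)$, which is linear in $\log N_l$, so ordinary least squares recovers $\alpha$ and $c$. The token counts and scores are hypothetical, and the sketch assumes $S$ is a loss-like metric that shrinks as $N_l$ grows; the abstract does not specify the metric's direction.

```python
# Sketch: recover alpha and c in S(N_l) ~ (c / N_l)^alpha by log-linear regression.
# All data below is synthetic, generated for illustration only.
import numpy as np

# Hypothetical measurements: vision-token counts and a loss-like performance metric.
N_l = np.array([16, 32, 64, 128, 256, 576], dtype=float)
S = np.array([1.44, 1.26, 1.09, 0.95, 0.83, 0.70])

# log S = -alpha * log N_l + alpha * log c
# => slope = -alpha, intercept = alpha * log c
slope, intercept = np.polyfit(np.log(N_l), np.log(S), deg=1)
alpha = -slope
c = np.exp(intercept / alpha)

print(f"alpha ≈ {alpha:.3f}, c ≈ {c:.1f}")
# Extrapolate to an unseen token budget under the fitted law:
print(f"S(1024) ≈ {(c / 1024) ** alpha:.3f}")
```

On the synthetic data above (generated with $\alpha = 0.2$, $c = 100$), the fit recovers those values, illustrating how the paper's scaling exponent could be estimated from benchmark runs at several token budgets.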