Scaling Capability in Token Space: An Analysis of Large Vision Language Model (2412.18387v2)

Published 24 Dec 2024 in cs.AI and cs.LG

Abstract: The scaling capability has been widely validated in neural language models with respect to the number of parameters and the size of the training data. One important question is whether a similar scaling capability also exists with respect to the number of vision tokens in large vision-language models. This study fills that gap by investigating the relationship between the number of vision tokens and the performance of vision-language models. Our theoretical analysis and empirical evaluations demonstrate that the model exhibits scalable performance $S(N_l)$ with respect to the number of vision tokens $N_l$, characterized by the relationship $S(N_l) \approx (c/N_l)^{\alpha}$. Furthermore, we investigate the impact of a fusion mechanism that integrates the user's question with the vision tokens. The results reveal two key findings. First, the scaling capability remains intact when the fusion mechanism is incorporated. Second, the fusion mechanism enhances model performance, particularly when the user's question is task-specific and relevant. The analysis, conducted on fifteen diverse benchmarks spanning a broad range of tasks and domains, validates the effectiveness of the proposed approach.
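
As a concrete illustration of the stated law, the sketch below fits $\alpha$ and $c$ by least squares in log-log space. This is a minimal sketch, not the paper's code: the token counts and scores are invented placeholders, and $S$ is treated here as a loss-like quantity that shrinks as $N_l$ grows, since $(c/N_l)^{\alpha}$ with $\alpha > 0$ is decreasing in $N_l$.

```python
# Minimal sketch (assumptions, not the paper's code): fit the power law
# S(N_l) ~ (c / N_l)^alpha by linear regression in log-log space.
# The (N_l, S) pairs below are illustrative placeholders; S is treated
# as a loss-like score that decreases as the vision-token budget grows.
import numpy as np

n_tokens = np.array([64, 128, 256, 576, 1024], dtype=float)  # hypothetical N_l
scores = np.array([0.58, 0.52, 0.47, 0.43, 0.40])            # hypothetical S(N_l)

# Taking logs linearizes the law: log S = alpha*log c - alpha*log N_l,
# so log S is linear in log N_l with slope -alpha and intercept alpha*log c.
slope, intercept = np.polyfit(np.log(n_tokens), np.log(scores), deg=1)
alpha = -slope
c = np.exp(intercept / alpha)

print(f"alpha ~= {alpha:.3f}, c ~= {c:.1f}")
# Extrapolate to an unseen token budget.
print(f"predicted S(2048) ~= {(c / 2048) ** alpha:.3f}")
```

Fitting in log space keeps the estimate linear and simple; a direct nonlinear fit would weight the data points differently, but for a quick check of power-law behavior the log-log regression is the standard first step.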
