
Less is More: High-value Data Selection for Visual Instruction Tuning (2403.09559v4)

Published 14 Mar 2024 in cs.CL and cs.CV

Abstract: Visual instruction tuning is the key to building large vision-language models (LVLMs), which can greatly improve task generalization and problem-solving capabilities by learning from a mixture of instruction data drawn from diverse visual tasks. Previous work mostly collects multiple existing visual instruction datasets via heuristic means for training (often more than a million instructions), which may introduce data redundancy and inflate the training cost. To investigate this issue, we conduct a series of empirical studies, which reveal significant redundancy within visual instruction datasets and show that greatly reducing the number of instructions for several tasks does not even affect performance. Based on these findings, we propose TIVE, a high-value data selection approach that eliminates redundancy within visual instruction data and reduces training cost. In TIVE, we first estimate each instance's influence score on its corresponding task, as well as each task's difficulty score, based on gradient-based influence functions. We then use these two scores to determine the task proportions within the selected visual instruction subset and to select high-value instances for each task, respectively. Experiments on various LVLMs show that our approach, using only about 15% of the data, achieves average performance comparable to the full-data fine-tuned model across eight benchmarks, even surpassing it on four of them. Our code and data will be publicly released.
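
To make the two-stage selection the abstract describes more concrete, here is a minimal sketch in PyTorch. It is illustrative only: the toy model, the use of flattened full-model gradients as features, the dot-product influence score, and the mean-gradient-norm difficulty proxy are all assumptions for this sketch, not the paper's exact formulation of TIVE.

```python
# Sketch: gradient-based influence scoring for instruction data selection.
# Assumptions (not from the paper): a toy linear model, influence measured as
# similarity to the mean task gradient, and task difficulty proxied by the
# average per-instance gradient norm.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 4)  # stand-in for an LVLM
loss_fn = nn.CrossEntropyLoss()

def instance_gradient(x, y):
    """Flattened gradient of the loss on a single instance."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

# Synthetic instruction data grouped by visual task.
tasks = {
    name: [(torch.randn(16), torch.randint(0, 4, ())) for _ in range(32)]
    for name in ["vqa", "caption", "grounding"]
}
budget = 24  # total instances to keep (~25% of the pool here)

# Stage 1: per-instance influence and per-task difficulty scores.
grads = {t: torch.stack([instance_gradient(x, y) for x, y in data])
         for t, data in tasks.items()}
# Influence of an instance on its task: alignment with the mean task gradient.
influence = {t: g @ g.mean(dim=0) for t, g in grads.items()}
# Task difficulty proxy: average gradient magnitude on that task.
difficulty = {t: g.norm(dim=1).mean() for t, g in grads.items()}

# Stage 2: allocate the budget across tasks in proportion to difficulty,
# then keep the highest-influence instances within each task.
total_difficulty = sum(difficulty.values())
selected = []
for t, data in tasks.items():
    k = max(1, int(budget * (difficulty[t] / total_difficulty)))
    top = influence[t].topk(min(k, len(data))).indices
    selected.extend((t, i.item()) for i in top)

print(f"kept {len(selected)} of {sum(len(d) for d in tasks.values())} instances")
```

In this sketch, difficulty-weighted budgeting keeps more data for tasks where the model's gradients are large (i.e., where it still has more to learn), mirroring the abstract's idea of using task difficulty to set per-task proportions before ranking instances within each task.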

Authors (6)
  1. Zikang Liu (11 papers)
  2. Kun Zhou (217 papers)
  3. Wayne Xin Zhao (196 papers)
  4. Dawei Gao (27 papers)
  5. Yaliang Li (117 papers)
  6. Ji-Rong Wen (299 papers)