Lever LM: Configuring In-Context Sequence to Lever Large Vision Language Models (2312.10104v4)

Published 15 Dec 2023 in cs.CV, cs.CL, and cs.LG

Abstract: As Archimedes famously said, "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." In this study, we propose to use a tiny language model (LM), e.g., a Transformer with 67M parameters, to lever much larger Vision-Language Models (LVLMs) with 9B parameters. Specifically, we use this tiny Lever-LM to configure effective in-context demonstration (ICD) sequences that improve the In-Context Learning (ICL) performance of LVLMs. Previous studies show that diverse ICD configurations, such as the selection and ordering of the demonstrations, heavily affect ICL performance, highlighting the significance of configuring effective ICD sequences. Motivated by this, and by re-considering the process of configuring an ICD sequence, we find that it mirrors human sentence composition, and we further assume that effective ICD configurations contain internal statistical patterns that Lever-LM can capture. We then construct a dataset of effective ICD sequences to train Lever-LM. After training, given a novel query, the trained Lever-LM configures new ICD sequences to solve vision-language tasks through ICL. Experiments show that these ICD sequences improve the ICL performance of two LVLMs over strong baselines in Visual Question Answering and Image Captioning, validating that Lever-LM can indeed capture the statistical patterns needed to lever LVLMs. The code is available at https://github.com/ForJadeForest/Lever-LM.
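For intuition, here is a minimal, hypothetical sketch of the core idea in PyTorch: a tiny causal Transformer that, conditioned on a query embedding, autoregressively picks demonstration indices from a fixed support pool. All names, dimensions, and the greedy decoding loop (`TinyLeverLM`, `configure_icd_sequence`, `pool_size`, `d_model`) are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
import torch
import torch.nn as nn

class TinyLeverLM(nn.Module):
    """Illustrative sketch: a small causal Transformer that scores which
    in-context demonstration (ICD) to append next, given the query.
    Sizes and structure are assumptions, not the paper's exact model."""

    def __init__(self, pool_size: int, d_model: int = 256,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.pool_size = pool_size
        # Tokens 0..pool_size-1 index demonstrations; pool_size is a <bos> token.
        self.tok_emb = nn.Embedding(pool_size + 1, d_model)
        self.query_proj = nn.Linear(d_model, d_model)  # map query feature to model space
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, pool_size)  # logits over candidate ICDs

    def forward(self, query_feat: torch.Tensor, prefix: torch.Tensor) -> torch.Tensor:
        # query_feat: (B, d_model) pooled query embedding (e.g., from CLIP)
        # prefix:     (B, T) indices of already-chosen ICDs, starting with <bos>
        x = torch.cat([self.query_proj(query_feat).unsqueeze(1),
                       self.tok_emb(prefix)], dim=1)            # (B, T+1, d)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.blocks(x, mask=mask)                           # causal self-attention
        return self.head(h[:, -1])                              # next-ICD logits

@torch.no_grad()
def configure_icd_sequence(model: TinyLeverLM, query_feat: torch.Tensor,
                           shots: int = 4) -> list[int]:
    """Greedily decode an ICD sequence for one query."""
    prefix = torch.tensor([[model.pool_size]])                  # <bos>
    chosen: list[int] = []
    for _ in range(shots):
        logits = model(query_feat, prefix)
        if chosen:                                              # forbid repeats
            logits[0, chosen] = float("-inf")
        nxt = int(logits.argmax(dim=-1))
        chosen.append(nxt)
        prefix = torch.cat([prefix, torch.tensor([[nxt]])], dim=1)
    return chosen

model = TinyLeverLM(pool_size=1000)
icds = configure_icd_sequence(model, torch.randn(1, 256), shots=4)
print(icds)  # e.g., [412, 87, 903, 15]: indices into the demonstration pool
```

Under these assumptions, training reduces to next-token prediction over demonstration indices on a dataset of ICD sequences known to perform well, which is exactly the kind of internal statistical pattern the abstract argues a 67M-parameter LM can capture.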

Authors (7)
  1. Yingzhe Peng (7 papers)
  2. Xu Yang (222 papers)
  3. Haoxuan Ma (10 papers)
  4. Shuo Xu (16 papers)
  5. Chi Zhang (566 papers)
  6. Yucheng Han (9 papers)
  7. Hanwang Zhang (161 papers)
Citations (2)