Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination (2210.12261v1)

Published 21 Oct 2022 in cs.CL and cs.CV

Abstract: Large-scale pretrained LLMs have made significant advances in solving downstream language understanding tasks. However, they generally suffer from reporting bias, the phenomenon describing the lack of explicit commonsense knowledge in written text, e.g., "an orange is orange". To overcome this limitation, we develop a novel approach, Z-LaVI, to endow LLMs with visual imagination capabilities. Specifically, we leverage two complementary types of "imaginations": (i) recalling existing images through retrieval and (ii) synthesizing nonexistent images via text-to-image generation. Jointly exploiting the language inputs and the imagination, a pretrained vision-language model (e.g., CLIP) eventually composes a zero-shot solution to the original language tasks. Notably, fueling LLMs with imagination can effectively leverage visual knowledge to solve plain language tasks. Consequently, Z-LaVI consistently improves the zero-shot performance of existing LLMs across a diverse set of language tasks.
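The abstract describes an ensemble of a text-only zero-shot prediction with a CLIP-based prediction over "imagined" (retrieved or synthesized) images. The sketch below illustrates that combination under stated assumptions: it is not the authors' released code, the function name `zlavi_scores`, the checkpoint choice, and the interpolation `weight` are illustrative, and the imagined images and language-model probabilities are assumed to be precomputed.

```python
# Minimal sketch of the Z-LaVI ensembling idea (assumption: imagined images
# for the input have already been retrieved/generated as PIL images, and a
# language model has produced per-candidate probabilities `lm_probs`).
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zlavi_scores(candidates, imagined_images, lm_probs, weight=0.5):
    """Combine language-model scores with CLIP scores over imagined images.

    candidates      : list[str]       answer options for the language task
    imagined_images : list[PIL.Image] images retrieved or synthesized for the input
    lm_probs        : torch.Tensor    shape (num_candidates,), text-only probabilities
    weight          : float           interpolation weight for the visual channel
    """
    inputs = processor(text=candidates, images=imagined_images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_text has shape (num_candidates, num_images);
    # average over all imagined images before normalizing.
    visual_probs = out.logits_per_text.mean(dim=1).softmax(dim=0)
    # Weighted ensemble of the textual and visual zero-shot predictions.
    return (1 - weight) * lm_probs + weight * visual_probs
```

The weighted average is one simple way to fuse the two channels; the paper's actual aggregation and weighting may differ.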

Authors (6)
  1. Yue Yang (146 papers)
  2. Wenlin Yao (38 papers)
  3. Hongming Zhang (111 papers)
  4. Xiaoyang Wang (134 papers)
  5. Dong Yu (328 papers)
  6. Jianshu Chen (66 papers)
Citations (19)