Visually-Augmented Language Modeling (2205.10178v2)
Abstract: Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VaLM builds on a novel latent text-image alignment method via an image retrieval module that fetches corresponding images given a textual context. With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in images. We evaluate VaLM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results show that VaLM outperforms all strong language-only and vision-language baselines, with substantial gains in reasoning about object commonsense including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.
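To make the architecture described in the abstract concrete, here is a minimal, hypothetical sketch of what a "visual knowledge fusion layer" could look like: text hidden states cross-attend to retrieved image embeddings, and a learned gate blends the visual signal back into the text stream. The class name, dimensions, and gating mechanism are assumptions for illustration, not the paper's implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn


class VisualKnowledgeFusionLayer(nn.Module):
    """Hypothetical sketch: fuse text hidden states with retrieved image
    embeddings via cross-attention, then merge through a gated residual."""

    def __init__(self, d_model: int = 768, n_heads: int = 12, d_image: int = 512):
        super().__init__()
        # Project CLIP-style image embeddings into the language model's hidden space.
        self.image_proj = nn.Linear(d_image, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: starts as text-only
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_hidden: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_hidden: (batch, seq_len, d_model) from the language model
        # image_feats: (batch, k_images, d_image) retrieved for the textual context
        img = self.image_proj(image_feats)
        attended, _ = self.cross_attn(query=text_hidden, key=img, value=img)
        # Gated residual: the layer can learn to ignore unhelpful visual input.
        return self.norm(text_hidden + torch.tanh(self.gate) * attended)


if __name__ == "__main__":
    layer = VisualKnowledgeFusionLayer()
    text = torch.randn(2, 16, 768)    # toy text hidden states
    images = torch.randn(2, 4, 512)   # toy retrieved image embeddings (k = 4)
    print(layer(text, images).shape)  # torch.Size([2, 16, 768])
```

In this sketch the zero-initialized gate means the fused layer initially behaves like the underlying text-only model, which is one plausible way to add visual grounding without disrupting pre-trained language behavior; the paper's actual fusion design may differ.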
- Weizhi Wang (18 papers)
- Li Dong (154 papers)
- Hao Cheng (190 papers)
- Haoyu Song (21 papers)
- Xiaodong Liu (162 papers)
- Xifeng Yan (52 papers)
- Jianfeng Gao (344 papers)
- Furu Wei (291 papers)