
Visually-Augmented Language Modeling (2205.10178v2)

Published 20 May 2022 in cs.CL

Abstract: Human language is grounded on multimodal knowledge including visual knowledge like colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for language modeling. Specifically, VaLM builds on a novel latent text-image alignment method via an image retrieval module to fetch corresponding images given a textual context. With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both text context and visual knowledge in images. We evaluate VaLM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results illustrate that VaLM outperforms all strong language-only and vision-language baselines with substantial gains in reasoning object commonsense including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.
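The abstract describes two components: a retrieval module that fetches images relevant to a textual context, and a visual knowledge fusion layer that attends over both the text context and the retrieved image features. The paper's actual implementation is in the linked repository; the sketch below is only an illustration of that two-step idea, assuming CLIP-style embeddings in a shared space, with all dimensions, the gating scheme, and the function names being hypothetical choices rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F
from torch import nn


class VisualKnowledgeFusion(nn.Module):
    """Illustrative sketch: cross-attention from text hidden states to
    retrieved image embeddings, gated back into the text stream.
    The gating and dimensions are assumptions, not VaLM's exact layer."""

    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, text_states: torch.Tensor, image_embeds: torch.Tensor) -> torch.Tensor:
        # text_states: (batch, seq_len, d_model) from the language model
        # image_embeds: (batch, k_images, d_model) retrieved per context
        visual, _ = self.cross_attn(text_states, image_embeds, image_embeds)
        # Gate decides, per position, how much visual signal to mix in.
        gate = torch.sigmoid(self.gate(torch.cat([text_states, visual], dim=-1)))
        return text_states + gate * visual


def retrieve_images(text_embed: torch.Tensor, image_bank: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Nearest-neighbour lookup over a precomputed image-embedding bank
    (e.g., CLIP image features); a stand-in for the retrieval module."""
    sims = F.normalize(text_embed, dim=-1) @ F.normalize(image_bank, dim=-1).T
    topk = sims.topk(k, dim=-1).indices   # (batch, k)
    return image_bank[topk]               # (batch, k, d_model)


# Toy usage with random tensors standing in for real embeddings.
bank = torch.randn(1000, 768)             # hypothetical image-embedding bank
ctx = torch.randn(2, 16, 768)             # text hidden states
query = ctx.mean(dim=1)                   # pooled context as retrieval query
images = retrieve_images(query, bank)     # (2, 4, 768)
fusion = VisualKnowledgeFusion()
out = fusion(ctx, images)                 # (2, 16, 768)
```

Retrieving against frozen, precomputed image embeddings keeps the lookup cheap at training time; the fusion layer then only has to learn when and how strongly to attend to that visual context.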

Authors (8)
  1. Weizhi Wang
  2. Li Dong
  3. Hao Cheng
  4. Haoyu Song
  5. Xiaodong Liu
  6. Xifeng Yan
  7. Jianfeng Gao
  8. Furu Wei