Visual Grounding Strategies for Text-Only Natural Language Processing (2103.13942v1)
Abstract: Visual grounding is a promising path toward more robust and accurate NLP models. Many multimodal extensions of BERT (e.g., VideoBERT, LXMERT, VL-BERT) allow joint modeling of texts and images, leading to state-of-the-art results on multimodal tasks such as Visual Question Answering. Here, we leverage multimodal modeling for purely textual tasks (language modeling and classification) with the expectation that the multimodal pretraining provides a grounding that can improve text processing accuracy. We propose possible strategies in this respect. A first type of strategy, referred to as {\it transferred grounding}, consists in applying multimodal models to text-only tasks using a placeholder to replace image input. The second one, which we call {\it associative grounding}, harnesses image retrieval to match texts with related images during both pretraining and text-only downstream tasks. We draw further distinctions within both strategies and then compare them according to their impact on language modeling and commonsense-related downstream tasks, showing improvements over text-only baselines.
- Damien Sileo
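
The {\it transferred grounding} idea of the abstract can be illustrated with a minimal sketch: running a multimodal encoder on text-only input by substituting a zero-filled placeholder for the image features. The snippet below uses LXMERT via HuggingFace Transformers; the checkpoint name, the choice of 36 placeholder regions, and the use of zeros as the placeholder are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of "transferred grounding": a multimodal model applied to a
# text-only task with a placeholder standing in for real image input.
import torch
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

text = "A dog chases a ball across the park."
inputs = tokenizer(text, return_tensors="pt")

# Placeholder visual input: zeros instead of detector region features.
# Shapes follow LXMERT's defaults (2048-d features, 4-d box coordinates);
# 36 regions is an assumed, commonly used value.
batch_size, num_regions = 1, 36
visual_feats = torch.zeros(batch_size, num_regions, 2048)
visual_pos = torch.zeros(batch_size, num_regions, 4)

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    visual_feats=visual_feats,
    visual_pos=visual_pos,
)

# The pooled representation can then feed a text-only classification head.
pooled = outputs.pooled_output
print(pooled.shape)  # torch.Size([1, 768])
```

In the {\it associative grounding} variant, the zero placeholder would instead be replaced by features of images retrieved for the input text, keeping the rest of the pipeline unchanged.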