What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge (2205.07065v1)
Published 14 May 2022 in cs.CL
Abstract: There are limitations in learning language from text alone. Therefore, recent focus has been on developing multimodal models. However, few benchmarks exist that can measure what LLMs learn about language from multimodal training. We hypothesize that training on a visual modality should improve the visual commonsense knowledge of LLMs. Therefore, we introduce two evaluation tasks for measuring visual commonsense knowledge in LLMs and use them to evaluate different multimodal models and unimodal baselines. Primarily, we find that visual commonsense knowledge does not differ significantly between the multimodal models and unimodal baseline models trained on visual text data.
- Lovisa Hagström (8 papers)
- Richard Johansson (18 papers)
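
The abstract does not spell out how the two evaluation tasks are constructed, but a common way to probe commonsense knowledge in a masked language model is a cloze-style query over visually grounded properties. The sketch below is an illustrative assumption, not the paper's actual benchmark: it uses the Hugging Face `transformers` fill-mask pipeline with `bert-base-uncased` and a few hypothetical prompts about color and shape.

```python
# Minimal sketch of a cloze-style probe for visual commonsense knowledge.
# Model name and prompts are illustrative assumptions, not the paper's tasks.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Prompts querying visually grounded properties (color, shape).
prompts = [
    "The color of a ripe banana is [MASK].",
    "The shape of a typical wheel is [MASK].",
]

for prompt in prompts:
    predictions = fill_mask(prompt, top_k=3)      # top-3 token completions
    top_tokens = [p["token_str"] for p in predictions]
    print(f"{prompt} -> {top_tokens}")
```

Comparing such completions between a text-only model and a multimodally trained model (or one trained on caption-like "visual text") is one way to operationalize the comparison the abstract describes.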