
What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge (2205.07065v1)

Published 14 May 2022 in cs.CL

Abstract: There are limitations in learning language from text alone. Therefore, recent focus has been on developing multimodal models. However, few benchmarks exist that can measure what LLMs learn about language from multimodal training. We hypothesize that training on a visual modality should improve the visual commonsense knowledge of LLMs. Therefore, we introduce two evaluation tasks for measuring visual commonsense knowledge in LLMs and use them to evaluate different multimodal models and unimodal baselines. Primarily, we find that visual commonsense knowledge does not differ significantly between the multimodal models and unimodal baseline models trained on visual text data.
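
The abstract does not spell out the two evaluation tasks, but probing an LLM for visual commonsense is often done with cloze-style queries about visual attributes. The sketch below is a minimal, hypothetical illustration of that general idea (the prompt wording and model choice are assumptions, not the paper's actual benchmark):

```python
from transformers import pipeline

# Hedged sketch: query a masked language model for a visual attribute
# (object color) and inspect the top predictions. This is one generic way
# to probe visual commonsense knowledge; it is not the paper's benchmark.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

prompt = "The color of a ripe banana is [MASK]."
for prediction in fill_mask(prompt, top_k=5):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```

A model with strong visual commonsense should rank plausible colors (e.g., "yellow") highly; comparing such rankings across multimodal models and text-only baselines mirrors the kind of comparison the abstract describes.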

Authors (2)
  1. Lovisa Hagström (8 papers)
  2. Richard Johansson (18 papers)
Citations (4)