Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? (2109.11321v2)

Published 23 Sep 2021 in cs.CL

Abstract: LLMs are known to suffer from the hallucination problem in that they are prone to output statements that are false or inconsistent, indicating a lack of knowledge. A proposed solution to this is to provide the model with additional data modalities that complement the knowledge obtained through text. We investigate the use of visual data to complement the knowledge of LLMs by proposing a method for evaluating visual knowledge transfer to text for uni- or multimodal LLMs. The method is based on two steps: 1) a novel task querying for knowledge of memory colors, i.e. typical colors of well-known objects, and 2) filtering of the model training data to clearly separate knowledge contributions. Additionally, we introduce a model architecture that involves a visual imagination step and evaluate it with our proposed method. We find that our method can successfully be used to measure visual knowledge transfer capabilities in models and that our novel model architecture shows promising results for leveraging multimodal knowledge in a unimodal setting.
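As a rough illustration of the cloze-style memory-color querying the abstract describes, the sketch below probes a masked language model for the typical color of everyday objects. The prompt template, candidate color list, example objects, and the choice of bert-base-uncased are assumptions made for illustration only, not the paper's actual probing protocol or dataset.

```python
from typing import Optional
from transformers import pipeline

# Hypothetical memory-color probe: ask a masked LM to fill in the typical
# color of a well-known object. Prompt wording and color vocabulary are
# illustrative assumptions, not the paper's setup.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

COLORS = {"red", "orange", "yellow", "green", "blue", "purple", "brown",
          "black", "white", "gray", "pink"}

def predicted_color(obj: str, top_k: int = 20) -> Optional[str]:
    """Return the highest-ranked color token among the model's top predictions."""
    prompt = f"The color of a {obj} is [MASK]."
    for candidate in fill_mask(prompt, top_k=top_k):
        token = candidate["token_str"].strip().lower()
        if token in COLORS:
            return token
    return None  # no color word among the top predictions

for obj in ["banana", "strawberry", "polar bear"]:
    print(obj, "->", predicted_color(obj))
```

Comparing such predictions against human-annotated memory colors gives an accuracy score for how much visual (color) knowledge a text-only or multimodal model has absorbed, which is the kind of measurement the proposed evaluation method formalizes.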

Authors (3)
  1. Tobias Norlund (6 papers)
  2. Lovisa Hagström (8 papers)
  3. Richard Johansson (18 papers)
Citations (23)