
KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation (2101.00419v2)

Published 2 Jan 2021 in cs.CL

Abstract: We present Knowledge Enhanced Multimodal BART (KM-BART), which is a Transformer-based sequence-to-sequence model capable of reasoning about commonsense knowledge from multimodal inputs of images and texts. We adapt the generative BART architecture to a multimodal model with visual and textual inputs. We further develop novel pretraining tasks to improve the model performance on the Visual Commonsense Generation (VCG) task. In particular, our pretraining task of Knowledge-based Commonsense Generation (KCG) boosts model performance on the VCG task by leveraging commonsense knowledge from a language model pretrained on external commonsense knowledge graphs. To the best of our knowledge, we are the first to propose a dedicated task for improving model performance on the VCG task. Experimental results show that our model reaches state-of-the-art performance on the VCG task by applying these novel pretraining tasks.
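The abstract describes adapting BART's encoder to accept both visual and textual inputs. A minimal sketch of one common way to do this, interleaving image region features with text token embeddings into a single encoder input sequence, is shown below. The function name, special-token layout, and toy dimensions are illustrative assumptions, not KM-BART's actual implementation.

```python
# Illustrative sketch (not the authors' code): combining image region
# features and text token embeddings into one encoder input sequence
# for a BART-style multimodal seq2seq model.

def build_encoder_input(visual_feats, text_embeds, img_token, eos_token):
    """Concatenate visual features and text embeddings, with special
    tokens marking the modality boundary (an assumption; KM-BART's
    exact special-token scheme may differ)."""
    return [img_token] + visual_feats + [eos_token] + text_embeds

# Toy 2-dimensional "embeddings" for demonstration only.
visual = [[0.1, 0.2], [0.3, 0.4]]   # e.g. region features from an object detector
text = [[0.5, 0.5], [0.6, 0.1]]     # e.g. embedded event-description tokens
seq = build_encoder_input(visual, text,
                          img_token=[1.0, 0.0], eos_token=[0.0, 1.0])
print(len(seq))  # 6: one <img> marker, two regions, one <eos>, two text tokens
```

The combined sequence is then consumed by the standard Transformer encoder, so the decoder can attend jointly over both modalities when generating commonsense inferences.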

Authors (6)
  1. Yiran Xing (2 papers)
  2. Zai Shi (2 papers)
  3. Zhao Meng (14 papers)
  4. Gerhard Lakemeyer (19 papers)
  5. Yunpu Ma (57 papers)
  6. Roger Wattenhofer (212 papers)
Citations (38)