
X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers (2009.11278v1)

Published 23 Sep 2020 in cs.CV, cs.AI, cs.CL, and cs.LG

Abstract: Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT and UNITER have achieved state of the art performance on a variety of multimodal discriminative tasks like visual question answering and visual grounding. Recent work has also successfully adapted such models towards the generative task of image captioning. This begs the question: Can these models go the other way and generate images from pieces of text? Our analysis of a popular representative from this model family - LXMERT - finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. We introduce X-LXMERT, an extension to LXMERT with training refinements including: discretizing visual representations, using uniform masking with a large range of masking ratios and aligning the right pre-training datasets to the right objectives which enables it to paint. X-LXMERT's image generation capabilities rival state of the art generative models while its question answering and captioning abilities remain comparable to LXMERT. Finally, we demonstrate the generality of these training refinements by adding image generation capabilities into UNITER to produce X-UNITER.
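One of the training refinements named in the abstract is uniform masking over a large range of masking ratios applied to discretized visual representations. The sketch below illustrates that idea in a minimal, hedged form: the ratio bounds, mask token ID, and function name are illustrative assumptions, not details taken from the paper.

```python
import random

def mask_visual_tokens(tokens, mask_id, lo=0.1, hi=1.0):
    """Sample a masking ratio uniformly from [lo, hi], then mask that
    fraction of discretized visual tokens.

    Illustrative sketch of uniform masking with a wide ratio range;
    the bounds (lo, hi) and mask_id are assumptions, not values from
    the X-LXMERT paper.
    """
    ratio = random.uniform(lo, hi)
    n_mask = max(1, int(round(ratio * len(tokens))))
    positions = random.sample(range(len(tokens)), n_mask)
    masked = list(tokens)
    for p in positions:
        masked[p] = mask_id  # replace the visual token with the mask symbol
    return masked, sorted(positions)
```

Sampling the ratio uniformly (rather than fixing it at a small constant, as in standard masked pretraining) exposes the model to near-fully-masked inputs, which is the regime it faces when generating an image from scratch.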

Authors (5)
  1. Jaemin Cho (36 papers)
  2. Jiasen Lu (32 papers)
  3. Dustin Schwenk (15 papers)
  4. Hannaneh Hajishirzi (176 papers)
  5. Aniruddha Kembhavi (79 papers)
Citations (98)
