MixGen: A New Multi-Modal Data Augmentation (2206.08358v3)

Published 16 Jun 2022 in cs.CV, cs.AI, and cs.LG

Abstract: Data augmentation is a necessity to enhance data efficiency in deep learning. For vision-language pre-training, previous works augment data only for images or only for text. In this paper, we present MixGen: a joint data augmentation for vision-language representation learning to further improve data efficiency. It generates new image-text pairs with semantic relationships preserved by interpolating images and concatenating text. It is simple and can be plugged into existing pipelines. We evaluate MixGen on four architectures, including CLIP, ViLT, ALBEF and TCL, across five downstream vision-language tasks to show its versatility and effectiveness. For example, adding MixGen in ALBEF pre-training leads to absolute performance improvements on downstream tasks: image-text retrieval (+6.2% on COCO fine-tuned and +5.3% on Flickr30K zero-shot), visual grounding (+0.9% on RefCOCO+), visual reasoning (+0.9% on NLVR2), visual question answering (+0.3% on VQA2.0), and visual entailment (+0.4% on SNLI-VE).
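The augmentation described in the abstract (linear interpolation of two images paired with concatenation of their captions) can be sketched in a few lines. The snippet below is a minimal illustration only, assuming a PyTorch image batch and a parallel list of caption strings; the function name `mixgen`, the in-batch pairing scheme, and the interpolation weight `lam=0.5` are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def mixgen(images, texts, num_aug, lam=0.5):
    """Sketch of a MixGen-style joint augmentation.

    images:  tensor of shape (B, C, H, W)
    texts:   list of B caption strings
    num_aug: number of synthetic image-text pairs to create
    lam:     image interpolation weight (0.5 here is an assumed default)
    """
    aug_images, aug_texts = [], []
    for i in range(num_aug):
        # Pair each sample with another sample from the same batch.
        j = (i + num_aug) % len(texts)
        # Interpolate the two images pixel-wise.
        mixed = lam * images[i] + (1.0 - lam) * images[j]
        # Concatenate the two captions so the text still describes both images.
        caption = texts[i] + " " + texts[j]
        aug_images.append(mixed)
        aug_texts.append(caption)
    return torch.stack(aug_images), aug_texts
```

Because the new image is a blend of two inputs and the new caption mentions the content of both, the semantic correspondence between image and text is largely preserved, which is what lets the synthetic pairs be mixed into pre-training batches without extra supervision.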

Authors (7)
  1. Xiaoshuai Hao (34 papers)
  2. Yi Zhu (233 papers)
  3. Srikar Appalaraju (21 papers)
  4. Aston Zhang (48 papers)
  5. Wanqian Zhang (8 papers)
  6. Bo Li (1107 papers)
  7. Mu Li (95 papers)
Citations (70)
