OPT: Omni-Perception Pre-Trainer for Cross-Modal Understanding and Generation (2107.00249v2)
Abstract: In this paper, we propose an Omni-perception Pre-Trainer (OPT) for cross-modal understanding and generation that jointly models visual, textual, and audio resources. OPT is constructed in an encoder-decoder framework comprising three single-modal encoders that generate token-based embeddings for each modality, a cross-modal encoder that encodes the correlations among the three modalities, and two cross-modal decoders that generate text and images, respectively. For OPT's pre-training, we design a multi-task pretext learning scheme that models multi-modal resources at three data granularities, i.e., token-, modality-, and sample-level modeling, through which OPT learns to align and translate among different modalities. Pre-training is carried out on a large number of image-text-audio triplets from Open Images. Experimental results show that OPT learns strong image-text-audio multi-modal representations and achieves promising results on a variety of cross-modal understanding and generation tasks.
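The abstract describes a three-encoder plus cross-modal-encoder layout with two generation decoders. The sketch below is a minimal, hypothetical rendering of that layout in PyTorch, not the authors' released code: all module names, dimensions, and vocabulary sizes are illustrative assumptions, and the single-modal encoders are stand-ins that operate on already-tokenized embeddings.

```python
# Minimal, hypothetical sketch of the OPT layout described in the abstract:
# three single-modal encoders -> shared cross-modal encoder -> two decoders.
# All module/dimension names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class OPTSketch(nn.Module):
    def __init__(self, d_model=768, n_heads=12, n_layers=3,
                 text_vocab=30522, image_vocab=8192):
        super().__init__()
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Single-modal encoders produce token-based embeddings per modality.
        self.text_enc, self.image_enc, self.audio_enc = enc(), enc(), enc()
        # Cross-modal encoder models correlations among the three modalities.
        self.cross_enc = enc()
        # Two cross-modal decoders: one generates text tokens, the other
        # image tokens (e.g., discrete visual codes); heads are illustrative.
        dec = lambda: nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        self.text_dec, self.image_dec = dec(), dec()
        self.text_head = nn.Linear(d_model, text_vocab)
        self.image_head = nn.Linear(d_model, image_vocab)

    def forward(self, text_emb, image_emb, audio_emb, text_tgt, image_tgt):
        # Each *_emb is (batch, seq_len, d_model); *_tgt are decoder inputs.
        fused = self.cross_enc(torch.cat([
            self.text_enc(text_emb),
            self.image_enc(image_emb),
            self.audio_enc(audio_emb)], dim=1))
        text_logits = self.text_head(self.text_dec(text_tgt, fused))
        image_logits = self.image_head(self.image_dec(image_tgt, fused))
        return fused, text_logits, image_logits
```

The fused sequence would serve the token-, modality-, and sample-level pretext tasks mentioned in the abstract, while the two decoders cover the text- and image-generation objectives; the specific losses are described in the paper itself.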
- Jing Liu
- Xinxin Zhu
- Fei Liu
- Longteng Guo
- Zijia Zhao
- Mingzhen Sun
- Weining Wang
- Hanqing Lu
- Shiyu Zhou
- Jiajun Zhang
- Jinqiao Wang