i-Code: An Integrative and Composable Multimodal Learning Framework (2205.01818v2)
Abstract: Human intelligence is multimodal; we integrate visual, linguistic, and acoustic signals to maintain a holistic worldview. Most current pretraining methods, however, are limited to one or two modalities. We present i-Code, a self-supervised pretraining framework where users may flexibly combine the modalities of vision, speech, and language into unified and general-purpose vector representations. In this framework, data from each modality are first given to pretrained single-modality encoders. The encoder outputs are then integrated with a multimodal fusion network, which uses novel attention mechanisms and other architectural innovations to effectively combine information from the different modalities. The entire system is pretrained end-to-end with new objectives including masked modality unit modeling and cross-modality contrastive learning. Unlike previous research using only video for pretraining, the i-Code framework can dynamically process single, dual, and triple-modality data during training and inference, flexibly projecting different combinations of modalities into a single representation space. Experimental results demonstrate how i-Code can outperform state-of-the-art techniques on five video understanding tasks and the GLUE NLP benchmark, improving by as much as 11% and demonstrating the power of integrative multimodal pretraining.
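The abstract describes a pipeline of pretrained single-modality encoders feeding an attention-based multimodal fusion network, trained with masked modality unit modeling and cross-modality contrastive objectives. Below is a minimal conceptual sketch of that pipeline, assuming a PyTorch setting; every module name, dimension, and loss choice here is an illustrative assumption based only on the abstract, not the authors' implementation.

```python
# Conceptual sketch of the i-Code pipeline as summarized in the abstract:
# per-modality encoders -> attention-based fusion -> cross-modality contrastive term.
# All names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Stand-in for a pretrained single-modality encoder (vision, speech, or language)."""
    def __init__(self, input_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(input_dim, hidden_dim)

    def forward(self, x):                      # x: (batch, seq_len, input_dim)
        return self.proj(x)                    # (batch, seq_len, hidden_dim)


class FusionNetwork(nn.Module):
    """Transformer encoder over concatenated modality sequences (attention-based fusion)."""
    def __init__(self, hidden_dim: int = 256, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, sequences):              # list of (batch, seq_len_i, hidden_dim)
        fused = self.encoder(torch.cat(sequences, dim=1))
        return fused.mean(dim=1)               # pooled joint representation (batch, hidden_dim)


def contrastive_loss(a, b, temperature: float = 0.07):
    """InfoNCE-style cross-modality contrastive loss between pooled representations."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    batch, hidden = 4, 256
    # Toy inputs: any subset of {vision, speech, language} may be present,
    # mirroring the framework's single-, dual-, and triple-modality settings.
    vision = torch.randn(batch, 16, 512)       # e.g. frame features
    speech = torch.randn(batch, 32, 128)       # e.g. acoustic features
    text   = torch.randn(batch, 20, 768)       # e.g. token embeddings

    enc_v, enc_s, enc_t = ModalityEncoder(512, hidden), ModalityEncoder(128, hidden), ModalityEncoder(768, hidden)
    fusion = FusionNetwork(hidden)

    h_v, h_s, h_t = enc_v(vision), enc_s(speech), enc_t(text)
    joint = fusion([h_v, h_s, h_t])            # triple-modality input; dual/single also valid

    # Illustrative cross-modality contrastive term between pooled vision and text streams.
    loss = contrastive_loss(h_v.mean(dim=1), h_t.mean(dim=1))
    print(joint.shape, loss.item())
```

In practice the paper also uses masked modality unit modeling as a pretraining objective; that term is omitted here to keep the sketch short, and the pooled fusion output stands in for the unified representation space described above.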
- Ziyi Yang
- Yuwei Fang
- Chenguang Zhu
- Reid Pryzant
- Dongdong Chen
- Yu Shi
- Yichong Xu
- Yao Qian
- Mei Gao
- Yi-Ling Chen
- Liyang Lu
- Yujia Xie
- Robert Gmyr
- Noel Codella
- Naoyuki Kanda
- Bin Xiao
- Lu Yuan
- Takuya Yoshioka
- Michael Zeng
- Xuedong Huang