
M5Product: Self-harmonized Contrastive Learning for E-commercial Multi-modal Pretraining (2109.04275v5)

Published 9 Sep 2021 in cs.CV and cs.MM

Abstract: Despite the potential of multi-modal pre-training to learn highly discriminative feature representations from complementary data modalities, current progress is being slowed by the lack of large-scale modality-diverse datasets. By leveraging the natural suitability of E-commerce, where different modalities capture complementary semantic information, we contribute a large-scale multi-modal pre-training dataset M5Product. The dataset comprises 5 modalities (image, text, table, video, and audio), covers over 6,000 categories and 5,000 attributes, and is 500× larger than the largest publicly available dataset with a similar number of modalities. Furthermore, M5Product contains incomplete modality pairs and noise while also having a long-tailed distribution, resembling most real-world problems. We further propose Self-harmonized ContrAstive LEarning (SCALE), a novel pretraining framework that integrates the different modalities into a unified model through an adaptive feature fusion mechanism, where the importance of each modality is learned directly from the modality embeddings and impacts the inter-modality contrastive learning and masked tasks within a multi-modal transformer model. We evaluate the current multi-modal pre-training state-of-the-art approaches and benchmark their ability to learn from unlabeled data when faced with the large number of modalities in the M5Product dataset. We conduct extensive experiments on four downstream tasks and demonstrate the superiority of our SCALE model, providing insights into the importance of dataset scale and diversity.
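The abstract describes SCALE's adaptive feature fusion, in which a per-modality importance score is predicted from each modality's embedding and then modulates the inter-modality contrastive objectives. The following is a minimal PyTorch sketch of that idea, assuming each of the five modalities has already been encoded into a fixed-size embedding; the module names, the softmax weighting, and the pairwise InfoNCE weighting scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveModalityFusion(nn.Module):
    """Sketch: learn a scalar importance per modality from its own embedding."""

    def __init__(self, dim: int):
        super().__init__()
        # One importance score per modality, predicted from the modality embedding.
        self.scorer = nn.Linear(dim, 1)

    def forward(self, modality_embs: torch.Tensor):
        # modality_embs: (batch, num_modalities, dim)
        scores = self.scorer(modality_embs).squeeze(-1)         # (batch, M)
        weights = F.softmax(scores, dim=-1)                     # learned modality importance
        fused = (weights.unsqueeze(-1) * modality_embs).sum(1)  # weighted fusion -> (batch, dim)
        return fused, weights


def weighted_inter_modality_contrastive(modality_embs, weights, temperature=0.07):
    """Sketch: InfoNCE between every modality pair, scaled by learned importances."""
    B, M, _ = modality_embs.shape
    embs = F.normalize(modality_embs, dim=-1)
    total, norm = 0.0, 0.0
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            logits = embs[:, i] @ embs[:, j].T / temperature    # (B, B) similarity matrix
            targets = torch.arange(B, device=logits.device)     # matching items are positives
            pair_w = (weights[:, i] * weights[:, j]).mean()     # downweight unimportant pairs
            total = total + pair_w * F.cross_entropy(logits, targets)
            norm = norm + pair_w
    return total / norm.clamp_min(1e-8)


# Usage sketch: five modality embeddings per product (image, text, table, video, audio).
fusion = AdaptiveModalityFusion(dim=256)
embs = torch.randn(8, 5, 256)
fused, w = fusion(embs)
loss = weighted_inter_modality_contrastive(embs, w)
```

In this sketch the same learned weights could also gate the masked-prediction tasks mentioned in the abstract; how SCALE combines the weighted objectives in detail is specified in the paper itself.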

Authors (9)
  1. Xiao Dong (62 papers)
  2. Xunlin Zhan (5 papers)
  3. Yangxin Wu (5 papers)
  4. Yunchao Wei (151 papers)
  5. Michael C. Kampffmeyer (13 papers)
  6. Xiaoyong Wei (16 papers)
  7. Minlong Lu (5 papers)
  8. Yaowei Wang (149 papers)
  9. Xiaodan Liang (318 papers)
Citations (31)