Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark (2202.06767v4)

Published 14 Feb 2022 in cs.CV and cs.LG

Abstract: Vision-Language Pre-training (VLP) models have shown remarkable performance on various downstream tasks. Their success relies heavily on the scale of pre-trained cross-modal datasets. However, the lack of large-scale datasets and benchmarks in Chinese hinders the development of Chinese VLP models and broader multilingual applications. In this work, we release a large-scale Chinese cross-modal dataset named Wukong, which contains 100 million Chinese image-text pairs collected from the web. Wukong aims to benchmark different multi-modal pre-training methods to facilitate VLP research and community development. Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity in contrastive learning, and reduced-token interaction. Extensive experiments and a benchmarking of different downstream tasks, including a new, currently largest human-verified image-text test dataset, are also provided. Experiments show that Wukong can serve as a promising Chinese pre-training dataset and benchmark for different cross-modal learning methods. For the zero-shot image classification task on 10 datasets, $Wukong_{ViT-L}$ achieves an average accuracy of 73.03%. For the image-text retrieval task, it achieves a mean recall of 71.6% on AIC-ICC, which is 12.9% higher than WenLan 2.0. Our Wukong models are also benchmarked against other variants on downstream tasks across multiple datasets, e.g., Flickr8K-CN, Flickr-30K-CN, and COCO-CN. More information is available at: https://wukong-dataset.github.io/wukong-dataset/.
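The abstract mentions token-wise similarity in contrastive learning, a late-interaction scheme in which individual image patch embeddings are matched against individual text token embeddings rather than pooled global vectors. As a rough, hypothetical sketch of how such a score can be computed for one image-text pair (the function name, shapes, and symmetric averaging here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def token_wise_similarity(image_tokens: np.ndarray, text_tokens: np.ndarray) -> float:
    """Late-interaction similarity between one image and one text.

    image_tokens: (n_img, d) array of L2-normalized patch embeddings
    text_tokens:  (n_txt, d) array of L2-normalized token embeddings
    """
    # Pairwise cosine similarities between every patch and every word token.
    sim = image_tokens @ text_tokens.T          # shape (n_img, n_txt)
    # For each image patch, take its best-matching text token, then average.
    i2t = sim.max(axis=1).mean()
    # Symmetrically, for each text token, take its best-matching patch.
    t2i = sim.max(axis=0).mean()
    # Average the two directions into a single pair score (one design choice).
    return 0.5 * (i2t + t2i)
```

In a contrastive objective, this per-pair score would replace the global dot product when forming the similarity matrix over a batch, so that alignment is rewarded at the level of individual tokens and patches.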

Authors (12)
  1. Jiaxi Gu (17 papers)
  2. Xiaojun Meng (23 papers)
  3. Guansong Lu (20 papers)
  4. Lu Hou (50 papers)
  5. Minzhe Niu (11 papers)
  6. Xiaodan Liang (318 papers)
  7. Lewei Yao (15 papers)
  8. Runhui Huang (18 papers)
  9. Wei Zhang (1489 papers)
  10. Xin Jiang (242 papers)
  11. Chunjing Xu (66 papers)
  12. Hang Xu (204 papers)
Citations (68)