Task-customized Masked AutoEncoder via Mixture of Cluster-conditional Experts (2402.05382v1)
Abstract: Masked Autoencoder (MAE) is a prevailing self-supervised learning method that achieves promising results in model pre-training. However, when downstream tasks have data distributions different from the pre-training data, the semantically irrelevant pre-training information might result in negative transfer, impeding MAE's scalability. To address this issue, we propose a novel MAE-based pre-training paradigm, Mixture of Cluster-conditional Experts (MoCE), which can be trained once but provides customized pre-training models for diverse downstream tasks. Different from the mixture of experts (MoE), our MoCE trains each expert only with semantically relevant images by using cluster-conditional gates. Thus, each downstream task can be allocated to its customized model pre-trained with data most similar to the downstream data. Experiments on a collection of 11 downstream tasks show that MoCE outperforms the vanilla MAE by 2.45% on average. It also obtains new state-of-the-art self-supervised learning results on detection and segmentation.
- Zhili Liu (20 papers)
- Kai Chen (512 papers)
- Jianhua Han (49 papers)
- Lanqing Hong (72 papers)
- Hang Xu (204 papers)
- Zhenguo Li (195 papers)
- James T. Kwok (65 papers)
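
The abstract describes cluster-conditional gating only at a high level. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: it assumes cluster ids are precomputed offline (e.g., by k-means on features from a vanilla MAE) and that the gate conditions only on the cluster id, so every image in a cluster is routed to the same expert. Names such as `ClusterConditionalMoE` and all hyperparameters are illustrative.

```python
# Hypothetical sketch of a cluster-conditional MoE feed-forward layer.
# Assumption: each image arrives with a precomputed cluster id, and the
# gate depends only on that id (not on individual tokens), so all images
# from one cluster share the same expert.
import torch
import torch.nn as nn


class ClusterConditionalMoE(nn.Module):
    def __init__(self, dim: int, hidden_dim: int, num_experts: int, num_clusters: int):
        super().__init__()
        # One feed-forward "expert" per slot, as in a standard MoE layer.
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim))
                for _ in range(num_experts)
            ]
        )
        # Gate logits conditioned on the cluster id rather than on the token.
        self.gate = nn.Embedding(num_clusters, num_experts)

    def forward(self, x: torch.Tensor, cluster_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); cluster_id: (batch,) long tensor.
        probs = torch.softmax(self.gate(cluster_id), dim=-1)  # (batch, num_experts)
        expert_idx = probs.argmax(dim=-1)                      # hard top-1 routing
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                out[mask] = expert(x[mask])
        return out


# Usage: route a batch of token sequences by their (precomputed) cluster ids.
layer = ClusterConditionalMoE(dim=768, hidden_dim=3072, num_experts=4, num_clusters=16)
tokens = torch.randn(8, 196, 768)
cluster_ids = torch.randint(0, 16, (8,))
print(layer(tokens, cluster_ids).shape)  # torch.Size([8, 196, 768])
```

Because routing is a function of the cluster id alone, each expert sees only images from the clusters mapped to it, which is the mechanism the abstract credits for avoiding semantically irrelevant pre-training data; how the paper actually trains the gate or selects the expert for a downstream task is not specified here.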