AutoTaskFormer: Searching Vision Transformers for Multi-task Learning (2304.08756v2)
Abstract: Vision Transformers have shown strong performance on individual tasks such as classification and segmentation. However, real-world problems are rarely isolated, which calls for vision transformers that can perform multiple tasks concurrently. Existing multi-task vision transformers are handcrafted and rely heavily on human expertise. In this work, we propose a novel one-shot neural architecture search framework, dubbed AutoTaskFormer (Automated Multi-Task Vision TransFormer), to automate this process. AutoTaskFormer not only automatically identifies the weights to share across multiple tasks, but also provides thousands of well-trained vision transformers spanning a wide range of architectural parameters (e.g., number of heads and network depth) for deployment under various resource constraints. Experiments on both small-scale (2-task Cityscapes and 3-task NYUv2) and large-scale (16-task Taskonomy) datasets show that AutoTaskFormer outperforms state-of-the-art handcrafted vision transformers in multi-task learning. The code and models will be open-sourced.
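To make the one-shot idea concrete, below is a minimal PyTorch sketch (not the authors' code) of a multi-task ViT supernet: a shared stack of transformer blocks with elastic depth and elastic number of attention heads, plus one lightweight prediction head per task. At each training step a subnetwork configuration is sampled and only that slice of the supernet is used. All class names, hyperparameters, and the random sampling scheme here are illustrative assumptions, not the paper's actual search space or weight-sharing strategy.

```python
# Minimal sketch of a one-shot multi-task ViT supernet (illustrative only).
import random
import torch
import torch.nn as nn


class ElasticBlock(nn.Module):
    """Transformer encoder block whose head count is chosen at sample time."""

    def __init__(self, dim, max_heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # One attention module per candidate head count; real elastic-attention
        # NAS shares weights across head counts, which is omitted for brevity.
        self.attns = nn.ModuleDict({
            str(h): nn.MultiheadAttention(dim, h, batch_first=True)
            for h in range(1, max_heads + 1) if dim % h == 0
        })
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, num_heads):
        attn = self.attns[str(num_heads)]
        h = self.norm1(x)
        x = x + attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class MultiTaskSupernet(nn.Module):
    """One-shot supernet: shared elastic backbone + per-task prediction heads."""

    def __init__(self, tasks, dim=192, max_depth=12, max_heads=6):
        super().__init__()
        self.embed = nn.Linear(dim, dim)  # stand-in for a patch embedding
        self.blocks = nn.ModuleList(ElasticBlock(dim, max_heads) for _ in range(max_depth))
        self.heads = nn.ModuleDict({t: nn.Linear(dim, out_dim) for t, out_dim in tasks.items()})
        self.dim, self.max_depth, self.max_heads = dim, max_depth, max_heads

    def sample_config(self):
        # Randomly pick a depth and a per-block head count (the search space).
        depth = random.randint(1, self.max_depth)
        valid_heads = [h for h in range(1, self.max_heads + 1) if self.dim % h == 0]
        return {"depth": depth, "heads": [random.choice(valid_heads) for _ in range(depth)]}

    def forward(self, x, config):
        x = self.embed(x)
        for blk, h in zip(self.blocks[: config["depth"]], config["heads"]):
            x = blk(x, h)
        pooled = x.mean(dim=1)
        return {task: head(pooled) for task, head in self.heads.items()}


# Toy training step: sample one subnetwork and sum the per-task losses.
tasks = {"segmentation": 19, "depth": 1}     # e.g. a 2-task Cityscapes-style setup
model = MultiTaskSupernet(tasks)
x = torch.randn(2, 196, 192)                 # (batch, tokens, dim)
cfg = model.sample_config()
outputs = model(x, cfg)
loss = sum(out.pow(2).mean() for out in outputs.values())  # placeholder task losses
loss.backward()
```

After such a supernet is trained, candidate subnetworks with different depths and head counts can be evaluated without retraining, which is what enables serving many resource budgets from a single search.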
- Yang Liu
- Shen Yan
- Yuge Zhang
- Kan Ren
- Quanlu Zhang
- Zebin Ren
- Deng Cai
- Mi Zhang