
AutoTaskFormer: Searching Vision Transformers for Multi-task Learning (2304.08756v2)

Published 18 Apr 2023 in cs.CV

Abstract: Vision Transformers have shown great performance in single tasks such as classification and segmentation. However, real-world problems are not isolated, which calls for vision transformers that can perform multiple tasks concurrently. Existing multi-task vision transformers are handcrafted and heavily rely on human expertise. In this work, we propose a novel one-shot neural architecture search framework, dubbed AutoTaskFormer (Automated Multi-Task Vision TransFormer), to automate this process. AutoTaskFormer not only identifies the weights to share across multiple tasks automatically, but also provides thousands of well-trained vision transformers with a wide range of parameters (e.g., number of heads and network depth) for deployment under various resource constraints. Experiments on both small-scale (2-task Cityscapes and 3-task NYUv2) and large-scale (16-task Taskonomy) datasets show that AutoTaskFormer outperforms state-of-the-art handcrafted vision transformers in multi-task learning. The entire code and models will be open-sourced.
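
To make the one-shot weight-sharing idea in the abstract concrete, here is a minimal PyTorch sketch, not the authors' released code: a single set of transformer weights from which subnets of varying depth and head count are sampled, with task-specific heads on a shared trunk. All class names, dimensions, task names, and the depth/head choices below are illustrative assumptions.

```python
# Minimal sketch of a one-shot multi-task supernet (illustrative, not the
# AutoTaskFormer implementation). Requires torch >= 2.0 for
# F.scaled_dot_product_attention.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticAttention(nn.Module):
    """Pre-norm self-attention whose head count is chosen at call time;
    the single shared qkv projection is reshaped for the sampled count."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, num_heads):
        b, n, d = x.shape
        head_dim = d // num_heads  # num_heads must divide dim
        q, k, v = self.qkv(self.norm(x)).chunk(3, dim=-1)
        # (b, n, d) -> (b, num_heads, n, head_dim)
        q, k, v = (t.view(b, n, num_heads, head_dim).transpose(1, 2)
                   for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)


class MultiTaskSupernet(nn.Module):
    """Maximal-depth trunk shared by all tasks; a sampled subnet runs only
    the first `depth` blocks, so shallow subnets share weights with deeper
    ones. Each task gets its own lightweight output head."""

    def __init__(self, tasks, dim=192, max_depth=6):
        super().__init__()
        self.blocks = nn.ModuleList(ElasticAttention(dim)
                                    for _ in range(max_depth))
        self.heads = nn.ModuleDict({name: nn.Linear(dim, out_dim)
                                    for name, out_dim in tasks.items()})

    def forward(self, x, depth, num_heads):
        for blk in self.blocks[:depth]:
            x = x + blk(x, num_heads)  # pre-norm residual
        feats = x.mean(dim=1)          # pool tokens
        return {name: head(feats) for name, head in self.heads.items()}


# One-shot training idea: sample a random (depth, heads) subnet each step so
# that many subnets end up well trained inside a single supernet and can
# later be selected to fit a deployment budget.
model = MultiTaskSupernet({"segmentation": 19, "depth_estimation": 1})
x = torch.randn(2, 16, 192)              # (batch, tokens, dim) toy input
depth = random.randint(1, 6)
num_heads = random.choice([2, 3, 4, 6])  # each divides dim = 192
outputs = model(x, depth, num_heads)
print({name: tuple(o.shape) for name, o in outputs.items()})
```

The sketch covers only the supernet-training half of a one-shot NAS pipeline; the search step (scoring sampled subnets and picking which weights to share per task, which is the paper's contribution) is omitted here.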

Authors (8)
  1. Yang Liu
  2. Shen Yan
  3. Yuge Zhang
  4. Kan Ren
  5. Quanlu Zhang
  6. Zebin Ren
  7. Deng Cai
  8. Mi Zhang