Memory-Efficient Differentiable Transformer Architecture Search (2105.14669v1)
Abstract: Differentiable architecture search (DARTS) has been successfully applied to many vision tasks. However, directly applying DARTS to Transformers is memory-intensive, which renders the search process infeasible. To this end, we propose a multi-split reversible network and combine it with DARTS. Specifically, we devise a backpropagation-with-reconstruction algorithm so that only the last layer's outputs need to be stored. Relieving the memory burden of DARTS allows us to search with a larger hidden size and more candidate operations. We evaluate the searched architecture on three sequence-to-sequence datasets, i.e., WMT'14 English-German, WMT'14 English-French, and WMT'14 English-Czech. Experimental results show that our network consistently outperforms standard Transformers across the tasks. Moreover, our method compares favorably with big-sized Evolved Transformers while reducing search computation by an order of magnitude.
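The memory saving rests on invertibility: a reversible block's inputs can be recomputed from its outputs, so backpropagation can reconstruct intermediate activations on the fly instead of caching every layer. Below is a minimal, hypothetical two-split sketch of this idea in PyTorch (the paper itself uses a multi-split formulation and a full backpropagation-with-reconstruction procedure); the sub-modules `f` and `g` stand in for arbitrary candidate operations and are illustrative, not the authors' code.

```python
# Sketch of a two-split reversible block (assumption: simplified from the
# paper's multi-split design). Because the block is invertible, only the
# final layer's outputs need to be kept; earlier activations can be
# reconstructed during the backward pass.
import torch
import torch.nn as nn


class ReversibleBlock(nn.Module):
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g  # placeholder candidate sub-layers

    def forward(self, x1, x2):
        # y1 = x1 + F(x2), y2 = x2 + G(y1): outputs fully determine inputs.
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    @torch.no_grad()
    def reconstruct(self, y1, y2):
        # Invert the block, so its inputs need not be stored in memory.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2


if __name__ == "__main__":
    d = 8
    block = ReversibleBlock(nn.Linear(d, d), nn.Linear(d, d))
    x1, x2 = torch.randn(2, d), torch.randn(2, d)
    y1, y2 = block(x1, x2)
    r1, r2 = block.reconstruct(y1, y2)
    # Reconstructed inputs match the originals up to floating-point error.
    print(torch.allclose(x1, r1, atol=1e-5), torch.allclose(x2, r2, atol=1e-5))
```

In a full implementation, the backward pass would walk the stack from the last layer to the first, calling `reconstruct` at each block to recover its inputs before computing gradients, which is what keeps activation memory constant in the number of layers.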
- Yuekai Zhao
- Li Dong
- Yelong Shen
- Zhihua Zhang
- Furu Wei
- Weizhu Chen