
Improving Ranking Correlation of Supernet with Candidates Enhancement and Progressive Training (2108.05866v1)

Published 12 Aug 2021 in cs.CV

Abstract: One-shot neural architecture search (NAS) uses a weight-sharing supernet to reduce the otherwise unaffordable computational overhead of automated architecture design. However, weight sharing worsens the ranking consistency of performance because of interference between different candidate networks. To address this issue, we propose a candidates enhancement method and a progressive training pipeline that improve the ranking correlation of the supernet. Specifically, we carefully redesign the sub-networks in the supernet and map the original supernet to a new one of higher capacity. In addition, we gradually add narrow branches to the supernet to reduce the degree of weight sharing, which effectively alleviates mutual interference between sub-networks. Our method ranked 1st place in the Supernet Track of the CVPR 2021 1st Lightweight NAS Challenge.
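
The progressive-training idea in the abstract, gradually adding narrow weight branches so that fewer sampled sub-networks share the same parameters, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under assumed names (`ChoiceBlock`, `Supernet`, `add_branch`, the every-100-steps schedule, and the toy candidate ops are all hypothetical, not from the paper); it is not the authors' implementation, and it omits the candidates-enhancement remapping step entirely.

```python
# Minimal sketch (assumed names, not the paper's code) of one-shot supernet
# training where extra weight branches are added progressively to reduce
# weight sharing between sampled sub-networks.
import copy
import random

import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    """One searchable layer: several candidate ops, each with >=1 weight branches.

    With a single branch per op, every sampled architecture that picks an op
    reuses the same weights (maximal sharing). Adding branches splits that
    pool, so each branch is shared by fewer sub-networks.
    """
    def __init__(self, candidates):
        super().__init__()
        # branches[i] holds the weight copies ("branches") for candidate op i.
        self.branches = nn.ModuleList(nn.ModuleList([op]) for op in candidates)

    def add_branch(self):
        # Duplicate each candidate op, warm-started from its first branch.
        for op_branches in self.branches:
            op_branches.append(copy.deepcopy(op_branches[0]))

    def forward(self, x, op_idx, branch_idx):
        return self.branches[op_idx][branch_idx](x)

def toy_candidates(dim):
    # Toy candidate ops; a real search space would use convolutions with
    # different kernel sizes / widths.
    return [nn.Linear(dim, dim),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Identity()]

class Supernet(nn.Module):
    def __init__(self, dim=16, depth=3, n_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(ChoiceBlock(toy_candidates(dim))
                                    for _ in range(depth))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x, arch, branch_ids):
        for block, op_idx, b_idx in zip(self.blocks, arch, branch_ids):
            x = block(x, op_idx, b_idx)
        return self.head(x)

net = Supernet()
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    # Assumed progressive schedule: every 100 steps, add one more branch per
    # candidate op, gradually lowering the degree of weight sharing.
    if step > 0 and step % 100 == 0:
        for block in net.blocks:
            block.add_branch()
        opt = torch.optim.SGD(net.parameters(), lr=0.05)  # cover new params

    x = torch.randn(8, 16)                      # dummy batch
    y = torch.randint(0, 10, (8,))
    arch = [random.randrange(len(b.branches)) for b in net.blocks]
    branch_ids = [random.randrange(len(b.branches[a]))
                  for b, a in zip(net.blocks, arch)]
    opt.zero_grad()
    loss_fn(net(x, arch, branch_ids), y).backward()
    opt.step()
```

Each `add_branch` call warm-starts the new branch from an existing one, so the added capacity does not reset training; a fresh optimizer is built afterwards so the newly created parameters receive gradient updates.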

Authors (6)
  1. Ziwei Yang (23 papers)
  2. Ruyi Zhang (19 papers)
  3. Zhi Yang (188 papers)
  4. Xubo Yang (7 papers)
  5. Lei Wang (975 papers)
  6. Zheyang Li (10 papers)
Citations (2)
