Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity (2106.14568v4)

Published 28 Jun 2021 in cs.LG and cs.CV

Abstract: The success of deep ensembles in improving predictive performance, uncertainty estimation, and out-of-distribution robustness has been extensively studied in the machine learning literature. Despite the promising results, naively training multiple deep neural networks and combining their predictions at inference leads to prohibitive computational costs and memory requirements. Recently proposed efficient ensemble approaches match the performance of traditional deep ensembles at significantly lower cost. However, the training resources required by these approaches are still at least those of training a single dense model. In this work, we draw a unique connection between sparse neural network training and deep ensembles, yielding a novel efficient ensemble learning framework called FreeTickets. Instead of training multiple dense networks and averaging them, we directly train sparse subnetworks from scratch and extract diverse yet accurate subnetworks during this efficient, sparse-to-sparse training. Our framework, FreeTickets, is defined as the ensemble of these relatively cheap sparse subnetworks. Despite being an ensemble method, FreeTickets has even fewer parameters and training FLOPs than a single dense model. This seemingly counter-intuitive outcome is due to the high training and inference efficiency of dynamic sparse training. FreeTickets surpasses the dense baseline on all of the following criteria: prediction accuracy, uncertainty estimation, out-of-distribution (OoD) robustness, and efficiency for both training and inference. Impressively, FreeTickets outperforms the naive deep ensemble with ResNet50 on ImageNet using only around 1/5 of the training FLOPs required by the latter. We have released our source code at https://github.com/VITA-Group/FreeTickets.
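The core mechanism the abstract describes — training a single sparse network, periodically pruning low-magnitude weights and regrowing elsewhere, and snapshotting each resulting subnetwork as a cheap ensemble member — can be illustrated with a minimal NumPy sketch. This is a toy linear-classifier version under our own simplifying assumptions (random prune-and-grow on a single weight matrix, fixed schedule); the paper's actual method applies dynamic sparse training to deep networks, and all names below (`prune_and_grow`, `tickets`) are illustrative, not from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_grow(w, mask, drop_frac=0.3):
    """One dynamic-sparse-training update: drop the smallest-magnitude
    active weights, then regrow the same number at random inactive
    positions, keeping overall sparsity constant."""
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * active.size)
    if n_drop == 0:
        return mask
    # active positions sorted by |w|, smallest first
    order = active[np.argsort(np.abs(w.ravel()[active]))]
    new_mask = mask.copy()
    new_mask.ravel()[order[:n_drop]] = 0
    # regrow at random currently-inactive positions
    inactive = np.flatnonzero(new_mask == 0)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    new_mask.ravel()[grow] = 1
    return new_mask

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy 3-class data generated by a random linear teacher.
X = rng.normal(size=(200, 10))
y = np.argmax(X @ rng.normal(size=(10, 3)), axis=1)

# Sparse linear "network": weights times a binary mask (~50% sparse).
w = rng.normal(scale=0.1, size=(10, 3))
mask = (rng.random(w.shape) > 0.5).astype(float)

tickets = []  # the "free tickets": (weights, mask) snapshots
for step in range(300):
    probs = softmax(X @ (w * mask))
    grad = X.T @ (probs - np.eye(3)[y]) / len(X)
    w -= 0.5 * grad * mask            # update active weights only
    if (step + 1) % 100 == 0:         # end of an exploration phase:
        tickets.append((w.copy(), mask.copy()))  # snapshot a subnetwork
        mask = prune_and_grow(w, mask)           # explore a new topology

# FreeTickets-style ensemble: average the subnetworks' predictions.
ens = np.mean([softmax(X @ (wt * mt)) for wt, mt in tickets], axis=0)
```

Because every ensemble member is a snapshot of the same sparse training run, the total training cost stays that of one sparse model, while the changing mask topology supplies the prediction diversity that ensembling needs.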

Authors (9)
  1. Shiwei Liu (76 papers)
  2. Tianlong Chen (202 papers)
  3. Zahra Atashgahi (11 papers)
  4. Xiaohan Chen (30 papers)
  5. Ghada Sokar (17 papers)
  6. Elena Mocanu (15 papers)
  7. Mykola Pechenizkiy (118 papers)
  8. Zhangyang Wang (375 papers)
  9. Decebal Constantin Mocanu (52 papers)
Citations (46)