E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning (2103.01148v3)

Published 1 Mar 2021 in cs.LG, cs.NE, and stat.ML

Abstract: State-of-the-art neural networks with early exit mechanisms often need a considerable amount of training and fine-tuning to achieve good performance at low computational cost. We propose a novel early exit technique, Early Exit Class Means (E$^2$CM), based on class means of samples. Unlike most existing schemes, E$^2$CM does not require gradient-based training of internal classifiers and does not modify the base network in any way. This makes it particularly useful for neural network training on low-power devices, as in wireless edge networks. We evaluate the performance and overheads of E$^2$CM over various base neural networks such as MobileNetV3, EfficientNet, and ResNet, and datasets such as CIFAR-100, ImageNet, and KMNIST. Our results show that, given a fixed training time budget, E$^2$CM achieves higher accuracy than existing early exit mechanisms. Moreover, if there are no limitations on the training time budget, E$^2$CM can be combined with an existing early exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to decrease the computational cost in unsupervised learning tasks.
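The core idea — exiting early when an intermediate feature vector is already close to a class mean — can be sketched as follows. This is a minimal illustration under assumed details: the function names, the Euclidean distance, and the fixed distance threshold are illustrative assumptions, not the paper's exact decision rule.

```python
import numpy as np

def fit_class_means(features, labels, num_classes):
    """Average the intermediate-layer feature vectors of each class.
    No gradient-based training is needed, only per-class means."""
    means = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        means[c] = features[labels == c].mean(axis=0)
    return means

def early_exit_predict(feature, class_means, threshold):
    """Return (predicted_class, exit_now): exit early when the feature
    is within `threshold` of its nearest class mean (assumed rule)."""
    dists = np.linalg.norm(class_means - feature, axis=1)
    c = int(np.argmin(dists))
    return c, bool(dists[c] <= threshold)
```

At inference, a sample whose intermediate features fall close to one class mean would be classified at that layer, skipping the remaining layers; otherwise it propagates deeper, where the check can be repeated with means computed at that layer.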

Authors (3)
  1. Alperen Görmez (3 papers)
  2. Venkat R. Dasari (12 papers)
  3. Erdem Koyuncu (28 papers)
Citations (5)
