Coreset-Based Neural Network Compression (1807.09810v1)

Published 25 Jul 2018 in cs.CV and cs.LG

Abstract: We propose a novel Convolutional Neural Network (CNN) compression algorithm based on coreset representations of filters. We exploit the redundancies extant in the space of CNN weights and neuronal activations (across samples) in order to obtain compression. Our method requires no retraining, is easy to implement, and obtains state-of-the-art compression performance across a wide variety of CNN architectures. Coupled with quantization and Huffman coding, we create networks that provide AlexNet-like accuracy with a memory footprint that is $832\times$ smaller than the original AlexNet, while also significantly reducing inference time. Additionally, these compressed networks, when fine-tuned, successfully generalize to other domains.
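The compression pipeline the abstract describes (a compact representation of filter weights, followed by quantization and Huffman coding) can be sketched roughly as follows. This is a minimal illustration, not the authors' method: it substitutes plain k-means weight quantization for the paper's coreset construction, and all function names here are hypothetical.

```python
import heapq
from collections import Counter

import numpy as np


def quantize_weights(w, k=16, iters=10):
    """Cluster weights to k centroids (a stand-in for the coreset step).

    Returns per-weight centroid indices plus the centroid values.
    """
    flat = w.ravel()
    centers = np.linspace(flat.min(), flat.max(), k)  # simple init
    idx = np.zeros(flat.size, dtype=int)
    for _ in range(iters):
        # assign each weight to its nearest centroid
        idx = np.abs(flat[:, None] - centers[None, :]).argmin(axis=1)
        # move each centroid to the mean of its assigned weights
        for j in range(k):
            if (idx == j).any():
                centers[j] = flat[idx == j].mean()
    return idx.reshape(w.shape), centers


def huffman_code_lengths(symbols):
    """Return a {symbol: code length in bits} map via Huffman's algorithm."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol still needs 1 bit
        return {next(iter(freq)): 1}
    # heap entries: (frequency, tiebreaker, {symbol: depth so far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]


# Compress a mock conv layer: 64 filters of shape 3x3x3, float32.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 3, 3, 3)).astype(np.float32)
idx, centers = quantize_weights(w, k=16)
lengths = huffman_code_lengths(idx.ravel().tolist())
bits = sum(lengths[s] for s in idx.ravel().tolist())
ratio = (32 * w.size) / bits  # vs. storing raw float32 weights
recon_err = np.abs(centers[idx] - w).mean()
print(f"compression ratio ~{ratio:.1f}x, mean abs error {recon_err:.3f}")
```

Even this naive sketch shrinks the index stream well below 32 bits per weight; the paper's coreset representation targets the filters themselves, which is where the bulk of its reported $832\times$ reduction comes from.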

Authors (3)
  1. Abhimanyu Dubey (35 papers)
  2. Moitreya Chatterjee (16 papers)
  3. Narendra Ahuja (32 papers)
Citations (74)
