Weightless: Lossy Weight Encoding For Deep Neural Network Compression (1711.04686v1)

Published 13 Nov 2017 in cs.LG and stat.ML

Abstract: The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x with the same model accuracy. This results in up to a 1.51x improvement over the state-of-the-art.
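
The abstract leaves the mechanism implicit, so here is a minimal, self-contained Python sketch of the kind of Bloomier-filter XOR table Weightless builds on: keys (e.g. flat indices of the surviving weights in a pruned layer) map to small fixed-width values, construction "peels" the hash graph, and a lookup XORs a handful of table cells. All names (`build`, `query`) and parameters (`K`, `VALUE_BITS`, the segmented table layout) are illustrative assumptions rather than the paper's implementation, and the sketch omits the extra bits a full Bloomier filter spends to flag unstored keys.

```python
import hashlib

K = 3           # hash functions; one slot per table segment, so slots are distinct
VALUE_BITS = 4  # bits per stored value (e.g. a quantized-weight cluster index)

def _slots_and_mask(key, seg_size, seed):
    """Derive K table slots and a VALUE_BITS-wide XOR mask from a key."""
    d = hashlib.sha256(f"{seed}:{key}".encode()).digest()
    slots = [i * seg_size + int.from_bytes(d[4 * i:4 * i + 4], "big") % seg_size
             for i in range(K)]
    mask = int.from_bytes(d[-4:], "big") & ((1 << VALUE_BITS) - 1)
    return slots, mask

def build(pairs, table_size, max_seed=100):
    """Greedy peeling construction: repeatedly remove keys that own a slot
    no other remaining key hashes to; retry with a fresh seed if stuck."""
    seg_size = table_size // K
    for seed in range(max_seed):
        remaining = set(pairs)
        stack = []                             # (key, singleton slot) in peel order
        while remaining:
            counts = {}
            for key in remaining:
                for s in _slots_and_mask(key, seg_size, seed)[0]:
                    counts[s] = counts.get(s, 0) + 1
            peeled = []
            for key in remaining:
                slots, _ = _slots_and_mask(key, seg_size, seed)
                singleton = next((s for s in slots if counts[s] == 1), None)
                if singleton is not None:
                    peeled.append(key)
                    stack.append((key, singleton))
            if not peeled:
                break                          # hash graph has a cycle; try new seed
            remaining -= set(peeled)
        if not remaining:
            table = [0] * (seg_size * K)
            for key, slot in reversed(stack):  # assign in reverse peel order
                slots, mask = _slots_and_mask(key, seg_size, seed)
                acc = pairs[key] ^ mask
                for s in slots:
                    if s != slot:
                        acc ^= table[s]
                table[slot] = acc
            return table, seed
    raise RuntimeError("construction failed; increase table_size or max_seed")

def query(table, seed, key):
    """XOR of K table cells and the key's mask: stored keys decode exactly,
    unstored keys decode to an essentially arbitrary VALUE_BITS value."""
    seg_size = len(table) // K
    slots, mask = _slots_and_mask(key, seg_size, seed)
    acc = mask
    for s in slots:
        acc ^= table[s]
    return acc

# Toy usage: 4-bit values for the surviving weights of a pruned layer,
# keyed by flat weight index.
weights = {3: 0b1010, 17: 0b0001, 42: 0b0111, 65: 0b1111}
table, seed = build(weights, table_size=15)
assert all(query(table, seed, k) == v for k, v in weights.items())
```

Queries for keys that were never inserted decode to an essentially random value; these are the "random errors" the abstract refers to, which Weightless absorbs by re-training the network around them.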

Authors (7)
  1. Brandon Reagen (39 papers)
  2. Udit Gupta (30 papers)
  3. Robert Adolf (2 papers)
  4. Michael M. Mitzenmacher (4 papers)
  5. Alexander M. Rush (115 papers)
  6. Gu-Yeon Wei (54 papers)
  7. David Brooks (204 papers)
Citations (37)
