WSNet: Compact and Efficient Networks Through Weight Sampling (1711.10067v3)

Published 28 Nov 2017 in cs.CV, cs.NE, cs.SD, and eess.AS

Abstract: We present a new approach and a novel architecture, termed WSNet, for learning compact and efficient deep neural networks. Existing approaches conventionally learn full model parameters independently and then compress them via ad hoc processing such as model pruning or filter factorization. Alternatively, WSNet proposes learning model parameters by sampling from a compact set of learnable parameters, which naturally enforces parameter sharing throughout the learning process. We demonstrate that this weight sampling approach (and the induced WSNet) favorably promotes both weight and computation sharing. By employing this method, we can more efficiently learn much smaller networks with competitive performance compared to baseline networks with equal numbers of convolution filters. Specifically, we consider learning compact and efficient 1D convolutional neural networks for audio classification. Extensive experiments on multiple audio classification datasets verify the effectiveness of WSNet. Combined with weight quantization, the resulting models are up to 180 times smaller and theoretically up to 16 times faster than well-established baselines, without noticeable performance drop.
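
The core idea in the abstract is that each convolution filter is sampled (sliced) from a small shared set of learnable parameters rather than stored independently. Below is a minimal sketch of that idea for a 1D convolution, assuming a PyTorch-style layer whose filters are overlapping windows of a shared condensed weight vector; the class name WSConv1d, the sampling stride, and the initialization are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv1d(nn.Module):
    """Illustrative 1D convolution whose filters are sampled as overlapping
    windows of a small shared weight vector (weight sampling along the
    filter axis only; details are assumptions, not the authors' code)."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 sample_stride=2, conv_stride=1):
        super().__init__()
        # Shared (condensed) weight: long enough to slice out
        # `out_channels` overlapping windows of length `kernel_size`.
        shared_len = kernel_size + (out_channels - 1) * sample_stride
        self.condensed = nn.Parameter(torch.randn(in_channels, shared_len) * 0.01)
        self.kernel_size = kernel_size
        self.sample_stride = sample_stride
        self.out_channels = out_channels
        self.conv_stride = conv_stride

    def sampled_weight(self):
        # Build the full filter bank by slicing overlapping windows from the
        # condensed weights; gradients flow back into the shared parameters.
        filters = [
            self.condensed[:, i * self.sample_stride:
                              i * self.sample_stride + self.kernel_size]
            for i in range(self.out_channels)
        ]
        return torch.stack(filters, dim=0)  # (out_channels, in_channels, kernel_size)

    def forward(self, x):  # x: (batch, in_channels, time)
        return F.conv1d(x, self.sampled_weight(), stride=self.conv_stride)

# Usage: 64 filters of length 9 are stored in far fewer parameters than 64 * 9.
layer = WSConv1d(in_channels=1, out_channels=64, kernel_size=9)
y = layer(torch.randn(8, 1, 16000))  # e.g. 1-second audio clips at 16 kHz
```

Because adjacent filters overlap in the condensed weights, their inner products with the same input window share partial results, which is the source of the computation sharing (and the theoretical speedup) claimed in the abstract.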

Authors (7)
  1. Xiaojie Jin (51 papers)
  2. Yingzhen Yang (38 papers)
  3. Ning Xu (151 papers)
  4. Jianchao Yang (48 papers)
  5. Nebojsa Jojic (43 papers)
  6. Jiashi Feng (297 papers)
  7. Shuicheng Yan (275 papers)
Citations (2)
