
MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency (2002.12900v1)

Published 27 Feb 2020 in eess.SP and cs.LG

Abstract: Binarized neural networks (BNNs) have shown exciting potential for utilising neural networks in embedded implementations where area, energy and latency constraints are paramount. With BNNs, multiply-accumulate (MAC) operations can be simplified to XnorPopcount operations, leading to massive reductions in both memory and computation resources. Furthermore, multiple efficient implementations of BNNs have been reported on field-programmable gate arrays (FPGAs). This paper proposes a smaller, faster, more energy-efficient approximate replacement for the XnorPopcount operation, called XNorMaj, inspired by state-of-the-art FPGA look-up table schemes which benefit FPGA implementations. We show that XNorMaj is up to 2x more resource-efficient than the XnorPopcount operation. While the XNorMaj operation has a minor detrimental impact on accuracy, the resource savings enable us to use larger networks to recover the loss.
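To make the operations in the abstract concrete, the sketch below contrasts the exact XnorPopcount dot product with a majority-based approximation in the spirit of XNorMaj. The chunk size, voting scheme, and function names are illustrative assumptions, not the paper's actual hardware mapping; in a BNN the activation only needs the sign of the accumulation, which is what the approximation must preserve.

```python
def xnor_popcount_sign(w_bits, x_bits):
    """Exact BNN neuron: XNOR weight and input bit vectors (bits encode
    {-1,+1} as {0,1}), count matches, and return the sign of 2*pc - n."""
    n = len(w_bits)
    pc = sum(w == x for w, x in zip(w_bits, x_bits))  # XNOR is 1 when bits match
    return 1 if 2 * pc - n >= 0 else -1

def xnor_maj_sign(w_bits, x_bits, chunk=3):
    """Hypothetical majority-based approximation: split the XNOR results
    into small chunks, reduce each chunk to a single majority vote (cheap
    to realise in an FPGA look-up table), then take a majority of votes
    instead of an exact popcount. Chunk size 3 is an assumption."""
    xnors = [w == x for w, x in zip(w_bits, x_bits)]
    votes = []
    for i in range(0, len(xnors), chunk):
        group = xnors[i:i + chunk]
        votes.append(2 * sum(group) > len(group))  # strict majority of the chunk
    return 1 if 2 * sum(votes) >= len(votes) else -1
```

The approximation can flip the sign near the decision boundary, which matches the abstract's note of a minor accuracy loss that larger networks can recover.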

Authors (6)
  1. Seyedramin Rasoulinezhad (4 papers)
  2. Sean Fox (1 paper)
  3. Hao Zhou (351 papers)
  4. Lingli Wang (9 papers)
  5. David Boland (6 papers)
  6. Philip H. W. Leong (12 papers)
Citations (4)
