MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? (2001.05936v2)

Published 16 Jan 2020 in cs.LG, cs.CV, and stat.ML

Abstract: Binary Neural Networks (BNNs) are neural networks which use binary weights and activations instead of the typical 32-bit floating point values. They have reduced model sizes and allow for efficient inference on mobile or embedded devices with limited power and computational resources. However, the binarization of weights and activations leads to feature maps of lower quality and lower capacity and thus a drop in accuracy compared to traditional networks. Previous work has increased the number of channels or used multiple binary bases to alleviate these problems. In this paper, we instead present an architectural approach: MeliusNet. It consists of alternating a DenseBlock, which increases the feature capacity, and our proposed ImprovementBlock, which increases the feature quality. Experiments on the ImageNet dataset demonstrate the superior performance of our MeliusNet over a variety of popular binary architectures with regards to both computation savings and accuracy. Furthermore, with our method we trained BNN models, which for the first time can match the accuracy of the popular compact network MobileNet-v1 in terms of model size, number of operations and accuracy. Our code is published online at https://github.com/hpi-xnor/BMXNet-v2

Authors (5)
  1. Joseph Bethge (9 papers)
  2. Christian Bartz (13 papers)
  3. Haojin Yang (38 papers)
  4. Ying Chen (333 papers)
  5. Christoph Meinel (51 papers)
Citations (88)

Summary

An Overview of MeliusNet: Advancements in Binary Neural Networks

The paper "MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy?" introduces MeliusNet, a new architecture for Binary Neural Networks (BNNs). BNNs use binary weights and activations, which reduces model size and enables efficient inference on devices with limited power and compute. However, binary quantization degrades feature quality and capacity, typically causing a significant accuracy loss relative to full-precision networks. MeliusNet addresses these limitations architecturally, aiming to bring BNN accuracy up to the level of established compact networks such as MobileNet-v1.
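In standard BNNs, this binarization is implemented as a sign function in the forward pass combined with a straight-through estimator (STE) in the backward pass, since the sign function has zero gradient almost everywhere. The following is a minimal PyTorch sketch of that generic mechanism; it is illustrative BNN machinery, not code from the authors' BMXNet-v2 repository.

```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a clipped straight-through estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Quantize to {-1, +1}; torch.sign maps exact zeros to 0,
        # which is usually negligible in practice.
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # STE: pass gradients through where |x| <= 1, zero elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

def binary_conv2d(x, weight, **kwargs):
    # Binarize both activations and weights, then run a normal convolution.
    return F.conv2d(BinarizeSTE.apply(x), BinarizeSTE.apply(weight), **kwargs)
```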

Key Contributions of MeliusNet

MeliusNet differs from prior BNN improvements by alternating two building blocks: a DenseBlock, which increases feature capacity by appending newly computed channels, and the proposed ImprovementBlock, which increases feature quality by using a residual connection to refine the channels the DenseBlock just added. This alternation directly targets the two main deficiencies of BNNs, limited capacity and reduced feature quality, without a disproportionate increase in computational cost.
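The following simplified PyTorch sketch illustrates this alternation. The growth of 64 channels per DenseBlock follows the paper's description, but the binary convolutions are replaced with ordinary ones and the layer ordering is condensed; the authors' BMXNet-v2 repository is the reference implementation.

```python
import torch
import torch.nn as nn

GROWTH = 64  # channels appended by each DenseBlock

class DenseBlock(nn.Module):
    """Increases feature capacity: appends newly computed channels."""
    def __init__(self, in_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        # A binary 3x3 convolution in the real model; full precision here.
        self.conv = nn.Conv2d(in_channels, GROWTH, 3, padding=1, bias=False)

    def forward(self, x):
        new = self.conv(self.bn(x))
        return torch.cat([x, new], dim=1)  # capacity grows by GROWTH

class ImprovementBlock(nn.Module):
    """Increases feature quality: residually refines the newest channels."""
    def __init__(self, in_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, GROWTH, 3, padding=1, bias=False)

    def forward(self, x):
        residual = self.conv(self.bn(x))
        # Add the residual only onto the channels just appended.
        refined = x[:, -GROWTH:] + residual
        return torch.cat([x[:, :-GROWTH], refined], dim=1)

# One stage alternates the two blocks, so each stage both widens the
# feature map and refines what was just added.
stage = nn.Sequential(DenseBlock(128), ImprovementBlock(128 + GROWTH))
```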

Experiments on the ImageNet dataset show that MeliusNet surpasses existing binary architectures in both computational efficiency and accuracy. Notably, MeliusNet models are the first BNNs to match MobileNet-v1 in model size, number of operations, and accuracy.

Numerical Results and Implications

The paper reports strong numerical results that highlight MeliusNet's efficiency. For instance, MeliusNet42, with a number of operations similar to MobileNet-v1 0.75, achieves a higher top-1 accuracy of 69.2%, while the larger MeliusNet59 rivals the accuracy of MobileNet-v1 1.0. This shows that BNN architectures can compete with established compact networks when given an appropriate architectural design.

These advancements have broad implications. MeliusNet offers a scalable framework for BNNs that deploy efficiently on resource-constrained devices while maintaining competitive accuracy, which could significantly widen the applicability of AI in edge-computing settings.

Theoretical and Practical Significance

From a theoretical perspective, MeliusNet challenges conventional assumptions about the limits of BNNs by relying on architectural improvements rather than on increasing the number of binary bases or channels. This approach could inspire further exploration of architectural strategies in model design, beyond BNNs.

Practically, MeliusNet's design makes more efficient use of embedded-device resources while maintaining high performance, a step towards more sustainable AI. One implementation detail, a grouped stem that replaces the costly initial convolution, yields significant reductions in operations while preserving accuracy, further strengthening the case for BNNs in real-world applications; a sketch of the idea follows.
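As a rough illustration of the grouped-stem idea, the sketch below replaces a single full-precision 7x7, 64-channel convolution with a short stack of 3x3 convolutions whose later layers use channel groups, which cuts the operation count. The channel counts and group sizes here are illustrative assumptions, not the exact configuration from the paper.

```python
import torch.nn as nn

def grouped_stem(out_channels: int = 64) -> nn.Sequential:
    """Cheaper replacement for a dense 7x7 stem convolution (illustrative)."""
    mid = out_channels // 2
    return nn.Sequential(
        # One dense 3x3 convolution handles the 3 input channels.
        nn.Conv2d(3, mid, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(mid),
        nn.ReLU(inplace=True),
        # Grouped 3x3 convolutions split channels into independent groups,
        # reducing operations roughly by the group factor.
        nn.Conv2d(mid, mid, 3, padding=1, groups=4, bias=False),
        nn.BatchNorm2d(mid),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, out_channels, 3, padding=1, groups=8, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
    )
```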

Future Directions

The work on MeliusNet opens the door for future inquiries into hybrid architectures that could incorporate elements of MeliusNet with other high-performing BNN strategies, such as GroupNet's approach involving multiple binary bases. Moreover, integrating advanced training methodologies, like knowledge distillation, with MeliusNet could yield additional accuracy gains. In the evolving landscape of AI deployment, these advancements solidify the position of BNNs as viable, efficient options for inference in low-resource environments.

In summary, MeliusNet represents a pivotal advancement in BNN research, both confirming the potential of binary architectures and setting the stage for ongoing innovation in the efficient design of neural networks.
