Boolean Variation and Boolean Logic BackPropagation
Abstract: The notion of variation is introduced for the Boolean set, and on this basis a Boolean logic backpropagation principle is developed. With this concept, deep models can be built with weights and activations that are Boolean numbers and operated with Boolean logic instead of real arithmetic. In particular, Boolean deep models can be trained directly in the Boolean domain without latent weights. Instead of a gradient, logic is synthesized and backpropagated through the layers.
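To make the idea concrete, the following Python sketch shows one Boolean neuron built from XNOR and a majority vote, where a Boolean signal, rather than a real-valued gradient, flows backward. This is a minimal illustration only, not the paper's actual variation calculus: the helper names (`xnor`, `forward`, `backward_to_inputs`, `update_weights`) and the one-shot update convention are assumptions.

```python
# A minimal sketch, NOT the paper's exact calculus: one Boolean neuron
# built from XNOR and majority vote, where a Boolean signal (rather than
# a real-valued gradient) flows backward. The helper names and the
# one-shot update convention below are illustrative assumptions.

from typing import List

def xnor(a: bool, b: bool) -> bool:
    """XNOR: True exactly when the two inputs agree."""
    return a == b

def forward(weights: List[bool], inputs: List[bool]) -> bool:
    """Boolean 'dot product': XNOR each weight/input pair, then majority vote."""
    agreements = [xnor(w, x) for w, x in zip(weights, inputs)]
    return sum(agreements) > len(agreements) / 2

def backward_to_inputs(weights: List[bool], out_signal: bool) -> List[bool]:
    """Propagate a Boolean signal to the inputs (i.e., to the previous layer).

    For y = XNOR(w, x), y varies with x when w is True and against x when
    w is False, so the signal reaching x is XNOR(w, out_signal).
    """
    return [xnor(w, out_signal) for w in weights]

def update_weights(inputs: List[bool], out_signal: bool) -> List[bool]:
    """Set each weight so its XNOR term agrees with the desired output signal.

    By symmetry of XNOR, the signal reaching w is XNOR(x, out_signal);
    this toy rule flips every weight to match it (a deliberately crude,
    assumed convention).
    """
    return [xnor(x, out_signal) for x in inputs]

# Toy usage: fit one sample with no latent real-valued weights anywhere.
weights = [True, False, True]
inputs = [True, True, False]
target = True
if forward(weights, inputs) != target:
    weights = update_weights(inputs, target)    # Boolean "step"
upstream = backward_to_inputs(weights, target)  # signal for the layer below
print(forward(weights, inputs))  # True: now agrees with the target
```

Note the design point this is meant to surface: every quantity stored or transmitted is a Boolean, so training needs no shadow copy of real-valued "latent" weights, which is the property the abstract emphasizes.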