Testing Spintronics Implemented Monte Carlo Dropout-Based Bayesian Neural Networks (2401.04744v1)

Published 9 Jan 2024 in cs.ET, cs.AR, and cs.LG

Abstract: Bayesian Neural Networks (BayNNs) can inherently estimate predictive uncertainty, facilitating informed decision-making. Dropout-based BayNNs are increasingly implemented in spintronics-based computation-in-memory architectures for resource-constrained yet high-performance safety-critical applications. While uncertainty estimation is important, the reliability of Dropout generation and of the BayNN computation itself is equally important for the target applications, yet it is overlooked in existing work. However, testing BayNNs is significantly more challenging than testing conventional NNs due to their stochastic nature. In this paper, we present, for the first time, a model of the non-idealities of the spintronics-based Dropout module and analyze their impact on uncertainty estimates and accuracy. Furthermore, we propose a testing framework based on repeatability ranking for Dropout-based BayNNs that achieves up to $100\%$ fault coverage while using only $0.2\%$ of the training data as test vectors.
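The uncertainty estimation named in the title relies on Monte Carlo Dropout: Dropout layers are kept stochastic at inference time, the network is sampled repeatedly, and the spread of the sampled predictions serves as the uncertainty estimate. The following PyTorch sketch illustrates that sampling loop at a software level; the DropoutMLP architecture, dropout rate, and number of passes T are illustrative assumptions, not the paper's spintronic computation-in-memory implementation.

```python
# Minimal software-level sketch of Monte Carlo Dropout inference.
# Architecture, dropout rate, and T are illustrative assumptions.
import torch
import torch.nn as nn

class DropoutMLP(nn.Module):
    def __init__(self, in_dim=16, hidden=64, classes=10, p=0.25):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),  # stays stochastic at inference for MC Dropout
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, T=30):
    """Run T stochastic forward passes; return the predictive mean and the
    per-class standard deviation as a simple uncertainty estimate."""
    model.train()  # keep Dropout active (model assumed BatchNorm-free)
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs.std(dim=0)

model = DropoutMLP()
x = torch.randn(4, 16)                # batch of 4 dummy inputs
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)          # torch.Size([4, 10]) twice
```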

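The abstract's test framework ranks inputs by repeatability. One plausible reading, sketched below under stated assumptions: run each candidate input through several stochastic forward passes, score it by how often its top-class decision repeats, and keep only the most repeatable fraction (the abstract's $0.2\%$ of training data) as compact test vectors; a device under test would then be flagged when its responses on these vectors deviate from recorded fault-free behavior. The scoring rule and selection threshold here are our assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of repeatability-based test-vector selection; the scoring
# rule and threshold are assumptions, not the paper's exact method.
import torch

@torch.no_grad()
def repeatability_scores(model, candidates, T=30):
    """Fraction of T Dropout passes on which each input repeats its
    most frequent top-class prediction (1.0 = perfectly repeatable)."""
    model.train()                         # keep Dropout stochastic
    votes = torch.stack(
        [model(candidates).argmax(dim=-1) for _ in range(T)]
    )                                     # shape (T, N)
    top_class = votes.mode(dim=0).values  # majority prediction per input
    return (votes == top_class).float().mean(dim=0)

def select_test_vectors(model, train_x, fraction=0.002):
    """Keep the most repeatable `fraction` of candidates as test vectors;
    0.002 mirrors the abstract's 0.2% of the training data."""
    scores = repeatability_scores(model, train_x)
    k = max(1, int(fraction * len(train_x)))
    return train_x[scores.topk(k).indices]

# Usage, reusing DropoutMLP from the sketch above:
test_vecs = select_test_vectors(DropoutMLP(), torch.randn(1000, 16))
print(test_vecs.shape)                    # torch.Size([2, 16])
```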
Authors (5)
  1. Soyed Tuhin Ahmed
  2. Michael Hefenbrock
  3. Guillaume Prenat
  4. Lorena Anghel
  5. Mehdi B. Tahoori
