
Single-Event Upset Analysis of a Systolic Array based Deep Neural Network Accelerator (2405.15381v1)

Published 24 May 2024 in cs.AR and eess.SP

Abstract: Deep Neural Network (DNN) accelerators are extensively used to improve the computational efficiency of DNNs, but are prone to faults through Single-Event Upsets (SEUs). In this work, we present an in-depth analysis of the impact of SEUs on a Systolic Array (SA) based DNN accelerator. A fault injection campaign is performed through a Register-Transfer Level (RTL) based simulation environment to improve the observability of each hardware block, including the SA itself as well as the post-processing pipeline. From this analysis, we present the sensitivity, independent of a DNN model architecture, for various flip-flop groups both in terms of fault propagation probability and fault magnitude. This allows us to draw detailed conclusions and determine optimal mitigation strategies.
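The campaign described above injects single bit flips into flip-flop groups of the systolic array and measures how the upset propagates to the output. A minimal, purely illustrative sketch of that idea in software (the paper's campaign runs at RTL; all names and the toy matrices here are assumptions, not the authors' setup):

```python
# Hypothetical SEU fault-injection sketch on a software model of a
# systolic-array matrix multiply. flip_bit mimics a single-event upset
# in one processing element's accumulator register at a given cycle.

def matmul_with_seu(A, B, flip_bit=None):
    """Multiply A (n x k) by B (k x m) with integer accumulators.

    flip_bit = (row, col, cycle, bit): XOR bit `bit` into the accumulator
    of PE (row, col) right after accumulation step `cycle`.
    """
    n, k, m = len(A), len(A[0]), len(B[0])
    acc = [[0] * m for _ in range(n)]
    for t in range(k):                              # one MAC step per cycle
        for i in range(n):
            for j in range(m):
                acc[i][j] += A[i][t] * B[t][j]
                if flip_bit and flip_bit[:3] == (i, j, t):
                    acc[i][j] ^= 1 << flip_bit[3]   # the single-event upset
    return acc

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
golden = matmul_with_seu(A, B)                          # fault-free reference
faulty = matmul_with_seu(A, B, flip_bit=(0, 0, 0, 4))   # flip bit 4 in PE(0,0)

# Fault magnitude: absolute output error the upset propagates to.
magnitude = abs(faulty[0][0] - golden[0][0])
print(golden[0][0], faulty[0][0], magnitude)  # → 19 35 16
```

Sweeping `flip_bit` over all PEs, cycles, and bit positions, and comparing each run against the golden output, yields exactly the two statistics the paper reports per flip-flop group: the fraction of injections that corrupt the output (propagation probability) and the size of the corruption (fault magnitude).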

Authors (5)
  1. Naïn Jonckers
  2. Toon Vinck
  3. Gert Dekkers
  4. Peter Karsmakers
  5. Jeffrey Prinzie
