Dependable Neural Networks Through Redundancy, A Comparison of Redundant Architectures (2108.02565v1)
Published 30 Jul 2021 in cs.LG, cs.AI, cs.AR, cs.SY, and eess.SY
Abstract: With edge-AI finding an increasing number of real-world applications, especially in industry, the question of how to build functionally safe applications around AI has begun to be asked. In this work, we explore the issue of achieving dependable operation of neural networks. We discuss dependability in general implementation terms before examining lockstep solutions. We intuit that two similar neural networks will not necessarily generate results at precisely the same time, and that synchronization between the platforms will therefore be required. We present preliminary measurements that may support this intuition and introduce our work on implementing lockstep neural network engines.
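The paper does not include code; the following is a minimal illustrative sketch of the lockstep idea described in the abstract, assuming a hypothetical dual-channel setup in which two redundant inference channels may finish at different times, are synchronized at a barrier, and are then compared. The `infer` function, its latency jitter, and the tolerance parameter are placeholders, not artifacts from the paper.

```python
import threading
import time
import random

def infer(x):
    # Placeholder for a deterministic neural network forward pass.
    # The sleep simulates timing skew between the two redundant platforms.
    time.sleep(random.uniform(0.001, 0.005))
    return sum(xi * 0.5 for xi in x)

def lockstep_channel(x, barrier, results, idx):
    # One redundant channel: run inference, record the result,
    # then wait until the other channel has also finished.
    results[idx] = infer(x)
    barrier.wait()

def lockstep_infer(x, tol=1e-9):
    # Dual-lockstep execution: both channels must reach the barrier
    # before the comparator checks their outputs for agreement.
    barrier = threading.Barrier(2)
    results = [None, None]
    threads = [
        threading.Thread(target=lockstep_channel, args=(x, barrier, results, i))
        for i in range(2)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    if abs(results[0] - results[1]) > tol:
        raise RuntimeError("Lockstep mismatch: redundant outputs disagree")
    return results[0]

if __name__ == "__main__":
    print(lockstep_infer([1.0, 2.0, 3.0]))
```

The barrier is the synchronization point the abstract alludes to: without it, the comparator could read one channel's output before the other has produced its result.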