Agile Autotuning of a Transprecision Tensor Accelerator Overlay for TVM Compiler Stack (2004.10854v1)
Abstract: Specialized accelerators for tensor operations, such as blocked matrix operations and multi-dimensional convolutions, have emerged as powerful architecture choices for high-performance deep-learning computing. The rapid development of frameworks, models, and precision options challenges the adaptability of such tensor accelerators, since adapting to new requirements incurs significant engineering costs. Programmable tensor accelerators offer a promising alternative by allowing reconfiguration of a virtual architecture that is overlaid on the physical FPGA configurable fabric. We propose such an overlay (τ-VTA) and an optimization method guided by agile-inspired auto-tuning techniques, achieving higher performance and faster convergence than the state of the art.
- Dionysios Diamantopoulos
- Burkhard Ringlein
- Mitra Purandare
- Gagandeep Singh
- Christoph Hagleitner
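To make the auto-tuning idea in the abstract concrete, here is a minimal sketch of template-based tuning in the TVM stack that the paper builds on, using stock AutoTVM on a plain matrix multiply. The τ-VTA overlay and the agile-inspired search strategy are the paper's contributions and are not shown; the template name, tiling knobs, problem sizes, and trial budget below are illustrative assumptions, not the authors' setup.

```python
# Hypothetical AutoTVM tuning sketch (stock TVM API, not the paper's tau-VTA flow).
import tvm
from tvm import te, autotvm


@autotvm.template("matmul_example")  # illustrative template name
def matmul(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    C = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    s = te.create_schedule(C.op)

    # Expose tunable knobs: how to tile the two output loops.
    y, x = s[C].op.axis
    cfg = autotvm.get_config()
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_x", x, num_outputs=2)
    yo, yi = cfg["tile_y"].apply(s, C, y)
    xo, xi = cfg["tile_x"].apply(s, C, x)
    s[C].reorder(yo, xo, yi, xi)
    return s, [A, B, C]


# Create the tuning task and measure candidates on the local machine.
task = autotvm.task.create("matmul_example", args=(512, 512, 512, "float32"), target="llvm")
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5),
)

# An XGBoost cost model steers the search toward promising configurations.
tuner = autotvm.tuner.XGBTuner(task)
tuner.tune(
    n_trial=64,
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("matmul_tuning.log")],
)
```

The key design point this illustrates: the schedule template defines a configuration space, and the tuner trades real hardware measurements against a learned cost model to converge on a good point. The paper's agile-inspired method targets the same loop but aims to reach good configurations in fewer measurement trials on the FPGA overlay.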