
Sidebar: Scratchpad Based Communication Between CPUs and Accelerators (1910.10794v1)

Published 23 Oct 2019 in cs.AR

Abstract: Hardware accelerators for neural networks have shown great promise for both performance and power. These accelerators are at their most efficient when optimized for a fixed functionality, but this inflexibility limits the longevity of the hardware as the underlying neural network algorithms and structures undergo improvements and changes. We propose and evaluate a flexible design paradigm for accelerators that work in close coordination with host processors. The relatively static matrix operations are implemented in specialized accelerators, while fast-evolving functions, such as activations, are computed on the host processor. This architecture is enabled by a low-latency shared buffer we call Sidebar. Sidebar memory is shared between the accelerator and the host, exists outside of the program address space, and holds only intermediate data. We show that a generalised, DMA-dependent flexible accelerator design performs poorly in both performance and energy compared to an equivalent fixed-function accelerator. A Sidebar-based accelerator design achieves near-identical performance and energy to an equivalent fixed-function accelerator while still providing all the flexibility of computing activations on the host processor.
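
The abstract describes a split in which the accelerator deposits matrix-multiply results into a shared low-latency scratchpad and the host applies the fast-evolving activation in place, avoiding a DMA round trip for intermediate data. The sketch below is a minimal illustration of that handoff pattern, not the paper's actual interface: the accelerator is simulated in software, and all names (sidebar, accel_matmul, host_relu) are assumptions made for the example.

```c
/* Hypothetical sketch of a Sidebar-style split: the "accelerator" writes a
 * matrix product into a shared scratchpad, and the host applies the
 * activation directly on that buffer with no DMA copy of intermediates. */
#include <stddef.h>
#include <stdio.h>

#define TILE 4                       /* tiny tile so the demo stays readable */

/* Stand-in for the Sidebar scratchpad: shared by accelerator and host,
 * held apart from the program's regular data structures. */
static float sidebar[TILE * TILE];

/* Stand-in for the fixed-function accelerator: computes C = A * B and
 * places the result tile in the scratchpad. On real hardware this would
 * run on the accelerator, not the CPU. */
static void accel_matmul(const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            float acc = 0.0f;
            for (size_t k = 0; k < n; k++)
                acc += a[i * n + k] * b[k * n + j];
            sidebar[i * n + j] = acc;
        }
}

/* Host-side activation (ReLU here) applied in place on the scratchpad,
 * so swapping in a new activation needs no hardware change. */
static void host_relu(float *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = buf[i] > 0.0f ? buf[i] : 0.0f;
}

int main(void)
{
    float a[TILE * TILE], b[TILE * TILE];
    for (size_t i = 0; i < TILE * TILE; i++) {
        a[i] = (float)i - 7.0f;                        /* mixed-sign inputs */
        b[i] = (i % TILE == i / TILE) ? 1.0f : 0.0f;   /* identity matrix  */
    }

    accel_matmul(a, b, TILE);        /* static matrix op: accelerator's job */
    host_relu(sidebar, TILE * TILE); /* flexible activation: host's job     */

    for (size_t i = 0; i < TILE * TILE; i++)
        printf("%5.1f%s", sidebar[i], (i % TILE == TILE - 1) ? "\n" : " ");
    return 0;
}
```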
