
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs (1903.01521v1)

Published 4 Mar 2019 in cs.LG and cs.ET

Abstract: The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs). Although there has been a lot of research on model and algorithmic optimization of CNNs, little attention has been paid to the efficient implementation of these algorithms on embedded CPUs, which usually have very limited memory and a low power budget. This paper aims to fill this gap and focuses on the efficient implementation of Winograd or Cook-Toom based convolution on modern Arm Cortex-A CPUs, widely used in mobile devices today. Specifically, we demonstrate a reduction in inference latency by using a set of optimization strategies that improve the utilization of computational resources, and by effectively leveraging the ARMv8-A NEON SIMD instruction set. We evaluated our proposed region-wise multi-channel implementations on the Arm Cortex-A73 platform using several representative CNNs. The results show significant full-network performance improvements, up to 60%, over existing im2row/im2col based optimization techniques.
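To illustrate the multiply-count reduction the abstract refers to, here is a minimal sketch of the smallest 1-D Winograd instance, F(2,3), which produces two outputs of a 3-tap filter from a 4-element input tile using 4 multiplications instead of the 6 a direct correlation needs. This is a generic textbook formulation for illustration only, not the paper's region-wise multi-channel NEON implementation.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter over a 4-element
    input tile, using 4 multiplies instead of the 6 required directly."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (in practice computed once per filter and reused
    # across all input tiles).
    G0 = g0
    G1 = (g0 + g1 + g2) / 2.0
    G2 = (g0 - g1 + g2) / 2.0
    G3 = g2
    # Element-wise products in the transformed domain (the 4 multiplies).
    m0 = (d0 - d2) * G0
    m1 = (d1 + d2) * G1
    m2 = (d2 - d1) * G2
    m3 = (d1 - d3) * G3
    # Inverse transform: combine the 4 products into the 2 outputs.
    return [m0 + m1 + m2, m1 - m2 - m3]

def direct_corr(d, g):
    """Reference: direct 1-D correlation producing the same 2 outputs."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]
```

In 2-D CNN layers the same idea is applied as F(2x2, 3x3) or larger tiles, where the multiply savings grow but the transforms add memory traffic; managing that trade-off on cache- and bandwidth-limited Cortex-A cores is the focus of the paper.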

Authors (6)
  1. Partha Maji (7 papers)
  2. Andrew Mundy (3 papers)
  3. Ganesh Dasika (7 papers)
  4. Jesse Beu (10 papers)
  5. Matthew Mattina (35 papers)
  6. Robert Mullins (38 papers)
Citations (25)
