
High Performance and Portable Convolution Operators for ARM-based Multicore Processors (2005.06410v1)

Published 13 May 2020 in cs.PF

Abstract: The considerable impact of Convolutional Neural Networks on many Artificial Intelligence tasks has led to the development of various high performance algorithms for the convolution operator present in these networks. One of these approaches leverages the IM2COL transform followed by a general matrix multiplication (GEMM) in order to take advantage of the highly optimized realizations of the GEMM kernel in many linear algebra libraries. The main problems of this approach are 1) the large memory workspace required to host the intermediate matrices generated by the IM2COL transform; and 2) the time to perform the IM2COL transform, which is not negligible for complex neural networks. This paper presents a portable high performance convolution algorithm based on the BLIS realization of the GEMM kernel that avoids the use of the intermediate memory by taking advantage of the BLIS structure. In addition, the proposed algorithm eliminates the cost of the explicit IM2COL transform, while maintaining the portability and performance of the underlying realization of GEMM in BLIS.
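To make the baseline concrete, the following is a minimal NumPy sketch of the IM2COL + GEMM approach the paper improves upon (stride 1, no padding; all function names here are illustrative, not from the paper or from BLIS). Note how `im2col` materializes a large intermediate matrix of size (C·kh·kw) × (OH·OW), which is exactly the workspace cost the proposed algorithm avoids by folding the transform into BLIS's internal packing routines.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold an input tensor (C, H, W) into a matrix whose columns are the
    kh x kw patches visited by a stride-1, no-padding convolution.
    This intermediate matrix is the memory workspace criticized in the paper."""
    c, h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, oh * ow), dtype=x.dtype)
    row = 0
    for ch in range(c):
        for i in range(kh):
            for j in range(kw):
                # All output positions that read input element offset (i, j)
                cols[row] = x[ch, i:i + oh, j:j + ow].reshape(-1)
                row += 1
    return cols

def conv_im2col(x, weights):
    """Convolution expressed as an explicit IM2COL followed by one GEMM.
    x: (C, H, W); weights: (K, C, kh, kw); returns (K, OH, OW)."""
    k, c, kh, kw = weights.shape
    oh, ow = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    cols = im2col(x, kh, kw)                   # explicit workspace
    w_mat = weights.reshape(k, c * kh * kw)    # filters as a matrix
    return (w_mat @ cols).reshape(k, oh, ow)   # the single GEMM call
```

In a production library the `@` above would dispatch to an optimized GEMM (e.g. BLIS); the paper's contribution is to interleave the patch extraction with GEMM's own packing of its operands, so neither the `cols` buffer nor the separate IM2COL pass is needed.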

Authors (5)
  1. Pablo San Juan (1 paper)
  2. Adrián Castelló (7 papers)
  3. Manuel F. Dolz (5 papers)
  4. Pedro Alonso-Jordá (3 papers)
  5. Enrique S. Quintana-Ortí (31 papers)
Citations (18)
