Im2win: Memory Efficient Convolution On SIMD Architectures (2306.14320v1)

Published 25 Jun 2023 in cs.NE, cs.AI, and cs.LG

Abstract: Convolution is the most expensive operation among neural network operations, so its performance is critical to the overall performance of neural networks. Commonly used convolution approaches, including general matrix multiplication (GEMM)-based convolution and direct convolution, rely on the im2col data transformation or use no data transformation at all, respectively. However, the im2col transformation can lead to at least a 2$\times$ memory footprint compared to using no data transformation, which limits the size of neural network models that can run on memory-limited systems. Meanwhile, using no data transformation consumes less memory but usually performs poorly due to nonconsecutive memory access. To solve these problems, we propose a new memory-efficient data transformation algorithm called im2win. This algorithm refactorizes a row of square or rectangular dot-product windows of the input image and flattens the unique elements within these windows into a row of the output tensor, which enables consecutive memory access and data reuse and thus greatly reduces the memory overhead. Furthermore, we propose a high-performance im2win-based convolution algorithm with various optimizations, including vectorization and loop reordering. Our experimental results show that our algorithm reduces the memory overhead by 41.6% on average compared to PyTorch's im2col-based convolution implementation, and achieves 3.6$\times$ and 5.3$\times$ average speedups over im2col-based convolution and convolution without data transformation, respectively.
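To make the transformation described in the abstract concrete, here is a minimal NumPy sketch of the layout idea rather than the paper's implementation: for each row of dot-product windows, the unique input elements the windows touch are flattened into one contiguous row, so duplication scales with the kernel height instead of the full kernel area as in im2col. It assumes a single channel, stride 1, and no padding, and the names `im2win_rows` and `conv2d_from_im2win` are illustrative, not from the paper.

```python
import numpy as np

def im2win_rows(x, kh, kw):
    """Sketch of the im2win layout: one flattened row per row of windows.

    The i-th row of windows covers input rows i..i+kh-1 in full, so only
    those kh*w unique elements are stored (vs. kh*kw per window in im2col).
    Assumes a single channel, stride 1, no padding.
    """
    h, w = x.shape
    oh = h - kh + 1
    out = np.empty((oh, kh * w), dtype=x.dtype)
    for i in range(oh):
        # flatten the unique elements covered by the i-th row of windows
        out[i] = x[i:i + kh, :].reshape(-1)
    return out

def conv2d_from_im2win(x, kernel):
    """Naive convolution that reads each window from a consecutive im2win row."""
    kh, kw = kernel.shape
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    rows = im2win_rows(x, kh, kw)
    y = np.zeros((oh, ow), dtype=x.dtype)
    for i in range(oh):
        row = rows[i].reshape(kh, w)          # the flattened window row
        for j in range(ow):
            y[i, j] = np.sum(row[:, j:j + kw] * kernel)
    return y
```

Under these simplifying assumptions the transformed tensor holds roughly oh * kh * w elements versus oh * ow * kh * kw for im2col, which is where the memory saving comes from; the paper's algorithm additionally applies vectorization, loop reordering, and other SIMD-oriented optimizations on top of this layout.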

Authors (3)
  1. Shuai Lu (91 papers)
  2. Jun Chu (6 papers)
  3. Xu T. Liu (5 papers)
Citations (3)