LR-CNN: Lightweight Row-centric Convolutional Neural Network Training for Memory Reduction (2401.11471v1)

Published 21 Jan 2024 in cs.DC and cs.AI

Abstract: Over the last decade, Convolutional Neural Networks with multi-layer architectures have advanced rapidly. However, training such complex networks is highly memory-consuming, because large amounts of intermediate data must be preserved across layers, especially when processing high-dimensional inputs with large batch sizes. This poses a serious challenge to the limited memory capacity of current accelerators (e.g., GPUs). Existing efforts mitigate this bottleneck either with external auxiliary solutions that incur additional hardware costs, or with internal modifications that risk an accuracy penalty. In contrast, our analysis reveals that intra- and inter-layer computations exhibit weak spatial-temporal dependency, and in some cases complete independence. This inspires us to break the traditional layer-by-layer (column) dataflow rule: operations are instead re-organized into rows that span all convolution layers. This lightweight design allows the majority of intermediate data to be discarded without any loss of accuracy. We particularly study the weak dependency between two consecutive rows, and for the resulting skewed memory consumption we provide two solutions suited to different scenarios. Evaluations on two representative networks confirm the effectiveness of our approach. We also validate that our dataflow optimization can be smoothly combined with existing works for further memory reduction.
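
The row-centric dataflow described in the abstract can be illustrated with a small sketch. The following is not the authors' implementation; it is a minimal PyTorch illustration under simplifying assumptions (two unpadded 3x3 convolution layers, a forward-only pass, and an illustrative helper named row_centric_forward). It shows the core idea: a horizontal stripe of output rows depends only on a slightly larger stripe of input rows (a small halo per layer), so a stripe can be pushed through all convolution layers at once without ever materializing the full intermediate feature maps.

```python
# Minimal sketch of row-centric (stripe-wise) dataflow vs. the usual
# layer-by-layer (column) dataflow. Assumptions: unpadded ("valid") 3x3
# convolutions and a forward pass only; zero-padded layers and the
# backward pass would need extra care at stripe boundaries.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv1 = nn.Conv2d(3, 8, kernel_size=3)   # valid conv: output loses 2 rows
conv2 = nn.Conv2d(8, 16, kernel_size=3)  # valid conv: output loses 2 more rows

x = torch.randn(1, 3, 64, 64)            # N, C, H, W

# Column (layer-by-layer) dataflow: the full intermediate feature map of
# conv1 (62 rows here) stays alive until conv2 has consumed all of it.
with torch.no_grad():
    reference = conv2(conv1(x))          # shape (1, 16, 60, 60)

# Row-centric dataflow: produce the final output stripe by stripe. Each
# stripe of `rows` output rows needs only `rows + 4` input rows (a halo of
# 2 rows per 3x3 valid conv), so the conv1 intermediate never exceeds a
# small slice.
def row_centric_forward(x, rows=8):
    out_h = x.shape[2] - 4               # two valid 3x3 convs remove 4 rows
    stripes = []
    with torch.no_grad():
        for top in range(0, out_h, rows):
            bottom = min(top + rows, out_h)
            x_slice = x[:, :, top:bottom + 4, :]   # input rows plus halo
            t = conv1(x_slice)                     # small intermediate only
            stripes.append(conv2(t))               # output rows [top, bottom)
    return torch.cat(stripes, dim=2)

assert torch.allclose(row_centric_forward(x), reference, atol=1e-5)
```

In the paper's setting the same idea is applied to training, where discarding most intermediate activations is what yields the memory reduction; the consecutive-row weak dependency and the skewed memory consumption mentioned in the abstract concern exactly the halo overlap and uneven stripe cost that this simplified sketch glosses over.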
