Processing Energy Modeling for Neural Network Based Image Compression (2306.16755v1)
Abstract: Nowadays, the compression performance of neural-network-based image compression algorithms outperforms state-of-the-art compression approaches such as JPEG or HEIC. Unfortunately, most neural-network-based compression methods are executed on GPUs and consume a large amount of energy during execution. This paper therefore performs an in-depth analysis of the energy consumption of state-of-the-art neural-network-based compression methods on a GPU and shows that the energy consumption of compression networks can be estimated from the image size with mean estimation errors below 7%. Finally, using a correlation analysis, we find that the number of operations per pixel is the main driver of energy consumption and deduce that the network layers up to the second downsampling step consume the most energy.
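The abstract states that compression energy can be estimated from the image size alone. A minimal sketch of such an estimator is an ordinary least-squares line relating pixel count to measured energy; the measurement values and function names below are illustrative assumptions, not data or code from the paper.

```python
# Hedged sketch: fit E ≈ a * num_pixels + b, assuming energy scales
# roughly linearly with image size as the abstract suggests.
def fit_linear(pixels, energy):
    """Least-squares fit of a line energy = a * pixels + b."""
    n = len(pixels)
    mean_x = sum(pixels) / n
    mean_y = sum(energy) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(pixels, energy))
    var = sum((x - mean_x) ** 2 for x in pixels)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Illustrative (pixel count, joules) pairs -- NOT measurements from the paper.
pixels = [512 * 512, 768 * 512, 1024 * 768, 1920 * 1080]
energy = [2.7, 4.0, 7.9, 20.5]
a, b = fit_linear(pixels, energy)

def estimate_energy(num_pixels):
    """Predict GPU energy (joules) for an image with num_pixels pixels."""
    return a * num_pixels + b
```

In practice the energy samples would come from polling the GPU's power sensor (e.g. via NVML or nvidia-smi, as the paper's tooling references suggest) and integrating power over the run time of the compression network.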