RefQSR: Reference-based Quantization for Image Super-Resolution Networks (2404.01690v1)

Published 2 Apr 2024 in cs.CV

Abstract: Single image super-resolution (SISR) aims to reconstruct a high-resolution image from its low-resolution observation. Recent deep learning-based SISR models show high performance at the expense of increased computational costs, limiting their use in resource-constrained environments. As a promising solution for computationally efficient network design, network quantization has been extensively studied. However, existing quantization methods developed for SISR have yet to effectively exploit image self-similarity, which is a new direction for exploration in this study. We introduce a novel method called reference-based quantization for image super-resolution (RefQSR) that applies high-bit quantization to several representative patches and uses them as references for low-bit quantization of the rest of the patches in an image. To this end, we design dedicated patch clustering and reference-based quantization modules and integrate them into existing SISR network quantization methods. The experimental results demonstrate the effectiveness of RefQSR on various SISR networks and quantization methods.
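The pipeline the abstract outlines — cluster an image's patches, quantize one representative patch per cluster at high bit-width, and quantize the remaining patches at low bit-width — can be sketched as toy code. This is an illustrative assumption-laden sketch, not the paper's implementation: RefQSR quantizes network weights/activations inside an SISR model, whereas this demo quantizes raw pixel patches, and the function names, k-means stand-in for the patch clustering module, and bit-width choices are all hypothetical.

```python
import numpy as np

def quantize(x, bits):
    # Uniform symmetric quantization of an array to the given bit-width.
    qmax = 2 ** (bits - 1) - 1
    m = np.abs(x).max()
    scale = m / qmax if m > 0 else 1.0
    return np.round(x / scale).clip(-qmax, qmax) * scale

def refqsr_sketch(image, patch=8, n_refs=4, hi_bits=8, lo_bits=2, seed=0):
    # Split the image into non-overlapping patches (illustrative stand-in
    # for the per-patch processing in patch-based SISR acceleration).
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch]
               for i in range(0, h, patch) for j in range(0, w, patch)]
    feats = np.stack([p.flatten() for p in patches])

    # Toy k-means as a stand-in for the paper's patch clustering module,
    # which groups self-similar patches together.
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_refs, replace=False)]
    for _ in range(10):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_refs):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)

    # The patch nearest each cluster center acts as the "reference": it gets
    # high-bit quantization, while the rest of its cluster is quantized at
    # low bit-width (in RefQSR the references would further guide the
    # low-bit patches through a reference-based quantization module).
    out = []
    for k in range(n_refs):
        idx = np.where(labels == k)[0]
        if len(idx) == 0:
            continue
        ref = idx[np.argmin(((feats[idx] - centers[k]) ** 2).sum(-1))]
        for i in idx:
            bits = hi_bits if i == ref else lo_bits
            out.append((i, quantize(patches[i], bits)))
    return out
```

Under this sketch, only `n_refs` patches per image pay the high-bit cost, which is the efficiency argument the abstract makes: self-similarity lets the many low-bit patches borrow information from a few high-bit references.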
