
Image Quality Assessment: Integrating Model-Centric and Data-Centric Approaches (2207.14769v2)

Published 29 Jul 2022 in cs.CV

Abstract: Learning-based image quality assessment (IQA) has made remarkable progress in the past decade, but nearly all consider the two key components -- model and data -- in isolation. Specifically, model-centric IQA focuses on developing "better" objective quality methods on fixed and extensively reused datasets, with a great danger of overfitting. Data-centric IQA involves conducting psychophysical experiments to construct "better" human-annotated datasets, which unfortunately ignores current IQA models during dataset creation. In this paper, we first design a series of experiments to probe computationally that such isolation of model and data impedes further progress of IQA. We then describe a computational framework that integrates model-centric and data-centric IQA. As a specific example, we design computational modules to quantify the sampling-worthiness of candidate images. Experimental results show that the proposed sampling-worthiness module successfully spots diverse failures of the examined blind IQA models, which are indeed worthy samples to be included in next-generation datasets.
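
The abstract names a sampling-worthiness module but does not spell out its form. Below is a minimal sketch of one plausible instantiation, assuming worthiness combines (a) disagreement among an ensemble of blind IQA (BIQA) models and (b) a feature-space diversity term against already-selected images. The function name, the alpha mixing weight, and all array shapes are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sampling-worthiness score for candidate images.
# Assumption: worthiness = ensemble disagreement + content diversity;
# this approximates, not reproduces, the paper's module.
import numpy as np

def sampling_worthiness(candidate_feats, candidate_scores, selected_feats,
                        alpha=0.5):
    """Score candidate images for inclusion in a next-generation IQA dataset.

    candidate_feats:  (N, D) content features of N candidate images
    candidate_scores: (M, N) quality predictions from M BIQA models,
                      each rescaled to a common range, e.g. [0, 100]
    selected_feats:   (K, D) features of images already in the dataset
    alpha:            trade-off between disagreement and diversity
    """
    # (a) Disagreement: candidates where the BIQA models disagree most are
    # likely failure cases for at least one model, hence worth annotating.
    disagreement = candidate_scores.std(axis=0)               # (N,)

    # (b) Diversity: distance to the nearest already-selected image,
    # discouraging re-sampling of redundant content.
    dists = np.linalg.norm(
        candidate_feats[:, None, :] - selected_feats[None, :, :], axis=-1)
    diversity = dists.min(axis=1)                             # (N,)

    # Normalize both terms to [0, 1] before mixing.
    def norm01(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    return alpha * norm01(disagreement) + (1 - alpha) * norm01(diversity)

# Illustrative usage with synthetic features and scores.
rng = np.random.default_rng(0)
scores = sampling_worthiness(
    candidate_feats=rng.normal(size=(100, 64)),
    candidate_scores=rng.uniform(0, 100, size=(4, 100)),
    selected_feats=rng.normal(size=(20, 64)),
)
top_candidates = np.argsort(scores)[::-1][:10]  # 10 most sampling-worthy
```

In practice one would select candidates greedily, moving each picked image into the selected pool and recomputing the diversity term, so that the chosen set stays spread out in feature space.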

Authors (3)
  1. Peibei Cao (5 papers)
  2. Dingquan Li (18 papers)
  3. Kede Ma (57 papers)
Citations (5)
