dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs (1904.06505v1)

Published 13 Apr 2019 in cs.CV and cs.MM

Abstract: Objective assessment of image quality is fundamentally important in many image processing tasks. In this work, we focus on learning blind image quality assessment (BIQA) models which predict the quality of a digital image with no access to its original pristine-quality counterpart as reference. One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training. Such data are typically collected via subjective testing, which is cumbersome, slow, and expensive. Here we first show that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIP) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content. We then learn an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model using RankNet, a pairwise learning-to-rank (L2R) algorithm, from millions of DIPs, each associated with a perceptual uncertainty level, leading to a DIP inferred quality (dipIQ) index. Extensive experiments on four benchmark IQA databases demonstrate that dipIQ outperforms state-of-the-art OU-BIQA models. The robustness of dipIQ is also significantly improved as confirmed by the group MAximum Differentiation (gMAD) competition method. Furthermore, we extend the proposed framework by learning models with ListNet (a listwise L2R algorithm) on quality-discriminable image lists (DIL). The resulting DIL Inferred Quality (dilIQ) index achieves an additional performance gain.

Overview of "dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs"

The paper "dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs" addresses a significant challenge in Blind Image Quality Assessment (BIQA) - the trade-off between the vast image space and the limited availability of reliable ground truth data for supervised learning. BIQA methods aim to evaluate image quality without referring to pristine counterparts, and their development has become crucial due to the unavailability of reference images in many real-world applications.

Main Contributions

1. Generation of Quality-Discriminable Image Pairs (DIPs):

The authors propose a method to automatically generate a vast amount of training data in the form of DIPs. The generation process exploits large-scale image databases and well-established full-reference IQA models to quantify quality differences between image pairs, circumventing the expensive and slow process of subjective testing.
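To make the pairing procedure concrete, here is a minimal Python sketch of DIP generation under stated assumptions: `fr_iqa` stands in for any full-reference IQA model (the paper combines several established ones), and the `threshold` value and the uncertainty mapping are illustrative placeholders, not the paper's exact formulation.

```python
# Sketch of DIP generation: pair up distorted versions of the same reference
# image and keep pairs whose full-reference quality gap is large enough to be
# confidently discriminable. `fr_iqa` is a placeholder for a full-reference
# IQA scorer; `threshold` is a hypothetical confidence margin.
from itertools import combinations

def generate_dips(reference, distorted_images, fr_iqa, threshold=0.1):
    """Return (better, worse, certainty) triplets from one reference image."""
    scores = [fr_iqa(reference, img) for img in distorted_images]
    dips = []
    for (i, si), (j, sj) in combinations(enumerate(scores), 2):
        gap = si - sj
        if abs(gap) >= threshold:              # quality-discriminable pair
            better, worse = (i, j) if gap > 0 else (j, i)
            # Map the score gap to a perceptual certainty level (a larger gap
            # means a more reliable label); this exact mapping is illustrative.
            certainty = min(1.0, abs(gap) / (2 * threshold))
            dips.append((distorted_images[better],
                         distorted_images[worse],
                         certainty))
    return dips
```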

2. Learning Opinion-Unaware BIQA (OU-BIQA) Models:

The authors demonstrate the training of an OU-BIQA model, termed dipIQ, that does not rely on subjective scores. They use RankNet, a pairwise learning-to-rank (L2R) algorithm, trained on millions of DIPs together with their associated perceptual uncertainty levels, to learn a mapping from image features to quality scores.
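The core of this pairwise training is the RankNet cross-entropy loss. The PyTorch sketch below illustrates it under assumptions: the small MLP `f`, the feature dimension, and the hyperparameters are illustrative rather than the paper's architecture, and `p_bar` plays the role of the DIP's uncertainty-weighted target preference.

```python
# Minimal PyTorch sketch of the RankNet pairwise loss. `f` is any
# differentiable scoring network mapping image features to a scalar quality
# score; `p_bar` is the target probability that the first image is better.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

def ranknet_loss(feat_better, feat_worse, p_bar):
    """Pairwise cross-entropy between predicted and target preference."""
    s_diff = f(feat_better) - f(feat_worse)   # score difference
    p = torch.sigmoid(s_diff)                 # P(first image is better)
    return -(p_bar * torch.log(p + 1e-8)
             + (1 - p_bar) * torch.log(1 - p + 1e-8)).mean()

# One illustrative optimization step on a random batch.
opt = torch.optim.Adam(f.parameters(), lr=1e-4)
fb, fw = torch.randn(32, 256), torch.randn(32, 256)
loss = ranknet_loss(fb, fw, p_bar=torch.full((32, 1), 0.9))
loss.backward()
opt.step()
```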

3. Extension to Listwise L2R through ListNet:

Furthermore, the paper extends the approach to ListNet, a listwise L2R algorithm, by assembling quality-discriminable image lists (DILs) from DIPs. The resulting DIL Inferred Quality (dilIQ) index achieves an additional performance gain.
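For intuition, a ListNet-style loss compares the top-one probability distributions induced by the predicted and target score lists. The sketch below is a generic formulation of that idea, not the paper's exact training code.

```python
# Minimal sketch of the ListNet top-one probability loss for a
# quality-discriminable image list (DIL): predicted scores are
# softmax-normalized and compared against a target distribution derived
# from the list's known quality ordering.
import torch

def listnet_loss(pred_scores, target_scores):
    """Cross-entropy between top-one probabilities of two score lists.

    pred_scores, target_scores: tensors of shape (batch, list_len).
    """
    p_target = torch.softmax(target_scores, dim=1)
    log_p_pred = torch.log_softmax(pred_scores, dim=1)
    return -(p_target * log_p_pred).sum(dim=1).mean()
```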

Numerical and Experimental Results

The proposed dipIQ model demonstrates improved performance over state-of-the-art OU-BIQA models across multiple benchmark databases, including LIVE, CSIQ, and TID2013. Validation with the group MAximum Differentiation (gMAD) competition method further confirms dipIQ's enhanced robustness, with significantly fewer incorrect preference predictions.

Numerically, dipIQ outperforms existing models in both Spearman's rank-order correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC) across various distortion types. Notably, when the same features are used, dipIQ performs competitively with, or better than, some existing opinion-aware models.
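For reference, both correlation metrics are standard and easy to compute. The sketch below uses made-up numbers purely for illustration; note that in IQA evaluation, PLCC is commonly computed after fitting a nonlinear logistic mapping, which is omitted here for brevity.

```python
# SRCC measures monotonic (rank) agreement between predicted scores and
# subjective scores; PLCC measures linear agreement. Values are illustrative.
import numpy as np
from scipy.stats import spearmanr, pearsonr

mos = np.array([25.3, 48.1, 60.7, 72.4, 88.9])        # subjective scores
predicted = np.array([0.21, 0.45, 0.55, 0.71, 0.90])  # dipIQ-style outputs

srcc, _ = spearmanr(mos, predicted)
plcc, _ = pearsonr(mos, predicted)
print(f"SRCC={srcc:.3f}  PLCC={plcc:.3f}")
```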

Implications and Future Developments

The work has significant practical and theoretical implications for BIQA. Practically, dipIQ provides a robust framework for real-world applications, offering reliable quality assessments when reference images are unavailable. Theoretically, it challenges the traditional reliance on subjective mean opinion scores (MOS) by demonstrating a feasible, fully automated alternative based on large-scale data generation and machine learning.

Given the promising results of the dipIQ and dilIQ models, further refinement using end-to-end approaches such as deep learning architectures could improve the generalizability and accuracy of similar BIQA systems. Moreover, enhancements to DIP generation, such as incorporating pairs of indistinguishable quality or better pair and list selection strategies, could further strengthen the reliability and applicability of such models.

The approach pioneered by Ma et al. in this paper opens new avenues for research into learning-to-rank methodologies tailored for image quality assessment, making a substantial contribution to the field.

Authors (5)
  1. Kede Ma
  2. Wentao Liu
  3. Tongliang Liu
  4. Zhou Wang
  5. Dacheng Tao
Citations (282)