
Learning a No-Reference Quality Metric for Single-Image Super-Resolution (1612.05890v1)

Published 18 Dec 2016 in cs.CV

Abstract: Numerous single-image super-resolution algorithms have been proposed in the literature, but few studies address the problem of performance evaluation based on visual perception. While most super-resolution images are evaluated by full-reference metrics, the effectiveness is not clear and the required ground-truth images are not always available in practice. To address these problems, we conduct human subject studies using a large set of super-resolution images and propose a no-reference metric learned from visual perceptual scores. Specifically, we design three types of low-level statistical features in both spatial and frequency domains to quantify super-resolved artifacts, and learn a two-stage regression model to predict the quality scores of super-resolution images without referring to ground-truth images. Extensive experimental results show that the proposed metric is effective and efficient to assess the quality of super-resolution images based on human perception.

Citations (477)

Summary

  • The paper introduces a no-reference quality metric that learns from human perceptual scores to assess single-image super-resolution outputs effectively.
  • It employs local frequency, global frequency, and spatial features integrated via a two-stage regression model to predict quality without needing ground-truth images.
  • Experimental results show a strong correlation with human ratings and superior performance compared to traditional metrics like PSNR and SSIM.

Summary of "Learning a No-Reference Quality Metric for Single-Image Super-Resolution"

This paper by Ma et al. addresses the challenge of assessing the visual quality of single-image super-resolution (SR) outputs without relying on reference images. Typical evaluation metrics like PSNR and SSIM necessitate ground-truth high-resolution images, which are frequently unavailable in practice. The authors propose a no-reference quality metric that learns from human visual perceptual scores to evaluate SR images effectively. This work introduces three types of low-level statistical features in both spatial and frequency domains, which are then used in a two-stage regression model to predict quality scores without the need for ground-truth images.

Approach and Implementation

  1. Dataset and Human Perceptual Scores Collection: The authors conducted extensive human subject studies, using a substantial dataset of SR images generated by multiple algorithms. Participants rated the quality of these SR images, and these ratings served as training data for the proposed no-reference metric.
  2. Feature Extraction:
    • Local Frequency Features: DCT coefficient statistics that capture the distribution of high-frequency artifacts introduced during SR.
    • Global Frequency Features: Wavelet-transform coefficients modeled with Gaussian scale mixtures to capture within-band and across-band statistical dependencies.
    • Spatial Features: Singular values derived from the spatial structure of image patches, which adapt to intensity discontinuities.
  3. Two-Stage Regression Model: The initial stage involves learning distinct regression forests for each feature type to estimate partial quality scores. These partial scores are then linearly combined to predict the overall image quality score, correlating effectively with human perceptual scores.
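The two-stage pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature matrices are random stand-ins for the three real feature groups, and scikit-learn's `RandomForestRegressor` is assumed as the regression-forest learner.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Toy stand-ins for the three feature groups (local DCT, global wavelet/GSM,
# spatial singular-value features); in the paper these come from real SR images.
rng = np.random.default_rng(0)
n_images = 200
feature_sets = [rng.normal(size=(n_images, 8)) for _ in range(3)]
scores = rng.uniform(0, 10, size=n_images)  # human perceptual quality scores

# Stage 1: learn one regression forest per feature type,
# each producing a partial quality score.
forests = [
    RandomForestRegressor(n_estimators=50, random_state=0).fit(X, scores)
    for X in feature_sets
]
partial_scores = np.column_stack(
    [f.predict(X) for f, X in zip(forests, feature_sets)]
)

# Stage 2: linearly combine the partial scores into one overall quality score.
combiner = LinearRegression().fit(partial_scores, scores)
predicted = combiner.predict(partial_scores)  # one overall score per image
```

The two stages decouple feature-specific modeling from score fusion: each forest specializes in one artifact type, and the linear stage learns how much weight to give each.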

Experimental Results

Extensive experiments demonstrate the efficacy of the proposed metric. The approach correlates strongly with human scores across various validation settings, outperforming existing no-reference metrics such as BRISQUE and BLIINDS, and even full-reference measures like PSNR and SSIM in scenarios where ground-truth images were available.
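Correlation with human scores in such studies is conventionally reported via Spearman rank-order (SROCC) and Pearson linear (PLCC) correlation. A small sketch with hypothetical scores (the numbers below are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Hypothetical predicted metric scores and mean human opinion scores
# for six SR images.
metric_scores = np.array([3.1, 4.5, 2.2, 5.0, 3.8, 4.1])
human_scores = np.array([3.0, 4.7, 2.5, 4.9, 3.6, 4.2])

srocc, _ = spearmanr(metric_scores, human_scores)  # rank-order agreement
plcc, _ = pearsonr(metric_scores, human_scores)    # linear agreement
```

Here both rankings agree exactly, so the SROCC is 1.0; a metric that preserves the human ranking scores highly even if its absolute values differ from the opinion scores.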

Implications and Future Work

The proposed metric enables robust evaluation of SR algorithms in practical settings where reference images are unavailable. It provides a critical tool for advancing image processing methodologies without the constraints of reference-dependent assessment techniques. Future avenues could explore enhancements in feature representation and model learning that further encapsulate human visual perception intricacies. The method could also be adapted to assess other types of image restoration or enhancement tasks, potentially leveraging advancements in machine learning models to refine the evaluation process.

In conclusion, this work presents a pertinent contribution to the domain of image quality assessment, particularly within the context of single-image super-resolution. By aligning evaluation metrics with human perception, the paper sets a foundation for both practical applications and further research into perceptual quality measurement methodologies.