Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data (1904.08632v1)

Published 18 Apr 2019 in cs.CV

Abstract: In this paper we investigate the problem of image quality assessment (IQA) and enhancement via machine learning. This issue has long attracted wide attention in the computational intelligence and image processing communities, since, for many practical applications such as object detection and recognition, raw images usually need to be appropriately enhanced to raise their visual quality (e.g. visibility and contrast). In fact, proper enhancement can noticeably improve the quality of input images, even beyond that of the originally captured images, which are generally thought to be of the best quality. In this work, we present two main contributions. The first contribution is a new no-reference (NR) IQA model. Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness and more, and then yields a measure of visual quality using a regression module, which is learned with big-data training samples that are much larger than the relevant image datasets. Results of experiments on nine datasets validate the superiority and efficiency of our blind metric compared with typical state-of-the-art full-, reduced- and no-reference IQA methods. The second contribution is a robust image enhancement framework based on quality optimization. For an input image, guided by the proposed NR-IQA measure, we conduct histogram modification to successively rectify image brightness and contrast to a proper level. Thorough tests demonstrate that our framework can effectively enhance natural images, low-contrast images, low-light images and dehazed images. The source code will be released at https://sites.google.com/site/guke198701/publications.

Authors (4)
  1. Ke Gu (7 papers)
  2. Dacheng Tao (829 papers)
  3. Junfei Qiao (2 papers)
  4. Weisi Lin (118 papers)
Citations (344)

Summary

Overview of "Learning a No-Reference Quality Assessment Model of Enhanced Images with Big Data"

The paper "Learning a No-Reference Quality Assessment Model of Enhanced Images with Big Data" by Ke Gu et al. addresses the challenge of assessing the quality of enhanced images without reference to original images, a task critical in computer vision and image processing for applications such as object recognition and detection. The two primary contributions of this work are the development of a novel no-reference image quality assessment (NR-IQA) model and a robust image enhancement framework guided by quality optimization.

Key Contributions

  1. Development of NR-IQA Model: The proposed no-reference model evaluates image quality by extracting 17 features indicative of visual quality, including contrast, sharpness, brightness, and naturalness. A key advance is the regression module, trained on a dataset substantially larger than those typically used in IQA research, which yields robust and reliable quality prediction. The model's effectiveness is demonstrated across nine datasets, showing superiority and efficiency over existing IQA methods (a minimal sketch of this feature-plus-regression pipeline follows this list).
  2. Quality-based Image Enhancement Framework: Utilizing their NR-IQA model, the authors introduce an image enhancement framework that optimizes visual quality by appropriately adjusting brightness and contrast through histogram modification. This approach successfully improves the visual quality of natural, low-contrast, low-light, and dehazed images (a sketch of such a quality-guided search also follows this list).
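
Below is a minimal sketch of the feature-plus-regression idea in the first item. The helper names (extract_features, fit_quality_model, predict_quality), the four toy feature proxies, and the scikit-learn random forest regressor are illustrative assumptions; the paper's actual 17 features and its big-data-trained regression module are considerably more elaborate.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_features(img):
    """Toy stand-ins for the paper's 17 quality-aware features
    (contrast, sharpness, brightness, naturalness, ...); img is a
    grayscale uint8 array."""
    x = img.astype(np.float64) / 255.0
    brightness = x.mean()
    contrast = x.std()
    gy, gx = np.gradient(x)
    sharpness = np.mean(np.hypot(gx, gy))        # crude sharpness proxy: mean gradient magnitude
    hist, _ = np.histogram(x, bins=32, range=(0.0, 1.0), density=True)
    hist = hist[hist > 0]
    naturalness = -np.sum(hist * np.log(hist))   # histogram entropy as a crude naturalness proxy
    return np.array([brightness, contrast, sharpness, naturalness])

def fit_quality_model(train_imgs, train_mos):
    """Learn a regression from features to subjective scores (MOS)."""
    X = np.stack([extract_features(im) for im in train_imgs])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, train_mos)
    return model

def predict_quality(model, img):
    """Blind (no-reference) quality estimate for a single image."""
    return float(model.predict(extract_features(img)[None, :])[0])
```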
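
A comparable sketch of the quality-guided enhancement loop in the second item, reusing predict_quality from the snippet above. The paper performs guided histogram modification; as a stand-in, this sketch grid-searches a simple gamma (brightness) and linear stretch (contrast) adjustment and keeps whichever candidate the NR-IQA model scores highest. The parameter ranges and helper names are illustrative assumptions.

```python
import numpy as np
from itertools import product

def adjust(img, gamma, stretch):
    """Simple brightness (gamma) and contrast (linear stretch about the mean) adjustment."""
    x = img.astype(np.float64) / 255.0
    x = np.power(x, gamma)                                        # brightness rectification
    x = np.clip((x - x.mean()) * stretch + x.mean(), 0.0, 1.0)    # contrast rectification
    return (x * 255.0).astype(np.uint8)

def enhance(img, quality_model, predict_quality):
    """Grid-search brightness/contrast settings and keep the version the
    NR-IQA model rates highest (a stand-in for the paper's guided
    histogram modification)."""
    gammas = np.linspace(0.5, 2.0, 7)
    stretches = np.linspace(0.8, 2.0, 7)
    best_img, best_score = img, predict_quality(quality_model, img)
    for g, s in product(gammas, stretches):
        candidate = adjust(img, g, s)
        score = predict_quality(quality_model, candidate)
        if score > best_score:
            best_img, best_score = candidate, score
    return best_img, best_score
```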

Experimental Results

The experimental validation includes rigorous testing against state-of-the-art full-reference (FR), reduced-reference (RR), and other no-reference (NR) IQA methods. The proposed NR-IQA measure consistently shows superior performance across several benchmarks. Key results include:

  • High correlation with human opinions, as evidenced by the Pearson (PLCC), Spearman rank-order (SRCC), and Kendall rank-order (KRCC) correlation coefficients (see the snippet after this list).
  • Applicability across various datasets, showcasing robustness and generalization capabilities.
  • The enhancement framework, guided by the NR-IQA model, provides tangible improvements to various types of images.
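
For reference, the three correlation indices can be computed between a model's predictions and mean opinion scores (MOS) with scipy.stats; this is a generic illustration, not the authors' evaluation code. In practice, PLCC is usually reported after fitting a nonlinear (e.g. logistic) mapping between predictions and MOS, which is omitted here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def correlation_indices(predicted, mos):
    """Agreement between objective predictions and subjective scores (MOS)."""
    predicted, mos = np.asarray(predicted), np.asarray(mos)
    plcc, _ = pearsonr(predicted, mos)    # Pearson linear correlation (prediction accuracy)
    srcc, _ = spearmanr(predicted, mos)   # Spearman rank correlation (prediction monotonicity)
    krcc, _ = kendalltau(predicted, mos)  # Kendall rank correlation (prediction monotonicity)
    return plcc, srcc, krcc
```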

Implications and Future Work

The implications of this research are both practical and theoretical:

  • Practical Implications: The NR-IQA model can be directly integrated into real-world applications where reference images are unavailable, such as automated quality control in photography and real-time video streaming services.
  • Theoretical Implications: This work advances the understanding of NR-IQA by demonstrating the effectiveness of leveraging big data in training robust no-reference models. It also provides a meaningful exploration of how image attributes related to human perception can be quantitatively assessed and optimized.

For future advancements, the paper suggests exploring the integration of visual saliency into the assessment framework and adapting the model to handle complex distortions such as those introduced by denoising, deblurring, and super-resolution. Moreover, feature extraction could be accelerated through parallel computation, and prediction accuracy could be improved by incorporating additional features that capture further image characteristics.

The research presented in this paper provides a comprehensive approach to image quality assessment and enhancement, laying the foundation for further exploration and development in the area of automated image processing and computer vision.