Massive Online Crowdsourced Study of Subjective and Objective Picture Quality (1511.02919v1)

Published 9 Nov 2015 in cs.CV

Abstract: Most publicly available image quality databases have been created under highly controlled conditions by introducing graded simulated distortions onto high-quality photographs. However, images captured using typical real-world mobile camera devices are usually afflicted by complex mixtures of multiple distortions, which are not necessarily well-modeled by the synthetic distortions found in existing databases. The originators of existing legacy databases usually conducted human psychometric studies to obtain statistically meaningful sets of human opinion scores on images in a stringently controlled visual environment, resulting in small data collections relative to other kinds of image analysis databases. Towards overcoming these limitations, we designed and created a new database that we call the LIVE In the Wild Image Quality Challenge Database, which contains widely diverse authentic image distortions on a large number of images captured using a representative variety of modern mobile devices. We also designed and implemented a new online crowdsourcing system, which we have used to conduct a very large-scale, multi-month image quality assessment subjective study. Our database consists of over 350000 opinion scores on 1162 images evaluated by over 7000 unique human observers. Despite the lack of control over the experimental environments of the numerous study participants, we demonstrate excellent internal consistency of the subjective dataset. We also evaluate several top-performing blind Image Quality Assessment algorithms on it and present insights on how mixtures of distortions challenge both end users as well as automatic perceptual quality prediction models.

Overview of "Massive Online Crowdsourced Study of Subjective and Objective Picture Quality"

This paper presents an extensive study of image quality assessment through the creation of the LIVE In the Wild Image Quality Challenge Database. The authors, Ghadiyaram and Bovik, address shortcomings in existing image quality databases, many of which rely on synthetic distortions applied in controlled environments. To bridge this gap, the research leverages real-world images captured with mobile devices, which contain authentic mixtures of complex distortions.

Contributions and Methodology

The primary contributions of the paper include:

  1. Database Creation: The LIVE In the Wild Image Quality Challenge Database consists of 1,162 images afflicted by genuine distortions from diverse mobile devices.
  2. Crowdsourcing Framework: The authors deployed a crowdsourcing strategy on Amazon Mechanical Turk, gathering over 350,000 opinion scores from more than 8,100 unique observers.
  3. Data Reliability and Validation: Despite variability in the participants' viewing conditions, the study reports high internal consistency in the subjective dataset, validating the large-scale crowdsourced approach with a Spearman correlation of 0.9851 between crowdsourced scores and established lab results (a minimal version of such a consistency check is sketched after this list).
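
The consistency figure above is a correlation between mean opinion scores (MOS) obtained from independent groups of raters. Below is a minimal sketch, not the authors' code, of a split-half consistency check of that kind using NumPy and SciPy; the function name, the 50/50 random split, and the toy ratings are illustrative assumptions.

```python
# Split-half consistency of crowdsourced opinion scores (illustrative sketch).
import numpy as np
from scipy.stats import spearmanr

def split_half_consistency(ratings, n_trials=25, seed=0):
    """ratings: dict mapping image id -> list of raw opinion scores (0-100)."""
    rng = np.random.default_rng(seed)
    correlations = []
    for _ in range(n_trials):
        mos_a, mos_b = [], []
        for scores in ratings.values():
            scores = np.asarray(scores, dtype=float)
            order = rng.permutation(len(scores))       # random split of raters
            half = len(scores) // 2
            mos_a.append(scores[order[:half]].mean())  # MOS from one half
            mos_b.append(scores[order[half:]].mean())  # MOS from the other half
        rho, _ = spearmanr(mos_a, mos_b)               # agreement between halves
        correlations.append(rho)
    return float(np.mean(correlations))

# Toy usage with made-up ratings for four images on a 0-100 scale.
toy = {
    "img_01": [72, 68, 75, 70, 66, 74],
    "img_02": [35, 40, 33, 38, 42, 36],
    "img_03": [55, 60, 52, 58, 57, 54],
    "img_04": [88, 82, 90, 85, 84, 87],
}
print(split_half_consistency(toy))
```

A value near 1.0 indicates that independently sampled groups of raters rank the images almost identically, which is the property the paper relies on to argue that uncontrolled crowdsourced conditions do not undermine the data.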

Insights into Perceptual Image Quality

The authors highlight that traditional databases typically focus on single, synthetic distortions. By contrast, the new database captures images affected by natural conditions such as lighting variations and device-specific processing artifacts, which the graded synthetic distortions of controlled lab studies do not capture well. This perspective allows for a broader understanding of how real-world distortions challenge both human observers and automated perceptual models.

Performance Evaluation of IQA Models

Several top-performing no-reference image quality assessment (NR IQA) models were tested on the new database. The results reveal that existing models, including state-of-the-art ones like BRISQUE, perform poorly on this dataset, with FRIQUEE, a new method proposed by the authors, showing relatively better performance.
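
Performance in such evaluations is conventionally reported as the Spearman rank-order correlation (SRCC) and Pearson linear correlation (PLCC) between a model's predicted quality scores and the subjective MOS. The sketch below computes those two metrics with SciPy; the model scores and MOS values are toy numbers, not results from the paper.

```python
# Standard correlation metrics for blind IQA evaluation (illustrative sketch).
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_iqa(predicted, mos):
    """Correlate algorithm quality predictions with mean opinion scores."""
    predicted = np.asarray(predicted, dtype=float)
    mos = np.asarray(mos, dtype=float)
    srcc, _ = spearmanr(predicted, mos)  # rank-order (monotonic) agreement
    plcc, _ = pearsonr(predicted, mos)   # linear agreement
    return srcc, plcc

# Toy usage: scores from a hypothetical blind IQA model on five images.
pred = [0.62, 0.48, 0.81, 0.30, 0.55]
mos  = [64.0, 51.0, 78.0, 35.0, 58.0]
print(evaluate_iqa(pred, mos))
```

In published IQA studies, PLCC is usually computed after fitting a nonlinear logistic mapping from predictions to MOS; that step is omitted here for brevity.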

Implications for Future Research

The findings suggest an urgent need for the development of more robust IQA models that can handle the complex and varied distortions observed in real-world scenarios. The crowdsourced approach also opens new avenues for large-scale subjective quality data collection, overcoming traditional limitations of lab-based studies.

Conclusion

This paper sets a precedent for future research in real-world image quality assessment, highlighting the necessity of combining authentic distortions with crowdsourced subjective assessments. Its contributions could reshape methodologies in the field, encouraging the development of advanced models that ensure a satisfactory quality of experience (QoE) in everyday image consumption.

Future developments could explore similar methodologies in video quality assessment, extending the crowd-based strategy to capture a more comprehensive understanding of multimedia quality perceptions in real-world settings.

Authors (2)
  1. Deepti Ghadiyaram (23 papers)
  2. Alan C. Bovik (83 papers)
Citations (607)