Overview of "Massive Online Crowdsourced Study of Subjective and Objective Picture Quality"
This paper presents an extensive study of image quality assessment built around the creation of the LIVE In the Wild Image Quality Challenge Database. The authors, Ghadiyaram and Bovik, address shortcomings in existing image quality databases, many of which rely on synthetic distortions applied in controlled environments. To bridge this gap, the research leverages real-world images captured with mobile devices, which contain authentic, complex mixtures of distortions.
Contributions and Methodology
The primary contributions of the paper include:
- Database Creation: The LIVE In the Wild Image Quality Challenge Database consists of 1,162 images afflicted by genuine distortions from diverse mobile devices.
- Crowdsourcing Framework: The authors deployed a crowdsourcing strategy on Amazon Mechanical Turk, gathering over 350,000 opinion scores from more than 8,100 unique observers.
- Data Reliability and Validation: Despite the uncontrolled viewing conditions inherent to crowdsourcing, the study reports high internal consistency in the subjective scores, validating the large-scale crowdsourced approach with a Spearman correlation of 0.9851 between crowdsourced data and established laboratory results.
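The validation step above can be sketched in a few lines: aggregate each image's raw opinion scores into a mean opinion score (MOS), then compute the Spearman rank correlation between the crowdsourced and lab MOS vectors. The image names and all numbers below are hypothetical placeholders, not values from the database; only the procedure mirrors the paper.

```python
from statistics import mean

# Hypothetical raw opinion scores on a 1-100 scale (as in the study);
# image names and values are illustrative only.
raw_scores = {
    "img_a": [72, 68, 75, 70],
    "img_b": [34, 40, 31, 37],
    "img_c": [55, 50, 52, 54],
    "img_d": [88, 85, 90, 89],
    "img_e": [61, 58, 63, 60],
}

# Aggregate each image's opinions into a mean opinion score (MOS).
crowd_mos = [mean(v) for v in raw_scores.values()]

# Hypothetical lab-collected MOS for the same five images.
lab_mos = [68.0, 38.2, 58.9, 85.4, 55.1]

def ranks(values):
    """Assign 1-based ranks, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

rho = spearman(crowd_mos, lab_mos)  # close to 1.0 when rank orders agree
```

In practice one would use a library routine such as `scipy.stats.spearmanr`; the hand-rolled version here just makes the rank-correlation idea explicit.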
Insights into Perceptual Image Quality
The authors highlight that traditional databases typically focus on single, synthetic distortions. By contrast, the new database captures images affected by natural conditions such as lighting variations and device-specific artifacts, which are underrepresented or absent in controlled lab settings. This perspective allows a broader understanding of how real-world distortions challenge both human observers and automated perceptual models.
Performance Evaluation of IQA Models
Several top-performing no-reference image quality assessment (NR-IQA) models were tested on the new database. The results reveal that existing models, including state-of-the-art ones such as BRISQUE, perform poorly on this dataset, while FRIQUEE, a new method proposed by the authors, shows comparatively better performance.
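The benchmarking procedure behind this comparison is straightforward: score every image with each NR-IQA model, then rank the models by how well their predictions correlate with the human MOS. The sketch below uses made-up MOS and prediction values (not numbers from the paper) and the classic no-ties Spearman formula purely to illustrate the evaluation loop.

```python
# Hypothetical ground-truth MOS for six test images.
mos = [34.2, 58.7, 71.3, 45.9, 62.0, 25.4]

# Hypothetical per-image quality predictions for two NR-IQA models;
# the values are invented for illustration, not taken from the paper.
predictions = {
    "BRISQUE": [30.1, 55.0, 60.2, 50.3, 61.8, 28.9],
    "FRIQUEE": [33.0, 57.5, 70.1, 46.8, 63.2, 26.0],
}

def srocc(x, y):
    """Spearman rank-order correlation via the d^2 shortcut formula
    (valid when there are no tied values, as in this toy data)."""
    rx = [sorted(x).index(v) for v in x]
    ry = [sorted(y).index(v) for v in y]
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Rank models by agreement with human judgments (higher SROCC is better).
scores = {name: srocc(pred, mos) for name, pred in predictions.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

Papers in this area typically report SROCC alongside Pearson's linear correlation (PLCC) after a nonlinear fitting step; the rank-only version above is the simplest variant of that protocol.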
Implications for Future Research
The findings suggest an urgent need for the development of more robust IQA models that can handle the complex and varied distortions observed in real-world scenarios. The crowdsourced approach also opens new avenues for large-scale subjective quality data collection, overcoming traditional limitations of lab-based studies.
Conclusion
This paper sets a precedent for future research in real-world image quality assessment, highlighting the necessity of pairing authentic distortions with crowdsourced subjective assessments. Its contributions could reshape methodologies in the field, encouraging the development of advanced models that ensure a satisfactory quality of experience (QoE) in everyday image consumption.
Future developments could explore similar methodologies in video quality assessment, extending the crowd-based strategy to capture a more comprehensive understanding of multimedia quality perceptions in real-world settings.