UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content

Published 29 May 2020 in cs.CV and eess.IV (arXiv:2005.14354v2)

Abstract: Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms. Accordingly, there is a great need for accurate video quality assessment (VQA) models for UGC/consumer videos to monitor, control, and optimize this vast content. Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of UGC content are unpredictable, complicated, and often commingled. Here we contribute to advancing the UGC-VQA problem by conducting a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and VQA model design. By employing a feature selection strategy on top of leading VQA model features, we are able to extract 60 of the 763 statistical features used by the leading models to create a new fusion-based BVQA model, which we dub the VIDeo quality EVALuator (VIDEVAL), that effectively balances the trade-off between VQA performance and efficiency. Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models. Our study protocol also defines a reliable benchmark for the UGC-VQA problem, which we believe will facilitate further research on deep learning-based VQA modeling, as well as perceptually-optimized efficient UGC video processing, transcoding, and streaming. To promote reproducible research and public evaluation, an implementation of VIDEVAL has been made available online: https://github.com/tu184044109/VIDEVAL_release

Citations (230)

Summary

  • The paper introduces VIDEVAL, a fusion-based blind video quality assessment (BVQA) model for user-generated content (UGC) videos.
  • VIDEVAL selects 60 of the 763 statistical features used by leading BVQA models, achieving high predictive accuracy at much lower computational cost on large-scale real-world databases.
  • The accompanying evaluation protocol defines a reliable benchmark for UGC-VQA and points toward combining deep learning with traditional handcrafted features.

An Expert Overview of "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content"

The rapid proliferation of user-generated content (UGC) videos has created a need for methods that gauge their perceptual quality, a task known as video quality assessment (VQA). The paper "UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content" addresses this need by focusing on no-reference/blind VQA (BVQA) models, emphasizing that the quality degradations in UGC videos are unpredictable, complicated, and often commingled.

Objective and Methodological Framework

The authors aim to advance BVQA modeling, drawing on insights from both subjective video quality studies and objective model design. They conduct a comparative evaluation of leading BVQA features and models within a fixed evaluation framework, then apply statistical feature selection to the pooled feature set of existing models to build a new fusion-based model, VIDEVAL, which is designed to balance prediction performance against computational cost.
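
To make this pipeline concrete, the following is a minimal sketch of feature-selection-plus-regression fusion, assuming a precomputed feature matrix (one row of 763 statistical features per video) and corresponding mean opinion scores. The feature matrix and scores are placeholders, and scikit-learn's SelectKBest and SVR stand in for the authors' actual selection and regression choices; this is not the exact VIDEVAL procedure.

```python
# Hedged sketch: fuse features from leading BVQA models by selecting a
# compact subset and regressing onto mean opinion scores (MOS).
# `features` and `mos` are placeholders, not real data from the paper.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
features = rng.random((200, 763))      # placeholder: 200 videos x 763 features
mos = rng.random(200) * 4.0 + 1.0      # placeholder MOS in [1, 5]

X_train, X_test, y_train, y_test = train_test_split(
    features, mos, test_size=0.2, random_state=0)

# Scale features, keep the 60 most MOS-correlated ones, then fit an SVR.
model = Pipeline([
    ("scale", MinMaxScaler()),
    ("select", SelectKBest(f_regression, k=60)),
    ("svr", SVR(kernel="rbf", C=10.0, gamma="scale")),
])
model.fit(X_train, y_train)
predicted_quality = model.predict(X_test)
```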

Data and Benchmarking

The study utilizes large-scale databases of UGC videos, including KoNViD-1k, LIVE-VQC, and YouTube-UGC, which contain authentic, in-the-wild distortions reflecting real-world capture and sharing conditions. Table 1 in the paper presents a taxonomy of these datasets, tracing their evolution and their relevance to practical scenarios. The fixed study protocol defines a reliable benchmark for the UGC-VQA problem, intended to support deep learning-based VQA modeling and perceptually optimized video processing, transcoding, and streaming.
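
Benchmarks of this kind typically report correlation-based metrics over repeated random train/test splits. The snippet below is a generic, hedged version of such a protocol (not the paper's exact configuration), computing median Spearman (SROCC) and Pearson (PLCC) correlations; the feature and score arrays are placeholders.

```python
# Hedged sketch of a common BVQA evaluation protocol: repeat random
# 80/20 train/test splits and report median SROCC and PLCC between
# predicted and subjective scores. All data below are placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
features = rng.random((200, 60))       # placeholder per-video features
mos = rng.random(200) * 4.0 + 1.0      # placeholder subjective scores

srocc_runs, plcc_runs = [], []
for seed in range(100):                # e.g., 100 random splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, mos, test_size=0.2, random_state=seed)
    model = Pipeline([("scale", MinMaxScaler()), ("svr", SVR(kernel="rbf"))])
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    srocc_runs.append(spearmanr(pred, y_te).correlation)
    # Many VQA studies apply a nonlinear logistic mapping before PLCC;
    # it is omitted here for brevity.
    plcc_runs.append(pearsonr(pred, y_te)[0])

print("median SROCC:", np.median(srocc_runs))
print("median PLCC:", np.median(plcc_runs))
```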

Numerical Results and Claims

VIDEVAL, built from 60 features selected out of an original pool of 763, achieves state-of-the-art performance at considerably lower computational cost than other leading models. Across evaluations, it demonstrates strong predictive accuracy as well as robustness to the diverse distortions found in UGC videos. This positions VIDEVAL as a reliable and efficient tool for large-scale video quality analysis, especially where storage, bandwidth, and processing resources are constrained.

Implications and Future Prospects

This comprehensive benchmarking not only enriches the theoretical foundation of BVQA modeling but also carries practical implications for streaming platforms and content providers. As models like VIDEVAL are adopted in industry practice, measurable improvements in video delivery and viewer experience become achievable.

Looking ahead, the research suggests integrating deep learning methods with traditional handcrafted features; the synergy between the two could yield VQA models that are both more accurate and computationally viable. As datasets grow to cover more diverse video types and quality levels, BVQA models will need to evolve accordingly, paving the way for further innovation in UGC quality assessment.
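
As one illustration of such a hybrid, the sketch below concatenates hypothetical deep features (e.g., temporally pooled CNN embeddings) with handcrafted features before a shallow regressor. The arrays, dimensions, and fusion choice are illustrative assumptions, not a design specified in the paper.

```python
# Hedged sketch of a hybrid BVQA model: concatenate handcrafted features
# with deep features and regress onto MOS with a shallow learner.
# All arrays are placeholders; dimensions are illustrative only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
handcrafted = rng.random((200, 60))    # e.g., VIDEVAL-style statistical features
deep = rng.random((200, 2048))         # e.g., temporally pooled CNN embeddings
mos = rng.random(200) * 4.0 + 1.0      # placeholder subjective scores in [1, 5]

# Early fusion by concatenation, followed by scaling and an SVR regressor.
fused = np.hstack([handcrafted, deep])
model = Pipeline([("scale", MinMaxScaler()), ("svr", SVR(kernel="rbf"))])
model.fit(fused, mos)
predicted_quality = model.predict(fused)
```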

Conclusion

This paper represents a significant advance in blind video quality assessment for user-generated content. By benchmarking existing BVQA methods and distilling their features into VIDEVAL, the authors make a substantial contribution to both academic research and practical application in the field. The work is likely to inspire further research and technological advances in AI-driven video quality evaluation.
