FAST-VQA: Efficient End-to-end Video Quality Assessment with Fragment Sampling (2207.02595v1)

Published 6 Jul 2022 in cs.CV and cs.MM

Abstract: Current deep video quality assessment (VQA) methods are usually with high computational costs when evaluating high-resolution videos. This cost hinders them from learning better video-quality-related representations via end-to-end training. Existing approaches typically consider naive sampling to reduce the computational cost, such as resizing and cropping. However, they obviously corrupt quality-related information in videos and are thus not optimal for learning good representations for VQA. Therefore, there is an eager need to design a new quality-retained sampling scheme for VQA. In this paper, we propose Grid Mini-patch Sampling (GMS), which allows consideration of local quality by sampling patches at their raw resolution and covers global quality with contextual relations via mini-patches sampled in uniform grids. These mini-patches are spliced and aligned temporally, named as fragments. We further build the Fragment Attention Network (FANet) specially designed to accommodate fragments as inputs. Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations. It improves state-of-the-art accuracy by around 10% while reducing 99.5% FLOPs on 1080P high-resolution videos. The newly learned video-quality-related representations can also be transferred into smaller VQA datasets, boosting performance in these scenarios. Extensive experiments show that FAST-VQA has good performance on inputs of various resolutions while retaining high efficiency. We publish our code at https://github.com/timothyhtimothy/FAST-VQA.

Citations (129)

Summary

  • The paper presents FAST-VQA, leveraging Grid Mini-patch Sampling to preserve spatial and contextual quality cues while reducing computational load.
  • It employs FANet with a Video Swin Transformer backbone enhanced by novel modules like Gated Relative Position Biases and Intra-Patch Non-Linear Regression for precise quality scoring.
  • Experimental results demonstrate a 10% PLCC accuracy improvement and a 99.5% FLOP reduction on benchmarks such as LSVQ and LIVE-VQC, highlighting its practical efficiency.

Efficient End-to-End Video Quality Assessment with FAST-VQA

The paper presents FAST-VQA, a novel approach to Video Quality Assessment (VQA) that addresses the computational burden of high-resolution video analysis. With the proliferation of high-definition video content, assessing video quality effectively and efficiently with deep learning remains a substantial challenge, given the computational intensity of processing high-resolution inputs end to end.

Key Contributions

This research proposes the Grid Mini-patch Sampling (GMS) technique as an alternative to traditional resizing and cropping, which often compromise intrinsic quality-related information. Unlike resizing or cropping, which are shown to disrupt local and global quality representations, GMS preserves spatial textures and contextual relations by sampling raw-resolution mini-patches on a uniform grid across the video frames. The spliced mini-patches, termed "fragments", are temporally aligned to retain the inter-frame variations indicative of video quality.
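
To make the sampling concrete, below is a minimal PyTorch-style sketch of GMS under simplifying assumptions: a 7x7 grid of 32x32 raw-resolution mini-patches spliced into a 224x224 fragment, with one random offset per grid cell shared across frames for temporal alignment. The tensor layout and offset strategy are illustrative, not the repository's exact implementation.

```python
import torch

def grid_mini_patch_sampling(video, grid=7, patch=32):
    """Sketch of Grid Mini-patch Sampling (GMS).

    video: (T, C, H, W) float tensor, assumed larger than grid * patch in both
    spatial dimensions. Each frame is split into a uniform grid x grid layout;
    one raw-resolution patch x patch mini-patch is cut from every cell, and the
    mini-patches are spliced into a (grid*patch) x (grid*patch) fragment. The
    same offsets are reused for every frame, keeping fragments aligned in time.
    """
    T, C, H, W = video.shape
    cell_h, cell_w = H // grid, W // grid
    fragment = torch.empty(T, C, grid * patch, grid * patch, dtype=video.dtype)

    # One random offset per grid cell, shared across all T frames.
    off_h = torch.randint(0, max(cell_h - patch, 1), (grid, grid))
    off_w = torch.randint(0, max(cell_w - patch, 1), (grid, grid))

    for i in range(grid):
        for j in range(grid):
            y = i * cell_h + int(off_h[i, j])
            x = j * cell_w + int(off_w[i, j])
            fragment[:, :, i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = \
                video[:, :, y:y + patch, x:x + patch]
    return fragment

# Example: a 32-frame 1080p clip becomes a 224x224 fragment sequence.
clip = torch.rand(32, 3, 1080, 1920)
print(grid_mini_patch_sampling(clip).shape)  # torch.Size([32, 3, 224, 224])
```

Because every mini-patch keeps its raw resolution, local textures and distortions survive the sampling, while the grid coverage and shared temporal offsets preserve the global and inter-frame context the quality score depends on.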

The Fragment Attention Network (FANet) is designed specifically to process these fragments. FANet employs a Video Swin Transformer Tiny (Swin-T) backbone augmented with two novel modules, Gated Relative Position Biases (GRPB) and Intra-Patch Non-Linear Regression (IP-NLR). GRPB helps the attention layers distinguish discontinuities that exist in the original video from the artificial ones introduced at the boundaries between spliced mini-patches, while IP-NLR regresses quality scores from individual patch features and pools them afterwards, preserving spatial quality information until the final stage.
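
The sketch below illustrates only the idea behind IP-NLR: a small non-linear regressor maps each patch feature to a local quality score before any pooling, so spatial quality cues are not averaged away prematurely. The layer widths and the mean pooling are assumptions for illustration, not the paper's exact head configuration.

```python
import torch
import torch.nn as nn

class IPNLRHead(nn.Module):
    """Illustrative Intra-Patch Non-Linear Regression head.

    Instead of pooling backbone features into a single vector and then
    regressing one score, each spatio-temporal patch feature is regressed to a
    local quality score first, and the local scores are pooled afterwards.
    """
    def __init__(self, in_channels=768, hidden=64):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Conv3d(in_channels, hidden, kernel_size=1),
            nn.GELU(),
            nn.Conv3d(hidden, 1, kernel_size=1),
        )

    def forward(self, feats):
        # feats: (B, C, T', H', W') feature map from the fragment backbone.
        local_scores = self.regressor(feats)        # (B, 1, T', H', W')
        return local_scores.flatten(1).mean(dim=1)  # pool scores, not features

head = IPNLRHead()
feats = torch.rand(2, 768, 16, 7, 7)  # dummy Swin-T-sized fragment features
print(head(feats).shape)              # torch.Size([2])
```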

Experimental Insights

Empirical results demonstrate FAST-VQA's substantial gains over state-of-the-art VQA methods across multiple benchmarks, including LSVQ and LIVE-VQC. On 1080P high-resolution videos, FAST-VQA improves PLCC accuracy by approximately 10% while reducing FLOPs by 99.5% relative to the leading prior methods. These improvements reflect the efficacy of the proposed sampling strategy and network design in retaining quality-discriminative features while drastically cutting resource usage.

Implications and Future Developments

The implications of this research are manifold. Practically, FAST-VQA provides an avenue to apply high-performance VQA in resource-constrained environments, thanks to its operational efficiency and capability to handle varying video resolutions. Theoretically, the GMS concept and FANet architecture underscore significant progress in quality-centered video representation learning within deep networks.

The proposed pretrain-finetune paradigm illustrates the potential for transferring learned quality features between datasets, suggesting further exploration of generalizable video-quality-related representations. Future work could extend the fragment sampling technique to other spatiotemporal video analysis tasks and assess its adaptation to transformer architectures beyond the Swin Transformer used here.
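
As a rough illustration of such transfer, the sketch below uses a toy stand-in model and a common discriminative learning-rate recipe (a smaller rate on the pretrained backbone than on the regression head). The module structure, checkpoint handling, and hyperparameters are assumptions for illustration, not the paper's published training schedule.

```python
import torch
import torch.nn as nn

# Toy stand-in: the real FAST-VQA couples a Video Swin-T backbone with the
# IP-NLR head; here both are placeholders used only to show the transfer recipe.
class TinyVQAModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv3d(3, 8, kernel_size=3, padding=1)
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        feats = self.backbone(x).mean(dim=(2, 3, 4))  # global-pool toy features
        return self.head(feats).squeeze(-1)

model = TinyVQAModel()
# In practice, weights pretrained on a large corpus such as LSVQ would be loaded:
# model.load_state_dict(torch.load("pretrained.pth"), strict=False)

# Discriminative learning rates: keep pretrained quality features largely intact
# while adapting the regression head to the smaller target dataset.
optimizer = torch.optim.AdamW([
    {"params": model.backbone.parameters(), "lr": 1e-5},
    {"params": model.head.parameters(), "lr": 1e-4},
], weight_decay=0.05)
```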

Conclusion

The introduction of FAST-VQA marks a significant stride in VQA by reconciling quality preservation with computational efficiency. By addressing the dual challenges of maintaining quality fidelity and reducing computational cost, this research sets a new paradigm for video analysis across growing libraries of high-resolution content. Its impact is poised to extend beyond VQA, serving as a template for efficient processing in other video-based deep learning tasks in an era of increasingly high-resolution media.