ChipQA: No-Reference Video Quality Prediction via Space-Time Chips (2109.08726v1)

Published 17 Sep 2021 in eess.IV and cs.CV

Abstract: We propose a new model for no-reference video quality assessment (VQA). Our approach uses a new idea of highly-localized space-time (ST) slices called Space-Time Chips (ST Chips). ST Chips are localized cuts of video data along directions that *implicitly* capture motion. We use perceptually-motivated bandpass and normalization models to first process the video data, and then select oriented ST Chips based on how closely they fit parametric models of natural video statistics. We show that the parameters that describe these statistics can be used to reliably predict the quality of videos, without the need for a reference video. The proposed method implicitly models ST video naturalness, and deviations from naturalness. We train and test our model on several large VQA databases, and show that our model achieves state-of-the-art performance at reduced cost, without requiring motion computation.
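
As a concrete illustration of the pipeline the abstract describes (divisive normalization, oriented space-time slicing, and parametric natural-scene-statistics fitting), here is a minimal Python sketch. All function names are illustrative, and the orientation-selection rule used here (GGD shape parameter closest to Gaussian) and the window sizes are simplifying assumptions, not the paper's exact criterion or released code.

```python
# Minimal sketch of the ST-Chip idea, assuming a grayscale video given as a
# float numpy array of shape (frames, H, W). Not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma


def mscn(frame, sigma=7 / 6, c=1.0):
    """Mean-subtracted, contrast-normalized (divisively normalized)
    coefficients, as used in NSS-based models such as NIQE/BRISQUE."""
    frame = frame.astype(np.float64)
    mu = gaussian_filter(frame, sigma)
    var = gaussian_filter(frame * frame, sigma) - mu * mu
    return (frame - mu) / (np.sqrt(np.clip(var, 0, None)) + c)


def fit_ggd(x):
    """Moment-matching fit of a zero-mean generalized Gaussian density;
    returns (shape alpha, scale sigma). Standard NSS estimator based on
    the ratio rho = E[x^2] / E[|x|]^2 = G(1/a) G(3/a) / G(2/a)^2."""
    x = x.ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    r = gamma(1.0 / alphas) * gamma(3.0 / alphas) / gamma(2.0 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]
    return alpha, np.sqrt(sigma_sq)


def st_chip_features(volume, angles=np.deg2rad([0, 36, 72, 108, 144])):
    """For one local space-time volume of MSCN coefficients (t, h, w), cut a
    2-D chip along each candidate space-time direction and keep the GGD
    parameters of the most 'natural-looking' chip. Picking the chip whose
    shape parameter is closest to 2 (Gaussianity) is a simplified stand-in
    for the paper's NSS-fit criterion; the orientation that fits best acts
    as an implicit proxy for the local motion direction."""
    t, h, w = volume.shape
    rows_t = np.arange(t)
    best = None
    for theta in angles:
        # column index of the cut drifts linearly with time, slope tan(theta)
        cols = np.clip(w // 2 + np.round(np.tan(theta) * rows_t).astype(int),
                       0, w - 1)
        chip = volume[rows_t, :, cols]  # oriented ST slice, shape (t, h)
        alpha, sigma = fit_ggd(chip)
        if best is None or abs(alpha - 2.0) < abs(best[0] - 2.0):
            best = (alpha, sigma)
    return best


def chipqa_like_features(video, t=5):
    """Average (alpha, sigma) over short temporal windows. A full model would
    pool features from many local chips and scales and map them to a quality
    score with a learned regressor (e.g. an SVR), trained on VQA databases."""
    feats = []
    for start in range(0, video.shape[0] - t + 1, t):
        vol = np.stack([mscn(f) for f in video[start:start + t]])
        feats.append(st_chip_features(vol))
    return np.mean(feats, axis=0)
```

Note that nothing above computes optical flow: the motion direction is only captured implicitly, by which oriented cut fits the NSS model best, which is the source of the reduced cost the abstract claims.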

Authors (6)
  1. Joshua P. Ebenezer (9 papers)
  2. Zaixi Shang (11 papers)
  3. Yongjun Wu (22 papers)
  4. Hai Wei (20 papers)
  5. Sriram Sethuraman (11 papers)
  6. Alan C. Bovik (83 papers)
Citations (39)
