No-Reference Video Quality Assessment Using Space-Time Chips (2008.00031v3)

Published 31 Jul 2020 in eess.IV

Abstract: We propose a new prototype model for no-reference video quality assessment (VQA) based on the natural statistics of space-time chips of videos. Space-time chips (ST-chips) are a new, quality-aware feature space which we define as space-time localized cuts of video data in directions that are determined by the local motion flow. We use parametrized distribution fits to the bandpass histograms of space-time chips to characterize quality, and show that the parameters from these models are affected by distortion and can hence be used to objectively predict the quality of videos. Our prototype method, which we call ChipQA-0, is agnostic to the types of distortion affecting the video, and is based on identifying and quantifying deviations from the expected statistics of natural, undistorted ST-chips in order to predict video quality. We train and test our resulting model on several large VQA databases and show that our model achieves high correlation against human judgments of video quality and is competitive with state-of-the-art models.
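The abstract's core mechanism — fitting a parametrized distribution to bandpass coefficient histograms and treating the fitted parameters as quality-aware features — is a standard natural scene statistics (NSS) step. A minimal sketch of one common choice, a zero-mean generalized Gaussian distribution (GGD) fit by moment matching, is shown below; this is an illustrative NSS fit in the BRISQUE family, not necessarily the exact parametrization used by ChipQA-0, and `fit_ggd` is a hypothetical helper name:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def fit_ggd(coeffs, betas=np.arange(0.2, 5.0, 0.001)):
    """Fit a zero-mean GGD, f(x) ~ exp(-(|x|/alpha)**beta), to bandpass
    coefficients by moment matching with a grid search over beta.
    (Illustrative NSS fit; not claimed to be the ChipQA-0 parametrization.)
    """
    coeffs = np.asarray(coeffs, dtype=np.float64).ravel()
    # Ratio statistic (E|x|)^2 / E[x^2] depends only on the shape beta.
    rho_hat = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    rho_beta = gamma_fn(2.0 / betas) ** 2 / (
        gamma_fn(1.0 / betas) * gamma_fn(3.0 / betas))
    beta = betas[np.argmin((rho_beta - rho_hat) ** 2)]
    # Recover the scale alpha from the second moment.
    alpha = np.sqrt(np.mean(coeffs ** 2) *
                    gamma_fn(1.0 / beta) / gamma_fn(3.0 / beta))
    return alpha, beta
```

On Gaussian-distributed coefficients the estimated shape parameter should come out near beta = 2; distortion typically shifts such parameters away from their natural-video values, which is what lets the features predict quality without a reference.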

Authors (5)
  1. Joshua P. Ebenezer (9 papers)
  2. Zaixi Shang (11 papers)
  3. Yongjun Wu (22 papers)
  4. Hai Wei (21 papers)
  5. Alan C. Bovik (84 papers)
Citations (24)
