A Low-Complexity View Synthesis Distortion Estimation Method for 3D Video with Large Baseline Considerations (2510.17037v1)
Abstract: Depth-image-based rendering is a key view synthesis algorithm in 3D video systems, enabling the synthesis of virtual views from texture images and depth maps. An efficient view synthesis distortion estimation model is critical for optimizing resource allocation in real-time applications such as interactive free-viewpoint video and 3D video streaming services. However, existing estimation methods are often computationally intensive, require parameter training, or perform poorly in challenging large baseline configurations. This paper presents a novel, low-complexity, and training-free method that accurately estimates the distortion of synthesized views without performing the actual rendering process. Key contributions include: (1) A joint texture-depth classification method that accurately separates the texture image into locally stationary and non-stationary regions, mitigating the misclassifications produced by texture-only methods. (2) A novel baseline distance indicator that drives a compensation scheme for the distortions caused by large baseline configurations. (3) A region-based blending estimation strategy that geometrically identifies overlapping, single-view, and mutual disocclusion regions, predicting the distortion of views synthesized from two reference views under differing synthesis conditions. Experiments on standard MPEG 3D video sequences validate the proposed method's high accuracy and efficiency, especially in large baseline configurations. This method enables more flexible camera arrangements in 3D content acquisition by accurately predicting synthesis quality under challenging geometric configurations.
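To make contribution (1) concrete, below is a minimal Python sketch of a joint texture-depth classification. The window size, variance thresholds, and the rule that a pixel is locally stationary only when both its texture and depth neighborhoods are low-variance are illustrative assumptions; the paper's actual classification criterion is not reproduced here. The idea it captures is that depth edges can veto a texture-only "stationary" decision.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=7):
    """Local variance over a size x size window: E[x^2] - (E[x])^2."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clamp tiny negatives

def joint_classification(texture, depth, tex_thresh=100.0,
                         depth_thresh=25.0, size=7):
    """Label a pixel locally stationary only if BOTH its texture and depth
    neighborhoods are low-variance (thresholds are illustrative placeholders).
    Returns a boolean mask: True = locally stationary, False = non-stationary."""
    tex_var = local_variance(texture, size)
    depth_var = local_variance(depth, size)
    return (tex_var < tex_thresh) & (depth_var < depth_thresh)
```

In a texture-only scheme, a smooth texture patch straddling a depth discontinuity would be misclassified as stationary; the depth-variance test in the conjunction is what filters such cases out.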
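For contribution (3), here is a minimal Python sketch of a region-based blending estimate for a view synthesized from two references. The partition into overlapping, single-view, and mutual disocclusion regions follows the abstract; the blending weight `w_left` (in practice plausibly derived from the baseline distances to the two references) and the constant `hole_distortion` assigned to inpainted regions are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

def blended_distortion(d_left, d_right, vis_left, vis_right,
                       w_left=0.5, hole_distortion=500.0):
    """Frame-level distortion estimate for a virtual view synthesized from
    two references. d_left/d_right: per-pixel distortion maps estimated for
    each warped reference; vis_left/vis_right: boolean masks marking pixels
    covered by the corresponding warped reference."""
    overlap = vis_left & vis_right        # visible from both references
    left_only = vis_left & ~vis_right     # single-view regions
    right_only = ~vis_left & vis_right
    holes = ~vis_left & ~vis_right        # mutual disocclusion regions

    d = np.empty_like(d_left, dtype=np.float64)
    d[overlap] = w_left * d_left[overlap] + (1.0 - w_left) * d_right[overlap]
    d[left_only] = d_left[left_only]
    d[right_only] = d_right[right_only]
    d[holes] = hole_distortion            # assumed constant for inpainted pixels
    return d.mean()                       # per-pixel MSE -> frame-level estimate
```

Because the three region types are identified geometrically from the visibility masks, the estimate is obtained without rendering the virtual view, which is what keeps the method low-complexity.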