Stereo Magnification: Learning View Synthesis using Multiplane Images (1805.09817v1)

Published 24 May 2018 in cs.CV and cs.GR

Abstract: The view synthesis problem--generating novel views of a scene from known imagery--has garnered recent attention due in part to compelling applications in virtual and augmented reality. In this paper, we explore an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones. We call this problem stereo magnification, and propose a learning framework that leverages a new layered representation that we call multiplane images (MPIs). Our method also uses a massive new data source for learning view extrapolation: online videos on YouTube. Using data mined from such videos, we train a deep network that predicts an MPI from an input stereo image pair. This inferred MPI can then be used to synthesize a range of novel views of the scene, including views that extrapolate significantly beyond the input baseline. We show that our method compares favorably with several recent view synthesis methods, and demonstrate applications in magnifying narrow-baseline stereo images.

Citations (375)

Summary

  • The paper introduces a deep learning framework that predicts Multiplane Images from stereo pairs to extrapolate novel views beyond the original capture.
  • It utilizes a layered scene representation with color and alpha maps across depth planes to efficiently render new perspectives.
  • The approach mines large amounts of online video for training data, outperforms prior methods on SSIM and PSNR, and points toward improved 3D photography and VR experiences.

In-Depth Evaluation of "Stereo Magnification: Learning View Synthesis using Multiplane Images"

The paper "Stereo Magnification: Learning View Synthesis using Multiplane Images" addresses the problem of view synthesis, specifically focusing on the extrapolation of scenes captured by narrow-baseline stereo cameras, such as those in dual-lens smartphones and VR cameras. The authors introduce a novel approach known as stereo magnification, leveraging a layered scene representation called Multiplane Images (MPIs) within a deep learning framework.

Summary of Contributions

The key contributions of the paper are threefold:

  1. Learning Framework for Stereo Magnification: The authors present a technique for extrapolating views from narrow-baseline stereo imagery, trained on a large corpus of online video. A deep network is trained to predict an MPI from a stereo image pair; the predicted MPI is then used to render novel views that extend well beyond the input baseline.
  2. Novel Representation with Multiplane Images: The MPI serves as an effective scene representation for view synthesis. By distributing scene appearance (color images) and opacity (alpha maps) across a set of fronto-parallel planes at fixed depths in a reference camera's frustum, MPIs enable efficient rendering of new viewpoints (see the compositing sketch after this list). A single inferred MPI can be reused to render many output views, which makes the representation particularly economical.
  3. Utilization of Online Video for Training: An innovative aspect is the use of YouTube videos as a data source, providing massive amounts of diverse visual data. Mining suitable clips from such videos yields the varied training sets needed for a generalizable model that synthesizes realistic, consistent novel views.
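
To make the representation concrete, the sketch below renders the reference view of an MPI by standard back-to-front "over" alpha compositing of its RGBA planes; rendering a novel view additionally warps each plane into the target camera first (see the next section). The array shapes and the function name are illustrative assumptions rather than the authors' released code.

```python
import numpy as np

def composite_mpi(colors, alphas):
    """Render the reference view of an MPI by back-to-front 'over' compositing.

    colors: (D, H, W, 3) per-plane RGB, ordered from the farthest to the nearest plane.
    alphas: (D, H, W, 1) per-plane opacity in [0, 1].
    Shapes and naming are illustrative, not taken from the paper's code release.
    """
    rendered = np.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):
        rendered = rgb * a + rendered * (1.0 - a)  # standard "over" operator
    return rendered
```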

Methodology and Results

The paper outlines a comprehensive pipeline for view synthesis. The input images and their camera parameters are first used to build a plane sweep volume, which captures the scene's depth relationships; the network input is the reference image concatenated with this volume (computed from the second image). The network's output, an MPI, consists of color images and alpha maps arranged at a set of depth planes. Novel views are synthesized by warping each plane into the target view and compositing the layered representation.
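
As a rough illustration of the geometry involved, the following sketch samples plane depths uniformly in inverse depth and builds a plane sweep volume by reprojecting the second image onto each fronto-parallel plane of the reference camera; the same per-plane homographies, applied in reverse, warp predicted MPI planes into a target view before compositing. The variable names, the OpenCV-based warping, and the pose convention (X2 = R @ X1 + t) are assumptions made for this example, not details taken from the paper's implementation.

```python
import numpy as np
import cv2

def inverse_depth_planes(near, far, num_planes):
    """Plane depths spaced uniformly in disparity (inverse depth)."""
    return 1.0 / np.linspace(1.0 / near, 1.0 / far, num_planes)

def plane_sweep_volume(img2, K1, K2, R, t, depths):
    """Reproject the second image onto each reference-frame depth plane.

    Assumes X2 = R @ X1 + t maps reference-camera coordinates to the second
    camera; K1, K2 are 3x3 intrinsics. Returns an array of shape (D, H, W, 3).
    """
    h, w = img2.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])  # normal of a fronto-parallel reference plane
    volume = []
    for d in depths:
        # Homography taking reference pixels to second-image pixels for the plane z = d.
        H_1to2 = K2 @ (R + (t.reshape(3, 1) @ n) / d) @ np.linalg.inv(K1)
        # warpPerspective expects the map from source (img2) pixels to destination
        # (reference) pixels, i.e. the inverse homography.
        warped = cv2.warpPerspective(img2, np.linalg.inv(H_1to2), (w, h))
        volume.append(warped)
    return np.stack(volume)
```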

The proposed technique is evaluated against state-of-the-art methods such as the learning-based approach of Kalantari et al. and the non-learning approach of Zhang et al., showing superior performance in terms of SSIM and PSNR. Unlike pure interpolation methods, extrapolation must contend with disocclusions and reflective surfaces, and the evaluations emphasize real-world applicability, including tests on uncurated imagery captured with smartphones and VR cameras.
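
For reference, the SSIM and PSNR between a synthesized view and its ground-truth frame can be computed with scikit-image as sketched below; the function and variable names are illustrative, and this is not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_view(pred, gt):
    """pred, gt: (H, W, 3) float arrays in [0, 1]. Returns (SSIM, PSNR)."""
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    return ssim, psnr
```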

Implications and Future Work

The implications of this paper span both practical and theoretical domains. Practically, the ability to magnify a stereo baseline positions this technology well for enhancing 3D photography on devices with narrow baselines, such as smartphones, and could lead to more immersive VR experiences. Theoretically, the MPI scene representation could pave the way for future research into efficient, scalable view synthesis, blending traditional layered depth image (LDI) concepts with modern deep learning.

Future developments may extend this framework to more varied input configurations, such as multi-view sequences or even a single view with inferred depth. Such research could also benefit adjacent applications, like autonomous navigation, that rely on robust scene understanding and rendering under diverse environmental conditions.

In conclusion, this paper presents a meaningful advancement in view synthesis through its novel application of MPIs, innovative utilization of publicly available data, and integration within a learning framework. As stereo imaging continues to gain traction across technologies, the significance of such methods in enhancing perceptual imaging capabilities will surely come to the fore.