EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion (2312.06725v3)

Published 11 Dec 2023 in cs.CV

Abstract: Generating multiview images from a single view facilitates the rapid generation of a 3D mesh conditioned on a single image. Recent methods that introduce 3D global representation into diffusion models have shown the potential to generate consistent multiviews, but they have reduced generation speed and face challenges in maintaining generalizability and quality. To address this issue, we propose EpiDiff, a localized interactive multiview diffusion model. At the core of the proposed approach is to insert a lightweight epipolar attention block into the frozen diffusion model, leveraging epipolar constraints to enable cross-view interaction among feature maps of neighboring views. The newly initialized 3D modeling module preserves the original feature distribution of the diffusion model, exhibiting compatibility with a variety of base diffusion models. Experiments show that EpiDiff generates 16 multiview images in just 12 seconds, and it surpasses previous methods in quality evaluation metrics, including PSNR, SSIM and LPIPS. Additionally, EpiDiff can generate a more diverse distribution of views, improving the reconstruction quality from generated multiviews. Please see our project page at https://huanngzh.github.io/EpiDiff/.

Authors (11)
  1. Zehuan Huang
  2. Hao Wen
  3. Junting Dong
  4. Yaohui Wang
  5. Yangguang Li
  6. Xinyuan Chen
  7. Yan-Pei Cao
  8. Ding Liang
  9. Yu Qiao
  10. Bo Dai
  11. Lu Sheng
Citations (20)

Summary

  • The paper introduces a novel diffusion framework that integrates a lightweight epipolar attention block to enhance multi-view image synthesis.
  • It achieves rapid generation, synthesizing 16 views in 12 seconds, and outperforms previous methods on PSNR, SSIM, and LPIPS.
  • The approach improves 3D reconstruction by ensuring spatially consistent views and offers seamless integration with existing diffusion models.

Enhancing Multi-View Image Synthesis

Introduction to Multi-View Synthesis

Generating multiple views of an object from a single input image is a key capability for applications in augmented reality, gaming, and robotics. Existing methods can do this, but they typically trade off speed, quality, and cross-view consistency. EpiDiff, the approach presented here, addresses this synthesis problem with the aim of improving both the quality and the speed of multi-view generation.

EpiDiff Framework

EpiDiff distinguishes itself by inserting a lightweight epipolar attention block into a frozen, pre-existing diffusion model. The block exploits epipolar constraints, a geometric relation commonly used in stereo vision, to model the spatial relationships among feature maps of neighboring views. This localized cross-view interaction encourages the synthesized views to be consistent with one another while running faster than previous methods. Because the newly initialized module preserves the original feature distribution of the base model, EpiDiff is compatible with a variety of existing base diffusion models and integrates without extensive modifications.
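
To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of a localized cross-view attention block of this kind, in which each pixel of one view attends only to features sampled along its epipolar lines in neighboring views. It illustrates the general idea rather than the authors' implementation; the tensor layout, the sampling step, and all names are assumptions.

```python
import torch
import torch.nn as nn

class EpipolarAttentionBlock(nn.Module):
    """Illustrative sketch (not the authors' code): each query pixel attends
    only to K features gathered along its epipolar lines in neighboring views."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Zero-init the output projection so the new block starts as an identity
        # mapping and the frozen model's feature distribution is preserved
        # (a common adapter trick; an assumption here, not taken from the paper).
        nn.init.zeros_(self.attn.out_proj.weight)
        nn.init.zeros_(self.attn.out_proj.bias)

    def forward(self, feats: torch.Tensor, epipolar_samples: torch.Tensor) -> torch.Tensor:
        # feats:            (B, N, dim)    flattened feature map of the current view
        # epipolar_samples: (B, N, K, dim) K features sampled along each pixel's
        #                                  epipolar line in the neighboring views
        B, N, K, D = epipolar_samples.shape
        q = self.norm(feats).reshape(B * N, 1, D)      # one query per pixel
        kv = epipolar_samples.reshape(B * N, K, D)     # its epipolar candidates
        out, _ = self.attn(q, kv, kv)                  # localized cross-view attention
        return feats + out.reshape(B, N, D)            # residual keeps base features intact
```

The residual connection plus zero-initialized output is one standard way such an added module can leave the frozen diffusion features unchanged at the start of training, which matches the compatibility behavior described above.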

Performance and Advantages

The method's efficiency is reflected in its ability to produce 16 different views in just 12 seconds. On standard quality metrics such as PSNR, SSIM, and LPIPS, EpiDiff outperforms prior approaches. Beyond fast generation, the model can synthesize a more diverse distribution of viewpoints, which in turn improves the quality of 3D reconstruction from the generated images.
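
For reference, such quality metrics are typically computed per view and averaged over the test set. The snippet below is a rough Python sketch using scikit-image and the lpips package, not the paper's exact evaluation protocol; the data ranges, backbone choice, and function names are assumptions.

```python
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_view(pred: np.ndarray, gt: np.ndarray, lpips_fn) -> tuple[float, float, float]:
    """pred, gt: HxWx3 float arrays in [0, 1] for one synthesized/reference view pair."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1]
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
    lp = lpips_fn(to_t(pred), to_t(gt)).item()
    return psnr, ssim, lp

# AlexNet is a commonly used LPIPS backbone; the paper's exact setup may differ.
lpips_fn = lpips.LPIPS(net="alex")
```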

Application Potential and Further Research

Effective 3D shape recovery from synthesized multi-view images opens the door to many practical applications. However, EpiDiff still has limitations, particularly in handling significant viewpoint changes and larger scene contexts. In addition, the current pipeline treats multiview image synthesis and 3D reconstruction as separate steps, which could be streamlined in future versions. Despite these constraints, EpiDiff marks a significant step forward in multi-view image synthesis, combining fast generation with high image quality, and further research is expected to extend its capabilities and practical utility.
