BlockFusion: Expandable 3D Scene Generation using Latent Tri-plane Extrapolation (2401.17053v4)

Published 30 Jan 2024 in cs.CV, cs.AI, and cs.GR

Abstract: We present BlockFusion, a diffusion-based model that generates 3D scenes as unit blocks and seamlessly incorporates new blocks to extend the scene. BlockFusion is trained using datasets of 3D blocks that are randomly cropped from complete 3D scene meshes. Through per-block fitting, all training blocks are converted into the hybrid neural fields: with a tri-plane containing the geometry features, followed by a Multi-layer Perceptron (MLP) for decoding the signed distance values. A variational auto-encoder is employed to compress the tri-planes into the latent tri-plane space, on which the denoising diffusion process is performed. Diffusion applied to the latent representations allows for high-quality and diverse 3D scene generation. To expand a scene during generation, one needs only to append empty blocks to overlap with the current scene and extrapolate existing latent tri-planes to populate new blocks. The extrapolation is done by conditioning the generation process with the feature samples from the overlapping tri-planes during the denoising iterations. Latent tri-plane extrapolation produces semantically and geometrically meaningful transitions that harmoniously blend with the existing scene. A 2D layout conditioning mechanism is used to control the placement and arrangement of scene elements. Experimental results indicate that BlockFusion is capable of generating diverse, geometrically consistent and unbounded large 3D scenes with unprecedented high-quality shapes in both indoor and outdoor scenarios.

Summary

  • The paper presents a latent tri-plane diffusion model that generates high-quality 3D scenes by representing structures as expandable unit blocks.
  • It employs an extrapolation mechanism that integrates new blocks using overlapping latent features to maintain geometric consistency across the scene.
  • A 2D layout conditioning method enables precise control over scene element placement, supporting applications in gaming, AR/VR, and filmmaking.

Introduction

3D scene generation has become a critical area of research, fueled by applications in industries such as gaming, filmmaking, and AR/VR. While 2D diffusion models have achieved remarkable success in image synthesis, translating these advances to the 3D domain presents unique challenges. Traditional methods have predominantly focused on generating 3D content within a fixed spatial extent, leaving the task of creating expandable 3D scenes relatively unexplored.

BlockFusion: Innovation in 3D Scene Generation

The paper introduces BlockFusion, a model that leverages a tri-plane-based architecture to generate and expand 3D scenes in a coherent and systematic manner. The method generates scenes as unit blocks and seamlessly incorporates new blocks to extend them. BlockFusion is trained on datasets of 3D blocks randomly cropped from complete scene meshes: each training block is fitted to a hybrid neural field consisting of a tri-plane of geometry features and an MLP that decodes signed distance values, and the resulting tri-planes are then compressed into a more compact latent space where diffusion is performed.
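
To make the hybrid neural field concrete, the sketch below shows how a tri-plane plus a small MLP can decode signed distance values for query points. This is a minimal PyTorch illustration of the general tri-plane technique under assumed sizes (feature dimension, plane resolution, MLP width are placeholders), not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneSDF(nn.Module):
    """Minimal hybrid neural field: three axis-aligned feature planes + an MLP SDF decoder."""
    def __init__(self, feat_dim=32, res=128):
        super().__init__()
        # Learnable feature planes for the XY, XZ, and YZ planes (placeholder sizes).
        self.planes = nn.Parameter(torch.randn(3, feat_dim, res, res) * 0.01)
        self.mlp = nn.Sequential(
            nn.Linear(3 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # signed distance value
        )

    def sample_plane(self, plane, coords2d):
        # coords2d: (N, 2) in [-1, 1]; grid_sample expects a (B, H_out, W_out, 2) grid.
        grid = coords2d.view(1, -1, 1, 2)
        feat = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)  # (1, C, N, 1)
        return feat.squeeze(0).squeeze(-1).t()  # (N, C)

    def forward(self, xyz):
        # xyz: (N, 3) query points in the block's normalized cube [-1, 1]^3.
        f_xy = self.sample_plane(self.planes[0], xyz[:, [0, 1]])
        f_xz = self.sample_plane(self.planes[1], xyz[:, [0, 2]])
        f_yz = self.sample_plane(self.planes[2], xyz[:, [1, 2]])
        return self.mlp(torch.cat([f_xy, f_xz, f_yz], dim=-1))  # (N, 1) SDF

# Per-block fitting would regress this field against SDF samples from the block's mesh,
# e.g. loss = F.l1_loss(model(points), gt_sdf), leaving the fitted planes as the block's tri-plane.
```

Whether plane features are summed or concatenated, and how the decoder MLP is shared across blocks, are design details of the actual system; the sketch only fixes the interface of points in, signed distances out.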

Key Contributions

The proposed method brings three significant advancements:

  1. A latent tri-plane diffusion model that extends high-quality 3D shape generation to the scene level.
  2. An extrapolation mechanism that expands scenes harmoniously by conditioning the generative process on features from existing blocks' latent representations.
  3. A 2D layout conditioning mechanism that gives users control over the placement and arrangement of scene elements (see the sketch after this list).
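
The following sketch illustrates one plausible way to inject a 2D layout condition into a latent tri-plane denoiser: rasterize the layout into per-pixel class features and concatenate them with the noisy ground-plane channels before predicting noise. This is an assumption about the conditioning interface rather than the paper's exact mechanism; the class names, channel counts, and toy backbone are hypothetical.

```python
import torch
import torch.nn as nn

class LayoutConditionedDenoiser(nn.Module):
    """Hypothetical denoiser that conditions latent tri-plane denoising on a 2D layout map."""
    def __init__(self, latent_ch=8, layout_classes=4, res=32):
        super().__init__()
        # Embed per-pixel layout labels (e.g. floor / wall / furniture / empty) into features.
        self.layout_embed = nn.Conv2d(layout_classes, latent_ch, kernel_size=1)
        # Placeholder backbone; a real model would be a UNet or transformer over the full latent tri-plane.
        self.backbone = nn.Sequential(
            nn.Conv2d(2 * latent_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent_plane, layout_onehot, t):
        # noisy_latent_plane: (B, latent_ch, res, res) ground-plane slice of the latent tri-plane.
        # layout_onehot:      (B, layout_classes, res, res) rasterized 2D layout.
        # t: diffusion timestep; a real denoiser would embed and inject it, omitted here for brevity.
        cond = self.layout_embed(layout_onehot)
        x = torch.cat([noisy_latent_plane, cond], dim=1)
        return self.backbone(x)  # predicted noise for this plane
```

Feeding the layout map at every diffusion step is what lets the sampled scene respect the user-specified placement of objects.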

Experimental results underscore BlockFusion's ability to generate geometrically consistent and diverse large 3D scenes, outperforming methods that either generate fixed-size content or take less direct routes to extendable scenes.

Methodology

BlockFusion's pipeline proceeds in stages: training 3D blocks are first fitted to raw tri-planes, these tri-planes are compressed into a latent tri-plane space, and a denoising diffusion probabilistic model (DDPM) is trained on the resulting latents. To extend a scene, BlockFusion extrapolates the latent tri-planes into new blocks, conditioning the denoising iterations on features sampled from the overlapping tri-planes. A straightforward 2D layout input additionally guides the arrangement of objects within the scene.
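
The expansion step can be pictured as an inpainting-style sampling loop: the portion of the new block's latent tri-plane that overlaps the existing scene is repeatedly re-injected (noised to the current timestep), while the non-overlapping portion is denoised freely. The sketch below is a schematic in the spirit of RePaint-style conditioning; `denoise_step` and `q_sample` are hypothetical APIs, and the paper's exact conditioning schedule may differ.

```python
import torch

@torch.no_grad()
def extrapolate_block(denoise_step, q_sample, known_latent, overlap_mask, T):
    """Sample the latent tri-plane of a new block, conditioned on overlapping known latents.

    denoise_step(x_t, t) -> x_{t-1}   one reverse-diffusion step (hypothetical API)
    q_sample(x_0, t)     -> x_t       forward-noise clean latents to step t (hypothetical API)
    known_latent : (C, H, W) latents copied from the existing scene's tri-plane
    overlap_mask : (1, H, W) 1 where the new block overlaps the existing scene, 0 elsewhere
    """
    x = torch.randn_like(known_latent)  # start the new block from pure noise
    for t in reversed(range(1, T + 1)):
        # Re-inject the known overlap region, noised to the current timestep,
        # so the free region is denoised consistently with the existing scene.
        x = overlap_mask * q_sample(known_latent, t) + (1 - overlap_mask) * x
        x = denoise_step(x, t)
    # Final clean latents: the overlap matches the scene, the rest extrapolates it.
    return overlap_mask * known_latent + (1 - overlap_mask) * x
```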

Performance

The paper presents robust quantitative results that demonstrate BlockFusion's strong generative performance. The approach produces scenes with consistent geometry and novel layouts that were not present in the training set, outperforms previous work in creating large-scale scenes, and shows potential as an industry-level content creation tool. The authors note limitations in generating finer geometric details and in producing consistent textures for expansive scenes, both identified as areas for future work.

BlockFusion represents a significant step toward the automated creation of large-scale 3D content, with potential applications ranging from video game design to virtual exploration. Its expandable formulation removes the need to fix world boundaries in advance, providing a capable toolset for ongoing development in generative 3D modeling.
