DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation (2303.15181v3)

Published 24 Mar 2023 in cs.CV

Abstract: In this paper, we present DreamStone, a new text-guided 3D shape generation approach that uses images as a stepping stone to bridge the gap between the text and shape modalities, enabling 3D shape generation without paired text and 3D data. The core of our approach is a two-stage feature-space alignment strategy that leverages a pre-trained single-view reconstruction (SVR) model to map CLIP features to shapes: first, we map the CLIP image feature to the detail-rich 3D shape space of the SVR model; then, we map the CLIP text feature to the same shape space by encouraging CLIP consistency between rendered images and the input text. In addition, to extend beyond the generative capability of the SVR model, we design a text-guided 3D shape stylization module that can enhance the output shapes with novel structures and textures. Further, we exploit pre-trained text-to-image diffusion models to enhance the generative diversity, fidelity, and stylization capability. Our approach is generic, flexible, and scalable; it can be easily integrated with various SVR models to expand the generative space and improve the generative fidelity. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods in generative quality and consistency with the input text. Code and models are released at https://github.com/liuzhengzhe/DreamStone-ISS.
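The two-stage alignment described above can be pictured with a short sketch. The code below is illustrative only, assuming PyTorch: the `Mapper` network, the `decode_and_render` function, the feature dimensions, and the training loops are hypothetical stand-ins inferred from the abstract, not the released implementation (see the linked repository for the authors' code).

```python
# Minimal sketch of the two-stage feature-space alignment, assuming
# a frozen pre-trained SVR model and a differentiable renderer.
import torch
import torch.nn as nn
import torch.nn.functional as F

CLIP_DIM, SHAPE_DIM = 512, 256  # assumed feature sizes, not from the paper


class Mapper(nn.Module):
    """Maps a CLIP embedding into the SVR model's 3D shape space."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CLIP_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, SHAPE_DIM))

    def forward(self, e):
        return self.net(e)


# Stage 1: align CLIP *image* features with the SVR shape space.
# The frozen SVR encoder supplies a target shape code per image; the
# mapper learns to reproduce it from the CLIP image feature.
def stage1_step(mapper, clip_img_feat, svr_shape_code, opt):
    pred = mapper(clip_img_feat)
    loss = F.mse_loss(pred, svr_shape_code)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


# Stage 2: align CLIP *text* features via CLIP consistency.
# Decode and render the shape obtained from the mapped text feature,
# then pull the rendered views toward the prompt in CLIP space.
# (Here we simply update the mapper; the paper's exact optimization
# target may differ.)
def stage2_step(mapper, decode_and_render, clip_model, text_tokens,
                clip_txt_feat, opt):
    shape_code = mapper(clip_txt_feat)
    views = decode_and_render(shape_code)            # (V, 3, H, W) renders
    img_emb = F.normalize(clip_model.encode_image(views), dim=-1)
    txt_emb = F.normalize(clip_model.encode_text(text_tokens), dim=-1)
    loss = (1.0 - img_emb @ txt_emb.T).mean()        # CLIP cosine loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because the SVR model stays frozen, only the lightweight mapper is trained, which is what makes the approach easy to plug into different SVR backbones to expand the generative space.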

