
Gaussian Splashing: Unified Particles for Versatile Motion Synthesis and Rendering (2401.15318v2)

Published 27 Jan 2024 in cs.GR, cs.AI, cs.CV, and cs.LG

Abstract: We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS. Leveraging the coherence of the Gaussian Splatting and Position-Based Dynamics (PBD) in the underlying representation, we manage rendering, view synthesis, and the dynamics of solids and fluids in a cohesive manner. Similar to GaussianShader, we enhance each Gaussian kernel with an added normal, aligning the kernel's orientation with the surface normal to refine the PBD simulation. This approach effectively eliminates spiky noises that arise from rotational deformation in solids. It also allows us to integrate physically based rendering to augment the dynamic surface reflections on fluids. Consequently, our framework is capable of realistically reproducing surface highlights on dynamic fluids and facilitating interactions between scene objects and fluids from new views. For more information, please visit our project page at \url{https://gaussiansplashing.github.io/}.


Summary

  • The paper introduces a unified framework that integrates position-based dynamics with 3D Gaussian Splatting for realistic fluid-solid interactions.
  • It leverages Gaussian kernels with anisotropy regularization and inpainting to maintain rendering quality and simulate surface tension in dynamic fluids.
  • Experimental results demonstrate efficient dynamic scene synthesis, highlighting potential applications in VR, gaming, and visual effects.

Gaussian Splashing: Dynamic Fluid Synthesis with Gaussian Splatting

The paper "Gaussian Splashing: Dynamic Fluid Synthesis with Gaussian Splatting" presents a comprehensive framework that integrates Position-Based Dynamics (PBD) with 3D Gaussian Splatting (3DGS) to simulate and render interactions between solids and fluids. Because both techniques share a particle-based underlying representation, the work can manage rendering, view synthesis, and dynamic simulation within a unified system.

Methodology

The authors employ a collection of Gaussian kernels to represent the scene's geometry, appearance, and dynamic properties. Interactions between solid objects and fluids are handled with PBD, which represents the dynamic system as vertices and constraints and advances it by iteratively projecting positions so that the constraints are satisfied. This integration enables realistic two-way coupling between solids and fluids, allowing for novel scene interactions.
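The constraint-projection step at the heart of PBD can be illustrated with a single distance constraint between two particles. The sketch below is a minimal illustration of the idea, not the paper's solver; the function name and inverse-mass parameters are hypothetical:

```python
import numpy as np

def project_distance_constraint(p1, p2, rest_length, w1=1.0, w2=1.0):
    """PBD-style projection of one distance constraint.

    Moves two particle positions so their separation approaches
    rest_length, weighted by inverse masses w1 and w2.
    """
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    if dist < 1e-9:
        return p1, p2
    # Constraint C = |p2 - p1| - rest_length; its gradient is delta / dist.
    correction = (dist - rest_length) / (w1 + w2) * (delta / dist)
    return p1 + w1 * correction, p2 - w2 * correction

# One solver iteration pulls a stretched pair back toward rest length.
a = np.array([0.0, 0.0, 0.0])
b = np.array([2.0, 0.0, 0.0])
a, b = project_distance_constraint(a, b, rest_length=1.0)
# Each endpoint moves 0.5 toward the other: a = [0.5, 0, 0], b = [1.5, 0, 0].
```

A full PBD solver loops this projection over many constraints (stretch, volume, fluid density) per time step, which is what makes the method fast enough for the interactive dynamics described above.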

3D Gaussian Splatting is used to rasterize these kernels, enabling high-quality rendering of dynamic scenes. GaussianShader further enhances this by incorporating material properties such as diffuse and specular components, providing an accurate representation of reflective surfaces.
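GaussianShader's actual appearance model is more elaborate; as a minimal sketch of how a per-kernel color can be split into diffuse and specular terms using the kernel's added normal, a Blinn-Phong-style evaluation might look like this (the function name and parameters are illustrative assumptions, not the paper's shading function):

```python
import numpy as np

def shade(normal, view_dir, light_dir, albedo, specular=0.5, shininess=32.0):
    """Minimal diffuse + specular split per Gaussian (Blinn-Phong style).

    Direction vectors are assumed unit length; `albedo` is an RGB triple.
    Illustrative stand-in only, not GaussianShader's actual model.
    """
    n = normal / np.linalg.norm(normal)
    half = view_dir + light_dir              # halfway vector
    half = half / np.linalg.norm(half)
    diffuse = albedo * max(np.dot(n, light_dir), 0.0)
    spec = specular * max(np.dot(n, half), 0.0) ** shininess
    return diffuse + spec

# Light, view, and normal aligned: full diffuse plus full specular.
color = shade(np.array([0.0, 0.0, 1.0]),
              view_dir=np.array([0.0, 0.0, 1.0]),
              light_dir=np.array([0.0, 0.0, 1.0]),
              albedo=np.array([0.8, 0.2, 0.2]))
# color = [1.3, 0.7, 0.7]: albedo plus a 0.5 specular contribution.
```

Aligning each kernel's orientation with the surface normal is what makes such a normal-dependent shading term well defined on the splatted surface.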

Technical Contributions

Several technical contributions are included in the paper:

  • Anisotropy Regularization: The authors introduce an anisotropy regularization term to prevent excessive elongation or compression of Gaussian kernels, maintaining rendering quality under large deformations.
  • Surface Tension in Fluids: Using position-based surface tension models, the framework captures fluid surface dynamics effectively. By estimating the surface normals of fluid particles, the framework synthesizes realistic surface reflections.
  • Inpainting for Texture Recovery: The occasional exposure of unseen areas during object displacement is tackled using generative inpainting, mitigating the black smudges and rendering artifacts that would otherwise appear.
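An anisotropy regularizer of the kind described above can be sketched as a penalty on each kernel's largest-to-smallest scale ratio, discouraging needle-like Gaussians under large deformation. This is an assumed formulation for illustration; the paper's exact term and threshold may differ:

```python
import numpy as np

def anisotropy_loss(scales, max_ratio=4.0):
    """Penalize Gaussians whose largest-to-smallest axis-scale ratio
    exceeds max_ratio (a hypothetical threshold).

    scales: (N, 3) array of per-Gaussian axis scales, all positive.
    """
    ratio = scales.max(axis=1) / scales.min(axis=1)
    return np.maximum(ratio - max_ratio, 0.0).mean()

scales = np.array([[1.0, 1.0, 1.0],    # isotropic: no penalty
                   [10.0, 1.0, 1.0]])  # elongated: penalized
loss = anisotropy_loss(scales)  # (0 + (10 - 4)) / 2 = 3.0
```

Added to the training objective, such a hinge-style term leaves well-shaped kernels untouched while pushing overly stretched ones back toward a bounded aspect ratio.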

Experimental Results

The framework is validated through a range of experiments, demonstrating its capability to synthesize dynamic scenes with complex fluid-solid interactions. The experiments include scenarios where objects transition from solid to fluid, enabling striking effects such as indoor pools and flowing water.

Rendering results show that specular highlights greatly enhance realism, and anisotropy regularization effectively reduces artifacts caused by large deformations. Evaluations also demonstrate the system's efficiency in both simulation and rendering times.

Implications and Future Directions

The integration of PBD with 3DGS offers a scalable approach for dynamic scene synthesis, with potential applications in virtual reality, gaming, and visual effects. However, physical accuracy in fluid dynamics remains a limitation, given PBD's inherent simplifications.

Future research may explore extending the framework to support more complex material properties, improve fluid rendering by incorporating physical models for refraction, and optimize performance for larger scenes with high fluid particle counts.

In conclusion, Gaussian Splashing exemplifies how unified representations can bridge simulation and rendering capabilities, offering a robust platform for exploring dynamic interactions in reconstructed 3D scenes. The framework sets a foundation for advancing physically-based rendering techniques, integrating machine learning with traditional graphics, and opening avenues for innovative interactive environments.
