
Interactive Volume Visualization via Multi-Resolution Hash Encoding based Neural Representation (2207.11620v3)

Published 23 Jul 2022 in cs.GR and cs.LG

Abstract: Neural networks have shown great potential in compressing volume data for visualization. However, due to the high cost of training and inference, such volumetric neural representations have thus far only been applied to offline data processing and non-interactive rendering. In this paper, we demonstrate that by simultaneously leveraging modern GPU tensor cores, a native CUDA neural network framework, and a well-designed rendering algorithm with macro-cell acceleration, we can interactively ray trace volumetric neural representations (10-60 fps). Our neural representations are also high-fidelity (PSNR > 30 dB) and compact (10-1000x smaller). Additionally, we show that it is possible to fit the entire training step inside a rendering loop and skip the pre-training process completely. To support extreme-scale volume data, we also develop an efficient out-of-core training strategy, which allows our volumetric neural representation training to potentially scale up to terascale using only an NVIDIA RTX 3090 workstation.
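To make the representation named in the title concrete, below is a minimal NumPy sketch of a multi-resolution hash encoding in the style of Müller et al.'s instant neural graphics primitives, which this paper builds on. The level count, table size, feature width, and hash primes are illustrative assumptions rather than the paper's configuration, and coarse levels that would fit a dense grid are hashed anyway for brevity.

```python
# Minimal sketch of multi-resolution hash encoding (after Muller et al.'s
# instant neural graphics primitives). All sizes and primes are assumptions.
import numpy as np

L, T, F = 8, 2**14, 2                  # levels, table entries, features/entry
N_min, N_max = 16, 256                 # coarsest / finest grid resolution
b = (N_max / N_min) ** (1 / (L - 1))   # per-level resolution growth factor
rng = np.random.default_rng(0)
tables = rng.normal(0, 1e-4, (L, T, F))  # trainable feature tables

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(ix):
    """Hash integer grid coordinates of shape (..., 3) into [0, T)."""
    h = np.zeros(ix.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= ix[..., d].astype(np.uint64) * PRIMES[d]
    return (h % T).astype(np.int64)

def encode(x):
    """Encode points x in [0,1]^3, shape (N, 3), into (N, L*F) features."""
    feats = []
    for l in range(L):
        res = int(np.floor(N_min * b**l))
        xs = x * res
        x0 = np.floor(xs).astype(np.int64)
        w = xs - x0                       # trilinear interpolation weights
        f = np.zeros((x.shape[0], F))
        for corner in range(8):           # 8 corners of the enclosing voxel
            off = np.array([(corner >> k) & 1 for k in range(3)])
            idx = spatial_hash(x0 + off)
            wc = np.prod(np.where(off, w, 1 - w), axis=-1, keepdims=True)
            f += wc * tables[l, idx]
        feats.append(f)
    return np.concatenate(feats, axis=-1)

pts = rng.random((4, 3))
print(encode(pts).shape)   # (4, 16): ready to feed a small MLP decoder
```

In the paper's setting, the concatenated features would be decoded by a tiny MLP running on tensor cores via the tiny-cuda-nn framework; the NumPy version here only illustrates the lookup and interpolation structure.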
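The abstract credits interactivity partly to "a well-designed rendering algorithm with macro-cell acceleration". The sketch below illustrates the general idea of macro-cell space skipping: a coarse grid of per-cell value summaries lets the ray marcher jump over regions that the transfer function maps to zero opacity. The grid resolution, transfer function, and step sizes are assumptions, and the fixed skip distance stands in for an exact ray-cell exit computation; the paper's renderer additionally batches network inference across rays, which is omitted here.

```python
# Minimal sketch of macro-cell space skipping during ray marching.
# Grid resolution and the opacity test are illustrative assumptions.
import numpy as np

M = 8                                    # macro-cells per axis (assumed)
rng = np.random.default_rng(1)
cell_max = rng.random((M, M, M))         # per-cell max density, built once

def transfer_opacity(d):
    return np.clip(d - 0.5, 0.0, 1.0)    # toy transfer function

def march(origin, direction, dt=0.01, t_max=1.73):
    """Ray march in [0,1]^3, skipping cells that map to zero opacity."""
    t, transmittance = 0.0, 1.0
    while t < t_max and transmittance > 1e-3:
        p = origin + t * direction
        if np.any(p < 0) or np.any(p >= 1):
            break
        cell = np.minimum((p * M).astype(int), M - 1)
        if transfer_opacity(cell_max[tuple(cell)]) <= 0.0:
            t += 1.0 / M                 # empty cell: take a macro-cell step
            continue                     # (crude; real code steps to the exit)
        # occupied cell: here the neural representation would be queried;
        # a random density stands in as a placeholder
        sigma = rng.random()
        transmittance *= np.exp(-transfer_opacity(sigma) * dt)
        t += dt
    return 1.0 - transmittance           # accumulated opacity

print(march(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0])))
```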
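The abstract also claims the entire training step can sit inside the rendering loop, removing pre-training. Below is a minimal sketch of that interleaving, with a hand-rolled two-layer MLP standing in for the hash-encoded network; the optimizer, batch size, learning rate, and toy target volume are all assumptions.

```python
# Minimal sketch of folding one training step into each render-loop
# iteration, as the abstract describes for skipping pre-training.
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(0, 0.1, (3, 32)), rng.normal(0, 0.1, (32, 1))

def ground_truth(x):                     # stand-in for the raw volume
    return np.sin(4 * x).prod(axis=-1, keepdims=True)

def forward(x):
    h = np.maximum(x @ W1, 0)            # ReLU MLP (encoding omitted here)
    return h @ W2, h

for frame in range(100):                 # one iteration per rendered frame
    # -- training step: one SGD update on randomly sampled coordinates --
    x = rng.random((256, 3))
    y, h = forward(x)
    err = y - ground_truth(x)            # gradient of the L2 loss
    gW2 = h.T @ err / len(x)
    gW1 = x.T @ ((err @ W2.T) * (h > 0)) / len(x)
    W1 -= 1e-2 * gW1
    W2 -= 1e-2 * gW2
    # -- rendering step: ray march the network as in the sketch above --

print("final batch loss:", float((err ** 2).mean()))
```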
