StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization (2408.00150v1)

Published 31 Jul 2024 in cs.GR, cs.AI, and cs.CV

Abstract: In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time, and issues with quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve this, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.
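The abstract outlines a three-part design: a base NeRF for scene geometry, a palette color network for photorealistic recoloring, and an unrestricted color network distilled from it for non-photorealistic style editing. As a rough illustration only, the PyTorch sketch below shows how such a split could be wired up; all class names, layer sizes, the palette dimension, and the distillation fragment are assumptions made for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseNeRF(nn.Module):
    """Base NeRF stand-in: maps a 3D position to volume density and a
    geometry feature (the scene-content branch)."""
    def __init__(self, pos_dim=3, hidden=128, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.feat_head = nn.Linear(hidden, feat_dim)  # geometry feature

    def forward(self, x):
        h = self.mlp(x)
        return F.softplus(self.sigma_head(h)), self.feat_head(h)

class PaletteColorNet(nn.Module):
    """Palette color network stand-in: predicts soft weights over a small,
    learnable color palette so regions can be recolored photorealistically."""
    def __init__(self, feat_dim=64, num_palette=6):
        super().__init__()
        self.weight_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_palette),
        )
        # Learnable palette colors (RGB), directly editable after training.
        self.palette = nn.Parameter(torch.rand(num_palette, 3))

    def forward(self, feat):
        w = F.softmax(self.weight_head(feat), dim=-1)  # per-sample palette weights
        return w @ self.palette                        # blended RGB

class UnrestrictedColorNet(nn.Module):
    """Unrestricted color network stand-in: a free-form RGB head that lifts
    the palette constraint; initialized by distilling from the palette net."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.rgb_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def forward(self, feat):
        return self.rgb_head(feat)

# Hypothetical distillation step: the unrestricted head mimics the
# palette-constrained colors before any style fine-tuning.
base, palette_net, free_net = BaseNeRF(), PaletteColorNet(), UnrestrictedColorNet()
opt = torch.optim.Adam(free_net.parameters(), lr=1e-3)

pts = torch.rand(1024, 3)              # placeholder sample points along rays
sigma, feat = base(pts)
feat = feat.detach()                   # only the student head is updated here
with torch.no_grad():
    target_rgb = palette_net(feat)     # teacher: palette-constrained colors
opt.zero_grad()
loss = F.mse_loss(free_net(feat), target_rgb)
loss.backward()
opt.step()
```

In this sketch the knowledge distillation described in the abstract is reduced to a single MSE step on random sample points; the actual framework would train on rendered views and then fine-tune the unrestricted head with a style loss against the reference image.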

References (64)
  1. Pexels - the best free stock photos, royalty free images & videos shared by creators. https://www.pexels.com/.
  2. WikiArt - visual art encyclopedia. https://www.wikiart.org/.
  3. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of IEEE/CVF International Conference on Computer Vision, pp. 5835–5844, 2021. doi: 10.1109/ICCV48922.2021.00580
  4. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5460–5469, 2022. doi: 10.1109/CVPR52688.2022.00539
  5. A generative model for volume rendering. IEEE Transactions on Visualization and Computer Graphics, 25(4):1636–1650, 2019. doi: 10.1109/TVCG.2018.2816059
  6. S. Bruckner and M. E. Gröller. Style transfer functions for illustrative volume rendering. Computer Graphics Forum, 26(3):715–724, 2007. doi: 10.1111/j.1467-8659.2007.01095.x
  7. Advances in 3D neural stylization: A survey. arXiv preprint arXiv:2311.18328, 2023. doi: 10.48550/arXiv.2311.18328
  8. Stylizing 3D scene via implicit representation and hypernetwork. In Proceedings of IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 215–224, 2022. doi: 10.1109/WACV51458.2022.00029
  9. One is all: Bridging the gap between neural radiance fields architectures with progressive volume distillation. In Proceedings of AAAI Conference on Artificial Intelligence, pp. 597–605, 2023. doi: 10.1609/aaai.v37i1.25135
  10. Plenoxels: Radiance fields without neural networks. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5491–5500, 2022. doi: 10.1109/CVPR52688.2022.00542
  11. Image style transfer using convolutional neural networks. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2414–2423, 2016. doi: 10.1109/CVPR.2016.265
  12. RecolorNeRF: Layer decomposed radiance fields for efficient color editing of 3D scenes. In Proceedings of ACM International Conference on Multimedia, pp. 8004–8015, 2023. doi: 10.1145/3581783.3611957
  13. NeRVI: Compressive neural representation of visualization images for communicating volume visualization results. Computers & Graphics, 116:216–227, 2023. doi: 10.1016/J.CAG.2023.08.024
  14. J. Han and C. Wang. TSR-TVD: Temporal super-resolution for time-varying data analysis and visualization. IEEE Transactions on Visualization and Computer Graphics, 26(1):205–215, 2020. doi: 10.1109/TVCG.2019.2934255
  15. J. Han and C. Wang. SSR-TVD: Spatial super-resolution for time-varying data analysis and visualization. IEEE Transactions on Visualization and Computer Graphics, 28(6):2445–2456, 2022. doi: 10.1109/TVCG.2020.3032123
  16. J. Han and C. Wang. VCNet: A generative model for volume completion. Visual Informatics, 6(2):62–73, 2022. doi: 10.1016/J.VISINF.2022.04.004
  17. J. Han and C. Wang. CoordNet: Data generation and visualization generation for time-varying volumes via a coordinate-based neural network. IEEE Transactions on Visualization and Computer Graphics, 29(12):4951–4963, 2023. doi: 10.1109/TVCG.2022.3197203
  18. KD-INR: Time-varying volumetric data compression via knowledge distillation-based implicit neural representation. IEEE Transactions on Visualization and Computer Graphics, 2023. Accepted. doi: 10.1109/TVCG.2023.3345373
  19. STNet: An end-to-end generative framework for synthesizing spatiotemporal super-resolution volumes. IEEE Transactions on Visualization and Computer Graphics, 28(1):270–280, 2022. doi: 10.1109/TVCG.2021.3114815
  20. V2V: A deep learning approach to variable-to-variable selection and translation for multivariate time-varying data. IEEE Transactions on Visualization and Computer Graphics, 27(2):1290–1300, 2021. doi: 10.1109/TVCG.2020.3030346
  21. InSituNet: Deep image synthesis for parameter space exploration of ensemble simulations. IEEE Transactions on Visualization and Computer Graphics, 26(1):23–33, 2020. doi: 10.1109/TVCG.2019.2934312
  22. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. doi: 10.48550/arXiv.1503.02531
  23. DNN-VolVis: Interactive volume visualization supported by deep neural network. In Proceedings of IEEE Pacific Visualization Symposium, pp. 282–291, 2019. doi: 10.1109/PACIFICVIS.2019.00041
  24. Tri-MipRF: Tri-mip representation for efficient anti-aliasing neural radiance fields. In Proceedings of IEEE/CVF International Conference on Computer Vision, pp. 19717–19726, 2023. doi: 10.1109/ICCV51070.2023.01811
  25. X. Huang and S. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of IEEE/CVF International Conference on Computer Vision, pp. 1510–1519, 2017. doi: 10.1109/ICCV.2017.167
  26. StylizedNeRF: Consistent 3D scene stylization as stylized NeRF via 2D-3D mutual learning. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18321–18331, 2022. doi: 10.1109/CVPR52688.2022.01780
  27. Neural style transfer: A review. IEEE Transactions on Visualization and Computer Graphics, 26(11):3365–3385, 2020. doi: 10.1109/TVCG.2019.2921336
  28. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of European Conference on Computer Vision, pp. 694–711, 2016. doi: 10.1007/978-3-319-46475-6_43
  29. 3D Gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):139:1–139:14, 2023. doi: 10.1145/3592433
  30. Segment anything. arXiv preprint arXiv:2304.02643, 2023. doi: 10.48550/arXiv.2304.02643
  31. Neural neighbor style transfer. arXiv preprint arXiv:2203.13215, 2022. doi: 10.48550/arXiv.2203.13215
  32. PaletteNeRF: Palette-based appearance editing of neural radiance fields. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20691–20700, 2023. doi: 10.1109/CVPR52729.2023.01982
  33. ICE-NeRF: Interactive color editing of NeRFs via decomposition-aware weight optimization. In Proceedings of IEEE/CVF International Conference on Computer Vision, pp. 3468–3478, 2023. doi: 10.1109/ICCV51070.2023.00323
  34. StyleRF: Zero-shot 3D style transfer of neural radiance fields. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8338–8348, 2023. doi: 10.1109/CVPR52729.2023.00806
  35. Non-photorealistic volume rendering using stippling techniques. In Proceedings of IEEE Visualization Conference, pp. 211–218, 2002. doi: 10.1109/VISUAL.2002.1183777
  36. FCNR: Fast compressive neural representation of visualization images. In Proceedings of IEEE VIS Conference (Short Papers), 2024. Accepted.
  37. Compressive neural representations of volumetric scalar fields. Computer Graphics Forum, 40(3):135–146, 2021. doi: 10.1111/CGF.14295
  38. Face model compression by distilling knowledge from neurons. In Proceedings of AAAI Conference on Artificial Intelligence, pp. 3560–3566, 2016. doi: 10.1609/aaai.v30i1.10449
  39. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proceedings of European Conference on Computer Vision, pp. 405–421, 2020. doi: 10.1007/978-3-030-58452-8_24
  40. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics, 41(4):102:1–102:15, 2022. doi: 10.1145/3528223.3530127
  41. SNeRF: Stylized neural implicit representations for 3D scenes. ACM Transactions on Graphics, 41(4):142:1–142:11, 2022. doi: 10.1145/3528223.3530107
  42. K. Nichol and W. Kan. Painter by numbers - does every painter leave a fingerprint? https://kaggle.com/competitions/painter-by-numbers, 2016.
  43. S. Niklaus and F. Liu. Softmax splatting for video frame interpolation. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5436–5445, 2020. doi: 10.1109/CVPR42600.2020.00548
  44. KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs. In Proceedings of IEEE/CVF International Conference on Computer Vision, pp. 14315–14325, 2021. doi: 10.1109/ICCV48922.2021.01407
  45. Artistic style transfer for videos. In Proceedings of German Conference on Pattern Recognition, pp. 26–36, 2016. doi: 10.1007/978-3-319-45886-1_3
  46. VDL-Surrogate: A view-dependent latent-based model for parameter space exploration of ensemble simulations. IEEE Transactions on Visualization and Computer Graphics, 29(1):820–830, 2023. doi: 10.1109/TVCG.2022.3209413
  47. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of International Conference on Learning Representations, 2015.
  48. M. Stone. A Field Guide to Digital Color. AK Peters, 2003.
  49. Efficient palette-based decomposition and recoloring of images via RGBXY-space geometry. ACM Transactions on Graphics, 37(6):262:1–262:10, 2018. doi: 10.1145/3272127.3275054
  50. K. Tang and C. Wang. ECNR: Efficient compressive neural representation of time-varying volumetric datasets. In Proceedings of IEEE Pacific Visualization Conference, pp. 72–81, 2024. doi: 10.1109/PACIFICVIS60374.2024.00017
  51. K. Tang and C. Wang. STSR-INR: Spatiotemporal super-resolution for time-varying multivariate volumetric data via implicit neural representation. Computers & Graphics, 119:103874, 2024. doi: 10.1016/J.CAG.2024.01.001
  52. Z. Teed and J. Deng. RAFT: Recurrent all-pairs field transforms for optical flow. In Proceedings of European Conference on Computer Vision, pp. 402–419, 2020. doi: 10.1007/978-3-030-58536-5_24
  53. Advances in neural rendering. Computer Graphics Forum, 41(2):703–735, 2022. doi: 10.1111/cgf.14507
  54. K. Tojo and N. Umetani. Recolorable posterization of volumetric radiance fields using visibility-weighted palette extraction. Computer Graphics Forum, 41(4):149–160, 2022. doi: 10.1111/cgf.14594
  55. C. Wang and J. Han. DL4SciVis: A state-of-the-art survey on deep learning for scientific visualization. IEEE Transactions on Visualization and Computer Graphics, 29(8):3714–3733, 2023. doi: 10.1109/TVCG.2022.3167896
  56. R2L: Distilling neural radiance field to neural light field for efficient novel view synthesis. In Proceedings of European Conference on Computer Vision, pp. 612–629, 2022. doi: 10.1007/978-3-031-19821-2_35
  57. Consistent video style transfer via relaxation and regularization. IEEE Transactions on Image Processing, 29:9125–9139, 2020. doi: 10.1109/TIP.2020.3024018
  58. Volumetric isosurface rendering with deep learning-based super-resolution. IEEE Transactions on Visualization and Computer Graphics, 27(6):3064–3078, 2021. doi: 10.1109/TVCG.2019.2956697
  59. Fast neural representations for direct volume rendering. Computer Graphics Forum, 41(6):196–211, 2022. doi: 10.1111/cgf.14578
  60. Interactive volume visualization via multi-resolution hash encoding based neural representation. IEEE Transactions on Visualization and Computer Graphics, 2023. Accepted. doi: 10.1109/TVCG.2023.3293121
  61. Adaptively placed multi-grid scene representation networks for large-scale data visualization. IEEE Transactions on Visualization and Computer Graphics, 30(1):965–974, 2024. doi: 10.1109/TVCG.2023.3327194
  62. GMT: A deep learning approach to generalized multivariate translation for scientific data analysis and visualization. Computers & Graphics, 112:92–104, 2023. doi: 10.1016/J.CAG.2023.04.002
  63. ARF: Artistic radiance fields. In Proceedings of European Conference on Computer Vision, pp. 717–733, 2022. doi: 10.1007/978-3-031-19821-2_41
  64. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 586–595, 2018. doi: 10.1109/CVPR.2018.00068
