Neural directional distance field object representation for uni-directional path-traced rendering (2306.16142v1)
Abstract: Faster rendering of synthetic images is a core problem in the field of computer graphics. Rendering algorithms, such as path tracing, depend on parameters like the size of the image, the number of light bounces, and the number of samples per pixel, all of which are fixed if one wants to obtain an image of a desired quality. Render time also depends on the size and complexity of the scene being rendered. One of the largest bottlenecks in rendering, particularly when the scene is very large, is querying for objects in the path of a given ray through the scene. By changing the data type that represents the objects in the scene, one may reduce render time; however, a different representation of a scene requires a modification of the rendering algorithm. In this paper, (a) we introduce the directed distance field as a functional representation of an object; (b) we show how directed distance functions, when stored as a neural network, can be optimized; and (c) we show how such an object can be rendered with a modified path-tracing algorithm.
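The core idea of a directed distance field is that a query takes both a point and a direction and returns the distance to the first surface hit along that ray, so a single evaluation replaces an iterative intersection search. A minimal analytic sketch for a sphere (a closed-form stand-in for the learned neural network described in the abstract; the function name and setup are illustrative, not from the paper):

```python
import numpy as np

def sphere_ddf(origin, direction, radius=1.0):
    """Analytic directed distance field for a sphere centered at the
    origin: returns the distance along `direction` from `origin` to the
    first surface intersection, or np.inf if the ray misses.
    In the paper, a neural network approximates this mapping for
    arbitrary objects; here a closed-form sphere stands in for it."""
    d = direction / np.linalg.norm(direction)      # normalize query direction
    b = np.dot(origin, d)                          # quadratic coefficients for
    c = np.dot(origin, origin) - radius ** 2       # |origin + t*d| = radius
    disc = b * b - c
    if disc < 0:
        return np.inf                              # ray misses the sphere
    sq = np.sqrt(disc)
    for t in (-b - sq, -b + sq):                   # nearest positive root first
        if t > 0:
            return t
    return np.inf                                  # sphere is behind the ray

# A ray starting at (0, 0, -3) pointing toward +z hits the unit sphere
# at z = -1, i.e. at distance 2 from the origin of the ray.
print(sphere_ddf(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0])))  # → 2.0
```

A renderer built on such a field can advance each ray to the surface in one query per bounce, which is the property the modified path-tracing algorithm in the paper exploits.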