AG-NeRF: Attention-guided Neural Radiance Fields for Multi-height Large-scale Outdoor Scene Rendering (2404.11897v1)
Abstract: Existing neural radiance field (NeRF)-based novel view synthesis methods for large-scale outdoor scenes are mainly built for a single altitude. Moreover, they often require a priori knowledge of the camera shooting height and the scene scope, which makes them inefficient and impractical when the camera altitude changes. In this work, we propose an end-to-end framework, termed AG-NeRF, that reduces the training cost of building high-quality reconstructions while synthesizing free-viewpoint images across varying scene altitudes. Specifically, to tackle the variation in visible detail from low altitude (drone-level) to high altitude (satellite-level), we develop a source image selection method and an attention-based feature fusion approach that extract and fuse the features most relevant to the target view from multi-height images for high-fidelity rendering. Extensive experiments demonstrate that AG-NeRF achieves state-of-the-art performance on the 56 Leonard and Transamerica benchmarks and requires only half an hour of training to reach a PSNR competitive with the latest BungeeNeRF.
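The abstract does not specify implementation details. As a minimal sketch of the two ideas it names (source image selection and attention-based feature fusion), the snippet below assumes a PyTorch setting with hypothetical tensors of camera centers and per-sample image features, and uses plain scaled dot-product attention; the helpers `select_source_views` and `attention_fuse` are illustrative stand-ins, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): pick the source views most
# relevant to a target camera, then fuse their per-sample features with
# attention for rendering a target ray.
import torch
import torch.nn.functional as F


def select_source_views(target_pos, source_positions, k=4):
    """Pick the k source cameras closest to the target camera position.

    target_pos:       (3,)   target camera center
    source_positions: (N, 3) source camera centers
    Returns indices of the k nearest sources (a simple stand-in for the
    paper's source image selection step).
    """
    dists = torch.linalg.norm(source_positions - target_pos, dim=-1)  # (N,)
    return torch.topk(-dists, k=k).indices                            # (k,)


def attention_fuse(query_feat, source_feats):
    """Fuse per-source features with scaled dot-product attention.

    query_feat:   (S, C)    feature of S samples along a target ray
    source_feats: (S, K, C) features of the same samples projected into
                            K selected source images
    Returns a (S, C) fused feature per sample.
    """
    d = query_feat.shape[-1]
    # (S, 1, C) x (S, C, K) -> (S, 1, K) attention logits
    logits = torch.bmm(query_feat.unsqueeze(1),
                       source_feats.transpose(1, 2)) / d ** 0.5
    weights = F.softmax(logits, dim=-1)                  # (S, 1, K)
    fused = torch.bmm(weights, source_feats).squeeze(1)  # (S, C)
    return fused
```

In this sketch, the attention weights let each 3D sample draw mostly from the source images whose scale and viewpoint best match the target view, which is one plausible way to handle the drone-to-satellite detail variation the abstract describes.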
- “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis,” in European Conference on Computer Vision. Springer, 2020, pp. 405–421.
- “Block-NeRF: Scalable Large Scene Neural View Synthesis,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 8248–8258.
- “Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12922–12931.
- Zhenxing Mi and Dan Xu, “Switch-NeRF: Learning Scene Decomposition with Mixture of Experts for Large-scale Neural Radiance Fields,” in The Eleventh International Conference on Learning Representations, 2023.
- “Grid-guided Neural Radiance Fields for Large Urban Scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8296–8306.
- “Efficient Large-scale Scene Representation with a Hybrid of High-resolution Grid and Plane Features,” arXiv preprint arXiv:2303.03003, 2023.
- “BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering,” in European Conference on Computer Vision. Springer, 2022, pp. 106–122.
- “NeRF++: Analyzing and Improving Neural Radiance Fields,” arXiv preprint arXiv:2010.07492, 2020.
- “NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 7210–7219.
- “Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5470–5479.
- “D-NeRF: Neural Radiance Fields for Dynamic Scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 10318–10327.
- “Efficient Neural Radiance Fields for Interactive Free-viewpoint Video,” in SIGGRAPH Asia 2022 Conference Papers, 2022, pp. 1–9.
- “DynIBaR: Neural Dynamic Image-Based Rendering,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 4273–4284.
- “AligNeRF: High-fidelity Neural Radiance Fields via Alignment-aware Training,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 46–55.
- “pixelNeRF: Neural Radiance Fields from One or Few Images,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 4578–4587.
- “FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8254–8263.
- “Dense Depth Priors for Neural Radiance Fields from Sparse Input Views,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12892–12901.
- “Depth-supervised NeRF: Fewer Views and Faster Training for Free,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12882–12891.
- “Neural Radiance Fields from Sparse RGB-D Images for High-quality View Synthesis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
- “Instant Neural Graphics Primitives with a Multiresolution Hash Encoding,” ACM Transactions on Graphics (ToG), vol. 41, no. 4, pp. 1–15, 2022.
- “TensoRF: Tensorial Radiance Fields,” in European Conference on Computer Vision. Springer, 2022, pp. 333–350.
- “Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5459–5469.
- “Neural Sparse Voxel Fields,” Advances in Neural Information Processing Systems, vol. 33, pp. 15651–15663, 2020.
- “Plenoxels: Radiance Fields Without Neural Networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5501–5510.
- “3D Gaussian Splatting for Real-Time Radiance Field Rendering,” ACM Transactions on Graphics, vol. 42, no. 4, 2023.
- “Structure-from-Motion Revisited,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4104–4113.
- “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
- “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 586–595.
- “Deep Residual Learning for Image Recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
- “Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5855–5864.