SDPL: Shifting-Dense Partition Learning for UAV-View Geo-Localization (2403.04172v2)
Abstract: Cross-view geo-localization aims to match images of the same target taken from different platforms, e.g., drone and satellite. It is a challenging task because the appearance of the target and the surrounding environment change across views. Most methods seek more comprehensive information through feature-map segmentation, but this inevitably destroys the image structure and leaves them sensitive to shifts in the position and scale of the target in the query. To address these issues, we introduce a simple yet effective part-based representation learning method, shifting-dense partition learning (SDPL). We propose a dense partition strategy (DPS) that divides the image into multiple parts to explore contextual information while explicitly preserving the global structure. To handle scenarios with non-centered targets, we further propose a shifting-fusion strategy, which generates multiple sets of parts in parallel from different segmentation centers and then adaptively fuses all features to combine their robustness to offsets. Extensive experiments show that SDPL is robust to position shifting and performs competitively on two prevailing benchmarks, University-1652 and SUES-200. In addition, SDPL shows satisfactory compatibility with a variety of backbone networks (e.g., ResNet and Swin). Code: https://github.com/C-water/SDPL
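The two core ideas in the abstract, dense partitioning and shifting-fusion, can be illustrated with a short sketch. The following is a minimal PyTorch sketch under stated assumptions, not the paper's implementation: it assumes DPS uses concentric square-ring partitions around a chosen center and that the fusion applies a learned softmax weighting over a few hand-picked shifted centers. The names (`square_ring_partition`, `ShiftingFusion`), the Chebyshev-distance binning, and the offset choices are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of dense partitioning + shifting-fusion.
# The actual SDPL partition layout and fusion module may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def square_ring_partition(feat, center, num_parts):
    """Pool a feature map into `num_parts` concentric square rings
    around `center` (cy, cx), keeping a coarse global structure."""
    B, C, H, W = feat.shape
    cy, cx = center
    ys = torch.arange(H, device=feat.device).view(H, 1).expand(H, W).float()
    xs = torch.arange(W, device=feat.device).view(1, W).expand(H, W).float()
    # Chebyshev distance from the center defines the ring index.
    dist = torch.maximum((ys - cy).abs(), (xs - cx).abs())
    bins = (dist / (dist.max() + 1e-6) * num_parts).clamp(max=num_parts - 1).long()
    parts = []
    for k in range(num_parts):
        mask = (bins == k).float()                        # (H, W)
        area = mask.sum().clamp(min=1.0)
        pooled = (feat * mask).sum(dim=(2, 3)) / area     # average-pool the ring
        parts.append(pooled)
    return torch.stack(parts, dim=1)                      # (B, num_parts, C)

class ShiftingFusion(nn.Module):
    """Run the dense partition at several candidate centers in parallel and
    fuse the resulting part features with learned softmax weights, so an
    off-center target is still covered by at least one partition set."""
    def __init__(self, num_parts=4, num_centers=5):
        super().__init__()
        self.num_parts = num_parts
        self.weights = nn.Parameter(torch.zeros(num_centers))  # fusion logits

    def forward(self, feat):
        B, C, H, W = feat.shape
        # Image center plus four shifted centers (assumed offsets).
        offsets = [(0, 0), (-H // 4, 0), (H // 4, 0), (0, -W // 4), (0, W // 4)]
        centers = [((H - 1) / 2 + dy, (W - 1) / 2 + dx) for dy, dx in offsets]
        part_sets = torch.stack(
            [square_ring_partition(feat, c, self.num_parts) for c in centers],
            dim=1)                                        # (B, centers, parts, C)
        w = F.softmax(self.weights, dim=0).view(1, -1, 1, 1)
        return (part_sets * w).sum(dim=1)                 # (B, parts, C)
```

For example, `ShiftingFusion()(torch.randn(2, 512, 16, 16))` returns a (2, 4, 512) tensor of fused part features, one descriptor per ring, which could then feed the per-part classifiers typical of part-based geo-localization models.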