FewSAR: A Few-shot SAR Image Classification Benchmark (2306.09592v1)

Published 16 Jun 2023 in cs.CV and cs.LG

Abstract: Few-shot learning (FSL) is one of the significant and challenging problems in the field of image classification. However, in contrast to the rapid development of visible-light datasets, progress in SAR target image classification is much slower. The lack of a unified benchmark is a key reason for this gap, and it may be severely overlooked by the current literature. Researchers in SAR target image classification typically report new results on their own datasets and experimental setups, which makes result comparison inefficient and impedes further progress in this area. Motivated by this observation, we propose a novel few-shot SAR image classification benchmark (FewSAR) to address this issue. FewSAR consists of an open-source Python code library of 15 classic methods in three categories for few-shot SAR image classification. It provides an accessible and customizable testbed for different few-shot SAR image classification tasks. To further understand the performance of different few-shot methods, we establish evaluation protocols and conduct extensive experiments within the benchmark. By analyzing the quantitative results and runtime under the same setting, we observe that metric-learning methods achieve the best accuracy. Meta-learning methods and fine-tuning methods perform poorly on few-shot SAR images, which is primarily due to the bias of existing datasets. We believe that FewSAR will open up a new avenue for future research and development on real-world challenges at the intersection of SAR image classification and few-shot deep learning. We will provide our code for the proposed FewSAR at https://github.com/solarlee/FewSAR.
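The abstract reports that metric-learning methods give the best accuracy on few-shot SAR episodes. As a rough illustration of what such a method does at evaluation time, the sketch below implements a prototype-based classifier in the spirit of Prototypical Networks on a toy 2-way 1-shot episode. This is not the FewSAR API; the function name, the 2-D "embeddings", and the episode layout are all hypothetical stand-ins.

```python
import numpy as np

def prototype_classify(support, support_labels, query, n_way):
    """Metric-learning few-shot classification sketch: average each
    class's support embeddings into a prototype, then assign every
    query embedding to its nearest prototype (Euclidean distance)."""
    # One prototype per class: the mean of that class's support embeddings
    prototypes = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_way)
    ])
    # Distance from every query embedding to every prototype
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way 1-shot episode with hand-picked 2-D "embeddings"
support = np.array([[0.0, 0.0], [10.0, 10.0]])
support_labels = np.array([0, 1])
query = np.array([[0.5, -0.5], [9.0, 11.0]])
preds = prototype_classify(support, support_labels, query, n_way=2)
# preds -> [0, 1]: each query lands on the nearer prototype
```

In a real pipeline the embeddings would come from a backbone network trained on base classes; the benchmark's point is that this simple nearest-prototype decision rule, with a good embedding, transfers to SAR targets better than episodic meta-learning or fine-tuning.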
