
Exploring Hyperspectral Anomaly Detection with Human Vision: A Small Target Aware Detector (2401.01093v1)

Published 2 Jan 2024 in cs.CV

Abstract: Hyperspectral anomaly detection (HAD) aims to localize pixels whose spectral features differ from the background. HAD is essential in scenarios where target features are unknown or camouflaged and prior information about targets is difficult to obtain, such as water quality monitoring, crop growth monitoring, and camouflaged target detection. Existing HAD methods aim to objectively detect and distinguish background and anomalous spectra, a task that human perception accomplishes almost effortlessly. However, the underlying processes of human visual perception are thought to be quite complex. In this paper, we analyze hyperspectral image (HSI) features under human visual perception and, for the first time, transfer the HAD solution process to this more robust feature space. Specifically, we propose a small target aware detector (STAD), which introduces saliency maps to capture HSI features closer to human visual perception. STAD not only extracts more anomalous representations, but also reduces the impact of low-confidence regions through a proposed small target filter (STF). Furthermore, considering that HAD algorithms may be deployed on edge devices, we propose a fully connected network to convolutional network knowledge distillation strategy, which learns the spectral and spatial features of the HSI while lightening the network. We train the network on the HAD100 training set and validate the proposed method on the HAD100 test set. Our method provides a new, high-confidence solution space for HAD that is closer to human visual perception. Extensive experiments on real HSIs with comparisons to multiple methods demonstrate the excellent performance and unique potential of the proposed method. The code is available at https://github.com/majitao-xd/STAD-HAD.
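The abstract describes STAD only at a high level, so the following is a minimal sketch of the general idea it gestures at: pair a reconstruction-style detector with a saliency map and a small-target-style filter. The spectral autoencoder, the input-gradient saliency, and the local-mean-subtraction `small_target_filter` are all illustrative assumptions for this sketch, not the paper's actual STAD or STF.

```python
# Illustrative saliency-guided HAD scoring pass (NOT the authors' STAD).
# Assumptions: a per-pixel reconstruction backbone, input-gradient saliency,
# and a hypothetical local-mean "small target filter"; names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralAE(nn.Module):
    """Tiny per-pixel spectral autoencoder used as a stand-in detection backbone."""
    def __init__(self, bands: int, hidden: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(bands, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, bands)

    def forward(self, x):                 # x: (num_pixels, bands)
        return self.dec(self.enc(x))

def saliency_anomaly_map(model, hsi):
    """hsi: (H, W, B) float tensor. Returns an (H, W) anomaly score map.

    Saliency here is the gradient of the per-pixel reconstruction error with
    respect to the input spectrum: pixels the model cannot explain yield both
    a large error and a strong gradient response.
    """
    H, W, B = hsi.shape
    x = hsi.reshape(-1, B).clone().requires_grad_(True)
    err = F.mse_loss(model(x), x, reduction="none").sum(dim=1)   # (H*W,)
    err.sum().backward()
    saliency = x.grad.abs().sum(dim=1).reshape(H, W)
    return err.detach().reshape(H, W) * saliency                 # fuse error and saliency

def small_target_filter(score, kernel: int = 5):
    """Hypothetical small-target filter: subtract a local mean so that broad,
    low-confidence responses are suppressed and small sharp peaks remain."""
    s = score[None, None]                                        # (1, 1, H, W)
    background = F.avg_pool2d(s, kernel, stride=1, padding=kernel // 2)
    return torch.clamp(s - background, min=0.0)[0, 0]

# Usage (shapes only): scores = small_target_filter(saliency_anomaly_map(model, hsi))
```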

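The abstract also mentions a fully connected network to convolutional network knowledge distillation strategy for edge deployment. The sketch below illustrates only the general idea: a lightweight convolutional student matches its per-pixel features to a frozen fully connected teacher while also optimizing its own task loss. The architectures, the MSE feature-matching term, and the 0.5 weighting are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative FC-teacher -> conv-student distillation step (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCTeacher(nn.Module):
    """Fully connected teacher applied to each pixel's spectrum independently."""
    def __init__(self, bands: int, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(bands, 128), nn.ReLU(), nn.Linear(128, feat))

    def forward(self, hsi):               # hsi: (N, B, H, W)
        N, B, H, W = hsi.shape
        x = hsi.permute(0, 2, 3, 1).reshape(-1, B)     # per-pixel spectra
        return self.net(x).reshape(N, H, W, -1).permute(0, 3, 1, 2)

class ConvStudent(nn.Module):
    """Lightweight student: 1x1 and 3x3 convolutions capture spectral and
    local spatial context at once."""
    def __init__(self, bands: int, feat: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, feat, kernel_size=3, padding=1),
        )

    def forward(self, hsi):
        return self.net(hsi)

def distillation_step(teacher, student, optimizer, hsi, task_loss_fn):
    """One training step: student task loss plus feature matching to the
    frozen teacher. The 0.5 weighting is an arbitrary illustrative choice."""
    teacher.eval()
    with torch.no_grad():
        t_feat = teacher(hsi)                          # frozen teacher features
    s_feat = student(hsi)
    loss = task_loss_fn(s_feat) + 0.5 * F.mse_loss(s_feat, t_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```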
