Backdoor Attacks for Remote Sensing Data with Wavelet Transform (2211.08044v2)

Published 15 Nov 2022 in cs.CV

Abstract: Recent years have witnessed the great success of deep learning algorithms in the geoscience and remote sensing realm. Nevertheless, the security and robustness of deep learning models deserve special attention when addressing safety-critical remote sensing tasks. In this paper, we provide a systematic analysis of backdoor attacks for remote sensing data, considering both scene classification and semantic segmentation tasks. While most existing backdoor attack algorithms rely on visible triggers such as square patches with well-designed patterns, we propose a novel wavelet transform-based attack (WABA) method, which achieves invisible attacks by injecting the trigger image into the poisoned image in the low-frequency domain. In this way, the high-frequency information in the trigger image is filtered out during the attack, resulting in stealthy data poisoning. Despite its simplicity, the proposed method can fool current state-of-the-art deep learning models with a high attack success rate. We further analyze how different trigger images and the hyperparameters of the wavelet transform influence the performance of the proposed method. Extensive experiments on four benchmark remote sensing datasets demonstrate the effectiveness of the proposed method for both scene classification and semantic segmentation, highlighting the importance of designing advanced backdoor defense algorithms to address this threat in remote sensing scenarios. The code will be available online at \url{https://github.com/ndraeger/waba}.
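The abstract describes the core mechanism: decompose both the benign image and the trigger image with a wavelet transform, blend the trigger into the benign image's low-frequency sub-band, and discard the trigger's high-frequency detail so the poisoning stays visually stealthy. The sketch below illustrates this idea with a single-level 2-D Haar transform; the function names, the blending coefficient `alpha`, and the choice of a single-level Haar decomposition are illustrative assumptions, not the paper's exact implementation (WABA may use other wavelets and decomposition depths).

```python
import numpy as np

def haar2d(x):
    """Single-level 2-D Haar transform of an even-sized grayscale array.

    Returns the low-frequency sub-band LL and the high-frequency
    sub-bands (LH, HL, HH)."""
    # Pair columns: average (low-pass) and difference (high-pass).
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Pair rows of each result to obtain the four sub-bands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2d(ll, bands):
    """Inverse of haar2d: perfect reconstruction from the sub-bands."""
    lh, hl, hh = bands
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    # Undo the row pairing, then the column pairing.
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def waba_poison(image, trigger, alpha=0.2):
    """Blend the trigger's LL sub-band into the image's LL sub-band.

    The image's own high-frequency bands are kept unchanged, and the
    trigger's high-frequency bands are discarded entirely, so only
    low-frequency trigger content enters the poisoned sample."""
    ll_img, hf_img = haar2d(image)
    ll_trig, _ = haar2d(trigger)          # trigger's detail is filtered out
    ll_mix = (1.0 - alpha) * ll_img + alpha * ll_trig
    return ihaar2d(ll_mix, hf_img)
```

With `alpha = 0`, the poisoned image reduces exactly to the clean image (the Haar pair above is perfectly invertible), and small `alpha` values keep the injected trigger content visually subtle while still correlating with the target label during training.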

Authors (3)
  1. Nikolaus Dräger (2 papers)
  2. Yonghao Xu (18 papers)
  3. Pedram Ghamisi (59 papers)
Citations (12)
