Susceptibility of Continual Learning Against Adversarial Attacks (2207.05225v5)
Abstract: Recent continual learning approaches have primarily focused on mitigating catastrophic forgetting. Nevertheless, two critical areas remain relatively unexplored: 1) evaluating the robustness of proposed methods and 2) ensuring the security of learned tasks. This paper investigates the susceptibility of continually learned tasks, both current and previously acquired, to adversarial attacks. Specifically, we observe that any class belonging to any task can be easily targeted and misclassified as the desired target class of any other task. This vulnerability of learned tasks to adversarial attacks raises profound concerns regarding data integrity and privacy. To assess robustness, we consider all three continual learning scenarios, i.e., task-incremental, domain-incremental, and class-incremental learning. Within these scenarios, we evaluate three regularization-based methods, three replay-based approaches, and one hybrid technique that combines replay and exemplar approaches. We empirically demonstrate that in every continual learning setting, any class, whether belonging to the current or a previously learned task, is susceptible to misclassification. Our observations identify potential limitations of continual learning approaches against adversarial attacks and suggest that current continual learning algorithms may not be suitable for deployment in real-world settings.
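To make the attack setting concrete, below is a minimal sketch of a targeted adversarial attack of the kind the abstract describes: perturbing an input from one learned task so that a continually trained classifier assigns it to a chosen target class, possibly from a different task. This is not the paper's exact procedure; it assumes a generic PyTorch classifier `model` covering all classes seen so far and uses a standard targeted L-infinity PGD attack (in the style of Madry et al., one of the attacks referenced above). The function name and hyperparameters (`eps`, `alpha`, `steps`) are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target_class, eps=8/255, alpha=2/255, steps=20):
    """Craft a targeted L-inf PGD perturbation pushing x toward target_class.

    model        -- any torch.nn.Module classifier over all classes learned so far
    x            -- input batch in [0, 1], shape (N, C, H, W)
    target_class -- desired (incorrect) label, e.g. a class from another task
    """
    model.eval()
    x_adv = x.clone().detach()
    target = torch.full((x.size(0),), target_class,
                        dtype=torch.long, device=x.device)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted attack: step *down* the loss w.r.t. the chosen target label.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range
        # (tensor min/max for clamp requires a reasonably recent PyTorch).
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

In the setting studied here, `x` would be drawn from any current or previously learned task and `target_class` from any other task; a high targeted success rate across the regularization-based, replay-based, and hybrid methods is what indicates the vulnerability the abstract reports. In practice, a benchmarking library such as Foolbox (also cited above) can be used instead of a hand-rolled attack.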