Resilience and Security of Deep Neural Networks Against Intentional and Unintentional Perturbations: Survey and Research Challenges (2408.00193v2)

Published 31 Jul 2024 in cs.CR and cs.AI

Abstract: In order to deploy deep neural networks (DNNs) in high-stakes scenarios, it is imperative that DNNs provide inference robust to external perturbations - both intentional and unintentional. Although the resilience of DNNs to intentional and unintentional perturbations has been widely investigated, a unified vision of these inherently intertwined problem domains is still missing. In this work, we fill this gap by providing a survey of the state of the art and highlighting the similarities of the proposed approaches. We also analyze the research challenges that need to be addressed to deploy resilient and secure DNNs. As there has not been any such survey connecting the resilience of DNNs to intentional and unintentional perturbations, we believe this work can help advance the frontier in both domains by enabling the exchange of ideas between the two communities.
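
To make the abstract's central distinction concrete, here is a minimal sketch (not from the paper; the toy model, dummy data, and the epsilon budget are illustrative assumptions) contrasting an intentional perturbation, a one-step FGSM attack in the style of Goodfellow et al., with an unintentional one, a random Gaussian corruption of comparable scale:

```python
# Minimal sketch, assuming a toy PyTorch classifier and dummy data.
# Intentional perturbation: one FGSM step along the loss gradient sign.
# Unintentional perturbation: random Gaussian noise of similar magnitude.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier; any differentiable model works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28)   # dummy input image in [0, 1]
y = torch.tensor([3])          # dummy ground-truth label
eps = 0.03                     # illustrative perturbation budget

# Intentional: craft the perturbation to maximize the loss (FGSM).
x_adv = x.clone().detach().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Unintentional: corruption with no knowledge of the model.
x_noisy = (x + eps * torch.randn_like(x)).clamp(0, 1)

with torch.no_grad():
    print("clean prediction:", model(x).argmax().item())
    print("FGSM prediction: ", model(x_adv).argmax().item())
    print("noisy prediction:", model(x_noisy).argmax().item())
```

The point of the contrast: both inputs differ from the clean one by a perturbation of similar size, but the adversarial one is optimized against the model while the noisy one is model-agnostic, which is why the survey treats their defenses as intertwined rather than separate problems.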
