Adversarially Robust Feature Learning for Breast Cancer Diagnosis
Abstract: Adversarial data can cause deep learning applications to malfunction. It is therefore essential to develop deep learning models that are robust to adversarial data while remaining accurate on standard, clean data. In this study, we propose a novel adversarially robust feature learning (ARFL) method for a real-world application of breast cancer diagnosis. ARFL facilitates adversarial training using both standard data and adversarial data, where a feature correlation measure is incorporated as an objective function to encourage the learning of robust features and suppress spurious ones. To demonstrate the effects of ARFL on breast cancer diagnosis, we built and evaluated diagnosis models using two independent, clinically collected breast imaging datasets comprising a total of 9,548 mammogram images. Extensive experiments show that our method outperforms several state-of-the-art methods and can enable safer breast cancer diagnosis against adversarial attacks in clinical settings.
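The objective described above — adversarial training on both standard and adversarial data, regularized by a feature correlation measure — can be sketched as follows. This is a minimal, illustrative reconstruction, not the authors' implementation: the function names, the use of Pearson correlation, and the weighting `lam` are assumptions made for the example.

```python
# Hypothetical sketch of an ARFL-style training objective. The combination of
# standard loss, adversarial loss, and a feature-correlation reward is inferred
# from the abstract; the exact correlation measure and weights are illustrative.
import math

def pearson_correlation(u, v):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(u)
    mu_u = sum(u) / n
    mu_v = sum(v) / n
    cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v))
    var_u = sum((a - mu_u) ** 2 for a in u)
    var_v = sum((b - mu_v) ** 2 for b in v)
    return cov / math.sqrt(var_u * var_v)

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[label])

def arfl_loss(clean_probs, adv_probs, label, clean_feats, adv_feats, lam=0.1):
    """Standard loss + adversarial loss - lam * feature correlation.

    Rewarding correlation between features extracted from the clean and
    adversarial versions of the same image pushes the network toward
    features that survive the perturbation (robust features) and away
    from spurious ones.
    """
    std_loss = cross_entropy(clean_probs, label)
    adv_loss = cross_entropy(adv_probs, label)
    corr = pearson_correlation(clean_feats, adv_feats)
    return std_loss + adv_loss - lam * corr
```

Because the correlation term enters the loss with a negative sign, minimizing the total loss jointly minimizes both classification errors and maximizes the agreement between clean and adversarial feature representations.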