Precise and Generalized Robustness Certification for Neural Networks (2306.06747v1)
Abstract: The objective of neural network (NN) robustness certification is to determine whether a NN changes its predictions when mutations are made to its inputs. While most certification research studies pixel-level perturbations or a small set of geometric and blurring operations over images, this paper proposes a novel framework, GCERT, which certifies NN robustness under a precise and unified formulation of diverse semantic-level image mutations. We uniformly formulate a comprehensive set of semantic-level image mutations as directions in the latent space of generative models. We identify two key properties, independence and continuity, that convert the latent space into a precise and analysis-friendly input-space representation for certification. GCERT can be smoothly integrated with de facto complete, incomplete, and quantitative certification frameworks. With its precise input-space representation, GCERT enables, for the first time, complete NN robustness certification at moderate cost under diverse semantic-level input mutations such as weather filters, style transfer, and perceptual changes (e.g., opening/closing eyes). We show that GCERT certifies NN robustness under common and security-sensitive scenarios such as autonomous driving.
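The abstract's core idea, modeling a semantic mutation as a direction in a generative model's latent space, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: a linear stand-in "generator" `G`, a linear classifier `W`, and a random direction `d` replace GCERT's actual generative models, networks, and learned mutation directions, and the finite sampling loop is only a probe, not the complete certification over the whole input region that the paper performs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not GCERT's actual models): a linear
# "generator" G mapping latent codes to images, and a linear classifier
# W over the generated images.
G = rng.normal(size=(64, 8))    # decoder: 8-d latent code -> 64-d "image"
W = rng.normal(size=(10, 64))   # classifier: 64-d image -> 10 logits

def predict(z):
    """Classifier prediction on the image generated from latent code z."""
    return int(np.argmax(W @ (G @ z)))

# A semantic mutation is modeled as a direction d in latent space:
# moving the seed's latent code z along d (e.g., "add fog") yields
# the mutated inputs {G(z + t*d) : t in [0, eps]}.
z = rng.normal(size=8)          # latent code of the seed input
d = rng.normal(size=8)
d /= np.linalg.norm(d)          # unit-norm mutation direction

# Sampled (non-exhaustive) robustness probe over the 1-D mutation
# segment. Real certification reasons soundly about the entire segment,
# not finitely many samples.
eps = 0.5
base = predict(z)
robust = all(predict(z + t * d) == base
             for t in np.linspace(0.0, eps, 101))
print("prediction unchanged along mutation direction:", robust)
```

The independence and continuity properties mentioned in the abstract are what make such latent-space segments a well-behaved input region: each direction perturbs one semantic attribute in isolation, and small latent moves produce small image changes, so standard complete, incomplete, or quantitative certifiers can analyze the region directly.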
- Artifact. https://github.com/Yuanyuan-Yuan/GCert.
- Published at: USENIX Security, 2023.