Towards Verifying Robustness of Neural Networks Against Semantic Perturbations (1912.09533v2)

Published 19 Dec 2019 in cs.LG and stat.ML

Abstract: Verifying the robustness of neural networks under a specified threat model is a fundamental yet challenging task. While current verification methods mainly focus on the $\ell_p$-norm threat model on input instances, robustness verification against semantic adversarial attacks that induce large $\ell_p$-norm perturbations, such as color shifting and lighting adjustment, is beyond their capacity. To bridge this gap, we propose \textit{Semantify-NN}, a model-agnostic and generic robustness verification approach against semantic perturbations for neural networks. By simply inserting our proposed \textit{semantic perturbation layers} (SP-layers) in front of the input layer of any given model, \textit{Semantify-NN} remains model-agnostic, and any $\ell_p$-norm-based verification tool can be used to verify the model's robustness against semantic perturbations. We illustrate the principles of designing SP-layers and provide examples of semantic perturbations to image classification in the space of hue, saturation, lightness, brightness, contrast, and rotation, respectively. In addition, an efficient refinement technique is proposed to further improve the semantic certificate significantly. Experiments on various network architectures and different datasets demonstrate the superior verification performance of \textit{Semantify-NN} over $\ell_p$-norm-based verification frameworks that naively convert semantic perturbations to $\ell_p$-norm perturbations. The results show that \textit{Semantify-NN} can support robustness verification against a wide range of semantic perturbations. Code is available at https://github.com/JeetMo/Semantify-NN
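
To make the SP-layer idea concrete, the sketch below is a reader's illustration in PyTorch, not the authors' implementation (see the linked repository for the official code). It prepends a brightness SP-layer to a classifier: the composed network takes a scalar brightness shift as its input, so bounding that scalar with any off-the-shelf $\ell_\infty$ verifier corresponds to certifying robustness against brightness changes up to that bound. The names BrightnessSPLayer and wrap_with_sp_layer are assumptions made for this example.

```python
# Minimal sketch (assumed names, not the paper's code): a brightness
# "semantic perturbation layer" (SP-layer) prepended to a classifier.
# The composed network takes the scalar semantic parameter delta as input,
# so an l_inf verifier bounding |delta| <= eps certifies robustness to
# brightness shifts of magnitude up to eps.
import torch
import torch.nn as nn


class BrightnessSPLayer(nn.Module):
    """Maps a scalar brightness shift delta to the perturbed image x + delta."""

    def __init__(self, x_clean: torch.Tensor):
        super().__init__()
        # Fix the clean image as a constant (buffer) inside the layer.
        self.register_buffer("x_clean", x_clean)

    def forward(self, delta: torch.Tensor) -> torch.Tensor:
        # delta has shape (batch, 1); broadcast the shift over all pixels.
        return self.x_clean + delta.view(-1, 1, 1, 1)


def wrap_with_sp_layer(classifier: nn.Module, x_clean: torch.Tensor) -> nn.Module:
    """Compose the SP-layer with the classifier; verification is then done w.r.t. delta."""
    return nn.Sequential(BrightnessSPLayer(x_clean), classifier)


if __name__ == "__main__":
    # Toy classifier on 1x8x8 images, only to exercise the wrapper.
    clf = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
    x = torch.rand(1, 1, 8, 8)
    semantic_net = wrap_with_sp_layer(clf, x)
    delta = torch.zeros(1, 1)      # the semantic parameter
    logits = semantic_net(delta)   # equals clf(x) at delta = 0
    print(logits.shape)            # torch.Size([1, 10])
```

The appeal of this formulation is that a brightness shift can produce a large $\ell_p$-norm change in pixel space yet remains one-dimensional in the semantic parameter. Per the abstract, nonlinear perturbations such as hue, saturation, contrast, or rotation require correspondingly designed SP-layers, and the proposed refinement technique further tightens the resulting semantic certificates.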

Citations (18)