Provable Adversarial Robustness for Group Equivariant Tasks: Graphs, Point Clouds, Molecules, and More (2312.02708v2)

Published 5 Dec 2023 in cs.LG, cs.CR, and stat.ML

Abstract: A machine learning model is traditionally considered robust if its prediction remains (almost) constant under input perturbations with small norm. However, real-world tasks like molecular property prediction or point cloud segmentation have inherent equivariances, such as rotation or permutation equivariance. In such tasks, even perturbations with large norm do not necessarily change an input's semantic content. Furthermore, there are perturbations for which a model's prediction explicitly needs to change. For the first time, we propose a sound notion of adversarial robustness that accounts for task equivariance. We then demonstrate that provable robustness can be achieved by (1) choosing a model that matches the task's equivariances (2) certifying traditional adversarial robustness. Certification methods are, however, unavailable for many models, such as those with continuous equivariances. We close this gap by developing the framework of equivariance-preserving randomized smoothing, which enables architecture-agnostic certification. We additionally derive the first architecture-specific graph edit distance certificates, i.e. sound robustness guarantees for isomorphism equivariant tasks like node classification. Overall, a sound notion of robustness is an important prerequisite for future work at the intersection of robust and geometric machine learning.

Citations (4)

Summary

  • The paper introduces a novel definition of adversarial robustness for group equivariant tasks using action-induced distances.
  • It proposes equivariance-preserving randomized smoothing to certify model predictions under symmetrical transformations.
  • Experimental results demonstrate enhanced robustness in graphs, point clouds, and molecular data for real-world applications.

The paper presents a framework for defining and achieving adversarial robustness in machine learning tasks characterized by inherent symmetries, that is, robustness of models under symmetry transformations of the input such as rotations or permutations. Traditional robustness definitions, which focus on perturbations of small norm, fail to account for such tasks: even large-norm perturbations, like rotations of a point cloud, may leave the underlying semantic content and hence the natural label of the data unchanged.

To redefine adversarial robustness for these group equivariant tasks, the authors introduce an action-induced distance that incorporates the task's symmetries: inputs are compared after optimal alignment under the group action, yielding a group-invariant measure of similarity. Robustness is then required with respect to this distance, and the model's prediction must transform consistently with (i.e., equivariantly under) the group actions that preserve the semantic content of the input.
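
As a concrete illustration (our notation, following the standard construction of group-invariant metrics rather than the paper's exact definitions), an action-induced distance between inputs x and x' under a group G acting on the input space can be written as

    d_G(x, x') = \min_{g \in G} d(x, g \cdot x')

so that, for example, a point cloud and an arbitrarily rotated copy of it are at distance zero, even though their norm-based distance may be large.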

For certification, the paper introduces equivariance-preserving randomized smoothing, in which a base model is evaluated on randomly perturbed copies of the input and the smoothed prediction is obtained by aggregating these outputs. The noise distribution and the aggregation are chosen so that the smoothed model retains the base model's group equivariances. This enables architecture-agnostic certification that accounts both for the task's equivariances and for the small, unstructured perturbations often present in real-world data.
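
The following is a minimal sketch of the idea, not the authors' implementation: assuming the base classifier is rotation invariant and the smoothing noise is isotropic Gaussian, the majority-vote smoothed classifier inherits that invariance, so standard smoothing certificates remain applicable. All names and parameters below are illustrative.

    import numpy as np

    def smoothed_predict(base_classifier, points, num_samples=1000, sigma=0.1,
                         num_classes=10, rng=None):
        """Majority-vote randomized smoothing for a point cloud classifier.

        If base_classifier is rotation invariant and the noise is isotropic
        Gaussian, the smoothed classifier is rotation invariant as well,
        which is the key property behind equivariance-preserving smoothing.
        (Illustrative sketch; the statistical certification step is omitted.)
        """
        rng = np.random.default_rng() if rng is None else rng
        counts = np.zeros(num_classes, dtype=int)
        for _ in range(num_samples):
            # Isotropic Gaussian noise is invariant under rotations of the point cloud.
            noisy = points + sigma * rng.standard_normal(points.shape)
            counts[base_classifier(noisy)] += 1  # base_classifier returns a class index
        return int(np.argmax(counts))

The same pattern applies to other equivariances, for instance permutation equivariance on graphs, as long as the noise distribution is invariant under the relevant group action.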

The paper highlights the importance of this refined notion of robustness for future work in robust and geometric machine learning, where data is often structured or lives in non-Euclidean domains. The authors extend several existing adversarial robustness guarantees, which traditionally assume uniform perturbation costs, to non-uniform costs that assign different weights to insertion and deletion operations in graph-structured data, yielding more nuanced and realistic robustness measures. Their experimental results support the claim that attaining provable robustness for group equivariant tasks is vital for a broad range of real-world applications, including tasks involving graphs, point clouds, and molecular structures. Overall, the work emphasizes the critical intersection between robustness and spatial symmetries in machine learning models and tasks.
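
As an illustration of non-uniform costs (our notation, not necessarily the paper's), a perturbation that inserts the edge set E_ins and deletes the edge set E_del can be assigned the cost

    c(E_ins, E_del) = c_ins \cdot |E_ins| + c_del \cdot |E_del|, with c_ins \neq c_del in general,

and a certificate then guarantees that the prediction remains unchanged (up to the required equivariant transformation) for all perturbed graphs whose cost stays below a certified budget.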