Equivariant Frames and the Impossibility of Continuous Canonicalization (2402.16077v2)
Abstract: Canonicalization provides an architecture-agnostic method for enforcing equivariance, with generalizations such as frame-averaging recently gaining prominence as a lightweight and flexible alternative to equivariant architectures. Recent works have found an empirical benefit to using probabilistic frames instead, which learn weighted distributions over group elements. In this work, we provide strong theoretical justification for this phenomenon: for commonly-used groups, there is no efficiently computable choice of frame that preserves continuity of the function being averaged. In other words, unweighted frame-averaging can turn a smooth, non-symmetric function into a discontinuous, symmetric function. To address this fundamental robustness problem, we formally define and construct \emph{weighted} frames, which provably preserve continuity, and demonstrate their utility by constructing efficient and continuous weighted frames for the actions of $SO(2)$, $SO(3)$, and $S_n$ on point clouds.
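The contrast the abstract draws can be illustrated on a toy group. The sketch below (an illustrative assumption, not the paper's construction, which treats $SO(2)$, $SO(3)$, and $S_n$ on point clouds) uses the two-element group $\mathbb{Z}_2$ acting on $\mathbb{R}^2$ by $p \mapsto -p$. An unweighted frame picks the group element making the first coordinate nonnegative; the resulting invariant function jumps across the line $x = 0$. A weighted frame instead averages both group elements with smooth, equivariant weights, which restores continuity while keeping exact invariance. The weight temperature `tau` and the test function `f` are arbitrary choices for the demonstration.

```python
import math

def f(p):
    # an arbitrary smooth, non-symmetric test function on R^2
    x, y = p
    return math.exp(-((x - 1.0) ** 2 + (y - 0.5) ** 2))

def canonicalize(p):
    # unweighted frame for the Z_2 action p -> -p:
    # choose the group element that makes the first coordinate nonnegative
    x, y = p
    return (x, y) if x >= 0 else (-x, -y)

def f_unweighted(p):
    # invariant by construction, but discontinuous across x = 0 (for y != 0)
    return f(canonicalize(p))

def f_weighted(p, tau=0.1):
    # weighted frame: smoothly weight both group elements near x = 0;
    # the sigmoid satisfies w(-p) = 1 - w(p), so the average is exactly invariant
    x, y = p
    w = 1.0 / (1.0 + math.exp(-x / tau))
    return w * f((x, y)) + (1.0 - w) * f((-x, -y))

# probe the behaviour across the bad set x = 0
eps = 1e-6
jump_unweighted = abs(f_unweighted((eps, 1.0)) - f_unweighted((-eps, 1.0)))
jump_weighted = abs(f_weighted((eps, 1.0)) - f_weighted((-eps, 1.0)))
print(jump_unweighted)  # order 0.25: the unweighted average jumps
print(jump_weighted)    # essentially 0: the weighted average stays continuous
```

Both averages are exactly $\mathbb{Z}_2$-invariant (swapping $p$ for $-p$ permutes the two terms), but only the weighted one is continuous; this mirrors, in miniature, the paper's claim that continuity forces weights on the frame.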