Theoretical Aspects of Group Equivariant Neural Networks

Published 10 Apr 2020 in cs.LG and stat.ML (arXiv:2004.05154v2)

Abstract: Group equivariant neural networks have been explored in the past few years and are interesting from theoretical and practical standpoints. They leverage concepts from group representation theory, non-commutative harmonic analysis and differential geometry that do not often appear in machine learning. In practice, they have been shown to reduce sample and model complexity, notably in challenging tasks where input transformations such as arbitrary rotations are present. We begin this work with an exposition of group representation theory and the machinery necessary to define and evaluate integrals and convolutions on groups. Then, we show applications to recent SO(3) and SE(3) equivariant networks, namely the Spherical CNNs, Clebsch-Gordan Networks, and 3D Steerable CNNs. We proceed to discuss two recent theoretical results. The first, by Kondor and Trivedi (ICML'18), shows that a neural network is group equivariant if and only if it has a convolutional structure. The second, by Cohen et al. (NeurIPS'19), generalizes the first to a larger class of networks, with feature maps as fields on homogeneous spaces.
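For orientation, the two definitions at the heart of these results can be written compactly. This is a generic sketch in standard notation rather than the paper's own: G denotes a compact group acting on inputs and outputs through representations T_g and T'_g, Phi is a network layer, f is a feature map on G, psi is a filter, and dh is the Haar measure.

A map $\Phi$ is $G$-equivariant when
\[ \Phi(T_g x) = T'_g \, \Phi(x) \qquad \text{for all } g \in G \text{ and all inputs } x. \]
The group convolution of $f$ with $\psi$ is
\[ (f \star \psi)(g) = \int_G f(h)\, \psi(g^{-1} h)\, \mathrm{d}h, \]
which is equivariant to left translations of $f$. In this notation, the Kondor-Trivedi result cited above states that, for compact groups and feature maps on homogeneous spaces, a layer is equivariant exactly when it has this convolutional form.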

Citations (37)

Authors (1)
