
SE(2) Group Equivariant Theory

Updated 26 November 2025
  • SE(2) group equivariant theory is a framework that defines functions and operators to commute with planar translations and rotations.
  • It leverages group convolutions, kernel constraints, and harmonic analysis to construct linear and nonlinear equivariant neural network layers.
  • This theory underpins efficient models in image recognition, PDE surrogate modeling, and geometric graph networks, with extensions to higher-dimensional data.

The Special Euclidean group in two dimensions, SE(2), is the group of planar rigid motions: translations and rotations. SE(2) group equivariant theory is the unifying mathematical and algorithmic framework that addresses how functions, operators, and neural network layers can be constructed to commute with the action of SE(2), guaranteeing equivariance of learned mappings to translations and rotations in the plane. This theory yields canonical conditions and constructions for convolutional and message-passing architectures, characterizes all permitted linear and non-linear equivariant layers, and enables efficient and expressive models for invariant and equivariant learning on 2D data, vector fields, and geometric graphs.

1. Structure of SE(2) and Its Actions on Homogeneous Spaces

SE(2) is the semidirect product SE(2) = ℝ² ⋉ SO(2), with group law

$(t_1, R_{\theta_1})(t_2, R_{\theta_2}) = (t_1 + R_{\theta_1} t_2,\; R_{\theta_1 + \theta_2})$

where $t \in \mathbb{R}^2$ and $R_\theta \in SO(2)$. SE(2) acts transitively on $\mathbb{R}^2$: $(t, R) \cdot x = Rx + t$. The homogeneous space $\mathbb{R}^2$ is realized as SE(2)/SO(2), with SO(2) the stabilizer of the origin. Feature fields are specified by picking a representation $\rho$ of SO(2) and considering functions $f : \mathbb{R}^2 \to V$ (“$\rho$-fields”), with the induced SE(2) action

$(\pi(g)f)(x) = \rho(R)\, f(R^{-1}(x - t))$

where $g = (t, R)$ (Cohen et al., 2018, Gerken et al., 2021).
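As a concrete check, the group law and the action on the plane can be realized with 3×3 homogeneous matrices; the helper names below (`rot`, `se2`) are illustrative, and the asserts verify the semidirect-product formula numerically:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix R_theta in SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def se2(t, theta):
    """Homogeneous 3x3 matrix representing the group element (t, R_theta)."""
    g = np.eye(3)
    g[:2, :2] = rot(theta)
    g[:2, 2] = t
    return g

rng = np.random.default_rng(0)
t1, t2 = rng.normal(size=2), rng.normal(size=2)
th1, th2 = 0.7, -1.3

# Matrix product realizes the semidirect-product law:
# (t1, R1)(t2, R2) = (t1 + R1 t2, R_{th1 + th2})
prod = se2(t1, th1) @ se2(t2, th2)
expected = se2(t1 + rot(th1) @ t2, th1 + th2)
assert np.allclose(prod, expected)

# Transitive action on R^2: (t, R) . x = R x + t
x = rng.normal(size=2)
gx = (se2(t1, th1) @ np.append(x, 1.0))[:2]
assert np.allclose(gx, rot(th1) @ x + t1)
```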

This action extends to functions on SE(2) or lifted position-orientation spaces, which permit additional modeling capacity, especially for directional features (Bekkers et al., 2023).

2. Equivariant Linear Operators: Homogeneous Space Convolution and Kernel Constraints

A linear operator $L$ mapping between $\rho_1$-fields and $\rho_2$-fields is SE(2)-equivariant iff for all $g \in SE(2)$,

$L[\pi_1(g) f] = \pi_2(g)[L f]$

(Cohen et al., 2018, Gerken et al., 2021).

Every SE(2)-equivariant linear operator can be written as a group convolution (Mackey's theory). For functions $f : SE(2) \to V_1$, this takes the form

$(L f)(g) = \int_{SE(2)} \kappa(g^{-1} g')\, f(g')\, dg'$

with a kernel $\kappa : SE(2) \to \mathrm{Hom}(V_1, V_2)$ (Cohen et al., 2018). When mapped down to $\mathbb{R}^2$, this is known as the “$\rho_1$-twisted correlation” or generalized steerable convolution (Gerken et al., 2021).
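The group convolution can be illustrated on the discrete rotation subgroup C₄ ⊂ SO(2): correlating a periodic image with the four rotated copies of a filter lifts it to a function on the group, and rotating the input by 90° rotates every orientation channel spatially while cyclically permuting the channels. A minimal numpy sketch (the periodic, full-size-kernel convolution is an illustrative simplification, not the construction of the cited papers):

```python
import numpy as np

N = 9                          # odd size, so np.rot90 is an exact rotation about the center
c = np.arange(N) - N // 2      # centered offsets -4..4

def conv(f, K):
    """Periodic 2D convolution with a kernel indexed on centered offsets."""
    out = np.zeros_like(f)
    for a in range(N):
        for b in range(N):
            out += K[a, b] * np.roll(f, shift=(c[a], c[b]), axis=(0, 1))
    return out

def lift(f, K):
    """Lifting group convolution: one output channel per rotation in C4."""
    return [conv(f, np.rot90(K, r)) for r in range(4)]

rng = np.random.default_rng(1)
f = rng.normal(size=(N, N))
K = rng.normal(size=(N, N))

# Equivariance: a 90-degree rotation of the input rotates each channel
# spatially and cyclically shifts the orientation axis.
out, out_rot = lift(f, K), lift(np.rot90(f), K)
for r in range(4):
    assert np.allclose(out_rot[r], np.rot90(out[(r - 1) % 4]))
```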

The kernel constraint is

$\kappa(hg) = \rho_2(h)\,\kappa(g), \qquad \kappa(gh) = \kappa(g)\,\rho_1(h)$

for all $h \in SO(2)$, $g \in SE(2)$. This “bi-equivariance” ensures the convolution outputs are of the correct field type (Cohen et al., 2018, Weiler et al., 2019).

3. Steerable Kernels: Fourier and Harmonic Basis

The solution space for equivariant kernels is parameterized analytically using representation theory and harmonic analysis. Decomposition into irreducibles yields:

  • For $\rho_1, \rho_2$ with angular frequencies $m_1, m_2$, only the Fourier mode $m_2 - m_1$ appears in the kernel (see, e.g., $K(t)\, e^{i(m_2 - m_1)\phi_t}\, J_{|m_2 - m_1|}(\|t\|)$ in polar coordinates).
  • For scalar fields, the kernel is an isotropic radial function.
  • For vector or higher-order tensor fields, the kernel must satisfy

$K(Rt) = R\, K(t)\, R^{-1}$

which yields steerable vector or tensor filter bases (Cohen et al., 2018, Weiler et al., 2019).

In the harmonic basis, a general equivariant kernel is constructed as a linear combination of radial profiles and angular harmonics, coupled by basis matrices determined by input/output irrep labels (Weiler et al., 2019).
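As a sanity check, one admissible vector-field kernel, a radial profile times the projector $t t^T / \|t\|^2$, satisfies the constraint $K(Rt) = R K(t) R^{-1}$ exactly; the Gaussian profile below is an arbitrary illustrative choice:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix in SO(2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def K(t, sigma=1.0):
    """An admissible steerable kernel for vector fields: a Gaussian radial
    profile times the projector t t^T / |t|^2."""
    r2 = t @ t
    return np.exp(-r2 / (2 * sigma**2)) * np.outer(t, t) / r2

rng = np.random.default_rng(2)
t = rng.normal(size=2)
# Steerability: K(R t) = R K(t) R^{-1} for every rotation R.
for theta in rng.uniform(0, 2 * np.pi, size=5):
    R = rot(theta)
    assert np.allclose(K(R @ t), R @ K(t) @ R.T)
```

The projector depends on $t$ only through its direction, so rotating $t$ conjugates the matrix by the same rotation, which is exactly the steerability condition.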

4. Architectures: SE(2)-Equivariant Neural and Graph Networks

Convolutional Neural Networks

All equivariant network constructions (including convolutional, steerable, regular, vector field, and tensor field CNNs) fall under the above group-convolution and kernel-constraint paradigm. Each choice of representation for feature fields gives rise to different CNN architectures, encompassing prior proposals and their analytical bases (Cohen et al., 2018, Weiler et al., 2019).

Graph Neural Networks

For non-Euclidean domains (e.g., point clouds, graphs), equivariant message passing can be built by aligning node or edge features into canonical frames (principal axis alignment), applying unconstrained neural modules (MLPs or attention), and rotating the outputs back. SE(2) equivariance is preserved by this rotation-in, rotation-out scheme, allowing arbitrary nonlinearities without explicit kernel parameter tying (Bånkestad et al., 30 May 2024, Bekkers et al., 2023).
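A minimal sketch of this rotate-in, rotate-out scheme, using the direction to the other nodes' centroid as a stand-in canonical frame (the cited works use principal-axis alignment; all names here are hypothetical):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def frame_angle(p, i):
    """Canonical frame for node i: direction to the centroid of the
    other nodes (a simple stand-in for principal-axis alignment)."""
    d = np.delete(p, i, axis=0).mean(axis=0) - p[i]
    return np.arctan2(d[1], d[0])

def mlp(v):
    """Arbitrary unconstrained nonlinearity on aligned features."""
    return np.tanh(3.0 * v) + v**2 * np.array([0.5, -0.2])

def layer(p, feats):
    """Rotate features into the frame, apply the MLP, rotate back."""
    out = np.empty_like(feats)
    for i in range(len(p)):
        R = rot(frame_angle(p, i))
        out[i] = R @ mlp(R.T @ feats[i])
    return out

rng = np.random.default_rng(3)
p = rng.normal(size=(5, 2))    # node positions
v = rng.normal(size=(5, 2))    # per-node vector features
R = rot(1.1)

# A global rotation of positions and features commutes with the layer,
# even though mlp itself is completely unconstrained.
assert np.allclose(layer(p @ R.T, v @ R.T), layer(p, v) @ R.T)
```

Under a global rotation the frame angle shifts by the same rotation, so the aligned input to the MLP is invariant and the rotated-back output is equivariant.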

Attribute-based SE(2) message passing uses edge attributes that uniquely identify SE(2) orbits of point pairs, such as $(R_{\phi_i}^{T}(p_j - p_i),\; \phi_j - \phi_i)$ in orientation-lifted spaces. Conditioning message functions on these attributes, together with weight sharing, yields universal equivariant approximators for geometric learning tasks (Bekkers et al., 2023).
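The invariance of such attributes is easy to verify: a global motion $g = (t, R_\theta)$ acts as $p \mapsto R_\theta p + t$, $\phi \mapsto \phi + \theta$, and the attribute is unchanged. A short check (helper names are illustrative):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def edge_attr(p_i, phi_i, p_j, phi_j):
    """SE(2)-orbit identifier for an ordered pair of oriented points:
    (R_{phi_i}^T (p_j - p_i), phi_j - phi_i)."""
    return rot(phi_i).T @ (p_j - p_i), phi_j - phi_i

rng = np.random.default_rng(4)
p_i, p_j = rng.normal(size=2), rng.normal(size=2)
phi_i, phi_j = 0.4, 2.0
t, theta = rng.normal(size=2), 1.3

a0, da0 = edge_attr(p_i, phi_i, p_j, phi_j)
# Apply a global motion g = (t, R_theta): p -> R p + t, phi -> phi + theta.
a1, da1 = edge_attr(rot(theta) @ p_i + t, phi_i + theta,
                    rot(theta) @ p_j + t, phi_j + theta)
assert np.allclose(a0, a1) and np.isclose(da0, da1)
```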

5. Alternative Constructions: Differential Invariants and CR Structures

An alternative approach to SE(2)-equivariant operator design uses differential invariants constructed by the method of moving frames. For scalar fields $f : \mathbb{R}^2 \to \mathbb{R}$, the algebra of differential invariants is generated by the gradient norm, normalized second derivatives, and invariant differential operators, forming the SE(2) Differential Invariants Network (SE2DINNet). Each SE2DIN block computes local Gaussian-derivative filters, evaluates invariant polynomials, and applies pointwise nonlinearities, resulting in highly parameter-efficient convolutional architectures whose outputs are provably SE(2)-equivariant (Sangalli et al., 2022).
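The simplest generator, the squared gradient norm, can be checked numerically; here `np.gradient`'s finite differences stand in for the Gaussian-derivative filters of SE2DINNet, and a 90° rotation is exact on the grid:

```python
import numpy as np

def grad_norm_sq(f):
    """First SE(2) differential invariant: |grad f|^2, computed with
    finite differences as a stand-in for Gaussian-derivative filters."""
    gx, gy = np.gradient(f)
    return gx**2 + gy**2

f = np.random.default_rng(5).normal(size=(16, 16))
# Under a 90-degree rotation the gradient components permute and flip
# sign, but their squared norm is unchanged pointwise, so the invariant
# field simply rotates with the image.
assert np.allclose(grad_norm_sq(np.rot90(f)), np.rot90(grad_norm_sq(f)))
```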

In the context of continuous wavelet transforms on SE(2), the relevant feature space is a reproducing kernel Hilbert space (RKHS) of CR functions (annihilated by a specific left-invariant Cauchy–Riemann operator). These furnish canonical L²-sections preserved by SE(2) action and used in models such as orientation score transforms, with direct connections to the Bargmann–Fock space and uncertainty minimization (Barbieri et al., 2013).

6. Group-Theoretic and Representation-Theoretic Foundations

The unitary irreducible representations (UIRs) of SE(2) are infinite-dimensional and indexed by spectral parameters: a frequency $\Omega > 0$ and an angular momentum $n \in \mathbb{Z}$. These representations act on $L^2(S^1)$ via

$[\Pi^\Omega(q, \theta)\, u](\varphi) = e^{-i \Omega (q_1 \cos\varphi + q_2 \sin\varphi)}\, u(\varphi - \theta)$

Spectral analysis via the group Fourier transform diagonalizes convolution, giving explicit operator-valued basis decompositions and convolution theorems. Plancherel decompositions and spectral truncation strategies underlie efficient computation and parametrization of SE(2)-equivariant operators (Gerken et al., 2021, Barbieri et al., 2013, Esteves, 2020).
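The UIR formula can be verified on a discretized circle: sampling $\varphi$ at $M$ uniform points and restricting rotation angles to multiples of $2\pi/M$ makes $u(\varphi - \theta)$ an exact cyclic shift, so the homomorphism property $\Pi^\Omega(g_1)\Pi^\Omega(g_2) = \Pi^\Omega(g_1 g_2)$ holds to floating-point precision (an illustrative discretization, not a full Plancherel computation):

```python
import numpy as np

M, Omega = 64, 1.5
phi = 2 * np.pi * np.arange(M) / M

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def Pi(q, k, u):
    """UIR of SE(2) on sampled L^2(S^1); the rotation angle is
    k * (2*pi/M), so u(phi - theta) is an exact cyclic shift."""
    phase = np.exp(-1j * Omega * (q[0] * np.cos(phi) + q[1] * np.sin(phi)))
    return phase * np.roll(u, k)

rng = np.random.default_rng(6)
u = rng.normal(size=M) + 1j * rng.normal(size=M)
q1, q2 = rng.normal(size=2), rng.normal(size=2)
k1, k2 = 5, 11

# Homomorphism: Pi(g1) Pi(g2) = Pi(g1 g2), with the group law
# g1 g2 = (q1 + R_{theta1} q2, theta1 + theta2).
lhs = Pi(q1, k1, Pi(q2, k2, u))
rhs = Pi(q1 + rot(2 * np.pi * k1 / M) @ q2, k1 + k2, u)
assert np.allclose(lhs, rhs)
```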

The principal-bundle perspective interprets feature fields as sections of the associated vector bundle $SE(2) \times_\rho V \to SE(2)/SO(2)$, and equivariant layers as gauge-equivariant maps between such bundles. The “flat” geometry of $SE(2) \to \mathbb{R}^2$ eliminates the need for connections or parallel transport in standard settings, but the same logic extends to arbitrary homogeneous spaces and compact structure groups (Gerken et al., 2021).

7. Applications and Extensions

SE(2) group equivariant theory supports robust learning in applications where planar translation and rotation symmetries are fundamental:

  • Image recognition and synthesis tasks requiring equivariance or invariance to orientation and position.
  • PDE surrogate modeling for non-grid domains, e.g., in fluid dynamics, where SE(2)-equivariant GNNs provide data-efficient and accurate surrogate solutions (Bånkestad et al., 30 May 2024).
  • Equivariant observer design for second-order kinematic systems, using SE(2) as the state manifold for pose estimation (Ng et al., 2021).
  • Optimal transport and barycenter computation in orientation-lifted image representations, yielding contour- and orientation-preserving flows with sharp geometric properties unavailable in purely translation-equivariant settings (Bon et al., 23 Feb 2024).

Additionally, these constructions generalize to higher-dimensional spaces governed by SE(n) and related motion groups, supporting equivariant deep learning models for 3D point clouds, molecular data, and other geometric structures (Bekkers et al., 2023).


References: (Cohen et al., 2018, Gerken et al., 2021, Weiler et al., 2019, Bekkers et al., 2023, Esteves, 2020, Bånkestad et al., 30 May 2024, Sangalli et al., 2022, Barbieri et al., 2013, Ng et al., 2021, Bon et al., 23 Feb 2024).
