Lie Convolutions: Theory & Applications

Updated 25 December 2025
  • Lie Convolutions are convolution operations defined on Lie groups or algebras that leverage their geometric and algebraic properties to provide equivariant and symmetry-aware processing.
  • They enable almost equivariant neural architectures by approximating the exponential map and controlling error bounds in non-Euclidean settings.
  • Advanced kernel parameterizations, such as B-spline expansions and MLP filters, drive applications in harmonic analysis, quantum theory, and invariant valuations.

A Lie convolution is a general term for a convolution-type operation in which the domain, the kernel, or the integral structure is governed by the geometric and algebraic properties of a Lie group and/or its Lie algebra. This framework encompasses classical group convolutions on Lie groups, Lie algebra-based convolutions, operator convolutions for quantum harmonic analysis, and generalized convolutions in settings such as invariant valuations or harmonic analysis on homogeneous spaces. Lie convolutions fundamentally generalize the standard (Euclidean or translation-invariant) convolution, allowing equivariant or nearly equivariant processing under arbitrary continuous symmetries, both in pure mathematics and in machine learning contexts.

1. Classical Lie Group Convolutions: Definition and Properties

The foundational Lie group convolution for a locally compact Lie group $G$ (with left Haar measure $d\mu_G$) and functions $f, \psi : G \to \mathbb{R}$ or $\mathbb{C}$ is

$$(f \star_G \psi)(g) = \int_G f(h)\, \psi(h^{-1}g)\, d\mu_G(h).$$

This operation is equivariant under left group actions: left translations commute with convolution. This property is central to weight sharing and symmetry in convolutional neural networks generalized to non-Euclidean data domains.

In the representation-theoretic context, the convolution algebra $L^1(G)$ acts on a Hilbert space $H$ via a homomorphism associated to a unitary representation $T$:
$$\rho(a) = \int_G a(g)\, T_g\, d\mu_G(g),$$
where $a \in L^1(G)$ represents a filter. The classical setting, with $H = L^2(G)$ and $T$ the left-regular representation, recovers the standard convolution (Kumar et al., 2023).
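
Both formulas can be checked concretely on a finite group. The sketch below (with illustrative helper names `group_conv` and `T`, not from the cited papers) instantiates the group convolution and the regular-representation operator $\rho(\psi)$ on the cyclic group $\mathbb{Z}_n$, where the two constructions coincide and left-equivariance can be verified numerically:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(n)
psi = rng.standard_normal(n)

def group_conv(f, psi):
    # Direct group convolution on Z_n:
    # (f * psi)(g) = sum_h f(h) psi(h^{-1} g), with h^{-1} g = (g - h) mod n.
    n = len(f)
    return np.array([sum(f[h] * psi[(g - h) % n] for h in range(n))
                     for g in range(n)])

def T(g, n):
    # Left-regular representation: (T_g f)(x) = f(x - g), as a permutation matrix.
    P = np.zeros((n, n))
    for x in range(n):
        P[x, (x - g) % n] = 1.0
    return P

# rho(psi) = sum_g psi(g) T_g is a circulant matrix; acting on f it
# reproduces the group convolution (Z_n is abelian, so the orders agree).
rho = sum(psi[g] * T(g, n) for g in range(n))
assert np.allclose(rho @ f, group_conv(f, psi))

# Left-equivariance: translating f by k, then convolving, equals
# translating the convolution output by k.
k = 3
assert np.allclose(group_conv(T(k, n) @ f, psi),
                   T(k, n) @ group_conv(f, psi))
```

The circulant structure of $\rho(\psi)$ is exactly the weight sharing that group-convolutional layers generalize to non-Euclidean domains.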

The structure of the Lie group (e.g., unimodularity, compactness, connectedness) affects the properties of convolution: for example, uniqueness of the Haar measure, the available representations, and the existence of Plancherel-type theorems for $L^2$-convolutions (Oyadare, 2017, Oyadare, 2017).

2. Lie Algebra Convolutions and Almost Equivariance

For many applications, especially when $G$ is non-compact or the exponential map $\exp: \mathfrak{g} \to G$ is not surjective, integrating over the group itself is problematic. The Lie algebra convolution circumvents these issues by shifting the domain to the vector space structure of the Lie algebra $\mathfrak{g} \cong \mathbb{R}^d$:
$$(f \star_{\mathfrak{g}} \psi)(u) = \int_{\mathfrak{g}} \psi(x)\, f(x \cdot u)\, dx,$$
where $x \cdot u$ denotes a suitable action (e.g., the adjoint or a linearized group action), and $\psi$ is typically parameterized as a neural network (MLP) or a finite feature expansion. A key innovation is to replace the true exponential map by a learned approximation $\Phi: \mathfrak{g} \to G$, which allows the layer to interpolate between exact equivariance (when $\Phi = \exp$ matches exactly) and "almost equivariance," which quantifies permissible deviations from symmetry due to non-ideal data or a non-surjective $\exp$ (McNeela, 2023).

This "almost equivariance" is formalized by error bounds of the form
$$\left\| (f \star_{\mathfrak{g}} \psi)(g \cdot u) - \rho_G(g)\big[(f \star_{\mathfrak{g}} \psi)(u)\big] \right\| \leq \varepsilon,$$
with $\varepsilon$ controlled by the approximation accuracy of $\Phi$ and the model's parameters. This approach is well suited to non-compact groups such as the Euclidean or Poincaré groups, and to data that does not exhibit exact symmetry (McNeela, 2023).
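
A minimal numerical illustration of how $\varepsilon$ is governed by the quality of $\Phi$: on $\mathfrak{so}(2)$, a truncated power series serves as a stand-in for a learned $\Phi$, and the defect against the true exponential map shrinks as the approximation order grows (function names below are illustrative, not from the cited work):

```python
import numpy as np

def so2(theta):
    # Lie algebra element of so(2) for angle theta.
    return np.array([[0.0, -theta], [theta, 0.0]])

def exp_exact(theta):
    # True exponential map: a rotation matrix.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def Phi(A, K):
    # Order-K truncated power series: a stand-in for a learned
    # approximation of the exponential map.
    out, term = np.eye(2), np.eye(2)
    for k in range(1, K + 1):
        term = term @ A / k
        out = out + term
    return out

theta = 0.7
errs = [np.linalg.norm(Phi(so2(theta), K) - exp_exact(theta))
        for K in (2, 4, 8)]

# The equivariance defect epsilon shrinks as Phi approaches exp.
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 1e-6
```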

Comparison of Lie group ($G$) and Lie algebra ($\mathfrak{g}$) convolutions:

| Feature | $G$-convolution | $\mathfrak{g}$-convolution |
| --- | --- | --- |
| Domain | Group $G$ | Vector space $\mathfrak{g} \cong \mathbb{R}^d$ |
| Applicability | Compact $G$, surjective $\exp$ | Any connected matrix Lie group |
| Sampling | Haar measure on $G$ | Lebesgue measure on $\mathfrak{g}$ |
| Well-defined | Fails if $\exp$ not surjective | Always works with a learnable $\Phi$ |

(McNeela, 2023)

3. Advanced Kernel Parameterizations and Representations

Recent work leverages the differential geometry of $G$, expanding convolution kernels in a B-spline or learned basis on the Lie algebra. The approach in (Bekkers, 2019) employs a generic B-spline expansion
$$\psi(g) = \sum_{k \in \mathbb{Z}^n} c_k\, B^{\mathbb{R}^n, n}\!\left(\frac{\log g - k}{s}\right),$$
mapping group elements back to $\mathfrak{g}$ via the logarithm and enabling flexible localization, dilation (atrous), or deformation of kernels.
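
This expansion can be sketched for $G = \mathrm{SO}(2)$, where $\log g$ is simply the rotation angle; `cubic_bspline` and `kernel` below are illustrative helpers under that assumption, not code from the cited paper:

```python
import numpy as np

def cubic_bspline(x):
    # Uniform cubic B-spline basis function (compactly supported on [-2, 2]).
    x = np.abs(x)
    return np.where(x < 1, 2/3 - x**2 + x**3 / 2,
           np.where(x < 2, (2 - x)**3 / 6, 0.0))

def kernel(theta, centers, coeffs, s=0.5):
    # psi(g) = sum_k c_k B((log g - k) / s); for SO(2), log g = theta.
    # The scale s dilates the basis functions (atrous-style widening).
    return sum(c * cubic_bspline((theta - k) / s)
               for k, c in zip(centers, coeffs))

# Grid of basis centers in the Lie algebra so(2), which is just R.
centers = np.linspace(-np.pi, np.pi, 9)
coeffs = np.random.default_rng(1).standard_normal(9)

theta = np.linspace(-np.pi, np.pi, 100)
vals = kernel(theta, centers, coeffs)
assert vals.shape == (100,)
```

The coefficients $c_k$ are the learnable parameters; because each basis function has compact support, the kernel stays localized on the group.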

Similarly, (Qiao et al., 2023) and (McNeela, 2023) rely on an MLP parametrization of filters as functions on the Lie algebra, with all group-convolutional layers sharing this principle for differentiability and tractable sampling. In infinite-dimensional or functional-analytic contexts, such as quantum analysis, this parameterization is replaced by transforms connecting operator and function spaces (Fourier–Wigner, Fourier–Kirillov) (Berge et al., 25 Feb 2025).

4. Applications in Harmonic Analysis, Renormalization, and Valuations

Lie convolutions underlie harmonic analysis on homogeneous spaces, semisimple Lie groups, and spherical functions. Notably, in semisimple Lie group theory, "spherical convolution" with spherical functions generalizes the notion of group convolution and connects directly to the Harish–Chandra or spherical Fourier transform (Oyadare, 2017, Oyadare, 2017). This provides a functional calculus on Schwartz algebras of spherical functions and extends the reach of Plancherel and inversion formulas.

In mathematical physics, the convolution algebra is central to the algebraic structure of renormalization in quantum field theory. The action of the oscillator (Heisenberg-semidirect-GL) group on the convolution semigroup encodes block transformations and coarse-graining, with renormalization interpreted as a semigroup action via inner automorphisms in the Lie group (Puzio et al., 5 Nov 2025).

For integral geometry and valuations, convolution of invariant valuations constructs associative algebras on the space of valuations on GG, with explicit formulas for compact and unimodular cases in terms of invariant differential forms, connecting to the Alesker–Bernig and Bernig–Fu convolution algebras (Bernig et al., 30 Nov 2025).

5. Neural Architectures and Discretization Strategies

Lie convolutions are increasingly prominent in machine learning, especially in architectures that require equivariant processing under arbitrary continuous (Lie) symmetries—images, signals on manifolds, and language. Implementation details shared across frameworks (Kumar et al., 2023, MacDonald et al., 2021, McNeela, 2023, Bekkers, 2019, Qiao et al., 2023, Rim et al., 18 Dec 2025) include:

  • Discretization via exponential map: Sampling in $\mathfrak{g}$ via a grid or MCMC, mapping to $G$ via $\exp$ or a learned map.
  • Kernel parametrization: Neural networks (MLPs) or B-splines expanded in Lie algebra coordinates.
  • Monte Carlo or grid approximation: Integrals over $G$ replaced by finite sums; exact equivariance is preserved for each realization if samples are kept fixed (MacDonald et al., 2021).
  • Invariant or equivariant layers: Nonlinearities and normalization layers (ReLU, batch norm) commute with the group action and do not break equivariance.
  • Error control: Quantitative error bounds relate discretization or approximation accuracy to the loss in equivariance (McNeela, 2023, Kumar et al., 2023).
  • Application domains: Equivariant neural networks for images (rotation/scale/affine invariance), 3D data, and text (linguistic symmetries) (McNeela, 2023, MacDonald et al., 2021, Qiao et al., 2023, Rim et al., 18 Dec 2025).
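
The Monte Carlo strategy and its per-realization equivariance can be illustrated on $\mathrm{SO}(2)$: when the layer depends only on relative group elements $h_j^{-1} h_i$, acting on every sample with a fixed group element leaves the output features exactly unchanged, not just in expectation. This is a sketch with illustrative names, not an implementation from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
angles = rng.uniform(0, 2 * np.pi, N)   # MC samples of SO(2); Haar = uniform
feats = rng.standard_normal(N)          # scalar feature at each group sample

def psi(delta):
    # A smooth 2*pi-periodic kernel on SO(2), written in the angle
    # coordinate (a stand-in for an MLP on Lie algebra coordinates).
    return np.exp(np.cos(delta)) - 1.0

def mc_gconv(angles, feats):
    # out_i = (1/N) sum_j psi(h_j^{-1} h_i) f_j; on SO(2), h_j^{-1} h_i
    # is the angle difference, so the layer sees only relative elements.
    diff = angles[:, None] - angles[None, :]
    return psi(diff) @ feats / len(feats)

out = mc_gconv(angles, feats)

# Rotating every sample by a fixed k leaves all pairwise differences
# unchanged, so the output is exactly preserved for this realization.
k = 1.234
assert np.allclose(mc_gconv(angles + k, feats), out)
```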

6. Generalizations: Operator Convolutions, Abstract Harmonic Analysis, and Symbol Calculi

In the analysis of linear and pseudodifferential operators, convolution with respect to non-commutative group structures leads to generalizations including operator convolutions (e.g., Weyl quantization on exponential Lie groups) and symbol calculus for differential equations on non-Euclidean groups (Berge et al., 25 Feb 2025, Kisil, 2013, Duduchava, 2022):

  • Operator convolution identities relate the (quantum) operator product to group convolution for Weyl quantizations on $G$, passing through structures like the Wigner distribution and inducing Moyal-type orthogonality (Berge et al., 25 Feb 2025).
  • In the nilpotent group setting, relative convolutions and covariant transforms extend the Calderón–Vaillancourt theorem and clarify boundedness criteria for generalized convolution operators (Kisil, 2013).
  • On abelian (but non-Euclidean) examples, such as the interval $(-1,1)$ with a non-trivial group law, convolution and symbol calculus determine the solvability and spectral properties of convolution equations (Duduchava, 2022).

7. Theoretical and Practical Impact

The theoretical foundations of Lie convolutions unify disparate branches of analysis, group theory, and invariant theory. Their instantiations underpin advances in equivariant neural architectures, geometric deep learning, quantum harmonic analysis, and algebraic renormalization. The central insight is the abstraction of translation/convolution symmetry to arbitrary continuous symmetry groups, with computational realizations enabled by discretization and learned function-approximation in Lie algebra coordinates. These developments facilitate robust, parameter-efficient, and geometrically natural modeling across scientific domains (McNeela, 2023, MacDonald et al., 2021, Bekkers, 2019, Rim et al., 18 Dec 2025, Kumar et al., 2023, Puzio et al., 5 Nov 2025, Berge et al., 25 Feb 2025, Oyadare, 2017, Oyadare, 2017, Bernig et al., 30 Nov 2025, Kisil, 2013, Duduchava, 2022).
