Lie Convolutions: Theory & Applications
- Lie Convolutions are convolution operations defined on Lie groups or algebras that leverage their geometric and algebraic properties to provide equivariant and symmetry-aware processing.
- They enable almost equivariant neural architectures by approximating the exponential map and controlling error bounds in non-Euclidean settings.
- Advanced kernel parameterizations, such as B-spline expansions and MLP filters, drive applications in harmonic analysis, quantum theory, and invariant valuations.
A Lie convolution is a general term for a convolution-type operation in which the domain, the kernel, or the integral structure is governed by the geometric and algebraic properties of a Lie group and/or its Lie algebra. This framework encompasses classical group convolutions on Lie groups, Lie algebra-based convolutions, operator convolutions for quantum harmonic analysis, and generalized convolutions in settings such as invariant valuations or harmonic analysis on homogeneous spaces. Lie convolutions fundamentally generalize the standard (Euclidean or translation-invariant) convolution, allowing equivariant or nearly equivariant processing under arbitrary continuous symmetries, both in pure mathematics and in machine learning contexts.
1. Classical Lie Group Convolutions: Definition and Properties
The foundational Lie group convolution, for a locally compact Lie group $G$ with left Haar measure $\mu$ and functions $f, \psi \in L^1(G)$, is
$$(f * \psi)(g) = \int_G f(h)\, \psi(h^{-1} g)\, d\mu(h).$$
This operation is equivariant under left group actions: left translations commute with convolution. This property is central to weight sharing and symmetry in convolutional neural networks generalized to non-Euclidean data domains.
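As a concrete sanity check, the finite cyclic group $C_n$ is the simplest setting in which the Haar integral becomes a finite sum and left-equivariance can be verified numerically. The snippet below is an illustrative sketch, not tied to any particular paper:

```python
import numpy as np

# Illustrative sketch: for the finite cyclic group C_n the Haar integral
# becomes a finite sum, and the group convolution reads
#   (f * psi)(g) = sum_h f(h) * psi(h^{-1} g),   g, h in {0, ..., n-1},
# with h^{-1} g = (g - h) mod n.
n = 8
rng = np.random.default_rng(0)
f, psi = rng.normal(size=n), rng.normal(size=n)

def group_conv(f, psi):
    n = len(f)
    return np.array([sum(f[h] * psi[(g - h) % n] for h in range(n))
                     for g in range(n)])

def left_translate(f, a):
    # (L_a f)(g) = f(a^{-1} g); on C_n this is a cyclic shift by a.
    return np.roll(f, a)

# Left-equivariance: translating the input translates the output.
a = 3
assert np.allclose(group_conv(left_translate(f, a), psi),
                   left_translate(group_conv(f, psi), a))
```

The same commutation property is what generalized convolutional layers inherit on any group where the integral can be discretized.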
In the representation-theoretic context, the convolution algebra $L^1(G)$ acts on a Hilbert space $\mathcal{H}$ via the homomorphism associated to a unitary representation $\rho$:
$$\pi(\psi)\, v = \int_G \psi(g)\, \rho(g)\, v \; d\mu(g), \qquad v \in \mathcal{H},$$
where $\psi$ represents a filter. The classical setting, with $\mathcal{H} = L^2(\mathbb{R}^n)$ and $\rho$ the left-regular representation, recovers the standard convolution (Kumar et al., 2023).
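The defining property of this homomorphism, that convolution of filters becomes composition of operators, can be checked numerically in the finite toy case of a cyclic group with its left-regular representation (an illustrative sketch, with the integral replaced by a sum):

```python
import numpy as np

# Sketch of the algebra homomorphism pi: psi -> sum_g psi(g) rho(g) for
# C_n with its left-regular representation, rho(g) = cyclic-shift matrix.
# The defining property pi(f * psi) = pi(f) @ pi(psi) is checked numerically.
n = 6
rng = np.random.default_rng(1)
f, psi = rng.normal(size=n), rng.normal(size=n)

def rho(g):
    # Matrix of the left-regular representation: (rho(g) v)[i] = v[(i - g) % n].
    return np.roll(np.eye(n), g, axis=0)

def pi(psi):
    return sum(psi[g] * rho(g) for g in range(n))

def group_conv(f, psi):
    return np.array([sum(f[h] * psi[(g - h) % n] for h in range(n))
                     for g in range(n)])

# pi turns convolution of filters into composition of operators.
assert np.allclose(pi(group_conv(f, psi)), pi(f) @ pi(psi))
```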
The structure of the Lie group (e.g., unimodularity, compactness, connectedness) affects the properties of convolution: for example, the uniqueness of the Haar measure, the available representations, and the existence of Plancherel-type theorems for convolutions on such groups (Oyadare, 2017, Oyadare, 2017).
2. Lie Algebra Convolutions and Almost Equivariance
For many applications, especially when $G$ is non-compact or the exponential map is not surjective, integrating over the group itself is problematic. The Lie algebra convolution circumvents these issues by shifting the domain to the vector space structure of the Lie algebra $\mathfrak{g}$:
$$(\psi \star f)(x) = \int_{\mathfrak{g}} \psi(A)\, f\big(\exp(A)^{-1} \cdot x\big)\, dA,$$
where $\cdot$ denotes a suitable action (e.g., the Adjoint or a linearized group action), and $\psi$ is typically parameterized as a neural network (MLP) or a finite feature expansion. A key innovation is to replace the true exponential map $\exp$ by a learned approximation $\widehat{\exp}$, which allows the layer to interpolate between exact equivariance (when $\widehat{\exp}$ matches $\exp$ exactly) and "almost equivariance," which quantifies permissible deviations from symmetry due to non-ideal data or a non-surjective $\exp$ (McNeela, 2023).
This "almost equivariance" is formalized by error bounds of the form
$$\big\| \Phi(g \cdot x) - g \cdot \Phi(x) \big\| \le \varepsilon \quad \text{for all } g \in G,$$
where $\Phi$ is the layer and $\varepsilon$ is controlled by the approximation accuracy of $\widehat{\exp}$ and the model's parameters. This approach is well-suited to non-compact groups such as the Euclidean or Poincaré groups and to data that does not exhibit exact symmetry (McNeela, 2023).
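A toy illustration of how the equivariance gap is governed by the quality of the exponential approximation (an assumed setup for $G = \mathrm{SO}(2)$, not the architecture of McNeela, 2023): a truncated power series stands in for a learned $\widehat{\exp}$, and its error against the true $\exp$ shrinks as the approximation improves.

```python
import numpy as np

# Toy illustration (assumed setup): for G = SO(2), the Lie algebra so(2)
# is spanned by J, and exp(theta * J) is the rotation by theta.  A
# truncated power series stands in for a learned approximation exp_hat;
# the equivariance error of any layer built on exp_hat is controlled by
# how closely exp_hat tracks the true exp.
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def exp_true(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def exp_hat(theta, order):
    # Truncated matrix exponential: sum_{k <= order} (theta*J)^k / k!.
    A, term, out = theta * J, np.eye(2), np.eye(2)
    for k in range(1, order + 1):
        term = term @ A / k
        out = out + term
    return out

theta = 0.3
errs = [np.linalg.norm(exp_hat(theta, m) - exp_true(theta)) for m in (1, 3, 7)]
assert errs[0] > errs[1] > errs[2]  # better exp_hat => smaller equivariance gap
```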
Comparison of Lie group ($G$) and Lie algebra ($\mathfrak{g}$) convolutions:

| Feature | $G$-convolution | $\mathfrak{g}$-convolution |
|---|---|---|
| Domain | Group $G$ | Vector space $\mathfrak{g}$ |
| Applicability | Compact $G$, surjective $\exp$ | Any connected matrix Lie group |
| Sampling | Haar measure on $G$ | Lebesgue measure on $\mathfrak{g}$ |
| Well-defined | Fails if $\exp$ not surjective | Always works with a learnable $\widehat{\exp}$ |
3. Advanced Kernel Parameterizations and Representations
Recent work leverages the differential geometry of $G$, expanding convolution kernels in a B-spline or learned basis on the Lie algebra. The approach in (Bekkers, 2019) employs a generic B-spline expansion
$$k(g) = \sum_i c_i\, B^{(d)}\!\big(\log(g) - x_i\big),$$
mapping group elements back to $\mathfrak{g}$ via the logarithm, which enables flexible localization, dilation (atrous), or deformation of kernels.
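To make this parameterization concrete, here is a minimal sketch in the spirit of (Bekkers, 2019) for $G = \mathrm{SO}(2)$, where $\log(g)$ reduces to a rotation angle; the control points and coefficients are hypothetical stand-ins for learned parameters.

```python
import numpy as np

# Minimal sketch (hypothetical parameters): a kernel on G = SO(2) expanded
# in a cardinal cubic B-spline basis in Lie-algebra coordinates,
#   k(g) = sum_i c_i * B3((log(g) - x_i) / s),
# where log(g) is the rotation angle, x_i are control points in so(2) ~ R,
# and s is a dilation factor (larger s gives an atrous kernel).
def bspline3(u):
    # Cardinal cubic B-spline, supported on |u| < 2.
    u = np.abs(u)
    return np.where(u < 1, (4 - 6 * u**2 + 3 * u**3) / 6,
           np.where(u < 2, (2 - u)**3 / 6, 0.0))

centers = np.linspace(-np.pi, np.pi, 9)                      # control points x_i
scale = centers[1] - centers[0]                              # dilation s
coeffs = np.random.default_rng(2).normal(size=centers.size)  # learnable c_i

def kernel(theta):
    # Evaluate k at the group element with log-coordinate theta.
    return float(np.sum(coeffs * bspline3((theta - centers) / scale)))

values = [kernel(t) for t in np.linspace(-np.pi, np.pi, 7)]
```

Because the basis is smooth and compactly supported, the kernel inherits differentiability in the Lie-algebra coordinate, which is what makes localization and dilation cheap to control.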
Similarly, (Qiao et al., 2023) and (McNeela, 2023) rely on MLP parametrization of filters as functions on the Lie algebra, with all group-convolutional layers sharing this principle for differentiability and sampling tractability. In infinite-dimensional or functional analytic contexts, such as quantum analysis, this parameterization is replaced by transforms connecting operator and function spaces (Fourier–Wigner, Fourier–Kirillov) (Berge et al., 25 Feb 2025).
4. Applications in Harmonic Analysis, Renormalization, and Valuations
Lie convolutions underlie harmonic analysis on homogeneous spaces, semisimple Lie groups, and spherical functions. Notably, in semisimple Lie group theory, "spherical convolution" with spherical functions generalizes the notion of group convolution and connects directly to the Harish–Chandra or spherical Fourier transform (Oyadare, 2017, Oyadare, 2017). This provides a functional calculus on Schwartz algebras of spherical functions and extends the reach of Plancherel and inversion formulas.
In mathematical physics, the convolution algebra is central to the algebraic structure of renormalization in quantum field theory. The action of the oscillator (Heisenberg-semidirect-GL) group on the convolution semigroup encodes block transformations and coarse-graining, with renormalization interpreted as a semigroup action via inner automorphisms in the Lie group (Puzio et al., 5 Nov 2025).
For integral geometry and valuations, convolution of invariant valuations constructs associative algebras on spaces of valuations, with explicit formulas in the compact and unimodular cases in terms of invariant differential forms, connecting to the Alesker–Bernig and Bernig–Fu convolution algebras (Bernig et al., 30 Nov 2025).
5. Neural Architectures and Discretization Strategies
Lie convolutions are increasingly prominent in machine learning, especially in architectures that require equivariant processing under arbitrary continuous (Lie) symmetries—images, signals on manifolds, and language. Implementation details shared across frameworks (Kumar et al., 2023, MacDonald et al., 2021, McNeela, 2023, Bekkers, 2019, Qiao et al., 2023, Rim et al., 18 Dec 2025) include:
- Discretization via exponential map: Sampling in $\mathfrak{g}$ via a grid or MCMC, then mapping to $G$ via $\exp$ or a learned map.
- Kernel parametrization: Neural networks (MLPs) or B-splines expanded in Lie algebra coordinates.
- Monte Carlo or grid approximation: Integrals over $G$ (or $\mathfrak{g}$) are replaced by finite sums; exact equivariance is preserved for each realization if the samples are kept fixed (MacDonald et al., 2021).
- Invariant or equivariant layers: Pointwise nonlinearities and normalization layers (e.g., ReLU, batch norm) are chosen to commute with the group action, so they do not break equivariance.
- Error control: Quantitative error bounds relate discretization or approximation accuracy to the loss in equivariance (McNeela, 2023, Kumar et al., 2023).
- Application domains: Equivariant neural networks for images (rotation/scale/affine invariance), 3D data, and text (linguistic symmetries) (McNeela, 2023, MacDonald et al., 2021, Qiao et al., 2023, Rim et al., 18 Dec 2025).
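The first three steps above can be sketched end to end for $G = \mathrm{SO}(2)$ (an illustrative toy with hypothetical feature and filter functions, where $\exp$ identifies a Lie-algebra sample with its rotation angle):

```python
import numpy as np

# Minimal end-to-end sketch of the discretization pipeline for G = SO(2):
# sample Lie-algebra elements on a grid, identify exp(theta) with the
# angle theta, and replace the Haar integral by a finite sum,
#   (f * psi)(g) ~ (2*pi / N) * sum_k f(theta_k) * psi(g - theta_k).
N = 512
thetas = np.linspace(-np.pi, np.pi, N, endpoint=False)  # grid in so(2)

def f(angle):                       # toy input feature on SO(2)
    return np.cos(angle)

def psi(angle):                     # toy filter (nearly periodic on [-pi, pi])
    return np.exp(-angle**2)

def discrete_group_conv(g):
    return 2 * np.pi * np.mean(f(thetas) * psi(g - thetas))

# Compare with the closed form: int cos(t) e^{-(g-t)^2} dt over the real
# line equals sqrt(pi) * e^{-1/4} * cos(g), and the Gaussian tail outside
# [-pi, pi] is negligible at this width.
g = 0.5
exact = np.sqrt(np.pi) * np.exp(-0.25) * np.cos(g)
assert abs(discrete_group_conv(g) - exact) < 5e-2
```

In practice the grid would be replaced by Monte Carlo samples for higher-dimensional groups, with the same fixed-sample trick preserving equivariance per realization.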
6. Generalizations: Operator Convolutions, Abstract Harmonic Analysis, and Symbol Calculi
In the analysis of linear and pseudodifferential operators, convolution with respect to non-commutative group structures leads to generalizations including operator convolutions (e.g., Weyl quantization on exponential Lie groups) and symbol calculus for differential equations on non-Euclidean groups (Berge et al., 25 Feb 2025, Kisil, 2013, Duduchava, 2022):
- Operator convolution identities relate the (quantum) operator product to group convolution for Weyl quantizations on exponential Lie groups, passing through structures like the Wigner distribution and inducing Moyal-type orthogonality (Berge et al., 25 Feb 2025).
- In the nilpotent group setting, relative convolutions and covariant transforms extend the Calderón–Vaillancourt theorem and clarify boundedness criteria for generalized convolution operators (Kisil, 2013).
- On abelian (but non-Euclidean) examples, such as the interval with a non-trivial group law, convolution and symbol calculus determine the solvability and spectral properties of convolution equations (Duduchava, 2022).
7. Theoretical and Practical Impact
The theoretical foundations of Lie convolutions unify disparate branches of analysis, group theory, and invariant theory. Their instantiations underpin advances in equivariant neural architectures, geometric deep learning, quantum harmonic analysis, and algebraic renormalization. The central insight is the abstraction of translation/convolution symmetry to arbitrary continuous symmetry groups, with computational realizations enabled by discretization and learned function-approximation in Lie algebra coordinates. These developments facilitate robust, parameter-efficient, and geometrically natural modeling across scientific domains (McNeela, 2023, MacDonald et al., 2021, Bekkers, 2019, Rim et al., 18 Dec 2025, Kumar et al., 2023, Puzio et al., 5 Nov 2025, Berge et al., 25 Feb 2025, Oyadare, 2017, Oyadare, 2017, Bernig et al., 30 Nov 2025, Kisil, 2013, Duduchava, 2022).