Lorentz Local Canonicalization (LLoCa)
- LLoCa is a comprehensive framework for canonicalizing Lorentz symmetries by decomposing transformations into distinct boost and rotation components, applicable across physics, geometry, quantum theory, and ML.
- It employs algebraic projections and modular quantum mappings to achieve exact equivariance and robust geometric modeling, laying the foundation for symmetry-respecting analyses.
- LLoCa’s practical applications range from frame field interpolation in relativity to enhancing Lorentz-equivariant architectures in neural networks, improving both accuracy and computational efficiency.
Lorentz Local Canonicalization (LLoCa) is a comprehensive mathematical and algorithmic framework for canonicalizing the action of the Lorentz group SO⁺(1,3) in physics, geometry, quantum theory, general relativity, and equivariant machine learning. LLoCa systematically decomposes Lorentz transformations and associated data into canonical local forms—typically through canonical coordinates, reference frames, or algebraic factorization. This enables full exploitation of Lorentz symmetry, exact equivariance, robust geometric modeling, and symmetry-respecting learning protocols. LLoCa finds explicit realization in Lie algebra decomposition, local quantum physics through modular theory, differential geometry of Lorentz surfaces, Lorentz-covariant canonical gravity, and modern equivariant neural architectures for high-energy physics.
1. Canonical Decomposition of Lorentz Transformations
LLoCa is grounded in the orthogonal decomposition of Lorentz algebra elements and canonical factorization of Lorentz group actions (Hanson, 2011). For any proper orthochronous Lorentz transformation Λ = exp(L) ∈ SO⁺(1,3):
- The generator L is resolved into a sum of two mutually commuting, simple Lorentz bivectors, L = L₁ + L₂, using algebraic projection operators computed from second- and fourth-order traces and determinant invariants.
- The eigenvalues of L characterize boost- and rotation-like directions, and each Lᵢ is a decomposable bivector with an explicit closed-form exponential.
- The canonical factorization Λ = exp(L₁) exp(L₂) separates the transformation into a pure boost and a pure rotation acting in orthogonal 2-planes, providing a robust basis for numerical applications such as frame field interpolation and time-stepping in relativity and graphics.
- All relevant invariants and closed forms use only real arithmetic—no complex Wick rotations are required.
This algebraic component yields a universal recipe for canonicalizing any Lorentz transformation and underpins all further LLoCa developments (Hanson, 2011).
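As a concrete numerical illustration of boost-rotation splitting, the sketch below factors a sampled Lorentz matrix into a pure boost times a spatial rotation using the ordinary matrix polar decomposition. This is a simpler relative of Hanson's bivector-projection construction (its boost and rotation factors do not in general act in orthogonal 2-planes), and the function names are illustrative.

```python
import numpy as np
from scipy.linalg import expm

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def random_lorentz(rng, scale=0.3):
    """Sample a proper orthochronous Lorentz matrix as the exponential of a random generator."""
    K = rng.normal(size=(4, 4))
    L = 0.5 * (K - ETA @ K.T @ ETA)      # project onto the Lorentz algebra: eta L^T eta = -L
    return expm(scale * L)

def boost_rotation_split(Lam):
    """Factor Lam = B @ R with B a pure boost (symmetric positive-definite Lorentz matrix)
    and R a spatial rotation (orthogonal Lorentz matrix fixing the time axis)."""
    w, V = np.linalg.eigh(Lam @ Lam.T)   # Lam @ Lam.T is SPD and itself a Lorentz matrix
    B = V @ np.diag(np.sqrt(w)) @ V.T    # its unique SPD square root is a pure boost
    R = np.linalg.solve(B, Lam)          # the remaining factor is a rotation
    return B, R

rng = np.random.default_rng(0)
Lam = random_lorentz(rng)
B, R = boost_rotation_split(Lam)
assert np.allclose(Lam.T @ ETA @ Lam, ETA)                      # Lorentz condition
assert np.allclose(B @ R, Lam)                                  # exact factorization
assert np.allclose(R @ R.T, np.eye(4))                          # R is orthogonal ...
assert np.allclose(R[0, 1:], 0) and np.allclose(R[1:, 0], 0)    # ... and fixes the time axis
```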
2. Quantum-Theoretic Foundations and Modular Canonicalization
In finite-dimensional local quantum physics, LLoCa characterizes the canonical group of frame transformations between local thermal states, with direct implications for spacetime structure (Raasakka, 2017). Specifically:
- Under three postulates (finite-dimensional local observable algebras, minimality of the local algebras as qubit algebras M₂(ℂ), restriction of the vacuum to local thermal states), each local region corresponds to a qubit Gibbs state, expressible as ρ ∝ exp(−h^μ σ_μ) for a four-vector h^μ, where σ_μ collects the identity and the Pauli matrices.
- Any S ∈ SL(2,ℂ), the double cover of SO⁺(1,3), acts by conjugation h^μ σ_μ ↦ S (h^μ σ_μ) S†, which canonically induces a proper orthochronous Lorentz transformation on the four-vector h^μ.
- The proper orthochronous Lorentz group thus acts freely and transitively on the space of non-maximally mixed qubit Gibbs states, and the relative modular operator explicitly encodes the infinitesimal canonical connection between local frames.
- This modular mapping directly reconstructs the spin connection between neighboring local regions purely from quantum data, defining local inertial structure algebraically via canonicalization.
- Limitations of this framework include kinematical scope (no dynamics), restriction to 3+1 dimensions, and absence of metric degrees of freedom.
LLoCa in this setting reveals an intrinsic quantum-information-theoretic origin of local Lorentz invariance—a new perspective for quantum gravity (Raasakka, 2017).
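To make the conjugation action concrete, the sketch below uses standard SL(2,ℂ) spinor algebra rather than the paper's modular-theoretic machinery: a four-vector h^μ is packaged as the Hermitian matrix h^μ σ_μ, conjugated with an SL(2,ℂ) element, and the induced Lorentz transformation is checked through the preserved Minkowski norm. The names and the particular group element are illustrative.

```python
import numpy as np

# Pauli basis sigma_mu = (identity, sigma_x, sigma_y, sigma_z)
SIGMA = np.array([
    [[1, 0], [0, 1]],
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
], dtype=complex)

def to_matrix(h):
    """Map a real four-vector h^mu to the Hermitian 2x2 matrix h^mu sigma_mu."""
    return np.einsum('m,mij->ij', h, SIGMA)

def to_vector(X):
    """Invert the map: h^mu = (1/2) tr(sigma_mu X) for Hermitian X."""
    return 0.5 * np.einsum('mij,ji->m', SIGMA, X).real

def minkowski_norm(v):
    return v[0]**2 - v[1]**2 - v[2]**2 - v[3]**2

# An SL(2,C) element: boost along z (rapidity 0.7) composed with a rotation about z (angle 0.3)
S = np.diag([np.exp(0.35), np.exp(-0.35)]) @ np.diag([np.exp(-0.15j), np.exp(0.15j)])

h = np.array([2.0, 0.3, -0.4, 0.5])          # a timelike "Hamiltonian" four-vector
h_new = to_vector(S @ to_matrix(h) @ S.conj().T)   # conjugation induces a Lorentz map on h

# det(h^mu sigma_mu) equals the Minkowski norm and is preserved under SL(2,C) conjugation
assert np.isclose(minkowski_norm(h), minkowski_norm(h_new))
```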
3. Canonical Coordinates and Geometry of Lorentz Surfaces
In differential geometry, LLoCa yields canonical isotropic coordinates and a reduction of the Gauss–Codazzi system for Lorentz surfaces in pseudo-Euclidean 4-space (Kanchev et al., 2021):
- Lorentz surfaces of general type are characterized by a curvature condition involving the Lorentzian mean curvature H and the Gaussian curvature K, under which the Weingarten map has distinct eigenvalues.
- Canonical null (isotropic) coordinates are constructed such that designated coefficients of the second fundamental form take prescribed signs.
- The fundamental natural equation (the canonical reduction) links the metric coefficient and the mean curvature in a single integro-differential PDE.
- The Bonnet-type theorem guarantees that each solution to this equation determines a unique isometric immersion of a Lorentz surface, up to motion and orientation data.
- Special cases (constant mean curvature, minimal surfaces with H = 0) induce further intrinsic PDE reductions, notably for non-flat minimal surfaces.
- Canonical coordinates thus reduce full surface geometry to manageable algebraic invariants and PDEs, with LLoCa providing constructive formulas for both coordinates and geometric data.
This geometrical form of LLoCa has foundational significance for Lorentzian surface theory and explicit immersion constructions (Kanchev et al., 2021).
4. Manifest Lorentz Covariance in Canonical Gravity
LLoCa restores manifest Lorentz covariance in canonical gravity through local observer fields, spontaneous symmetry breaking, and Cartan geometrodynamics (Gielen, 2012):
- In Ashtekar–Barbero canonical gravity, standard time-gauge fixing reduces SO(3,1) to SO(3), obscuring Lorentz invariance and introducing second-class constraints.
- LLoCa generalizes this by introducing a dynamical local observer field y (a unit timelike vector field), which selects a time direction at each spacetime point but transforms covariantly under local Lorentz transformations.
- This spontaneous breaking SO(3,1) → SO(3) defines projectors onto spatial slices and decomposes the Lorentz connection into "rotation" and "boost" sectors with respect to y.
- The canonical variables retain full covariance, with the observer field transforming appropriately to maintain invariance.
- The constraints (Gauss, vector, Hamiltonian) are reformulated in terms of the observer field y, avoiding the appearance of second-class constraints and permitting the full SO(3,1)-invariant hypersurface-deformation algebra.
- The geometric reinterpretation frames spatial slices as Cartan geometries modeled on SO(3,1)/SO(3), with y as a Higgs-like field mediating the symmetry breaking at each point.
LLoCa thus enables a covariant Hamiltonian formulation of gravity compatible with both canonical and geometric approaches (Gielen, 2012).
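The following minimal sketch illustrates the projector and the rotation/boost splitting relative to an observer field, under conventions assumed here for concreteness (signature (+,−,−,−), observer written y, generator W with both indices raised); it is not Gielen's full canonical formulation.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-)

def split_connection(W, y):
    """Split an antisymmetric so(1,3) generator W^{ab} into 'rotation' and 'boost'
    parts relative to a unit timelike observer y (y.eta.y = 1)."""
    y_low = ETA @ y                       # y_a
    P = np.eye(4) - np.outer(y, y_low)    # projector P^a_b onto the hyperplane orthogonal to y
    W_rot = P @ W @ P.T                   # components tangent to the spatial slice
    W_boost = W - W_rot                   # remainder has the pure-boost form y^a v^b - y^b v^a
    return W_rot, W_boost

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
W = A - A.T                               # generic antisymmetric generator W^{ab}

# observer: the lab time axis boosted along x with rapidity 0.9
y = np.array([np.cosh(0.9), np.sinh(0.9), 0.0, 0.0])

W_rot, W_boost = split_connection(W, y)
v = (ETA @ y) @ W                         # v^b = y_a W^{ab}, a spatial vector
assert np.allclose(W_rot + W_boost, W)
assert np.allclose(W_boost, np.outer(y, v) - np.outer(v, y))   # pure boost relative to y
assert np.allclose(W_rot @ ETA @ y, 0)    # the rotation sector leaves the observer invariant
```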
5. Exact Lorentz Equivariance in Machine Learning
Modern applications of LLoCa deliver universal Lorentz-equivariant architectures for machine learning on particle physics and collider data (Spinner et al., 26 May 2025, Favaro et al., 20 Aug 2025):
- In high-energy physics, LLoCa provides exact Lorentz equivariance for arbitrary neural networks by predicting an equivariant local reference frame Λᵢ (a Lorentz transformation) for each particle i, transforming all features (scalars, vectors, tensors) into these local frames, applying any backbone (graph network, transformer, MLP), and transforming the outputs back.
- The canonicalization algorithm constructs Λᵢ by predicting three Lorentz vectors via a small equivariant "Frames-Net" (typically an MLP on Lorentz invariants), followed by polar decomposition into a boost and a rotation (Gram–Schmidt on the spatial parts after the boost).
- Message passing is performed by moving sender features into the receiver frame via the relative transformation Λᵢ Λⱼ⁻¹ (sender j, receiver i); scaled dot-product attention likewise uses frame-to-frame tensor transformations.
- Symmetry breaking (to smaller subgroups such as SO(3) or SO(2)) is controlled either at the architecture level (fixing certain frame directions globally) or at the input level (providing reference vectors or scalars).
- Data augmentation emerges as a special case in which all local frames are fixed to a single global frame; with predicted frames, LLoCa retains exact equivariance.
- The computational overhead of LLoCa is small in both FLOPs and training time, well below that of prior Lorentz-equivariant architectures (L-GATr, PELICAN), which require substantially more FLOPs and longer training.
- Empirically, LLoCa-enhanced graph and transformer models consistently improve accuracy and AUC in jet tagging, amplitude regression, and event generation tasks, matching or exceeding domain-specific state-of-the-art, especially with higher-order tensor message representations.
LLoCa’s architectural flexibility allows for any backbone to be used unmodified, provided the canonicalization protocol is followed, resulting in exact symmetry-respecting learning (Spinner et al., 26 May 2025, Favaro et al., 20 Aug 2025).
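The sketch below illustrates the canonicalization and message-passing steps described above under stated assumptions: the learned Frames-Net is replaced by a placeholder fake_frames_net, each frame is assembled from a boost followed by Gram–Schmidt on spatial parts, and a sender feature is moved into the receiver frame via the relative transformation Λᵢ Λⱼ⁻¹. The eps regularization and reference vectors are illustrative choices, not the published implementation.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def minkowski_dot(a, b):
    return a @ ETA @ b

def boost_to_rest(u):
    """Pure boost B with B @ u = (1,0,0,0), for a unit timelike u (u.eta.u = 1, u[0] > 0)."""
    gamma, n = u[0], u[1:]
    B = np.eye(4)
    B[0, 0] = gamma
    B[0, 1:] = -n
    B[1:, 0] = -n
    B[1:, 1:] = np.eye(3) + np.outer(n, n) / (1.0 + gamma)
    return B

def build_frame(v1, v2, v3, eps=1e-8):
    """Local frame Lambda from three predicted vectors: boost to the rest frame of the
    timelike v1, then Gram-Schmidt on the boosted spatial parts of v2 and v3."""
    u = v1 / np.sqrt(minkowski_dot(v1, v1) + eps)     # eps regularizes near-lightlike inputs
    B = boost_to_rest(u)
    a1 = (B @ v2)[1:]
    a1 = a1 / (np.linalg.norm(a1) + eps)
    a2 = (B @ v3)[1:]
    a2 = a2 - (a2 @ a1) * a1
    a2 = a2 / (np.linalg.norm(a2) + eps)
    R = np.eye(4)
    R[1:, 1:] = np.stack([a1, a2, np.cross(a1, a2)])  # rows = orthonormal spatial axes
    return R @ B                                      # maps lab coordinates to the local frame

def lorentz_inverse(Lam):
    return ETA @ Lam.T @ ETA

def fake_frames_net(p):
    """Placeholder for the equivariant Frames-Net: reuse the particle momentum plus
    two fixed reference vectors (which, as fixed inputs, break the global symmetry)."""
    return p, np.array([0.0, 0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0, 0.0])

p_i = np.array([5.0, 1.0, 0.5, -0.3])    # receiver momentum (timelike)
p_j = np.array([7.0, -2.0, 1.0, 0.8])    # sender momentum
Lam_i = build_frame(*fake_frames_net(p_i))
Lam_j = build_frame(*fake_frames_net(p_j))

x_j_local = Lam_j @ p_j                              # a vector feature of j, stored in j's frame
msg = (Lam_i @ lorentz_inverse(Lam_j)) @ x_j_local   # moved into the receiver frame i
assert np.allclose(msg, Lam_i @ p_j, atol=1e-6)      # agrees with transforming the lab-frame vector
assert np.allclose(Lam_i.T @ ETA @ Lam_i, ETA, atol=1e-6)   # Lambda_i is (numerically) Lorentz
```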
6. Limitations, Numerical Considerations, and Outlook
Several practical and theoretical aspects delimit LLoCa and suggest future directions (Favaro et al., 20 Aug 2025, Spinner et al., 26 May 2025, Raasakka, 2017):
- Numerical stability in the Gram–Schmidt and boost constructions is critical; a small regularization and clipping of Lorentz factors mitigate rare instabilities at high boosts (see the sketch after this list).
- In low-data regimes, small Frames-Net architectures and dropout provide improved generalization; overfitting remains a practical consideration.
- For problems exhibiting only partial Lorentz symmetry, full equivariance may not yield maximal accuracy; LLoCa enables controlled symmetry breaking as needed.
- Geometric extension to curved spacetime or local gauge symmetries remains an open problem; a plausible implication is that LLoCa could be adapted to local gauge-canonicalization in curved backgrounds.
- The framework is algebraic and geometric, not dynamical: in quantum and gravitational contexts, LLoCa encodes frame transformations, not equations of motion.
- The quantum-theoretic form of LLoCa suggests that the emergence of Lorentzian structure in physical theories may be fundamentally connected to modular properties and symmetries of local quantum states.
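A minimal sketch of the kind of regularization and Lorentz-factor clipping referred to above; the threshold and the normalization strategy are illustrative choices, not values from the papers.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
GAMMA_MAX = 1.0e3                        # illustrative cap on the Lorentz factor (a tuning choice)

def safe_unit_timelike(v, eps=1e-8, gamma_max=GAMMA_MAX):
    """Normalize a (nearly) timelike vector with a regularized Minkowski norm and a capped boost."""
    norm2 = v @ ETA @ v
    u = v / np.sqrt(np.clip(norm2, eps, None))   # regularization avoids blow-up near the light cone
    if u[0] > gamma_max:                         # cap the rapidity while keeping u.eta.u = 1
        direction = u[1:] / np.linalg.norm(u[1:])
        u = np.concatenate(([gamma_max], np.sqrt(gamma_max**2 - 1.0) * direction))
    return u

# a highly boosted, nearly lightlike momentum that would otherwise yield an extreme Lorentz factor
p = np.array([1.0e4, 9999.9995, 0.0, 0.0])
u = safe_unit_timelike(p)
assert np.isclose(u @ ETA @ u, 1.0) and u[0] <= GAMMA_MAX
```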
LLoCa thus establishes a unifying algebraic-geometric paradigm for canonical treatment of Lorentz symmetry across mathematical physics, quantum theory, differential geometry, and machine learning. Its exact equivariance, canonical factorization, and architectural generality position it as a central tool for symmetry-respecting modeling and analysis.