
G-SympGNN: Scalable Symplectic GNN

Updated 12 January 2026
  • The paper introduces G-SympGNN, a graph neural network that enforces symplectic structure and permutation equivariance to model Hamiltonian systems with long-term energy stability.
  • It employs alternating “low” and “up” symplectic maps alongside graph-based message passing, ensuring scalable and data-efficient predictions in physical and node classification tasks.
  • The architecture leverages group-theoretic symmetry principles to achieve superior performance in high-dimensional system identification and robust node classification compared to conventional GCNs.

G-SympGNN is a specialized variant within the Symplectic Graph Neural Network (SympGNN) framework designed for scalable learning and identification of high-dimensional Hamiltonian systems, as well as for robust node classification on graph-structured data. Through an architecture that jointly enforces symplecticity, permutation equivariance, and efficient graph-based message passing, G-SympGNN achieves data-efficient, numerically stable long-term predictions in physical modeling and addresses core limitations in graph neural network scalability (Varghese et al., 2024). The architecture is also positioned as a group-theoretic extension of symmetry-endorsed graph networks in quantum chemistry, illustrating how broader symmetry principles—including space, symplectic, and permutation groups—yield physics-aware deep learning methods that generalize beyond conventional point-group equivariance (Ye et al., 2019).

1. Mathematical Foundations

G-SympGNN models $n$-particle Hamiltonian dynamics of the form

$$H(p, q) = T(p) + V(q), \qquad p, q \in \mathbb{R}^{n \times d},$$

with canonical equations of motion
$$\frac{dp}{dt} = -\nabla_q V(q), \qquad \frac{dq}{dt} = \nabla_p T(p).$$
The exact time-$h$ flow $\phi_h: (p^{(0)}, q^{(0)}) \mapsto (p(h), q(h))$ is symplectic, preserving the canonical two-form of Hamiltonian mechanics.

G-SympGNN approximates this flow by composing alternating "low" and "up" symplectic maps:
$$\varphi = \prod_{i=1}^{l} \left(\mathcal{E}_i^{\mathit{up}} \circ \mathcal{E}_i^{\mathit{low}}\right) \quad \text{or} \quad \prod_{i=1}^{l} \left(\mathcal{E}_i^{\mathit{low}} \circ \mathcal{E}_i^{\mathit{up}}\right),$$
where each factor is symplectic by construction. For any $(p, q) \in \mathbb{R}^{n \times d} \times \mathbb{R}^{n \times d}$, define

$$\mathcal{E}_i^{\mathit{low}}(p, q) = \left(p,\; q + \nabla_p T_i(p)\right), \qquad \mathcal{E}_i^{\mathit{up}}(p, q) = \left(p - \nabla_q V_i(q),\; q\right).$$

Each update is proven symplectic by Jacobian analysis: the Jacobian $J$ of each map satisfies $J^\top \Omega J = \Omega$, where $\Omega$ is the matrix of the canonical two-form. Symplecticity guarantees the preservation of geometric structure and long-term energy stability in predicted dynamics.
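The symplecticity of one alternating layer can be checked numerically. The sketch below substitutes toy analytic energies, $T(p) = \tfrac{1}{2}\|p\|^2$ and a quartic $V(q)$ (assumptions standing in for the learned networks $T_i$, $V_i$), builds one "low"/"up" layer, and verifies $J^\top \Omega J = \Omega$ with a finite-difference Jacobian:

```python
import numpy as np

def grad_T(p):                 # grad of toy kinetic energy T(p) = 0.5 * ||p||^2
    return p

def grad_V(q):                 # grad of toy quartic potential V(q) = 0.25 * ||q||^4
    return (q @ q) * q

def E_low(z, m):               # "low" map: (p, q) -> (p, q + grad_T(p))
    p, q = z[:m], z[m:]
    return np.concatenate([p, q + grad_T(p)])

def E_up(z, m):                # "up" map: (p, q) -> (p - grad_V(q), q)
    p, q = z[:m], z[m:]
    return np.concatenate([p - grad_V(q), q])

def jacobian(f, z, eps=1e-6):  # central finite-difference Jacobian of f at z
    J = np.zeros((z.size, z.size))
    for k in range(z.size):
        dz = np.zeros(z.size)
        dz[k] = eps
        J[:, k] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

m = 2                          # half phase-space dimension (one particle, d = 2)
Omega = np.block([[np.zeros((m, m)), np.eye(m)],
                  [-np.eye(m), np.zeros((m, m))]])
z0 = np.random.default_rng(0).standard_normal(2 * m)
layer = lambda z: E_up(E_low(z, m), m)   # one alternating layer
J = jacobian(layer, z0)
print(np.allclose(J.T @ Omega @ J, Omega, atol=1e-5))  # True
```

The check passes for any choice of $T$ and $V$ because each factor's Jacobian is a shear with a symmetric off-diagonal block, which is symplectic, and compositions of symplectic maps are symplectic.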

2. Graph-Based Parameterization and Equivariance

The permutation-equivariant graph structure ensures scalable modeling for many-body systems. G-SympGNN represents the system by an undirected graph $\mathcal{G} = (\mathcal{V}, E)$ with adjacency matrix $A \in \{0, 1\}^{n \times n}$. Each node $j$ encodes the state $(p^j, q^j)$.

Kinetic energy is parameterized node-wise:
$$T_i^{(G)}(p) = \sum_{j=1}^{n} \phi_v^i(p^j),$$
with $\phi_v^i: \mathbb{R}^d \rightarrow \mathbb{R}$ implemented as an MLP.

Potential energy is parameterized over edges:
$$V_i^{(G)}(q; A) = \sum_{(j, k) \in E} \phi_e^i(q^j, q^k, A_{jk}),$$
with $\phi_e^i: \mathbb{R}^{2d+1} \rightarrow \mathbb{R}$ as an MLP. Summation over nodes and edges enforces permutation invariance, aligning the architecture with the symmetry group $S_n$.

The per-layer updates involve the gradients $\nabla_p T_i^{(G)}(p) \in \mathbb{R}^{n \times d}$ and $\nabla_q V_i^{(G)}(q) \in \mathbb{R}^{n \times d}$, and each G-SympGNN layer alternates between the "low" and "up" modules, typically with $l = 4$–$8$ iterations per rollout. All submodules retain graph and permutation equivariance, and the overall map $\varphi$ is permutation-equivariant and symplectic.
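A minimal sketch of these parameterizations, with tiny fixed-weight networks standing in for the learned MLPs $\phi_v^i$ and $\phi_e^i$ (the weights, hidden size, and chain graph are illustrative assumptions), shows how the node/edge summations make both energies invariant under node relabeling:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 3, 8
W1, w2 = rng.standard_normal((d, h)), rng.standard_normal(h)          # stand-in for phi_v
U1, u2 = rng.standard_normal((2 * d + 1, h)), rng.standard_normal(h)  # stand-in for phi_e

def T_G(p):                    # T^(G)(p) = sum_j phi_v(p^j): node-wise sum
    return sum(np.tanh(pj @ W1) @ w2 for pj in p)

def V_G(q, A, edges):          # V^(G)(q; A) = sum over edges of phi_e(q^j, q^k, A_jk)
    return sum(np.tanh(np.concatenate([q[j], q[k], [A[j, k]]]) @ U1) @ u2
               for j, k in edges)

n = 5
p, q = rng.standard_normal((n, d)), rng.standard_normal((n, d))
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]     # a small chain graph
A = np.zeros((n, n))
for j, k in edges:
    A[j, k] = A[k, j] = 1.0

# Relabel the nodes; both energies must be unchanged (S_n invariance)
perm = rng.permutation(n)
inv = np.argsort(perm)
edges_perm = [(inv[j], inv[k]) for j, k in edges]
print(np.isclose(T_G(p), T_G(p[perm])),
      np.isclose(V_G(q, A, edges), V_G(q[perm], A[perm][:, perm], edges_perm)))
```

Gradients of these scalar energies with respect to $p$ and $q$ then inherit permutation *equivariance*, which is what the "low"/"up" updates require.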

3. Training Objectives and Optimization

G-SympGNN trains on datasets of one-step transitions $\{(p^{(t)}, q^{(t)}) \rightarrow (p^{(t+1)}, q^{(t+1)})\}$ using a mean-squared error on predicted states:
$$\mathcal{L}_{\mathrm{MSE}} = \frac{1}{T-1} \sum_{t=1}^{T-1} \left\|\varphi(p^{(t)}, q^{(t)}) - (p^{(t+1)}, q^{(t+1)})\right\|_2^2.$$
No additional regularization is required, since symplecticity is enforced architecturally. Optimization uses Adam (learning rate $10^{-3}$, weight decay $10^{-4}$), with single-trajectory or multi-trajectory batching, and up to 300,000 steps for large-scale experiments.
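The objective and the optimizer update can be written down directly. The sketch below implements the one-step MSE and a plain Adam step with the hyperparameters above; the linear toy predictor `shift` and the folding of weight decay into the gradient are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mse_loss(phi, traj):
    """L_MSE = 1/(T-1) * sum_t ||phi(z_t) - z_{t+1}||^2; traj has shape (T, D)."""
    preds = np.stack([phi(z) for z in traj[:-1]])
    return float(np.mean(np.sum((preds - traj[1:]) ** 2, axis=1)))

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-4):
    """One Adam update, with L2 weight decay added to the raw gradient."""
    g = g + wd * theta
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    mhat, vhat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return theta - lr * mhat / (np.sqrt(vhat) + eps), m, v

# A perfect one-step predictor drives the loss to zero
traj = np.cumsum(np.ones((10, 4)), axis=0)   # trajectory with z_{t+1} = z_t + 1
shift = lambda z: z + 1.0
print(mse_loss(shift, traj))                 # 0.0
```

In practice $\varphi$ would be the symplectic network and $\theta$ its MLP weights; the point is that the loss touches only predicted states, with no symplecticity penalty term.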

For node classification tasks, identity encoders/decoders are replaced with small MLPs, mapping $q$ to a latent feature space and applying the same permutation-equivariant symplectic updates.
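For the classification setting, the identity input/output maps become small networks around the symplectic core. The sketch below is a minimal stand-in: the feature sizes, zero-initialized momenta, and linear decoder are all assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_lat, n_cls, n = 16, 8, 4, 10     # hypothetical feature/latent/class/node sizes
W_enc = 0.1 * rng.standard_normal((d_in, d_lat))
W_dec = 0.1 * rng.standard_normal((d_lat, n_cls))

def encode(x):                 # small network replacing the identity encoder
    return np.tanh(x @ W_enc)

def decode(q):                 # decoder from latent coordinates q to class logits
    return q @ W_dec

x = rng.standard_normal((n, d_in))       # raw node features
q = encode(x)                            # latent "positions" for the symplectic layers
p = np.zeros_like(q)                     # momenta initialized to zero (an assumption)
# ... permutation-equivariant symplectic updates of (p, q) would go here ...
logits = decode(q)
print(logits.shape)                      # (10, 4)
```

Because the encoder and decoder act node-wise, the end-to-end map remains permutation-equivariant.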

4. Empirical Performance: Physical System Identification and Node Classification

On physical system identification tasks, G-SympGNN demonstrates superior stability and data efficiency:

40-particle harmonic oscillator:

  • MSE below $10^{-4}$ over a 100-step rollout (cf. SympNet, whose error grows by an order of magnitude).
  • Relative energy drift after 100 steps: $\approx 10^{-5}$ (vs. $10^{-3}$ for SympNet).
  • For limited training data (small $T$), G-SympGNN improves MSE by 2–10×.

2000-particle 2D Lennard-Jones:

  • Energy drift $< 10^{-6}\, k_B J$ (vs. $\sim 10^{-3}$ for MPNN/HGNN).
  • Radial distribution function $g(r)$ matches ground truth, while baseline models exhibit systematic bias.
  • Temperature remains stable within 0.1 K, whereas baselines drift by several K.

Scalability: MSE scales linearly with $n$ up to $n = 2000$, with overall computational cost $O(n)$ or $O(|E|)$ per layer.

Node classification: Through LA-SympGNN variants,

  • On Squirrel (homophily 0.22), achieves a new state-of-the-art accuracy of $62.6\% \pm 0.97\%$.
  • Ranks top-three on Chameleon, Cora, and Film.
  • Depth scaling: accuracy drops $< 1\%$ at 16 layers (no oversmoothing), versus $> 10\%$ for standard GCNs.
  • Heterophily: transitions gracefully between MLP-like performance at low homophily ($\mathcal{H} < 0.2$) and GCN-like performance at high homophily.

5. Relationship to Group-Theoretic Models and Future Extensions

The design of G-SympGNN is tightly connected to advances in symmetry-endorsed graph networks in quantum chemistry (“Symmetrical Graph Neural Network for Quantum Chemistry, with Dual R/K Space” (Ye et al., 2019)). SY-GNN introduced message passing constrained by molecular point-group symmetry, with layers commuting with group actions and predictions for dual real/momentum space properties. Group-theoretic equivariance enables decomposition into symmetry-adapted subspaces via projection operators, and symmetry-constrained pooling yields physically meaningful outputs.

G-SympGNN operationalizes symplectic symmetry (Hamiltonian flows) and permutation symmetry ($S_n$), extending these principles from point-group equivariance to broader settings, including space, time-reversal, and symplectic groups. The architecture is readily adaptable: replacing the group actions $D(g)$, projection operators $P^\alpha$, and equivariant modules $f_{\mathrm{sym}}, U_{\mathrm{sym}}$ as required yields models suited for materials, chemistry, and many-body physics (Ye et al., 2019, Varghese et al., 2024).

6. Technical Significance and Implications

G-SympGNN’s key innovations are the enforcement of symplectic structure for long-time energy stability, permutation equivariance for many-body physical modeling, and graph-based message passing for computational tractability in high-dimensional systems. The design ensures that no extra regularization is needed for symmetry, and direct architectural constraints drive both data efficiency and physical fidelity.

This suggests that symplectic-permutation-equivariant graph networks can serve as general templates for physics-aware deep learning, with implications for system identification, molecule/property prediction, and large-scale node/edge classification. A plausible implication is that future architectures may incorporate additional symmetry group actions, leveraging their projection operators for further specialization in modeling complex phenomena.

7. Limitations and Comparative Analysis

While G-SympGNN excels in preserving energy and scalability on large systems, the architecture presupposes a known graph structure and assumes separable Hamiltonians. Baseline comparisons demonstrate superior energy stability and scalability, but adaptation to systems with more complex interactions or non-separable Hamiltonians may require further architectural extension. In node classification benchmarks, performance robustness to oversmoothing and heterophily distinguishes LA-SympGNN variants over standard GCNs, though detailed ablation for all task types, such as link prediction, remains an area for further evaluation (Varghese et al., 2024).

| Model | Physical Stability | Scalability | Node Classification Accuracy |
|---|---|---|---|
| G-SympGNN | High (energy drift $< 10^{-6}$) | Linear in $n$ ($n \leq 2000$) | $62.6\% \pm 0.97\%$ (Squirrel) |
| SympNet | Lower (drift $> 10^{-3}$) | Limited | Not evaluated on large node tasks |
| MPNN/HGNN | Lower stability | Not linear | Not reported |

References

  • "SympGNNs: Symplectic Graph Neural Networks for identifying high-dimensional Hamiltonian systems and node classification" (Varghese et al., 2024).
  • "Symmetrical Graph Neural Network for Quantum Chemistry, with Dual R/K Space" (Ye et al., 2019).
