
Phase space integrity in neural network models of Hamiltonian dynamics: A Lagrangian descriptor approach

Published 1 Apr 2026 in cs.LG and math.DS | (2604.00473v1)

Abstract: We propose Lagrangian Descriptors (LDs) as a diagnostic framework for evaluating neural network models of Hamiltonian systems beyond conventional trajectory-based metrics. Standard error measures quantify short-term predictive accuracy but provide little insight into global geometric structures such as orbits and separatrices. Existing evaluation tools in dissipative systems are inadequate for Hamiltonian dynamics due to fundamental differences in the systems. By constructing probability density functions weighted by LD values, we embed geometric information into a statistical framework suitable for information-theoretic comparison. We benchmark physically constrained architectures (SympNet, HénonNet, Generalized Hamiltonian Neural Networks) against data-driven Reservoir Computing across two canonical systems. For the Duffing oscillator, all models recover the homoclinic orbit geometry with modest data requirements, though their accuracy near critical structures varies. For the three-mode nonlinear Schrödinger equation, however, clear differences emerge: symplectic architectures preserve energy but distort phase-space topology, while Reservoir Computing, despite lacking explicit physical constraints, reproduces the homoclinic structure with high fidelity. These results demonstrate the value of LD-based diagnostics for assessing not only predictive performance but also the global dynamical integrity of learned Hamiltonian models.

Summary

  • The paper demonstrates that reservoir computing can outperform symplectic networks in preserving phase-space topology as measured by Lagrangian descriptor-based PDFs.
  • It introduces a novel LD framework that goes beyond short-term trajectory errors by quantifying global geometric features such as invariant manifolds and homoclinic orbits.
  • Empirical evaluations on systems like the Duffing oscillator and three-mode NLS reveal trade-offs between energy preservation and phase-space geometric fidelity in different NN architectures.

Phase Space Integrity in Neural Network Models of Hamiltonian Dynamics

Introduction

This work rigorously investigates the capacity of neural network (NN) models—specifically, both symplectic architectures and reservoir computing (RC)—to reproduce the global geometric structures inherent in Hamiltonian systems. By integrating the theory of Lagrangian Descriptors (LDs) into model assessment, the study advances beyond conventional short-term trajectory error metrics and probes the preservation of phase-space topology, including invariant manifolds and homoclinic orbits, which are critical in organizing long-term Hamiltonian dynamics.

Lagrangian Descriptors as Phase-Space Diagnostics

The authors formalize LDs as scalar fields over phase space, computed through bidirectional integration of a positive-definite trajectory-dependent functional over a finite interval. Key properties of LDs are harnessed to reveal stable/unstable manifolds—the singularities of LD fields correspond directly to hyperbolic structures. Importantly, LDs can be formulated purely from trajectory data, obviating the need for explicit equations of motion or Hamiltonian function access.
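The construction can be sketched concretely. The snippet below is a minimal, assumed implementation: it uses the common p-norm variant of the descriptor, M_p(x0) = ∫ over [−τ, +τ] of Σᵢ |ẋᵢ(t)|^p dt, an explicit Euler integrator, and an illustrative unforced double-well Duffing field; the paper's exact integrand, integrator, and parameters may differ.

```python
import numpy as np

def duffing_rhs(state):
    # Unforced double-well Duffing field (illustrative parameter choice):
    #   dx/dt = p,  dp/dt = x - x**3
    x, p = state
    return np.array([p, x - x**3])

def lagrangian_descriptor(x0, rhs, tau=2.0, dt=0.01, p_exp=0.5):
    """p-norm LD: accumulate sum_i |f_i|**p along forward AND backward orbits."""
    total = 0.0
    for sign in (1.0, -1.0):                     # bidirectional integration
        state = np.array(x0, dtype=float)
        for _ in range(int(tau / dt)):
            f = rhs(state)
            total += np.sum(np.abs(f) ** p_exp) * dt
            state = state + sign * dt * f        # explicit Euler step (illustrative)
    return total

# LD field sampled on a coarse grid of initial conditions
xs = np.linspace(-1.5, 1.5, 20)
ps = np.linspace(-1.0, 1.0, 20)
ld_field = np.array([[lagrangian_descriptor((x, p), duffing_rhs) for x in xs]
                     for p in ps])
```

Because only trajectory samples enter the accumulator, the same routine applies unchanged when `rhs` is replaced by a learned model's one-step map, which is what makes the diagnostic equation-free.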

The central methodological innovation resides in the probabilistic interpretation of LD fields: LD values are used to construct weighted probability density functions (PDFs) over phase space. This embedding allows for quantitative, information-theoretic comparison (primarily via Kullback–Leibler (KL) divergence) between the reference system's dynamical organization and that captured by the NN model, thus bridging geometric insight with statistical rigor. The framework's sensitivity to integration time, LD exponent, and weighting function is analyzed comprehensively, with model ranking shown to be robust across these choices.
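The LD-to-PDF step and the KL comparison can be sketched as follows; the specific weighting (direct normalization of the nonnegative LD field) and the random stand-in fields are assumptions for illustration only.

```python
import numpy as np

def ld_to_pdf(ld_field, eps=1e-12):
    """Normalize a nonnegative LD field into a discrete PDF over the grid."""
    w = np.clip(np.asarray(ld_field, dtype=float), 0.0, None) + eps
    return w / w.sum()

def kl_divergence(p, q):
    """KL(p || q) between discrete distributions on the same grid."""
    p, q = np.ravel(p), np.ravel(q)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
ref_ld = rng.random((50, 50))                    # stand-in reference LD field
model_ld = ref_ld + 0.05 * rng.random((50, 50))  # stand-in "learned" LD field

kl = kl_divergence(ld_to_pdf(ref_ld), ld_to_pdf(model_ld))
```

Identical fields give zero divergence, and small geometric distortions of the field show up as small positive KL values, which is what makes the score usable for ranking models.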

Neural Network Architectures: Symplectic Versus Data-Driven

Three canonical symplectic NN architectures are evaluated:

  • SympNet: Implements sequences of analytically invertible, symplectic neural maps parameterized via shallow NNs on canonical coordinates, guaranteeing exact preservation of the symplectic form (and hence phase-space volume), with energy conserved approximately, as in symplectic integration.
  • HénonNet: Composes parameterized Hénon maps, also forming exact, invertible symplectic transformations.
  • Generalized Hamiltonian Neural Network (GHNN): Generalizes the aforementioned classes through modular symplectic integration of neural Hamiltonians, extending the representational capacity for non-separable systems.
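The building blocks these architectures compose can be illustrated with a single SympNet-style "gradient" layer, (q, p) ↦ (q, p + V′(q)), which is exactly symplectic for any smooth potential V. The small tanh network for V below is a hypothetical stand-in; the finite-difference Jacobian check confirms unit determinant, the hallmark of a symplectic map in one degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=8); b = rng.normal(size=8); a = rng.normal(size=8)

def grad_V(q):
    # V(q) = sum_j a_j * tanh(W_j q + b_j), so V'(q) = sum_j a_j W_j (1 - tanh^2)
    t = np.tanh(W * q + b)
    return np.sum(a * W * (1.0 - t ** 2))

def shear_layer(q, p):
    """Exactly symplectic 'gradient module': (q, p) -> (q, p + V'(q))."""
    return q, p + grad_V(q)

# Numerical Jacobian: det J must equal 1 for a symplectic map in 1 DOF
q0, p0, h = 0.3, -0.2, 1e-6
dq_dq = (shear_layer(q0 + h, p0)[0] - shear_layer(q0 - h, p0)[0]) / (2 * h)
dq_dp = (shear_layer(q0, p0 + h)[0] - shear_layer(q0, p0 - h)[0]) / (2 * h)
dp_dq = (shear_layer(q0 + h, p0)[1] - shear_layer(q0 - h, p0)[1]) / (2 * h)
dp_dp = (shear_layer(q0, p0 + h)[1] - shear_layer(q0, p0 - h)[1]) / (2 * h)
det_J = dq_dq * dp_dp - dq_dp * dp_dq   # ~ 1 up to finite-difference error
```

Alternating such shears in q and p, each invertible in closed form, yields the deep symplectic maps that SympNet-type architectures train end to end.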

Reservoir Computing (RC) serves as a reference non-symplectic, high-capacity RNN approach with random, fixed hidden weights and a trainable linear output map. It possesses formal universal approximation guarantees for dynamical systems, but lacks explicit Hamiltonian symmetries or invertibility. Bidirectional (forward and backward) dynamics in RC are realized by training separate output weights on time-reversed data.
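The RC baseline can be sketched as a minimal echo state network: a fixed random reservoir with controlled spectral radius and a ridge-regression readout. All hyperparameters below (reservoir size, spectral radius, regularization) are illustrative choices, not the paper's settings.

```python
import numpy as np

class MiniESN:
    """Minimal echo state network: fixed random reservoir, trained linear readout."""
    def __init__(self, n_in, n_res=200, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.normal(size=(n_res, n_res))
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
        self.W, self.W_out = W, None

    def _states(self, U):
        x, X = np.zeros(self.W.shape[0]), []
        for u in U:                                      # tanh reservoir update
            x = np.tanh(self.W @ x + self.W_in @ u)
            X.append(x.copy())
        return np.array(X)

    def fit(self, U, Y, ridge=1e-6):
        X = self._states(U)
        # Ridge-regression readout: W_out = Y^T X (X^T X + lam I)^(-1)
        A = X.T @ X + ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ Y).T
        return self

    def predict(self, U):
        return self._states(U) @ self.W_out.T

# One-step-ahead prediction of a sine wave as a toy sanity check
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
esn = MiniESN(n_in=1).fit(u[:-1], u[1:])
err = np.mean((esn.predict(u[:-1]) - u[1:]) ** 2)
```

Training a second readout of the same form on time-reversed data is what yields the backward dynamics needed for bidirectional LD computation.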

Empirical Evaluation: Duffing Oscillator and Three-Mode Nonlinear Schrödinger System

Two canonical Hamiltonian models are considered:

Duffing Oscillator

A low-dimensional, integrable, separable system featuring center and saddle fixed points, as well as a well-characterized homoclinic (figure-eight) separatrix.
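Its basic phase-space anatomy can be written down in a few lines. The double-well form below is an assumed standard parametrization (the paper's coefficients may differ), but the saddle/center/separatrix structure is the same.

```python
import numpy as np

def H(x, p):
    # Assumed double-well Duffing Hamiltonian: H = p^2/2 - x^2/2 + x^4/4
    return p**2 / 2 - x**2 / 2 + x**4 / 4

# Fixed points solve dH/dp = p = 0 and dH/dx = -x + x**3 = 0, i.e. x in {0, +1, -1}:
# a saddle at (0, 0) and centers at (+1, 0) and (-1, 0).
assert H(0.0, 0.0) == 0.0                      # the saddle sits on the H = 0 level set
assert H(1.0, 0.0) == H(-1.0, 0.0) == -0.25    # centers at the bottom of each well

# The figure-eight separatrix is the level set H = 0; it crosses the x-axis at +/- sqrt(2)
assert abs(H(np.sqrt(2.0), 0.0)) < 1e-12
```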

  • All architectures reconstruct the existence of the homoclinic orbit with moderate data.
  • GHNN shows the strongest data efficiency among symplectic models for small training sets, while RC achieves the lowest KL divergence (as low as 6.2×10⁻⁵ for N = 200), reproducing the homoclinic topology and fixed points with high fidelity.
  • HénonNet lags in resolving sensitive boundaries and exhibits geometric distortions, particularly near the homoclinic orbit, reflected both in LD-based PDFs and direct trajectory errors.
  • Symplectic models maintain invertibility and energy conservation, yet exhibit localized misplacement or mild geometric deformation of the separatrix compared to RC.

Three-Mode Nonlinear Schrödinger Equation (NLS, 3-Mode Truncation)

A higher-dimensional, non-separable Hamiltonian system presenting pronounced cross-mode coupling, multiple fixed points, and global homoclinic structures.

  • Symplectic architectures, despite energy conservation, fail to capture the non-separable phase-space topology—notably, they contract the homoclinic orbit and misplace fixed points within the orbit.
  • RC robustly captures the homoclinic geometry and fixed points, evidenced by KL divergences up to two orders of magnitude lower than symplectic models (e.g., 1.0×10⁻⁵ for RC vs. 2×10⁻³ for GHNN at N = 500).
  • Training with trajectories restricted to inside or outside the homoclinic region leads to systematic failures in reproducing the full phase-space structure for all architectures. Uniform sampling is essential for generalization.
  • High local (pointwise) prediction error does not necessarily indicate loss of global geometric integrity: LD-based evaluation detects geometric and topological distortion, not trajectory misalignment per se.

Discussion

The LD-based framework offers a fundamentally geometric and robust route to quantifying model fidelity with respect to global Hamiltonian structure. The empirical findings expose architectural limitations in existing symplectic NNs: while these methods precisely preserve volume and energy, they may lack expressiveness for non-separable, strongly interacting regimes unless depth and parametrization are substantially increased. RC, though unconstrained, demonstrates superior flexibility in such regimes, hinting at potential trade-offs between explicit geometric priors and representational capacity.

The separation between short-term accuracy and long-term dynamical fidelity is sharply illustrated: symplectic NNs may minimize traditional loss yet distort phase-space topologies in high-complexity systems. Conversely, RC can recover topological invariants even if not strictly energy-preserving over arbitrarily long times, provided appropriate training coverage.

Strong claims substantiated in the paper:

  • RC, despite lacking explicit symplectic structure, can outperform Hamiltonian NNs and symplectic models in preserving critical phase-space geometry, particularly in non-separable Hamiltonian settings.
  • Symplectic structure and invertibility in architectural design are insufficient for faithful phase-space reconstruction if not matched by sufficient capacity, task-aligned training, and hyperparameter tuning.

Implications and Future Directions

The implications are twofold. Theoretically, LD-based diagnostics establish a principled means for evaluating the "climate fidelity" of machine-learned surrogates for Hamiltonian systems, directly relating statistical and geometric aspects of model performance. Practically, they inform model selection and experimental design—underscoring the necessity of strategic sampling and possibly hybridizing inductive biases for complex Hamiltonian domains.

Potential developments include:

  • Incorporation of LD-based metrics into training, e.g., via regularization with LD or LD-PDF discrepancy loss to bias models toward geometric accuracy.
  • Architectural augmentation of symplectic NNs (greater depth, non-separable Hamiltonian parametrizations) and the design of hybrid models blending explicit physical constraints with RC or transformer-derived architectures.
  • Extension to high-dimensional Hamiltonian PDEs, necessitating scalable LD computation, possibly via Monte Carlo or adaptive sampling.
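The first of these directions could be prototyped as a composite objective; the weighting λ and the discrete KL penalty below are purely illustrative assumptions about how such a regularizer might look, not a construction from the paper.

```python
import numpy as np

def ld_regularized_loss(pred_traj, true_traj, model_ld, ref_ld, lam=0.1):
    """Hypothetical loss: trajectory MSE + lam * KL(reference LD-PDF || model LD-PDF)."""
    mse = np.mean((pred_traj - true_traj) ** 2)
    p = ref_ld / ref_ld.sum()                  # reference LD-weighted PDF
    q = model_ld / model_ld.sum()              # model LD-weighted PDF
    kl = np.sum(p * np.log((p + 1e-12) / (q + 1e-12)))
    return float(mse + lam * kl)

# A perfect model (matching trajectories AND matching LD field) scores zero
traj = np.linspace(0.0, 1.0, 100)
ld = np.random.default_rng(2).random((30, 30)) + 0.1
assert ld_regularized_loss(traj, traj, ld, ld) == 0.0
```

The KL term would bias training toward geometric accuracy even where pointwise trajectory error is already small, at the cost of recomputing the model's LD field during optimization.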

Conclusion

This study advances the assessment of neural network surrogates for Hamiltonian dynamics by establishing LDs and information-theoretic analysis as rigorous measures of phase-space integrity. The findings challenge the preeminence of strictly symplectic NNs for complex systems and demonstrate the potential of unconstrained, flexible architectures—provided care in data sampling and evaluation methodology. These insights will inform future efforts to construct reliable, generalizable, and physically sound AI models in nonlinear sciences and beyond.


Reference:

"Phase space integrity in neural network models of Hamiltonian dynamics: A Lagrangian descriptor approach" (2604.00473)
