
Temporal Curvature Regularization

Updated 6 January 2026
  • Temporal curvature regularization is a technique that dynamically controls the curvature of learned data representations using principles from Riemannian geometry.
  • It employs methods such as time encoding, curvature networks, and geometric flow regularization to adapt embeddings in evolving, non-Euclidean spaces.
  • Empirical outcomes demonstrate enhanced performance on temporal graphs and latent dynamics, with error reductions of up to 40% and improved surface-reconstruction metrics.

Temporal curvature regularization is a class of techniques that explicitly control, adapt, or penalize the curvature of learned data representations—geometric, latent, or surface—over time. Originating in modern applications of Riemannian geometry to neural networks and machine learning, temporal curvature regularization addresses dynamic, non-Euclidean structure in evolving graphs, latent dynamics, and neural representations of surfaces. The key insight is to endow the metric, embedding space, or second-order structure with a temporally evolving curvature, thereby capturing non-stationary or heterogeneous geometric properties and supplying an inductive bias that improves representation quality, robustness, and generalization.

1. Temporal Curvature Parameterization and Learning

Dynamically tracking curvature over time requires explicit time-dependent parameterization. In temporal graph learning, such as in Self-supervised Riemannian GNNs (SelfRGNN), a functional curvature κ(t) is predicted directly from the temporal context. The pipeline is as follows:

  • Time encoding: Each timestamp t is embedded via a translation-invariant random Fourier mapping,

\varphi^0(t) = \sqrt{\tfrac{1}{d}}\,\bigl[\cos(\omega_1 t),\ \sin(\omega_1 t),\ \dotsc,\ \cos(\omega_{d/2} t),\ \sin(\omega_{d/2} t)\bigr].

  • Curvature network: A small MLP processes φ⁰(t) and, via a learned bilinear form, outputs the scalar curvature

\kappa(t) = \bigl(\operatorname{MLP}(\varphi^0(t))\bigr)^{\top} W_4\, \operatorname{MLP}(\varphi^0(t)).

The sign of κ(t) determines the local geometry: positive (hyperspherical), zero (Euclidean), or negative (hyperbolic) (Sun et al., 2022).
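The two-stage pipeline above can be sketched in a few lines of NumPy. The network widths, the tanh activation, and the random (untrained) weights are illustrative stand-ins, not the parameters used in SelfRGNN:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # time-encoding dimension (even)
omega = rng.normal(size=d // 2)        # random Fourier frequencies

def time_encoding(t):
    """Translation-invariant random Fourier features phi^0(t)."""
    angles = omega * t
    feats = np.empty(d)
    feats[0::2] = np.cos(angles)       # interleave cos/sin pairs
    feats[1::2] = np.sin(angles)
    return np.sqrt(1.0 / d) * feats

# toy one-hidden-layer MLP and bilinear form W4 (illustrative, untrained)
W1 = rng.normal(size=(d, d)) / np.sqrt(d)
W4 = rng.normal(size=(d, d)) / d

def kappa(t):
    """Scalar curvature kappa(t) = MLP(phi^0(t))^T W4 MLP(phi^0(t))."""
    h = np.tanh(time_encoding(t) @ W1)
    return float(h @ W4 @ h)

k = kappa(3.5)
# the sign of k selects the local geometry:
# k > 0 hyperspherical, k = 0 Euclidean, k < 0 hyperbolic
```

In a trained model, W1 and W4 would be learned jointly with the rest of the GNN; here they only demonstrate the shape of the computation.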

For latent Riemannian metrics in encoder–decoder architectures, the metric g(u, t) is parameterized by a neural network g_θ(u, t). Its time evolution is governed by a chosen PDE regularizer, such as Ricci or Gaussian-curvature flows, scalar curvature functionals, or harmonic map energies. This approach supports highly flexible, non-parametric adaptation of the latent-space geometry (Gracyk, 11 Jun 2025).
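A minimal way to realize such a neural metric g_θ(u, t) is to have a small network emit a factor L and form L Lᵀ + εI, which is symmetric positive definite by construction. The factorization choice and all dimensions below are our assumptions for illustration, not prescriptions from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2          # latent dimension; g(u, t) is an n x n SPD matrix
h = 16         # hidden width of the toy network
A = rng.normal(size=(n + 1, h)) / np.sqrt(n + 1)   # input is (u, t)
B = rng.normal(size=(h, n * n)) / np.sqrt(h)

def g_theta(u, t, eps=1e-3):
    """Neural latent metric: map (u, t) to an SPD matrix via L L^T + eps*I."""
    x = np.concatenate([u, [t]])
    L = (np.tanh(x @ A) @ B).reshape(n, n)
    return L @ L.T + eps * np.eye(n)    # SPD by construction

G = g_theta(np.array([0.3, -0.7]), t=0.5)
eigvals = np.linalg.eigvalsh(G)         # all strictly positive
```

The εI term keeps the metric bounded away from degeneracy, which is exactly the collapse that the flow regularizers discussed below are designed to prevent.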

2. Curvature-Based Regularization Objectives

Central to temporal curvature regularization are loss terms that explicitly enforce or match curvature constraints.

  • SelfRGNN curvature loss: SelfRGNN learns κ(t) by matching it to coarse Ricci curvature statistics of the temporal graph. For each edge (i, j) at time t, the Ricci curvature is estimated as

\kappa_{ij} = 1 - \frac{W(m^\lambda_i, m^\lambda_j)}{d_{\mathcal{M}}(h_i(t), h_j(t))},

where W is the earth mover's distance between (weighted) neighborhood mass distributions. These edge-wise Ricci curvatures are aggregated via a GRU to produce κ̂(t), and the loss is

\mathcal{L}_{\mathrm{curvature}} = \sum_t \bigl|\kappa(t) - \hat{\kappa}(t)\bigr|.

This regularizes the learned curvature toward empirically estimated geometric properties (Sun et al., 2022).
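The κ_ij estimate can be reproduced on a toy graph. The sketch below uses a lazy random-walk mass distribution with λ = 0.5 and, as a simplification, graph shortest-path distance in place of the manifold distance d_M between embeddings; the earth mover's distance is solved exactly as a small linear program. The graph, λ, and cost choices are all illustrative:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# toy graph: a triangle 0-1-2 with a pendant node 3 attached to 2
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
n = 4

# all-pairs shortest-path distances via Floyd-Warshall
D = np.full((n, n), np.inf)
np.fill_diagonal(D, 0.0)
for i, nbrs in adj.items():
    for j in nbrs:
        D[i, j] = 1.0
for k, i, j in product(range(n), repeat=3):   # k varies slowest: valid order
    D[i, j] = min(D[i, j], D[i, k] + D[k, j])

def mass(i, lam=0.5):
    """Lazy random-walk mass m_i^lambda: lam at i, rest over neighbors."""
    m = np.zeros(n)
    m[i] = lam
    for j in adj[i]:
        m[j] = (1 - lam) / len(adj[i])
    return m

def wasserstein(mu, nu):
    """Discrete earth mover's distance, solved as a linear program."""
    c = D.reshape(-1)                 # transport cost per (source, target) pair
    A_eq, b_eq = [], []
    for i in range(n):                # row marginals must equal mu
        r = np.zeros((n, n)); r[i, :] = 1
        A_eq.append(r.ravel()); b_eq.append(mu[i])
    for j in range(n):                # column marginals must equal nu
        r = np.zeros((n, n)); r[:, j] = 1
        A_eq.append(r.ravel()); b_eq.append(nu[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return float(res.fun)

# kappa_ij = 1 - W(m_i, m_j) / d(i, j) for the triangle edge (1, 2)
kappa_12 = 1 - wasserstein(mass(1), mass(2)) / D[1, 2]
```

Triangle-rich neighborhoods like this one yield positive edge curvature, matching the intuition that clustered regions are positively curved.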

  • Physics-informed geometric flows: For general latent metrics g(u, t), dynamics-inspired losses operate via

\mathcal{L}_{\mathrm{reg}} = \sum_k \bigl\| \partial_t g_{ij}(u_k, t_k) - \mathrm{flow}[g, \ldots](u_k, t_k) \bigr\|^2,

where flow[·] stands for a chosen geometric PDE (e.g., Ricci, scalar-curvature, harmonic). Each flow imparts different invariants and inductive biases on g(u, t) (Gracyk, 11 Jun 2025).

  • Scheduled curvature penalties: In geometric surface learning, such as with the Off-Diagonal Weingarten (ODW) loss for neural signed distance functions (SDFs), the regularization weight λ_ODW(t) varies over training time. Schedules (constant, decay, quintic, step, warm-up) determine the strength and timing of curvature enforcement (Yin et al., 5 Nov 2025).

3. Principal Methodologies

Distinct methodologies implement temporal curvature regularization tailored to the geometry of the problem domain:

  • Time-varying manifold geometry: For temporal graph neural networks, representations are embedded in a time-parametrized Riemannian manifold M^{d, κ(t)}. All metric, distance, and mapping operations (e.g., exponential/logarithmic maps) are parameterized by κ(t). The induced geodesic distance,

d_{\mathcal{M}}(x, y) = \frac{1}{\sqrt{|\kappa|}} \cos_\kappa^{-1}\bigl(|\kappa|\,\langle x, y \rangle_\kappa\bigr),

adapts to the underlying temporal curvature (Sun et al., 2022).

  • Geometric flow regularization in latent spaces: Curvature flows (Ricci, Gaussian, scalar-curvature, harmonic map) are imposed by differentiating the latent metric as a function of time, ensuring nontrivial evolving structure and preventing collapse (e.g., g → 0) (Gracyk, 11 Jun 2025).

  • Curvature scheduling in neural SDFs: For surface reconstruction, the ODW loss

\mathcal{L}_{\mathrm{ODW}} = \frac{1}{L} \sum_{p \in \Omega} \left| \frac{u^\top H_f(p)\, v}{\|\nabla f(p)\|_2} \right|

penalizes the off-diagonal entries of the Weingarten map. Temporal scheduling of its weight controls regularization strength during distinct optimization phases (Yin et al., 5 Nov 2025).

4. Empirical Outcomes and Applications

Temporal curvature regularization yields demonstrable improvements in several applied settings.

  • Temporal graphs: SelfRGNN's time-varying curvature tracking results in embeddings that better adapt to shifting geometric regimes, such as transitions in citation networks from clustered (positive curvature, triangle-rich) to hub-dominated (negative curvature, hyperbolic). The empirical κ(t) moves from positive (0.552 in 1996) to negative (−1.022 in 2002) (Sun et al., 2022).

  • Latent dynamics: Geometric flow regularization confers improved robustness to out-of-distribution and adversarial perturbations in latent dynamics learning. For the Navier–Stokes and Burgers’ equation tests, curvature-regularized models exhibit 20–40% reductions in error and variance, maintain metric magnitude, and outperform non-regularized autoencoders on zero-shot extrapolation (Gracyk, 11 Jun 2025).
  • Surface learning: Scheduling of the ODW curvature term delivers up to 35% Chamfer Distance improvement over fixed-weight baselines, with better normal consistency and F1 scores on CAD reconstruction tasks. Quintic decay schedules yield the smoothest training and the highest fidelity surfaces, while warm-up or abrupt step schedules are suboptimal (Yin et al., 5 Nov 2025).
  • A summary of surface learning performance is provided below:

    Schedule          NC (mean)   CD (mean)   F1 (mean)
    Fixed (FlatCAD)   96.14       4.37        84.98
    Linear decay      97.95       3.05        90.59
    Quintic decay     98.01       2.86        92.72
    Step              97.99       2.87        92.71

    (NC = normal consistency; CD = Chamfer distance.)
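The schedule families named above are simple functions of the training-progress fraction t ∈ [0, 1]. The exact functional forms below (decay exponents, warm-up fraction, step point) are our illustrative choices, not the precise schedules of Yin et al.:

```python
import numpy as np

def odw_weight(t, schedule="quintic", lam0=1.0, lam_end=0.0):
    """ODW regularization weight lambda_ODW(t), with t in [0, 1]
    the fraction of training completed. Forms are illustrative."""
    if schedule == "constant":
        return lam0
    if schedule == "linear":
        return lam0 + (lam_end - lam0) * t
    if schedule == "quintic":                 # smooth strong-start decay
        return lam_end + (lam0 - lam_end) * (1.0 - t) ** 5
    if schedule == "step":                    # abrupt drop halfway through
        return lam0 if t < 0.5 else lam_end
    if schedule == "warmup":                  # ramp up over first 10%
        return lam0 * min(t / 0.1, 1.0)
    raise ValueError(f"unknown schedule: {schedule}")

# strong initial regularization decaying to near-zero late in training,
# the pattern the paper finds most effective
ws = [odw_weight(t, "quintic") for t in np.linspace(0.0, 1.0, 5)]
```

Quintic decay keeps the penalty near its maximum early (suppressing spurious warp) and drops it smoothly so fine-scale features can form late in optimization.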

5. Theoretical and Practical Considerations

Temporal curvature regularization fundamentally prevents degeneracy and collapse in learned geometric structures:

  • For Riemannian metric flows, the ∂_t g terms ensure that g(u, t) remains bounded away from zero, retaining full representational or embedding capacity over time (Gracyk, 11 Jun 2025).
  • In graph learning, anchoring to Ricci curvature estimates programmatically aligns the latent structure with empirically observable geometric relationships, reducing the mismatch between model and data-induced geometry (Sun et al., 2022).
  • In neural SDFs, strong-start decay schedules for curvature losses immediately suppress spurious warp and allow fine-scale feature learning in later optimization, confirming the need for time- and task-adaptive regularization (Yin et al., 5 Nov 2025).
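The flow-matching residual ‖∂_t g − flow[g]‖² from Section 2 can be approximated with central finite differences over a neural metric. The network and the volume-normalizing flow used here are illustrative stand-ins for the Ricci or scalar-curvature flows discussed above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 2, 16
A = rng.normal(size=(n + 1, h)) / np.sqrt(n + 1)
B = rng.normal(size=(h, n * n)) / np.sqrt(h)

def g_theta(u, t, eps=1e-3):
    """Toy neural latent metric, SPD by the L L^T + eps*I construction."""
    x = np.concatenate([u, [t]])
    L = (np.tanh(x @ A) @ B).reshape(n, n)
    return L @ L.T + eps * np.eye(n)

def flow_term(g):
    """Stand-in geometric-flow right-hand side (volume-normalizing);
    a real implementation would use e.g. the Ricci tensor of g."""
    return (1.0 - np.linalg.det(g)) * g

def flow_reg_loss(samples, dt=1e-3):
    """L_reg = sum_k || d/dt g(u_k, t_k) - flow[g](u_k, t_k) ||^2,
    with the time derivative taken by central finite differences."""
    total = 0.0
    for u, t in samples:
        dg_dt = (g_theta(u, t + dt) - g_theta(u, t - dt)) / (2 * dt)
        resid = dg_dt - flow_term(g_theta(u, t))
        total += float(np.sum(resid ** 2))
    return total

samples = [(rng.normal(size=n), float(t)) for t in np.linspace(0.0, 1.0, 5)]
loss = flow_reg_loss(samples)
```

Training would minimize this residual alongside the reconstruction objective, driving ∂_t g toward the chosen flow and thereby keeping the metric away from the degenerate g → 0 regime.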

A plausible implication is that task-specific adaptation of curvature priors—whether scheduled in optimization, self-supervised by data geometry, or enforced via PDEs in the latent space—substantially improves both quantitative and qualitative learning outcomes in nonstationary, geometry-intensive domains.

6. Practical Guidelines and Future Directions

Current research converges on the following best practices:

  • Implement temporally adaptive curvature priors—e.g., strong initial regularization decaying to near-zero for late-stage optimization—in non-Euclidean and latent-geometric networks (Yin et al., 5 Nov 2025).
  • Leverage geometric flows, not just fixed penalties, for preventing metric collapse and driving nontrivial latent evolution (Gracyk, 11 Jun 2025).
  • Use edge-based Ricci curvature as a self-supervised teaching signal for dynamic graph representations, especially where supervision is limited (Sun et al., 2022).

Promising future directions include nonparametric and neurally discovered geometric flows, integration of curvature regularization with diffusion models, and generalization beyond current Riemannian settings to Finsler or sub-Riemannian structures.

References

  • Sun et al. (2022). "A Self-supervised Riemannian GNN with Time Varying Curvature for Temporal Graph Learning."
  • Gracyk (11 Jun 2025). "Geometric flow regularization in latent spaces for smooth dynamics with the efficient variations of curvature."
  • Yin et al. (5 Nov 2025). "Scheduling the Off-Diagonal Weingarten Loss of Neural SDFs for CAD Models."
