
Eikonal Loss in Neural and PDE Applications

Updated 9 November 2025
  • Eikonal loss is a functional that penalizes deviations of a function’s gradient norm from prescribed values (typically one), ensuring consistency with the underlying PDE.
  • It is pivotal in physics-informed neural networks and neural SDF optimization, enhancing accuracy in seismic traveltime modeling and geometric rendering.
  • Recent advances integrate stabilization techniques such as directional divergence and screened Poisson losses to mitigate instability and over-smoothing in optimization.

Eikonal loss is a functional penalizing the deviation of a function's gradient norm from prescribed values, most commonly $\|\nabla \phi\| = 1$, and is a central tool in applications ranging from physics-informed neural networks for solving the eikonal equation, to implicit neural representations of shapes, to variational approaches in PDE-constrained optimization. The eikonal equation itself arises in numerous disciplines: it governs traveltimes in seismology, level-set evolution in computer vision, and coordinate construction in shape modeling. Eikonal loss is foundational to modern machine learning approaches in which solutions must satisfy or approximate this nonlinear, first-order constraint as a soft penalty within optimization routines.

1. Mathematical Definition and Context

The classical eikonal equation in a domain $\Omega \subset \mathbb{R}^n$ is

$$\|\nabla u(x)\| = f(x)$$

where $u$ is the unknown function and $f$ is a positive scalar field, e.g., the inverse of velocity in wave propagation or unity in signed distance functions (SDFs). The eikonal loss penalizes the degree to which a candidate $u$ fails to satisfy the above PDE. For the SDF and many shape modeling tasks, $f(x) \equiv 1$.

The standard forms are

$$L_\text{eik}(u) = \int_\Omega \left| \|\nabla u(x)\| - f(x) \right|^p \, dx$$

with $p=2$ (most common, quadratic penalty) or $p=1$ (mean absolute deviation).

In physics-informed neural networks (PINNs) and implicit neural representations (INRs), the eikonal loss is imposed “softly” by evaluating it at collocation points and using automatic differentiation to compute $\nabla u$ with respect to the network inputs.
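
As a concrete illustration, the following PyTorch sketch evaluates such a soft eikonal penalty at sampled collocation points via automatic differentiation. The network `u_net`, the prescribed field `f`, and the exponent `p` are illustrative placeholders, not any specific paper's implementation.

```python
import torch

def eikonal_loss(u_net, x, f=1.0, p=2):
    """Monte Carlo estimate of E_x[ | ||grad u(x)|| - f(x) |^p ] at collocation points.

    u_net: callable mapping (N, d) points to (N, 1) scalar outputs.
    x:     (N, d) collocation points sampled from the domain.
    f:     prescribed gradient norm (scalar or (N,) tensor); f = 1 for SDFs.
    p:     penalty exponent, 2 (quadratic) or 1 (absolute deviation).
    """
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    # du/dx for every sample via reverse-mode autodiff.
    grad_u = torch.autograd.grad(
        u, x, grad_outputs=torch.ones_like(u), create_graph=True
    )[0]
    grad_norm = grad_u.norm(dim=-1)            # ||grad u(x)||, shape (N,)
    return ((grad_norm - f).abs() ** p).mean()
```

In practice this term is added, with a tunable weight, to data, boundary, or regularization terms in the overall objective.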

2. Eikonal Loss in Physics-Informed Neural Networks (PINNs)

For PDE-constrained learning as in the eikonal equation for traveltimes, the core methodology is to minimize a loss functional encoding the PDE residual. The PINN formulation for the one-point (fixed source) eikonal problem is

$$\|\nabla_{r} \tau(x_{r})\| = \frac{1}{v(x_{r})}, \qquad \tau(x_{s}) = 0$$

with $\tau$ parameterized as a neural network. In PINNeik (Waheed et al., 2020), a factored form is preferred to avoid singularities:

$$T(x) = T_0(x)\,\tau(x), \qquad T_0(x) = \frac{\|x - x_s\|}{v(x_s)}$$

The PINN's loss function combines mean-squared residuals of the factored PDE over collocation points, explicit boundary and positivity penalties, and, in advanced systems, dynamically reweighted loss terms for convergence control.
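
A minimal sketch of the factored construction and its mean-squared PDE residual is given below, assuming a generic PyTorch network `tau_net`; the simple positivity penalty and the absence of adaptive loss weights are simplifications of the schemes described above.

```python
import torch

def factored_pinn_loss(tau_net, x, x_s, v, v_s):
    """Factored eikonal residual: T(x) = T0(x) * tau(x), T0(x) = ||x - x_s|| / v(x_s).

    tau_net: network producing the multiplicative factor tau(x), output shape (N, 1).
    x:       (N, d) collocation points.
    x_s:     (d,) source location.
    v:       (N,) velocity at the collocation points.
    v_s:     scalar velocity at the source.
    """
    x = x.clone().requires_grad_(True)
    tau = tau_net(x)
    T0 = (x - x_s).norm(dim=-1, keepdim=True) / v_s   # analytic factor; T0(x_s) = 0,
    T = T0 * tau                                      # so T(x_s) = 0 holds by construction
    grad_T = torch.autograd.grad(
        T, x, grad_outputs=torch.ones_like(T), create_graph=True
    )[0]
    # PDE residual: ||grad T(x)|| should equal the slowness 1 / v(x).
    residual = grad_T.norm(dim=-1) - 1.0 / v
    pde_loss = (residual ** 2).mean()
    # Simple soft positivity penalty on tau (a stand-in for the auxiliary terms used in practice).
    positivity = torch.relu(-tau).mean()
    return pde_loss + positivity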

The Neural Eikonal Solver (NES) (Grubas et al., 2022) further improves the loss formulation by introducing a Hamiltonian residual,

$$\mathcal{H}_p(x, \tau) = \frac{1}{p}\left[ v(x)^p \|\nabla \tau(x)\|^p - 1 \right]$$

and defines the loss as

$$L_\text{NES}(\theta) = \frac{1}{N} \sum_{x} \left| \mathcal{H}_2(x, \tau_\theta(x)) \right|$$

using an $L_1$ norm for robustness to outliers, which is crucial for capturing the sharp features arising near caustics.
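
A corresponding sketch of the Hamiltonian-residual loss with $L_1$ aggregation, omitting NES's factorization and output bounding, might look as follows (names are illustrative):

```python
import torch

def nes_style_loss(tau_net, x, v, p=2):
    """L1-aggregated Hamiltonian residual H_p = (1/p) * (v^p * ||grad tau||^p - 1)."""
    x = x.clone().requires_grad_(True)
    tau = tau_net(x)
    grad_tau = torch.autograd.grad(
        tau, x, grad_outputs=torch.ones_like(tau), create_graph=True
    )[0]
    hamiltonian = ((v * grad_tau.norm(dim=-1)) ** p - 1.0) / p
    return hamiltonian.abs().mean()   # L1 norm of the residual over collocation points
```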

Gaussian activations are employed in the hidden layers to enable the network to localize high-curvature events associated with caustics. The output is bounded with respect to physical travel-time limits, automatically enforcing boundary conditions without explicit BC terms.

This combination enables NES to dramatically outperform previous PINN strategies, achieving relative mean-absolute errors (RMAE) of 0.2–0.4% (versus 5–12% for PINNeik/EikoNet) while reducing training times by more than an order of magnitude.

3. Eikonal Loss in Neural Signed Distance Function (SDF) Optimization

In SDF learning, the eikonal constraint ensures that the neural network output behaves as a distance function; any true SDF $\phi$ satisfies $|\nabla \phi(x)| = 1$ away from the surface. The standard $L^2$ penalty,

$$L_\text{eik}(\phi) = \frac{1}{2} \int_\Omega \left( |\nabla \phi(x)| - 1 \right)^2 dx$$

directly penalizes deviations from the SDF property.

However, (Yang et al., 2023) shows that, as network expressivity increases, minimization of $L_\text{eik}$ approaches a continuum PDE:

$$\partial_t \phi = \nabla \cdot \left( \left(1 - \frac{1}{|\nabla\phi|}\right) \nabla\phi \right)$$

Locally, for $|\nabla\phi| > 1$, this flow is backward-parabolic (unstable), leading to the amplification of high-frequency noise and to sub-optimal local minima corresponding to "pseudo-SDFs" with incorrect surface geometry or topology.
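
For context, the displayed flow is the formal $L^2$ gradient flow of $L_\text{eik}$; a brief variational calculation (a sketch with boundary terms dropped, not the paper's full continuum-limit argument) makes the connection explicit:

```latex
% Formal L^2 gradient flow of the eikonal loss (boundary terms dropped).
\begin{align*}
\left.\tfrac{d}{d\epsilon}\, L_\text{eik}(\phi + \epsilon\psi)\right|_{\epsilon=0}
  &= \int_\Omega (|\nabla\phi| - 1)\,\frac{\nabla\phi \cdot \nabla\psi}{|\nabla\phi|}\, dx \\
  &= -\int_\Omega \nabla \cdot\!\Big(\big(1 - \tfrac{1}{|\nabla\phi|}\big)\nabla\phi\Big)\,\psi \, dx ,
\end{align*}
% so steepest descent  \partial_t \phi = -\delta L_\text{eik}/\delta\phi  reproduces
% \partial_t \phi = \nabla \cdot \big( (1 - 1/|\nabla\phi|)\, \nabla\phi \big).
```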

Remedies, such as adding Laplacian or curvature penalties, stabilize optimization but cause excessive smoothing (over-regularization), degrading geometric detail.

4. Theoretical Limitations and Stability Issues

Satisfying the eikonal equation is necessary but not sufficient for a function to be a true SDF (Wang et al., 21 Nov 2024): there is a large class of fields with gradient norm one almost everywhere that do not represent the signed distance. Furthermore, the gradient descent dynamics associated with the $L^2$ eikonal loss are not, in general, well-posed, owing to the local backward-diffusion effects described above, which can result in instability or convergence to undesirable minima.

In practice, this means that minimizing eikonal loss may yield functions that globally deviate from the true distance, especially under limited sampling or for high-genus/topologically complex surfaces.

5. Modern Stabilization and Sufficient Losses

Approaches to regularization and sufficiency diverge. One line of work stabilizes the eikonal gradient flow by adding a directional-divergence penalty (Yang et al., 2023):

$$L_{\mathrm{L.n.}}(\phi) = \int_\Omega \left| \nabla\phi^\top D^2\phi\, \nabla\phi \right| dx$$

This penalizes only the normal-direction Hessian, stabilizing the flow while leaving tangential curvature unregularized, thus preserving fine geometry.
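
A hedged PyTorch sketch of this penalty is shown below. Computing the normal-direction term with a Hessian-vector product through a second backward pass, and stopping gradients through the direction vector, are implementation choices for illustration rather than prescriptions from the cited work.

```python
import torch

def directional_divergence_loss(phi_net, x):
    """E_x[ | grad(phi)^T Hess(phi) grad(phi) | ]: curvature penalty along the normal only."""
    x = x.clone().requires_grad_(True)
    phi = phi_net(x)
    grad_phi = torch.autograd.grad(
        phi, x, grad_outputs=torch.ones_like(phi), create_graph=True
    )[0]                                            # (N, d)
    n = grad_phi.detach()                           # fixed direction (stop-gradient choice)
    # Hessian-vector product H n, obtained by differentiating (grad_phi . n).
    Hn = torch.autograd.grad(
        (grad_phi * n).sum(), x, create_graph=True
    )[0]                                            # (N, d)
    return (n * Hn).sum(dim=-1).abs().mean()        # | n^T H n | averaged over samples
```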

A second line of work (Wang et al., 21 Nov 2024) replaces the original eikonal constraint with a screened Poisson ("heat") functional whose minimizer is asymptotically sufficient for the true distance field:

$$L_\text{heat} = \frac{1}{2}\int_\Omega e^{-2\lambda|u|} \left( \|\nabla u\|^2 + 1 \right) dx$$

The loss is derived from the Dirichlet energy of $h(x) = e^{-\lambda|u(x)|}$, the solution to a screened Poisson equation with the surface as its boundary. As $\lambda \to \infty$, minimization ensures convergence to the true SDF, while conferring both temporal and spatial stability on the optimizer.

This sufficiency is provable via asymptotic bounds: $(1/\lambda)\log h_\lambda(x) + d_\Gamma(x) = O(1/\lambda)$, where $d_\Gamma(x)$ is the true distance.
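
A minimal PyTorch sketch of this loss follows; the value of `lam` and the uniform Monte Carlo sampling are assumptions for illustration, and any surface or sign supervision terms used alongside it in practice are omitted.

```python
import torch

def heat_loss(u_net, x, lam=10.0):
    """0.5 * E_x[ exp(-2*lambda*|u|) * (||grad u||^2 + 1) ],
    the Dirichlet-energy form derived from h = exp(-lambda*|u|)."""
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    grad_u = torch.autograd.grad(
        u, x, grad_outputs=torch.ones_like(u), create_graph=True
    )[0]
    weight = torch.exp(-2.0 * lam * u.abs()).squeeze(-1)    # e^{-2 lambda |u|}, shape (N,)
    return 0.5 * (weight * (grad_u.norm(dim=-1) ** 2 + 1.0)).mean()
```

Larger `lam` tightens the asymptotic bound but concentrates the weight near the surface, so sampling density there matters.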

6. Applications and Numerical Impact

Eikonal loss and its stabilized or sufficiency-augmented variants are deployed in:

  • Seismic traveltime modeling and inversion using PINNs, where neural solvers with eikonal-based losses achieve orders-of-magnitude improvements over fast marching or fast sweeping methods, especially for tasks requiring repeated solves or surrogate modeling (source-to-traveltime mapping as a learned model) (Grubas et al., 2022, Waheed et al., 2020).
  • Implicit neural representations (INRs) of geometry, where eikonal loss enforces SDF constraints for 3D objects, with stabilized losses enabling accurate reproduction of high-genus shapes and robust convergence (Yang et al., 2023, Wang et al., 21 Nov 2024).
  • Geometric rendering workflows (e.g., sphere tracing), where improved SDF accuracy reduces ray-marching steps and increases rendering speed (Wang et al., 21 Nov 2024); a minimal tracing loop is sketched below.
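
To make the rendering connection concrete, here is a minimal sphere-tracing sketch (the `sdf` callable, step count, and tolerances are illustrative). A true SDF lets each step cover the full distance to the nearest surface; values that underestimate the distance force more iterations, while overestimates risk stepping past the surface.

```python
import torch

def sphere_trace(sdf, origins, directions, n_steps=64, eps=1e-4, t_max=10.0):
    """March each ray by the queried SDF value until |sdf| < eps or the far bound t_max.

    sdf:        callable mapping (N, 3) points to (N,) or (N, 1) signed distances.
    origins:    (N, 3) ray origins; directions: (N, 3) unit ray directions.
    """
    t = torch.zeros(origins.shape[0], device=origins.device)
    hit = torch.zeros_like(t, dtype=torch.bool)
    for _ in range(n_steps):
        points = origins + t.unsqueeze(-1) * directions
        d = sdf(points).squeeze(-1)
        hit = hit | (d.abs() < eps)
        # Advance only rays that have neither hit the surface nor left the far bound.
        t = torch.where(hit | (t > t_max), t, t + d)
    return t, hit
```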

Representative quantitative results include:

  • NES RMAE: 0.2–0.4% vs. 5–12% for previous PINNs.
  • NES training time: minutes (single GPU, Marmousi 2D) vs. hundreds–thousands of seconds.
  • HotSpot IoU: 0.987 (2D) vs. 0.788 (DiGS) and 0.662 (StEik); Chamfer and Hausdorff errors similarly improved.
  • Sphere tracing: 20–30% fewer steps with HotSpot SDFs.

7. Trade-offs, Limitations, and Perspectives

While eikonal loss provides an accessible, mesh-free mechanism for enforcing first-order geometric or physical PDE constraints via automatic differentiation, its use is beset by several challenges:

  • Instability in the continuum limit and potential for pseudo-optimal solutions.
  • Over-smoothing by conventional stabilizers (Laplacian, area penalization).
  • Incomplete enforcement of the global properties required of a true signed distance function.

Recent advances employ loss functionals derived from deeper PDE theory (directional-divergence, screened Poisson/heat losses), shown to be asymptotically sufficient and to endow the optimization with robust temporal and spatial stability.

A plausible implication is that further progress will require the integration of PDE-informed sufficiency into the core objective to ensure consistency of neural representations with underlying differential-geometric properties, especially as models increase in expressivity and as applications demand high fidelity in geometry, topology, or physical accuracy.
