Neural Eikonal Solver (NES)

Updated 6 January 2026
  • Neural Eikonal Solvers (NES) are algorithms that integrate neural networks with numerical PDE methods to compute first-arrival wavefronts in inhomogeneous media.
  • They employ architectures such as MLPs, PINNs, and equivariant neural fields in place of classical finite-difference schemes, often matching or surpassing their empirical accuracy.
  • NES methods find practical applications in seismic imaging, geodesic distance computation, and real-time surrogate modeling across complex, high-dimensional domains.

A Neural Eikonal Solver (NES) is a class of algorithms employing neural networks—most often multilayer perceptrons (MLPs), physics-informed neural networks (PINNs), or equivariant neural fields—to compute, represent, or accelerate numerical solutions to the Eikonal equation, which governs first-arrival wavefront propagation in inhomogeneous media. NESs unify machine learning and numerical PDE techniques, yielding mesh-free, scalable, and often highly accurate solvers that address limitations of classical finite-difference approaches, particularly in high-dimensional, geometrically complex, or data-intensive settings.

1. The Eikonal Equation and Its Significance

The Eikonal equation describes the evolution of a propagating front in a scalar speed field on a domain $\Omega$ or a Riemannian manifold $\mathcal{M}$:

  • Flat (Euclidean) domain:

$$|\nabla T(x)| = F(x), \quad x \in \Omega \setminus \Gamma, \qquad T(x) = 0, \ x \in \Gamma$$

where $T(x)$ is the earliest arrival time from a boundary or source set $\Gamma$, $F(x)$ is the slowness ($= 1/v(x)$), and $\nabla$ denotes the Euclidean gradient.

  • Riemannian geometry:

$$|\nabla_{\mathcal{G}} T(x)| = F(x), \quad x \in \mathcal{M} \setminus \Gamma$$

where $\nabla_{\mathcal{G}}$ is the gradient with respect to the manifold metric $\mathcal{G}$.

The geometric interpretation is that $T(x)$ encodes the geodesic (shortest-path) distance or, for a spatially varying speed $v(x)$, the minimum travel time from $\Gamma$ to $x$ (Lichtenstein et al., 2019, Smith et al., 2020, García-Castellanos et al., 21 May 2025).
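
As a concrete check of this definition, for a constant speed $v$ and a point source at $x_s$ the exact solution is $T(x) = \|x - x_s\|/v$, whose gradient has norm $1/v$ everywhere away from the source. The snippet below is an illustrative sketch, not taken from the cited papers; the speed value and source location are arbitrary.

```python
import numpy as np

# Constant-speed sanity check: T(x) = |x - x_s| / v solves |grad T| = 1/v = F.
v = 2.0                                      # constant wave speed (arbitrary choice)
x_s = np.array([0.3, 0.7])                   # point-source location

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(1000, 2))   # evaluation points away from the source

r = x - x_s
dist = np.linalg.norm(r, axis=1)
grad_T = r / (v * dist[:, None])             # analytic gradient of |x - x_s| / v

residual = np.linalg.norm(grad_T, axis=1) - 1.0 / v
print(np.max(np.abs(residual)))              # ~1e-16: the Eikonal equation is satisfied
```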

2. Core Architectures and Methodological Paradigms

NES methodologies span several neural architectures and algorithmic frameworks:

2.1 Neural "Local Solvers" Integrated in Upwind Schemes

“Deep Eikonal Solvers” replace the classical finite-difference update of fast marching methods (FMM) with a trained neural network that predicts the local update given a patch of nearby values (Lichtenstein et al., 2019):

  • On Cartesian grids: An MLP operates on a vector of normalized neighbor differences.
  • On triangulated surfaces: A PointNet-style architecture encodes mesh geometry and arrival times.

This approach preserves the upwind/Dijkstra-style global ordering and attains higher empirical accuracy and order of convergence by learning data-driven local stencils.
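
A minimal sketch of such a learned local update on a Cartesian grid is shown below. The interface (a small MLP mapping normalized neighbor arrival times and the local slowness to the center-node traveltime) follows the description above, but the patch size, normalization, and layer sizes are illustrative assumptions, not the published design.

```python
import torch
import torch.nn as nn

class LocalEikonalUpdate(nn.Module):
    """Hypothetical neural stand-in for the FMM finite-difference local update."""
    def __init__(self, n_neighbors=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neighbors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, neighbor_T, slowness, h):
        # neighbor_T: (batch, n_neighbors) accepted arrival times around the node
        # slowness:   (batch, 1) local slowness F(x);  h: grid spacing
        t_min = neighbor_T.min(dim=1, keepdim=True).values
        scale = h * slowness                     # local traveltime scale of one grid step
        feats = (neighbor_T - t_min) / scale     # dimensionless, shift-invariant inputs
        return t_min + scale * self.net(feats)   # predicted center-node traveltime

update = LocalEikonalUpdate()
T_new = update(torch.rand(4, 8), torch.full((4, 1), 0.5), h=0.01)
```

The module would be called inside the usual fast-marching loop in place of the finite-difference stencil, so the heap-based (Dijkstra-like) ordering is preserved.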

2.2 Physics-Informed Neural Networks (PINNs)

PINN-based NESs treat the Eikonal equation as a soft constraint within the loss function, optimizing all network weights so that the output traveltimes satisfy the PDE at randomly sampled spatial points (Waheed et al., 2020, Waheed et al., 2021, Song et al., 2024, Grubas et al., 2022):

  • For single-source traveltimes, PINNs output a scalar $T(x)$ or a factored form $T_0(x)\,\tau(x)$, with $T_0$ a reference solution (a minimal sketch of the corresponding training loss follows this list).
  • For source-receiver (two-point) traveltimes, networks take both source and receiver locations as input.
  • Key advances include source singularity regularization via "factoring" (multiplying out a known singularity), adaptive loss weighting, surrogate modeling (handling many sources), and transfer learning for velocity inversion.
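
The sketch below illustrates the factored, single-source physics-informed loss described above, using a constant-velocity reference $T_0(x) = \|x - x_s\|/v_0$; the network size, collocation sampling, toy velocity model, and optimizer settings are illustrative assumptions rather than the configuration of any cited paper.

```python
import torch
import torch.nn as nn

class TauNet(nn.Module):
    """Smooth multiplicative correction tau(x) in the factored form T = T0 * tau."""
    def __init__(self, dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return 1.0 + self.net(x)    # tau stays near 1 when the medium is near the reference

x_s = torch.tensor([0.0, 0.0])      # source location
v0 = 2.0                            # reference (background) velocity defining T0

def velocity(x):                    # toy heterogeneous velocity model
    return 2.0 + 0.5 * torch.sin(3.0 * x[:, 0]) * torch.cos(3.0 * x[:, 1])

model = TauNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(1024, 2) * 2.0 - 1.0                  # random collocation points in [-1, 1]^2
    x.requires_grad_(True)
    T0 = torch.linalg.norm(x - x_s, dim=1, keepdim=True) / v0
    T = T0 * model(x)                                    # factored traveltime T = T0 * tau
    grad_T, = torch.autograd.grad(T.sum(), x, create_graph=True)
    residual = grad_T.norm(dim=1) - 1.0 / velocity(x)    # Eikonal residual |grad T| - 1/v
    loss = (residual ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```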

2.3 Continuous, Mesh-Free Neural Representations

Approaches such as EikoNet (Smith et al., 2020) and PINN-based methods (Waheed et al., 2021, Grubas et al., 2022, Song et al., 2024, García-Castellanos et al., 21 May 2025) represent the traveltime (or correction factor to a proxy solution) as a continuous function of spatial (and/or source) coordinates. Network gradients are computed analytically via automatic differentiation, enabling precise evaluation at arbitrary points.
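
Once trained, such a network can be queried at arbitrary off-grid points, and $\nabla T$ is obtained analytically by automatic differentiation (for a converged solver, $\nabla T$ is normal to the wavefront with magnitude equal to the local slowness). The snippet below illustrates the query mechanism with an untrained placeholder network.

```python
import torch
import torch.nn as nn

# Placeholder for a trained mesh-free traveltime surrogate T(x); untrained here,
# so the values are meaningless, but the query mechanism is identical.
T_net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))

x = torch.tensor([[0.10, 0.25, 1.30],        # arbitrary off-grid query points:
                  [2.71, 0.05, 0.40]],       # no lookup table or interpolation needed
                 requires_grad=True)

T = T_net(x)
grad_T, = torch.autograd.grad(T.sum(), x)    # analytic gradients via autodiff
print(T.detach().squeeze(-1))
print(grad_T)                                # for a trained solver, |grad_T| = 1/v(x)
```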

2.4 Equivariant Neural Fields and Meta-Learning

Equivariant Eikonal Neural Networks (García-Castellanos et al., 21 May 2025) build NESs whose solution fields are equivariant under the action of Lie groups (e.g., $SE(n)$ for translations and rotations), using cross-attention and invariant representations of latent velocity models. Meta-learning enables rapid adaptation to new speed fields or geometric domains, while group-equivariant conditioning yields direct generalization to Euclidean, spherical, or hyperbolic manifolds.
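
As a toy illustration of the equivariance idea (deliberately much simpler than the cross-attention architecture of García-Castellanos et al.), conditioning the field only on relative coordinates between query points and a latent point cloud yields translation equivariance by construction: translating the latent geometry and the query together leaves the prediction unchanged.

```python
import torch
import torch.nn as nn

class RelativeCoordField(nn.Module):
    """Toy neural field conditioned on query-minus-latent coordinates only."""
    def __init__(self, n_latents=8, dim=2, width=32):
        super().__init__()
        self.pos = nn.Parameter(torch.randn(n_latents, dim))      # latent positions p_i
        self.feat = nn.Parameter(torch.randn(n_latents, width))   # latent features z_i
        self.mlp = nn.Sequential(nn.Linear(dim + width, width), nn.Tanh(),
                                 nn.Linear(width, 1))

    def forward(self, x, pos):                                    # x: (N, dim)
        rel = x[:, None, :] - pos[None, :, :]                     # relative coordinates (N, K, dim)
        feat = self.feat[None, :, :].expand(x.shape[0], -1, -1)
        return self.mlp(torch.cat([rel, feat], dim=-1)).mean(dim=1)

field = RelativeCoordField()
x = torch.randn(5, 2)
shift = torch.tensor([3.0, -1.0])
with torch.no_grad():
    equal = torch.allclose(field(x, field.pos),
                           field(x + shift, field.pos + shift), atol=1e-5)
print(equal)  # True: translating queries and latents together leaves the field unchanged
```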

3. Loss Functions, Factorization, and Physics Constraints

3.1 Factored Formulations

  • Classical: Direct PINN loss on $|\nabla T|^2 - 1/v^2 = 0$ is ill-conditioned near point sources due to the singularity at $x_s$.
  • Factored: Solutions are written $T(x) = T_0(x)\,\tau(x)$ or $T(x) = R(x)\,\gamma(x)$, where $T_0$ or $R(x) = \|x - x_s\|$ captures the geometric singularity at the source, and $\tau$, $\gamma$ are smooth factors learned by the network (Waheed et al., 2020, Waheed et al., 2021, Song et al., 2024, Grubas et al., 2022); the resulting residual is written out after this list.
  • New factorizations: PINNPStomo proposes replacing the background traveltime $T_0$ by the pure distance $R$ to remove the dependence on a background velocity model, confining the unknown factor's range and improving convergence and robustness (Song et al., 2024, Grubas et al., 2022).
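
Concretely, as a worked expansion of the factored form above (not a quotation from any single paper), substituting $T(x) = R(x)\,\gamma(x)$ with $R(x) = \|x - x_s\|$ into the Eikonal equation and using $|\nabla R| = 1$ gives

$$\nabla T = \gamma\,\nabla R + R\,\nabla\gamma, \qquad \nabla R = \frac{x - x_s}{\|x - x_s\|}, \qquad \bigl|\gamma\,\nabla R + R\,\nabla\gamma\bigr| = \frac{1}{v(x)}.$$

As $x \to x_s$ the second term vanishes and the condition reduces to $\gamma(x_s) = 1/v(x_s)$: the singular distance factor is handled analytically, while the network only has to learn the smooth, bounded factor $\gamma$.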

3.2 Physics-Informed Loss

  • Supervised loss: When ground truth or high-accuracy reference solvers are available, mean squared error between predicted and reference traveltimes is used (Lichtenstein et al., 2019).
  • Physics loss: The main loss is typically the squared residual or the $L_1$-norm of the Eikonal PDE (possibly in factored form), with adaptive weighting of the PDE, positivity, and boundary terms (Waheed et al., 2020, Waheed et al., 2021, Grubas et al., 2022); a schematic composite loss is sketched after this list.
  • Hamiltonian loss: In regimes with caustics, a non-symmetric, $L_1$-based loss on the Hamiltonian supports robust training and better handles fronts with singularities (Grubas et al., 2022).
  • Meta-learning/autodecoding: For family-generalization, losses are defined over latent-parameterized model ensembles and optimized jointly (García-Castellanos et al., 21 May 2025).
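
The composite loss outlined above can be sketched as follows; the individual weights, and the choice between squared and $L_1$ residuals, are placeholders to be tuned or adapted during training rather than values from the cited papers.

```python
import torch

def eikonal_loss(residual, T_source, T_pred, w_pde=1.0, w_src=10.0, w_pos=1.0,
                 use_l1=False):
    """Composite physics-informed loss:
    - PDE term: Eikonal residual at collocation points (squared or L1),
    - source/boundary term: T must vanish on the source set,
    - positivity term: traveltimes must be non-negative."""
    pde = residual.abs().mean() if use_l1 else residual.pow(2).mean()
    src = T_source.pow(2).mean()
    pos = torch.relu(-T_pred).mean()
    return w_pde * pde + w_src * src + w_pos * pos
```

In the factored formulations of Section 3.1, the source term is satisfied by construction (since $T_0$ or $R$ vanishes at the source) and can be dropped.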

4. Performance, Accuracy, and Generalization

4.1 Quantitative Accuracy

NESs match or surpass the accuracy of classical fast marching methods:

| Approach | RMAE (Marmousi) | Training Time | Reference |
|---|---|---|---|
| NES-OP (one-point) (Grubas et al., 2022) | 0.2–0.6% | 40 min | (Grubas et al., 2022) |
| PINNeik (one-point) (Waheed et al., 2020) | 12.4% | 330 s | (Grubas et al., 2022) |
| NES-TP (two-point) (Grubas et al., 2022) | 0.4–0.9% | 16 min | (Grubas et al., 2022) |
| EikoNet (two-point) (Smith et al., 2020) | 5.4% | 9600 s | (Grubas et al., 2022) |

EikoNet achieves grid-free, continuous solutions, avoiding interpolation artifacts and matching FMM for various 3D velocity models (Smith et al., 2020). NESs exhibit a higher order of accuracy (empirically $r \sim 2$ or higher) compared to classical $O(h)$ or $O(h^2)$ finite-difference schemes (Lichtenstein et al., 2019).
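
For reference, the RMAE figures in the table above are relative mean absolute errors against a high-accuracy reference solver; one common definition (an assumption here, since normalization conventions differ between papers) is:

```python
import numpy as np

def rmae(T_pred, T_ref):
    """Relative mean absolute error (in %) of predicted vs. reference traveltimes."""
    return 100.0 * np.mean(np.abs(T_pred - T_ref)) / np.mean(np.abs(T_ref))
```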

4.2 Scalability and Computational Efficiency

  • NESs are highly parallelizable on GPUs, allowing real-time inference for millions of source-receiver pairs.
  • Training cost exceeds a single FMM/PDE solve but is amortized over many evaluations, particularly in inversion or surrogate settings (Waheed et al., 2021).
  • Two-point NESs compress traveltime lookup tables by orders of magnitude, with similar or better inference speed than FMM.
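
These points translate into a very simple usage pattern: a two-point surrogate evaluates an entire batch of source-receiver pairs in one GPU forward pass, replacing precomputed traveltime tables. The sketch below uses an untrained placeholder network with illustrative sizes.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder two-point surrogate T(x_s, x_r); a trained NES would be loaded instead.
two_point_net = nn.Sequential(nn.Linear(6, 128), nn.Tanh(),
                              nn.Linear(128, 128), nn.Tanh(),
                              nn.Linear(128, 1)).to(device)

sources = torch.rand(1_000_000, 3, device=device)      # one million source positions
receivers = torch.rand(1_000_000, 3, device=device)    # matching receiver positions

with torch.no_grad():                                   # single batched forward pass
    traveltimes = two_point_net(torch.cat([sources, receivers], dim=1)).squeeze(-1)

print(traveltimes.shape)   # torch.Size([1000000]), no traveltime lookup table required
```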

4.3 Generalization

Generalization beyond a single velocity model is addressed through two-point (source-receiver) parameterizations and surrogate training over many sources, transfer learning toward new velocity models, and meta-learned, group-equivariant conditioning that adapts rapidly to new speed fields and geometric domains (Waheed et al., 2021, García-Castellanos et al., 21 May 2025).

5. Applications and Extensions

NESs have been applied to a wide range of domains, including seismic traveltime computation and imaging, traveltime tomography and velocity inversion, geodesic distance computation on surfaces and manifolds, and real-time surrogate modeling over many sources and receivers.

6. Limitations and Future Directions

Despite their advantages, NESs face specific limitations, most notably a per-model training cost that exceeds a single classical FMM/PDE solve and must be amortized over many downstream evaluations.

Ongoing work explores scalable, meta-learned NESs for extremely high-dimensional, multi-geometric, and hybrid inverse problems; integration of explicit PDE residuals as auxiliary loss (PINN-style); and robust solvers for anisotropic, attenuating, or topographic domains (Song et al., 2024, García-Castellanos et al., 21 May 2025).

7. Comparative Summary of Methods and Empirical Performance

| NES Variant and Reference | Key Features | Scalability | Typical Relative Error | Special Capabilities |
|---|---|---|---|---|
| Deep Eikonal Solver (Lichtenstein et al., 2019) | Neural local solver in FMM | O(N log N) | 2–3× lower than FMM | Surfaces, complex geometries |
| EikoNet (Smith et al., 2020) | Grid-free, two-point PINN | $>10^6$ queries/s | RMS error ≈ FMM (0.03–0.04) | Continuous, multipathing, 3D, GPU |
| PINNeik (Waheed et al., 2020) | PINN w/ factored form | Mesh-free | <1% in 2D smooth models | Anisotropy, topography, transfer, surrogate |
| Neural Eikonal Solver (Grubas et al., 2022) | Bounded factorizations | 2D/3D, minutes | 0.1–0.9% (Marmousi) | Handles caustics, compact two-point DFN |
| PINNPStomo (Song et al., 2024) | New factorization, twin NN | 2D/3D, tomography | ≤5% in Overthrust, Marmousi | Joint P/S-wave inversion, no v₀ dependence |
| Equivariant NES (García-Castellanos et al., 21 May 2025) | Equivariance, meta-learning | Homogeneous spaces | RE 1–3% (OpenFWI 2D classes) | Arbitrary manifolds, steerable, meta-adaptation |

NESs thus represent a unification and extension of numerical geometry, PINNs, operator learning, and geometric deep learning techniques for the Eikonal equation, offering mesh-independence, scalability, adaptability, and, with suitable architecture and loss engineering, state-of-the-art accuracy for a broad range of scientific and engineering applications.
