Neural Eikonal Solver (NES)
- Neural Eikonal Solvers (NES) are algorithms that integrate neural networks with numerical PDE methods to compute first-arrival wavefronts in inhomogeneous media.
- They employ architectures such as MLPs, PINNs, and equivariant neural fields to replace or augment classical finite-difference schemes, often achieving higher empirical accuracy.
- NES methods find practical applications in seismic imaging, geodesic distance computation, and real-time surrogate modeling across complex, high-dimensional domains.
A Neural Eikonal Solver (NES) is a class of algorithms employing neural networks—most often multilayer perceptrons (MLPs), physics-informed neural networks (PINNs), or equivariant neural fields—to compute, represent, or accelerate numerical solutions to the Eikonal equation, which governs first-arrival wavefront propagation in inhomogeneous media. NESs unify machine learning and numerical PDE techniques, yielding mesh-free, scalable, and often highly accurate solvers that address limitations of classical finite-difference approaches, particularly in high-dimensional, geometrically complex, or data-intensive settings.
1. The Eikonal Equation and Its Significance
The Eikonal equation describes the evolution of a propagating front in a scalar speed field on a domain Ω or a Riemannian manifold 𝓜:
- Flat (Euclidean) domain:

$$\|\nabla T(\mathbf{x})\| = \frac{1}{v(\mathbf{x})} = s(\mathbf{x}), \quad \mathbf{x} \in \Omega, \qquad T(\mathbf{x}) = 0 \ \text{for} \ \mathbf{x} \in \Gamma,$$

where $T(\mathbf{x})$ is the earliest arrival time from a boundary or source set $\Gamma$, $s(\mathbf{x})$ is the slowness ($s = 1/v$), and $\nabla$ denotes the Euclidean gradient.
- Riemannian geometry:

$$\|\nabla_{g} T(\mathbf{x})\|_{g} = s(\mathbf{x}), \quad \mathbf{x} \in \mathcal{M},$$

where $\nabla_{g}$ is the gradient with respect to the manifold metric $g$.
The geometric interpretation is that $T(\mathbf{x})$ encodes the geodesic (shortest-path) distance or minimum travel time, for spatially varying speed $v(\mathbf{x})$, from $\Gamma$ to $\mathbf{x}$ (Lichtenstein et al., 2019, Smith et al., 2020, García-Castellanos et al., 21 May 2025).
2. Core Architectures and Methodological Paradigms
NES methodologies span several neural architectures and algorithmic frameworks:
2.1 Neural "Local Solvers" Integrated in Upwind Schemes
“Deep Eikonal Solvers” replace the classical finite-difference update of fast marching methods (FMM) with a trained neural network that predicts the local update given a patch of nearby values (Lichtenstein et al., 2019):
- On Cartesian grids: An MLP operates on a vector of normalized neighbor differences.
- On triangulated surfaces: A PointNet-style architecture encodes mesh geometry and arrival times.
This approach preserves the upwind/Dijkstra-style global ordering and attains higher empirical accuracy and order of convergence by learning data-driven local stencils.
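A minimal PyTorch sketch of the idea follows, with the network predicting a non-negative increment over the smallest neighbor time so that the Dijkstra-style ordering is preserved; the class name `LocalSolverMLP`, the patch encoding, and the layer sizes are illustrative assumptions rather than the exact design of Lichtenstein et al. (2019).

```python
import torch
import torch.nn as nn

class LocalSolverMLP(nn.Module):
    """Predicts the arrival time at a grid point from its neighborhood (illustrative)."""
    def __init__(self, n_neighbors: int = 8, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neighbors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # non-negative increment -> causality
        )

    def forward(self, neighbor_times, slowness, h):
        # Normalized neighbor differences, as in upwind stencils: subtract the smallest
        # neighbor time and rescale by the local cell traveltime h * slowness.
        t_min = neighbor_times.min(dim=-1).values
        scale = (h * slowness).unsqueeze(-1)
        diffs = (neighbor_times - t_min.unsqueeze(-1)) / scale
        # Arrival = smallest neighbor time + learned non-negative increment, which
        # preserves the Dijkstra/FMM monotone (upwind) ordering of updates.
        return t_min + scale.squeeze(-1) * self.net(diffs).squeeze(-1)

# Inside an FMM sweep, this call would replace the quadratic finite-difference update:
solver = LocalSolverMLP()
t_new = solver(torch.rand(32, 8) + 1.0, slowness=torch.ones(32), h=0.01)
```

Training targets for such a local solver would come from a high-accuracy reference solver, i.e., the supervised loss described in Section 3.2.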
2.2 Physics-Informed Neural Networks (PINNs)
PINN-based NESs treat the Eikonal equation as a soft constraint within the loss function, optimizing all network weights so that the output traveltimes satisfy the PDE at randomly sampled spatial points (Waheed et al., 2020, Waheed et al., 2021, Song et al., 2024, Grubas et al., 2022):
- For single-source traveltimes, PINNs output a scalar traveltime field $T(\mathbf{x})$ or a factored form $T = T_0\,\tau$ with a known reference solution $T_0$ (see the loss sketch after this list).
- For source-receiver (two-point) traveltimes, networks take both source and receiver locations as input.
- Key advances include source singularity regularization via "factoring" (multiplying out a known singularity), adaptive loss weighting, surrogate modeling (handling many sources), and transfer learning for velocity inversion.
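A minimal PyTorch sketch of such a factored physics-informed loss, assuming a multiplicative ansatz $T = T_0\,\tau$ with a distance-based reference $T_0(\mathbf{x}) = \|\mathbf{x}-\mathbf{x}_s\|/v_s$; the function name, network architecture, and sampling below are illustrative assumptions, not the exact formulation of any cited paper.

```python
import torch

def factored_eikonal_loss(tau_net, x, x_src, v, v_src):
    """PDE residual loss for the multiplicative ansatz T(x) = T0(x) * tau(x),
    where T0(x) = |x - x_src| / v_src absorbs the point-source singularity."""
    T0 = (x - x_src).norm(dim=-1, keepdim=True) / v_src
    T = T0 * tau_net(x)                                 # factored traveltime
    # Analytic gradient of the network output via automatic differentiation.
    grad_T = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    pde = (grad_T.norm(dim=-1) - 1.0 / v) ** 2          # residual of |grad T| = 1/v
    bc = (tau_net(x_src.unsqueeze(0)) - 1.0) ** 2       # tau(x_src) = 1 fixes the source
    return pde.mean() + bc.mean()

# Illustrative training step on random collocation points in the unit square.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1), torch.nn.Softplus())        # Softplus keeps tau > 0
x = torch.rand(1024, 2, requires_grad=True)             # collocation points
loss = factored_eikonal_loss(net, x, torch.tensor([0.5, 0.5]),
                             v=torch.ones(1024), v_src=1.0)
loss.backward()
```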
2.3 Continuous, Mesh-Free Neural Representations
Approaches such as EikoNet (Smith et al., 2020) and PINN-based methods (Waheed et al., 2021, Grubas et al., 2022, Song et al., 2024, García-Castellanos et al., 21 May 2025) represent the traveltime (or correction factor to a proxy solution) as a continuous function of spatial (and/or source) coordinates. Network gradients are computed analytically via automatic differentiation, enabling precise evaluation at arbitrary points.
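For instance, a trained network can be evaluated at arbitrary coordinates together with its exact gradient, whose direction approximates the local ray direction (a minimal sketch; `net` and the tensor shapes are assumptions):

```python
import torch

def query(net, pts):
    """Evaluate traveltime and its analytic gradient at arbitrary off-grid points."""
    pts = pts.clone().requires_grad_(True)       # arbitrary coordinates, no mesh needed
    T = net(pts)
    grad_T, = torch.autograd.grad(T.sum(), pts)  # exact gradient via autodiff
    return T.detach(), grad_T                    # grad_T / |grad_T|: local ray direction
```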
2.4 Equivariant Neural Fields and Meta-Learning
Equivariant Eikonal Neural Networks (García-Castellanos et al., 21 May 2025) build NESs whose solution fields are equivariant under the action of Lie groups (e.g., the special Euclidean group $\mathrm{SE}(n)$ of translations and rotations), using cross-attention and invariant representations of latent velocity models. Meta-learning enables rapid adaptation to new speed fields or geometric domains, while group-equivariant conditioning yields direct generalization to Euclidean, spherical, or hyperbolic manifolds.
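Concretely, equivariance of the traveltime solution operator can be stated as follows (a standard formulation of the symmetry; the exact conditioning mechanism in the cited work differs in detail). If $T_v$ denotes the solution for velocity model $v$ and $(g \cdot v)(\mathbf{x}) := v(g^{-1}\mathbf{x})$ the transformed model, then

$$T_{g \cdot v}(g \cdot \mathbf{x}) = T_{v}(\mathbf{x}) \quad \text{for all } g \in G;$$

for two-point solvers, both source and receiver transform jointly: $T_{g \cdot v}(g \cdot \mathbf{x}_s,\, g \cdot \mathbf{x}_r) = T_{v}(\mathbf{x}_s, \mathbf{x}_r)$.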
3. Loss Functions, Factorization, and Physics Constraints
3.1 Factored Formulations
- Classical: A direct PINN loss on $T$ is ill-conditioned near point sources due to the singularity at $\mathbf{x} = \mathbf{x}_s$.
- Factored: Solutions are written $T = T_0\,\tau$ (multiplicative) or $T = T_0 + \tau$ (additive), where the reference $T_0$ captures the geometric singularity at the source and the smooth factor $\tau$ is learned by the network (Waheed et al., 2020, Waheed et al., 2021, Song et al., 2024, Grubas et al., 2022); the factored residual is made explicit after this list.
- New factorizations: PINNPStomo proposes replacing the background traveltime $T_0$ by the pure distance to the source, eliminating the dependence on an initial velocity model, confining the unknown factor's range, and improving convergence and robustness (Song et al., 2024, Grubas et al., 2022).
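Substituting the multiplicative ansatz $T = T_0\,\tau$ into the Eikonal equation makes the factored residual explicit (a sketch using the distance-based $T_0$ and the notation of Section 1):

$$\nabla T = \tau\,\nabla T_0 + T_0\,\nabla \tau, \qquad \big\|\tau\,\nabla T_0 + T_0\,\nabla \tau\big\| = \frac{1}{v(\mathbf{x})}.$$

For $T_0(\mathbf{x}) = \|\mathbf{x} - \mathbf{x}_s\|$, the gradient $\nabla T_0 = (\mathbf{x} - \mathbf{x}_s)/\|\mathbf{x} - \mathbf{x}_s\|$ is a unit vector, so the singular geometry at the source is handled analytically and the network only has to learn the smooth factor $\tau$, with $\tau(\mathbf{x}_s) = 1/v(\mathbf{x}_s)$ fixing its value at the singularity.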
3.2 Physics-Informed Loss
- Supervised loss: When ground truth or high-accuracy reference solvers are available, mean squared error between predicted and reference traveltimes is used (Lichtenstein et al., 2019).
- Physics loss: The main loss is typically the squared ($L_2$) residual or an $L_1$-norm of the Eikonal PDE residual (possibly in factored form), with adaptive weighting of PDE, positivity, and boundary terms (Waheed et al., 2020, Waheed et al., 2021, Grubas et al., 2022); a representative composite loss is sketched after this list.
- Hamiltonian loss: In regimes with caustics, a non-symmetric loss on the Hamiltonian form of the equation supports robust training and better handles fronts with singularities (Grubas et al., 2022).
- Meta-learning/autodecoding: For family-generalization, losses are defined over latent-parameterized model ensembles and optimized jointly (García-Castellanos et al., 21 May 2025).
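A representative composite loss combining these terms might read as follows (a sketch; the exact residual form and weighting schemes vary across the cited methods):

$$\mathcal{L}(\theta) = \frac{\lambda_{\mathrm{pde}}}{N} \sum_{i=1}^{N} \Big( \|\nabla T_\theta(\mathbf{x}_i)\| - \frac{1}{v(\mathbf{x}_i)} \Big)^{2} + \lambda_{\mathrm{bc}}\, \big| T_\theta(\mathbf{x}_s) \big|^{2} + \frac{\lambda_{+}}{N} \sum_{i=1}^{N} \max\!\big(0,\, -T_\theta(\mathbf{x}_i)\big)^{2},$$

where the $\lambda$ weights are fixed or adapted during training, the first term enforces the PDE at collocation points, the second the zero traveltime at the source, and the third the positivity of traveltimes.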
4. Performance, Accuracy, and Generalization
4.1 Quantitative Accuracy
NESs match or surpass the accuracy of classical fast marching methods:
| Approach | RMAE (Marmousi) | Training Time | Reported in |
|---|---|---|---|
| NES-OP (one-point) (Grubas et al., 2022) | 0.2–0.6% | 40 min | (Grubas et al., 2022) |
| PINNeik (one-point) (Waheed et al., 2020) | 12.4% | 330 s | (Grubas et al., 2022) |
| NES-TP (two-point) (Grubas et al., 2022) | 0.4–0.9% | 16 min | (Grubas et al., 2022) |
| EikoNet (two-point) (Smith et al., 2020) | 5.4% | 9600 s | (Grubas et al., 2022) |
EikoNet achieves grid-free, continuous solutions, avoiding interpolation artifacts and matching FMM accuracy for various 3D velocity models (Smith et al., 2020). Learned local solvers also exhibit a higher empirical order of accuracy than classical first-order finite-difference schemes (Lichtenstein et al., 2019).
4.2 Scalability and Computational Efficiency
- NESs are highly parallelizable on GPUs, allowing real-time inference for millions of source-receiver pairs.
- Training cost exceeds a single FMM/PDE solve but is amortized over many evaluations, particularly in inversion or surrogate settings (Waheed et al., 2021).
- Two-point NESs compress traveltime lookup tables by orders of magnitude, with similar or better inference speed than FMM (see the batched-inference sketch below).
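A minimal sketch of such amortized two-point inference, assuming a trained network `net` that maps concatenated source-receiver coordinates to a traveltime (the function name and batch size are illustrative):

```python
import torch

@torch.no_grad()
def traveltime_table(net, sources, receivers, batch_size=65536):
    """Evaluate a two-point network on all source-receiver pairs in GPU-sized batches."""
    S, R = len(sources), len(receivers)
    pairs = torch.cat([sources.repeat_interleave(R, dim=0),  # (S*R, d) source coords
                       receivers.repeat(S, 1)], dim=-1)      # (S*R, d) receiver coords
    out = [net(chunk) for chunk in pairs.split(batch_size)]
    return torch.cat(out).view(S, R)                         # dense traveltime table
```

The network weights thus act as a compressed, differentiable lookup table: what is stored is the parameter vector rather than a dense S × R grid of traveltimes.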
4.3 Generalization
- NESs generalize across different shapes (e.g., trained on TOSCA, tested on SHREC (Lichtenstein et al., 2019)), source locations, and mesh resolutions.
- Equivariant NESs provide direct generalization under group actions and across geometric spaces (García-Castellanos et al., 21 May 2025).
5. Applications and Extensions
NESs have been applied to a wide range of domains:
- Seismic imaging and tomography: Multi-parameter inversion for P- and S-wave velocities, hypocenter localization, distributed acoustic sensing, and Kirchhoff migration (Smith et al., 2020, Song et al., 2024, Grubas et al., 2022).
- Geodesic distance and geometry processing: On both flat and curved manifolds, including surfaces of arbitrary genus (Lichtenstein et al., 2019, García-Castellanos et al., 21 May 2025).
- Ray modeling and multipathing: NESs enable direct gradient-based detection of secondary arrivals (Smith et al., 2020).
- Arbitrary physical domains and anisotropic media: Simple modifications to the PINN residual accommodate anisotropy, attenuation, and topographic boundaries (Waheed et al., 2021, Waheed et al., 2020, Song et al., 2024).
- Neural surrogate modeling and real-time updates: Surrogate DNNs enable single-pass traveltime evaluation for new sources, and transfer learning accelerates iterative inversion cycles (Waheed et al., 2021, Waheed et al., 2020).
6. Limitations and Future Directions
Despite their advantages, NESs face specific limitations:
- Convergence in high-heterogeneity or sharp-contrast regimes: PINN-based NESs require careful factoring and may need multi-stage adaptive optimization schemes (Waheed et al., 2021, Waheed et al., 2020, Song et al., 2024).
- Hyperparameter and architecture tuning: Network capacity, collocation sampling, and learning rates must be selected for each problem class (Grubas et al., 2022, Waheed et al., 2021).
- Theoretical guarantees: Rigorous error bounds and stability results for neural PDE solvers remain an active research area.
- Optimal data sampling: Adaptive refinement and meta-learning protocols are under development to maximize sample efficiency (García-Castellanos et al., 21 May 2025).
- Extension to viscosity solutions and non-Euclidean metrics: Current NESs focus on first-arrival (shortest) solutions, with research ongoing in distinguishing physically relevant solutions in the presence of multipaths and viscosity effects (Grubas et al., 2022, García-Castellanos et al., 21 May 2025).
Ongoing work explores scalable, meta-learned NESs for extremely high-dimensional, multi-geometric, and hybrid inverse problems; integration of explicit PDE residuals as auxiliary loss (PINN-style); and robust solvers for anisotropic, attenuating, or topographic domains (Song et al., 2024, García-Castellanos et al., 21 May 2025).
7. Comparative Summary of Methods and Empirical Performance
| NES Variant and Reference | Key Features | Scalability | Typical Relative Error | Special Capabilities |
|---|---|---|---|---|
| Deep Eikonal Solver (Lichtenstein et al., 2019) | Neural local solver in FMM | O(N log N) | 2–3× lower than FMM | Surfaces, complex geometries |
| EikoNet (Smith et al., 2020) | Grid-free, two-point PINN | Batched GPU queries | RMS error ≈ FMM (0.03–0.04) | Continuous, multipathing, 3D, GPU |
| PINNeik (Waheed et al., 2020) | PINN w/ factored form | Mesh-free | <1% in 2D smooth models | Anisotropy, topography, transfer, surrogate |
| Neural Eikonal Solver (Grubas et al., 2022) | Bounded factorization, Hamiltonian loss | 2D/3D, minutes | 0.1–0.9% (Marmousi) | Handles caustics, compact two-point DNN |
| PINNPStomo (Song et al., 2024) | New factorization, twin NN | 2D/3D, tomography | ≤5% in Overthrust, Marmousi | Joint P/S-wave inversion, no v₀ dependence |
| Equivariant NES (García-Castellanos et al., 21 May 2025) | Equivariance, meta-learning | Homogeneous spaces | Relative error 1–3% (OpenFWI 2D classes) | Arbitrary manifolds, steerable, meta-adaptation |
NESs thus represent a unification and extension of numerical geometry, PINNs, operator learning, and geometric deep learning techniques for the Eikonal equation, offering mesh-independence, scalability, adaptability, and, with suitable architecture and loss engineering, state-of-the-art accuracy for a broad range of scientific and engineering applications.