D2E Framework for Variable-Domain PDEs
- D2E is a framework that learns PDE solution mappings on geometrically variable domains using deformation theory and metric embeddings.
- It integrates domain and function encodings with neural operator architectures to ensure continuity and universal approximation even for non-diffeomorphic changes.
- Empirical results demonstrate low error rates and significant speed-ups when D2E is coupled with traditional FEM solvers on complex domain geometries.
The D2E framework, as defined in "A deformation-based framework for learning solution mappings of PDEs defined on varying domains" (Xiao et al., 2 Dec 2024), is a mathematically rigorous approach for learning solution operators of partial differential equations (PDEs) when the domain itself varies, possibly discontinuously, within a broad class of shapes. D2E provides a principled metric-to-Banach-space mapping, leveraging deformation theory, metric embeddings, and neural operator architectures to support data-driven solution representation across families of domains that are homeomorphic but not necessarily diffeomorphic, and it facilitates the integration of these learned operators into large-scale scientific computing.
1. Mathematical Foundation and Problem Setting
The D2E framework operates in the context of parametric PDE solution operators where input data (boundary conditions, source terms) and solutions are defined over a family of domains $\mathcal{O} = \{\Omega\}$, with potentially complex geometrical variability. The set $\mathcal{O}$ is equipped with a metric $d_{\mathcal{O}}$; for instance, on star domains,
$$d_{\mathcal{O}}(\Omega_1, \Omega_2) = |z_1 - z_2| + \|r_1 - r_2\|_{L^\infty(S^{d-1})},$$
where $z_i$ is the centroid and $r_i$ is a Lipschitz radial boundary function over the unit sphere $S^{d-1}$.
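This metric can be discretized by sampling the radial boundary functions at a finite set of directions. A minimal 2-D sketch (the function names, the uniform direction sampling, and the example domains are illustrative, not from the paper):

```python
import numpy as np

def star_metric(z1, r1, z2, r2, n_dirs=256):
    """Discretized star-domain metric: centroid distance plus sup-norm
    distance between radial boundary functions sampled on the unit circle."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    gap = np.abs(r1(thetas) - r2(thetas))
    return np.linalg.norm(np.asarray(z1) - np.asarray(z2)) + gap.max()

# Example: unit disk vs. a wobbly star domain with shifted centroid.
disk = (np.array([0.0, 0.0]), lambda t: np.ones_like(t))
star = (np.array([0.1, 0.0]), lambda t: 1.0 + 0.2 * np.cos(3 * t))
d = star_metric(*disk, *star)   # centroid gap 0.1 + radial gap 0.2
```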
The core input-output space is the disjoint union $\mathcal{X} = \bigsqcup_{\Omega \in \mathcal{O}} X(\Omega)$, where $X(\Omega)$ is a Borel function space (e.g., $L^2(\Omega)$), with the "deformation-pullback" metric
$$d_{\mathcal{X}}\big((\Omega_1, v_1), (\Omega_2, v_2)\big) = d_{\mathcal{O}}(\Omega_1, \Omega_2) + \|v_1 \circ \phi_{\Omega_1} - v_2 \circ \phi_{\Omega_2}\|_{X(\Omega_0)},$$
using a bijective deformation map $\phi_\Omega : \Omega_0 \to \Omega$ from a fixed reference domain $\Omega_0$.
The D2E framework defines the target space as $L^2(B)$, where $B$ is an encompassing bounding box, and each solution $u$ on $\Omega \subset B$ is extended by zero outside $\Omega$ to a function $Eu \in L^2(B)$. Thus, D2E recasts the solution mapping as
$$E \circ \mathcal{G} : K \to L^2(B), \qquad (\Omega, v) \mapsto E\,u_\Omega(v),$$
where $K \subset \mathcal{X}$ is a compact subset for training/analysis.
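As a concrete illustration of the zero-extension, the following sketch evaluates a stand-in solution on a bounding-box grid and sets it to zero outside a hypothetical disk-shaped domain (the grid size, domain, and solution are all illustrative):

```python
import numpy as np

# Bounding box B = [-1, 1]^2 sampled on a uniform grid.
n = 64
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# A hypothetical domain (disk of radius 0.7) and a stand-in solution on it.
inside = X**2 + Y**2 <= 0.7**2
u = np.cos(np.pi * X) * np.cos(np.pi * Y)

# Zero-extension: values agree with u on the domain and vanish outside,
# so every training target lives in the same fixed space L^2(B).
Eu = np.where(inside, u, 0.0)
```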
2. Theoretical Guarantees and Continuity
The D2E framework is grounded in two central theorems:
- Continuity Theorem (Thm. 3.3): The solution operator $\mathcal{G}$ is continuous with respect to the metric $d_{\mathcal{X}}$, and, upon zero-extension, the composition $E \circ \mathcal{G}$ is a continuous map from $K$ into $L^2(B)$.
- Universal Approximation (Thm. 2.3): Any continuous metric-to-Banach map $\mathcal{G} : K \to Y$, with $K$ a compact metric space and $Y$ Banach, can be approximated arbitrarily well by the composition of a finite-dimensional encoder, a continuous finite-dimensional map, and a decoder into $Y$:
$$\sup_{v \in K} \big\| \mathcal{G}(v) - \mathcal{D} \circ f \circ \mathcal{E}(v) \big\|_Y < \epsilon,$$
for suitable encoder $\mathcal{E} : K \to \mathbb{R}^n$, continuous map $f : \mathbb{R}^n \to \mathbb{R}^m$, decoder $\mathcal{D} : \mathbb{R}^m \to Y$, and dimensions $n, m$.
These results remain valid even if the family of domains is not diffeomorphic, only requiring homeomorphism and mild regularity on deformations.
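The encoder / finite-dimensional map / decoder factorization can be made concrete on a toy operator. In this sketch the encoder is point sampling, the finite-dimensional map is a trapezoidal cumulative sum, and the decoder is linear interpolation; all three are illustrative stand-ins, not the paper's constructions:

```python
import numpy as np

# Toy operator G(v)(x) = integral of v from 0 to x, factored as D o f o E.
n = 200
xs = np.linspace(0.0, 1.0, n)

def encode(v):                  # E: function -> R^n (point samples)
    return v(xs)

def f(samples):                 # continuous map R^n -> R^n (trapezoid sums)
    dx = xs[1] - xs[0]
    return np.concatenate([[0.0],
                           np.cumsum((samples[1:] + samples[:-1]) / 2) * dx])

def decode(coeffs):             # D: R^n -> C([0,1]) via linear interpolation
    return lambda x: np.interp(x, xs, coeffs)

G_approx = lambda v: decode(f(encode(v)))

# Compare against the exact antiderivative of sin, which is 1 - cos(x).
x_test = np.linspace(0.0, 1.0, 50)
err = np.max(np.abs(G_approx(np.sin)(x_test) - (1 - np.cos(x_test))))
```

Refining the encoding dimension $n$ shrinks the error, mirroring how the theorem trades dimension for accuracy.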
3. Specialization to Star-Shaped and Locally Deformed Domains
For star-shaped domains, the D2E approach defines explicit deformation maps from the reference unit ball $\Omega_0$:
$$\phi_\Omega(x) = z + r\!\Big(\frac{x}{|x|}\Big)\, x, \qquad \phi_\Omega(0) = z,$$
yielding a bijective, Borel-measurable deformation continuous in $\Omega$ (in $d_{\mathcal{O}}$). The metric $d_{\mathcal{O}}$ gives a natural measure of geometric variability, and the combined metric $d_{\mathcal{X}}$ ensures that domain and function variability are both respected.
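A minimal 2-D sketch of such an explicit star-domain deformation (the wobbly radial function and all names are illustrative):

```python
import numpy as np

def star_deformation(z, r):
    """Deformation phi from the reference unit disk to the star domain
    with centroid z and radial boundary function r:
        phi(x) = z + r(x/|x|) * x,   phi(0) = z."""
    def phi(x):
        x = np.asarray(x, dtype=float)
        if np.linalg.norm(x) == 0.0:
            return np.asarray(z, dtype=float)
        theta = np.arctan2(x[1], x[0])
        return np.asarray(z, dtype=float) + r(theta) * x
    return phi

# Example star domain: r(theta) = 1 + 0.2 cos(3 theta), centroid (0.5, 0).
phi = star_deformation(z=np.array([0.5, 0.0]),
                       r=lambda t: 1.0 + 0.2 * np.cos(3 * t))
p = phi(np.array([1.0, 0.0]))   # reference boundary point at angle 0
```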
For locally deformed domains—such as a fixed square with a “floating” subdomain in contact along an edge—the D2E metric and encoder can incorporate discontinuous deformations, leveraging the fact that the continuity of the mapping and the theoretical guarantees do not depend on smoothness of the deformation $\phi_\Omega$.
4. Encoder, Embedding, and Neural Operator Architecture
D2E relies on a two-branch encoding:
- Domain encoding $\mathcal{E}_{\mathrm{dom}}$: samples the centroid $z$ and the radial boundary function $r$ at $m$ directions:
$$\mathcal{E}_{\mathrm{dom}}(\Omega) = \big(z,\, r(\theta_1), \ldots, r(\theta_m)\big).$$
- Function encoding $\mathcal{E}_{\mathrm{fun}}$: samples the pullback $v \circ \phi_\Omega$ of the input function at discrete points in the reference domain $\Omega_0$.
The combined encoder is shown to satisfy the compactness/uniform approximation conditions required for the theoretical results.
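A minimal sketch of the two-branch encoding, assuming the 2-D star-domain parameterization above (direction counts, sampling points, and names are illustrative):

```python
import numpy as np

def encode_domain(z, r, m=8):
    """Domain branch: centroid plus radial function sampled at m directions."""
    thetas = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return np.concatenate([np.asarray(z, dtype=float), r(thetas)])

def encode_function(v, phi, ref_points):
    """Function branch: pullback v o phi sampled at fixed reference points."""
    return np.array([v(phi(x)) for x in ref_points])

# Toy example: unit disk (identity deformation) and a linear input function.
z = np.array([0.0, 0.0])
r = lambda t: np.ones_like(np.asarray(t, dtype=float))
phi = lambda x: z + np.asarray(x, dtype=float)
ref_points = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([0.0, 0.5])]

e_dom = encode_domain(z, r, m=8)                       # length 2 + 8 = 10
e_fun = encode_function(lambda p: p[0] + 2 * p[1], phi, ref_points)
```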
The D2E neural operator instantiates the continuous map through a two-branch, trunk-branch neural architecture, typically based on MIONet:
$$u(y) \approx \sum_{k=1}^{p} b_k^{\mathrm{dom}}\big(\mathcal{E}_{\mathrm{dom}}(\Omega)\big)\, b_k^{\mathrm{fun}}\big(\mathcal{E}_{\mathrm{fun}}(v)\big)\, t_k(y), \qquad y \in B,$$
where the branch networks $b^{\mathrm{dom}}, b^{\mathrm{fun}}$ are MLPs (or linear maps for linearity preservation), and the trunk functions $t_k$ are local bases or MLPs for function output on $B$.
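The product-of-branches evaluation can be sketched with tiny untrained stand-in networks; this only illustrates the structure, not a trained D2E-MIONet:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Tiny random tanh MLP as a stand-in for a trained branch/trunk net."""
    Ws = [rng.normal(size=(m, n)) / np.sqrt(n)
          for n, m in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for W in Ws[:-1]:
            x = np.tanh(W @ x)
        return Ws[-1] @ x
    return forward

p = 16                                 # number of shared branch/trunk features
branch_dom = mlp([10, 32, p])          # acts on the domain encoding
branch_fun = mlp([20, 32, p])          # acts on the function encoding
trunk      = mlp([2, 32, p])           # acts on query points y in B

def d2e_mionet(e_dom, e_fun, y):
    """MIONet-style output: sum_k b_dom_k * b_fun_k * t_k(y)."""
    return float(np.sum(branch_dom(e_dom) * branch_fun(e_fun) * trunk(y)))

u_hat = d2e_mionet(rng.normal(size=10), rng.normal(size=20),
                   np.array([0.3, 0.4]))
```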
The training objective is the empirical average
$$\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \big\| \mathcal{G}_\theta(\Omega_i, v_i) - E u_i \big\|_{L^2(B)}^2,$$
optimized with standard stochastic gradient techniques.
If the governing PDE is linear in the input $v$ (e.g., Poisson), forcing the function-branch network to be linear (affine) ensures that the network preserves linearity of the solution operator, enabling integration into hybrid iterative methods (HIMs) and guaranteeing superposition:
$$\mathcal{G}_\theta(\Omega, \alpha v_1 + \beta v_2) = \alpha\, \mathcal{G}_\theta(\Omega, v_1) + \beta\, \mathcal{G}_\theta(\Omega, v_2).$$
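The superposition property can be checked numerically on a toy two-branch model whose function branch is a plain linear map (all sizes and stand-in networks are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_fun, n_dom = 8, 12, 6

B_fun = rng.normal(size=(p, n_fun))     # LINEAR function branch
w_dom = rng.normal(size=(p, n_dom))
b_fun = lambda v: B_fun @ v
b_dom = lambda d: np.tanh(w_dom @ d)    # domain branch may stay nonlinear
trunk = lambda y: np.cos(np.outer(np.arange(1, p + 1), y).sum(axis=1))

def G(d, v, y):
    """Two-branch surrogate; linear in v because b_fun is linear."""
    return float(np.sum(b_dom(d) * b_fun(v) * trunk(y)))

d = rng.normal(size=n_dom)
v1, v2 = rng.normal(size=n_fun), rng.normal(size=n_fun)
y = np.array([0.2, 0.7])
a, b = 1.3, -0.4

lhs = G(d, a * v1 + b * v2, y)          # operator applied to a combination
rhs = a * G(d, v1, y) + b * G(d, v2, y)  # combination of operator outputs
```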
5. Numerical Experiments and Empirical Results
Empirical validation of the D2E framework, using D2E-MIONet, includes:
- Convex polygons (quadrilaterals, pentagons, hexagons): D2E-MIONet achieves low relative $L^2$ error, substantially below that of D2D-MIONet; Geo-FNO fails (∼96% error) when used on unstructured meshes, indicating the advantage of the deformation-based embedding.
- Smooth star domains: For fully parameterized PDEs with variable coefficient, source, and Dirichlet data, D2E-MIONet achieves low relative error in both 2D and 3D.
- Locally deformed domains (discontinuous deformation maps): The D2E framework achieves low error and remains accurate across geometric transitions.
- Hybrid iterative methods on large-scale FEM meshes: Coupling Gauss-Seidel with D2E-MIONet in polygonal domains yields a significant speed-up over standard Gauss-Seidel iteration.
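The hybrid-iterative idea can be sketched on a 1-D Poisson problem. Here an exact inversion on the lowest Laplacian eigenmodes stands in for the linearity-preserving neural surrogate (grid size, mode count, and the stand-in correction are illustrative, not the paper's setup): Gauss-Seidel sweeps damp high-frequency error, while the surrogate-style correction removes the smooth error that Gauss-Seidel reduces only slowly.

```python
import numpy as np

# Solve -u'' = 1 on (0,1) with u(0) = u(1) = 0; exact solution x(1-x)/2.
n = 127
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)
u_exact = 0.5 * x * (1 - x)

def gauss_seidel_sweep(u, f):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def surrogate_correction(r, modes=16):
    """Stand-in surrogate: invert A exactly on its lowest sine eigenmodes
    (the role played by the linear D2E-MIONet surrogate in a HIM)."""
    e = np.zeros(n)
    for k in range(1, modes + 1):
        phi = np.sqrt(2 * h) * np.sin(k * np.pi * x)    # orthonormal eigvec
        lam = (2 - 2 * np.cos(k * np.pi * h)) / h**2    # matching eigenvalue
        e += (phi @ r) / lam * phi
    return e

u = np.zeros(n)
for _ in range(20):                       # hybrid cycles: smooth, then correct
    u = gauss_seidel_sweep(u, f)
    u = gauss_seidel_sweep(u, f)
    u += surrogate_correction(f - A @ u)  # linearity lets residuals be reused

err = np.max(np.abs(u - u_exact))
```

Because the surrogate is applied to residuals, the correction step relies on linearity of the operator, which is why the linearity-preserving architecture matters for HIM coupling.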
These results empirically confirm that the D2E framework delivers accurate, theoretically grounded approximation of PDE solution operators across highly variable homeomorphic (and not necessarily diffeomorphic) shapes, and enables robust generalization to both large deformations and local domain perturbations.
6. Key Features, Applicability, and Limitations
Three properties particularly distinguish D2E:
- Domain Generality: Applicability to homeomorphic rather than strictly diffeomorphic domains, thus encompassing broad classes of geometric variability.
- Deformation Flexibility: Deformation maps need not be continuous, which allows for modular modeling of domain changes (e.g., local geometric modifications in large systems).
- Linearity Preservation: When the neural operator architecture is linearity-preserving (e.g., via affine branch networks in MIONet), the surrogate solution mapping strictly maintains the linear superposition principle for linear PDEs—crucial for scientific computing workflows.
A potential limitation arises from the necessity to specify suitable deformation maps and metrics for the domain class of interest, which may require specialized problem-dependent treatment for non-star-shaped or topologically complex domains.
7. Context and Connections within the Scientific Machine Learning Landscape
The D2E framework responds to the challenge of learning solution mappings of PDEs on varying domains, where previous neural operator frameworks are typically limited to fixed or smoothly-deforming geometries. By formalizing a metric-based, zero-extension representation and demonstrating reliable neural approximations with theoretical guarantees, D2E bridges geometric learning and neural operator theory. The seamless integration of D2E-surrogate models into traditional solvers (e.g., via hybrid iterative methods) highlights its utility for large-scale scientific computation, and the empirical results substantiate its superiority over mesh-agnostic spectral operator approaches in complex geometry settings (Xiao et al., 2 Dec 2024).