Tensor-Decomposition-Based A Priori Surrogates (TAPS)
- Tensor-Decomposition-Based A Priori Surrogates (TAPS) are predictive models that use low-rank tensor decompositions to approximate high-dimensional simulations directly from governing equations.
- They leverage AI-enhanced basis functions, adaptive hyperparameters, and advanced decomposition algorithms to achieve controlled convergence and dramatic reductions in computational costs.
- TAPS enable data-free surrogate modeling for complex systems such as additive manufacturing and exascale simulations by breaking the exponential growth of computational cost with dimension.
Tensor-Decomposition-Based A Priori Surrogates (TAPS) are a class of predictive modeling frameworks that use tensor decompositions, such as CANDECOMP/PARAFAC (CP), tensor train (TT), Tucker, or block convolutional formats, as the core representational mechanism for surrogate modeling of extremely high-dimensional, multi-parametric scientific and engineering simulations. TAPS frameworks avoid generating large offline datasets by constructing reduced-order surrogates directly from the governing equations, thus enabling efficient approximation of systems with up to zetta-scale degrees of freedom. These methods leverage AI-enhanced basis functions, adaptive hyperparameters, and advanced decomposition algorithms to guarantee controlled convergence rates and orders-of-magnitude reductions in computational resource requirements (Guo et al., 18 Mar 2025, Guo et al., 31 Aug 2024, Bigoni et al., 2014, Chertkov et al., 2022).
1. Mathematical Foundation and Core Construction
The key mathematical structure underlying TAPS is the representation of high-dimensional discretized solutions via low-rank tensor decompositions. For a function $u$ defined over a $d$-dimensional domain (incorporating spatial, temporal, and parametric variables), a typical TAPS surrogate is expressed as:

$$u\big(x^{(1)}, \ldots, x^{(d)}\big) \approx \sum_{m=1}^{M} \prod_{i=1}^{d} \left[ \sum_{j=1}^{n_i} N_j^{(i)}\big(x^{(i)}\big)\, u_{j,m}^{(i)} \right] \tag{1}$$

Here, $N_j^{(i)}$ denotes the AI-enhanced basis functions in direction $i$, and $u_{j,m}^{(i)}$ are the coefficients ("mode weights") for the $m$-th canonical component. The basis functions are often constructed with convolutional hierarchical deep-learning neural networks (C-HiDeNN), which possess local support, automatic adaptability, and the partition-of-unity and Kronecker-delta properties crucial for numerically robust surrogates (Guo et al., 18 Mar 2025, Guo et al., 31 Aug 2024).
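To make the contraction structure of (1) concrete, here is a minimal NumPy sketch of evaluating a CP-format surrogate; the function and variable names (`eval_cp_surrogate`, `basis_vals`, `mode_weights`) are illustrative assumptions, not part of any published TAPS implementation:

```python
import numpy as np

def eval_cp_surrogate(basis_vals, mode_weights):
    """Evaluate a CP-format surrogate at a batch of points.

    basis_vals:   list of d arrays, each (n_pts, n_i): the univariate basis
                  functions N^{(i)} evaluated at the i-th coordinate of
                  every query point.
    mode_weights: list of d arrays, each (n_i, M): the mode weights u^{(i)}.
    """
    # factors[i][q, m] = sum_j N_j^{(i)}(x_q) * u_{j,m}^{(i)}
    factors = [B @ U for B, U in zip(basis_vals, mode_weights)]
    # multiply the 1D interpolants across dimensions, then contract over m
    prod = np.ones_like(factors[0])
    for f in factors:
        prod = prod * f
    return prod.sum(axis=1)

# toy sizes: d = 3 dimensions, n = 5 basis functions each, M = 2 modes
rng = np.random.default_rng(0)
d, n, M, n_pts = 3, 5, 2, 10
basis_vals = [rng.random((n_pts, n)) for _ in range(d)]
mode_weights = [rng.random((n, M)) for _ in range(d)]
u = eval_cp_surrogate(basis_vals, mode_weights)   # shape (n_pts,)
```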
Numerical realization proceeds by substituting this ansatz into a Galerkin variational formulation for the governing PDE:

$$\int_{\Omega} v\,\big(\mathcal{L}u - f\big)\, d\Omega = 0 \quad \text{for all test functions } v, \tag{2}$$

where $\mathcal{L}$ is the differential operator, $f$ is the source term, and $v$ is a test function from the same tensor-product space.
After substituting (1) into (2) and performing separation of variables, the problem decomposes into a sequence of lower-dimensional (often 1D) problems for the univariate factors, with the coupling between dimensions occurring only via the contraction over the mode index $m$. This separation decouples the computational cost from the curse of dimensionality, yielding linear or near-linear scaling in each direction.
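The computational payoff of separability can be seen in a self-contained sketch: for a rank-1 (separable) integrand, a $d$-dimensional Galerkin-type integral reduces to $d$ one-dimensional quadratures. The trapezoid rule and the test integrand below are illustrative choices, not tied to any specific TAPS solver:

```python
import numpy as np

def separable_integral(g_1d, nodes, weights):
    """Integrate g(x_1,...,x_d) = g_1(x_1)*...*g_d(x_d) over the
    tensor-product domain using only d one-dimensional quadratures."""
    return np.prod([np.dot(weights, g(nodes)) for g in g_1d])

# trapezoid rule on [0, 1] with 101 points (illustrative quadrature)
nodes = np.linspace(0.0, 1.0, 101)
weights = np.full(101, 0.01)
weights[[0, -1]] = 0.005

g_1d = [np.sin, np.cos, np.exp]        # g(x,y,z) = sin(x) cos(y) e^z
val = separable_integral(g_1d, nodes, weights)

# brute-force check on the full 3D grid: O(n^3) work vs. O(3n) above
X, Y, Z = np.meshgrid(nodes, nodes, nodes, indexing="ij")
W = np.einsum("i,j,k->ijk", weights, weights, weights)
ref = np.sum(W * np.sin(X) * np.cos(Y) * np.exp(Z))
assert np.isclose(val, ref)
```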
2. Hierarchical Interpolation: C-HiDeNN Basis and Adaptive Hyperparameters
C-HiDeNN (Convolutional Hierarchical Deep-Learning Neural Network) bases generalize classical finite element shape functions by introducing learnable parameters (patch size $s$, dilation $a$, and reproducing order $p$) and neural kernels. Schematically, the enhanced basis is constructed as:

$$\tilde{N}_I(x) = \sum_{K} N_K(x)\, \mathcal{W}_{K,I}(x;\, s, a, p),$$

where $N_K$ are standard finite element shape functions and $\mathcal{W}_{K,I}$ are convolution patch functions supported on the patch of node $K$.
This structure allows local adaptivity (through the patch connectivity set by $s$) and high-order approximation (through $p$), while the dilation $a$ provides scale normalization. Because each univariate basis can be tailored to the problem regularity and geometry, arbitrarily high convergence rates are possible by increasing $p$. The resulting basis maintains FE properties but is more expressive and mesh-agnostic, with automatic adaptation to nonuniform meshes or irregular input domains (Guo et al., 18 Mar 2025, Guo et al., 31 Aug 2024).
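The full C-HiDeNN construction is detailed in the cited references; as a lightweight illustration of the two structural properties it must preserve, the sketch below verifies partition of unity and the Kronecker-delta property for plain 1D hat functions, the baseline that C-HiDeNN generalizes (the `hat_basis` helper is a hypothetical name introduced here):

```python
import numpy as np

def hat_basis(x, nodes):
    """N[q, j] = value of the j-th linear hat function at point x[q]
    (uniform 1D mesh assumed for simplicity)."""
    h = nodes[1] - nodes[0]
    N = np.zeros((len(x), len(nodes)))
    for j, xj in enumerate(nodes):
        N[:, j] = np.clip(1.0 - np.abs(x - xj) / h, 0.0, None)
    return N

nodes = np.linspace(0.0, 1.0, 6)
x = np.linspace(0.0, 1.0, 50)
N = hat_basis(x, nodes)

# the two properties any TAPS basis must keep after enhancement:
assert np.allclose(N.sum(axis=1), 1.0)                   # partition of unity
assert np.allclose(hat_basis(nodes, nodes), np.eye(6))   # Kronecker delta
```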
3. Tensor Decomposition Strategies and Scaling
TAPS employs tensor decompositions (CP, TT, Tucker, block-convolutional, or other advanced factorizations) to compress solution spaces. For example, the CP decomposition for a 3D solution array is:

$$u_{ijk} \approx \sum_{m=1}^{M} X_{im}\, Y_{jm}\, Z_{km}.$$

Generalization to $d$ dimensions follows analogously. The computational benefit lies in the dramatic reduction of degrees of freedom: instead of storing $n^d$ grid values, one stores $d\,n\,M$ parameters for $n$ discretization points per dimension and mode count $M$.
Rank adaptivity is often enabled using alternating least squares/tensor regression sub-steps or by adaptive truncation based on predefined error tolerances, and the choice of $M$ (tensor rank or number of modes) is tunable. When the function or solution is sufficiently smooth or possesses low effective dimension, superalgebraic or spectral convergence is observed as $M$ and the per-dimension resolution increase (Bigoni et al., 2014, Guo et al., 18 Mar 2025). TT and hierarchical decompositions further enhance memory and CPU efficiency for extremely high-dimensional cases.
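A minimal alternating least squares (ALS) fit of a CP decomposition, one of the sub-steps mentioned above, can be sketched as follows; this toy version omits the regularization, normalization, and rank adaptation a production solver would use:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I*J) x M."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def unfold(T, mode):
    """Mode-n unfolding of a 3D array into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als_3d(T, M, n_sweeps=50, seed=0):
    """Fit T[i,j,k] ~= sum_m X[i,m] Y[j,m] Z[k,m] by alternating
    linear least-squares solves, one factor at a time."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    X, Y, Z = (rng.standard_normal((s, M)) for s in (I, J, K))
    for _ in range(n_sweeps):
        X = np.linalg.lstsq(khatri_rao(Y, Z), unfold(T, 0).T, rcond=None)[0].T
        Y = np.linalg.lstsq(khatri_rao(X, Z), unfold(T, 1).T, rcond=None)[0].T
        Z = np.linalg.lstsq(khatri_rao(X, Y), unfold(T, 2).T, rcond=None)[0].T
    return X, Y, Z

# exact rank-2 test tensor: ALS recovers it to near machine precision
rng = np.random.default_rng(1)
A, B, C = (rng.random((7, 2)) for _ in range(3))
T = np.einsum("im,jm,km->ijk", A, B, C)
X, Y, Z = cp_als_3d(T, M=2)
err = np.linalg.norm(T - np.einsum("im,jm,km->ijk", X, Y, Z))
```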
4. Weak Formulation and Discretized Matrix Structure
After substitution into the weak form (2), the TAPS surrogate leads to reduced-order, coupled algebraic matrix systems for each dimensional direction. For the $i$-th variable, the system takes the form:

$$\mathbf{K}^{(i)}\, \mathbf{u}^{(i)} = \mathbf{F}^{(i)},$$

where $\mathbf{u}^{(i)}$ collects the coefficient (mode-weight) vectors for direction $i$, and $\mathbf{K}^{(i)}$, $\mathbf{F}^{(i)}$ are the stiffness and force matrices assembled from 1D quadrature and basis interactions, with explicit dependence on the current iterate's mode weights in the other directions. The coupled tensor structure is preserved, and algebraic operations exploit the compressed representation directly.
Iterative sweeps or subspace iterations solve these coupled 1D systems sequentially until global convergence is achieved, with inter-mode coupling handled via contraction over the mode index $m$.
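As an illustration of such sweeps, the sketch below alternately solves the two 1D mode-weight systems for a 2D Poisson model problem ($-\Delta u = f$ on the unit square, zero Dirichlet data) with a rank-$M$ CP ansatz. The discretization, lumped load, and helper names are illustrative assumptions, not the formulation of the cited papers:

```python
import numpy as np

def fe_1d(n):
    """1D linear-FE stiffness K and mass M matrices on (0,1): n interior
    nodes, homogeneous Dirichlet boundary conditions."""
    h = 1.0 / (n + 1)
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    Mm = h * ((2.0 / 3) * np.eye(n) + (1.0 / 6) * np.eye(n, k=1)
              + (1.0 / 6) * np.eye(n, k=-1))
    return K, Mm

def solve_direction(K1, M1, G_mass, G_stiff, rhs):
    """One sweep step: solve  K1 X G_mass + M1 X G_stiff = rhs  for the
    n x M mode-weight matrix of the current direction.  The small M x M
    Gram matrices carry the coupling to the frozen direction."""
    n, M = rhs.shape
    A = np.kron(G_mass, K1) + np.kron(G_stiff, M1)
    A += 1e-10 * np.eye(n * M)          # tiny ridge for robustness
    x = np.linalg.solve(A, rhs.reshape(-1, order="F"))
    return x.reshape(n, M, order="F")

n, M = 30, 2
Kx, Mx = fe_1d(n)
Ky, My = Kx.copy(), Mx.copy()
xg = np.linspace(0.0, 1.0, n + 2)[1:-1]
bx = np.sin(np.pi * xg) / (n + 1)       # lumped load, f = sin(pi x) sin(pi y)
by = bx.copy()

rng = np.random.default_rng(0)
X = rng.standard_normal((n, M))
Y = rng.standard_normal((n, M))
for _ in range(10):                     # alternate the two 1D solves
    X = solve_direction(Kx, Mx, Y.T @ My @ Y, Y.T @ Ky @ Y, np.outer(bx, by @ Y))
    Y = solve_direction(Ky, My, X.T @ Mx @ X, X.T @ Kx @ X, np.outer(by, bx @ X))

# reference: dense tensor-product Galerkin solve of the same discretization
A_full = np.kron(My, Kx) + np.kron(Ky, Mx)
U_ref = np.linalg.solve(A_full, np.kron(by, bx)).reshape(n, n, order="F")
err = np.linalg.norm(X @ Y.T - U_ref)   # small after a few sweeps
```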
5. Numerical Performance and Large-Scale Application
In applications such as laser powder bed fusion additive manufacturing simulation (involving >3 billion spatial DoFs), TAPS achieves:
- ~1,370× computational speedup
- ~14.8× reduction in memory footprint
- ~955× reduction in storage requirements
compared to conventional finite difference methods (Guo et al., 18 Mar 2025). The scaling advantage arises because TAPS avoids the exponential cost of full-grid discretization: the dominant cost scales with the per-variable grid resolution and the tensor mode count, not with the total number of grid points in the full-dimensional space.
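A back-of-envelope calculation shows where such savings come from (the grid size, dimension, and mode count below are made-up illustrative values, not figures from the cited study):

```python
# a hypothetical 4D space-time problem: n = 1,000 points per axis, M = 50 modes
n, d, M = 1_000, 4, 50
full_grid = n ** d                # 10^12 values on the tensor-product grid
cp_params = d * n * M             # 200,000 CP parameters
print(f"compression factor: {full_grid / cp_params:.0e}")   # ~5e+06
```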
Error convergence is controlled directly by the hyperparameters $s$ (patch size), $p$ (reproducing order), and $M$ (rank). Spectral convergence can be realized for regular problems, as guaranteed by error bounds of the form

$$\|f - f_{\mathrm{TT}}\|_{L^2}^2 \;\le\; C\, \|f\|_{H^k}^2\, \zeta(k,\, r+1),$$

where $f$ is the target function, $r$ is the uniform TT rank, $k$ is the regularity, and $\zeta(\cdot,\cdot)$ is the Hurwitz zeta function (Bigoni et al., 2014). In practice, target tolerances can be met by adjusting $M$ and the basis order $p$, enabling arbitrary accuracy.
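The rank dependence of such a bound can be tabulated directly with SciPy's Hurwitz zeta implementation; the regularity value below is an arbitrary example:

```python
from scipy.special import zeta    # zeta(k, q) is the Hurwitz zeta function

# tail of the series sum_{n > r} n^{-k} left out by a uniform-rank-r
# truncation; regularity k = 4 gives decay ~ r^{-3} (illustrative value)
k = 4.0
for r in (2, 4, 8, 16, 32):
    print(r, zeta(k, r + 1))
```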
6. Comparative Advantages and Implementation Considerations
The primary advantages of the TAPS framework include:
- Data-Free Surrogacy: Directly constructs the surrogate from the PDE weak form, avoiding costly offline simulation databases.
- Dimensional Scalability: Handles problems with zetta-scale (10²¹) DoFs by reducing the effective numerical complexity via decomposition.
- Guaranteed Convergence: Hyperparameter choice enables user-specified convergence rates and controlled approximation error.
- Physics-Infused Representation: By embedding the structure of the governing equations in the surrogate, TAPS provides representations that respect conservativity, boundary conditions, and other physical constraints.
- Flexibility: Accommodates mixed boundary conditions, heterogeneous material properties, and variable coefficients through locally adaptive basis functions.
A plausible implication is that TAPS is particularly effective in scenarios combining high regularity with complex variable interactions, whereas problems with strongly localized spatial features or sharp discontinuities may require careful tuning of the mode count and local basis enrichment.
Implementation requires algorithmic solutions for adaptive hyperparameter selection (e.g., ANOVA/TT-ALS initialization (Chertkov et al., 2022)), robust tensor regression or decomposition solvers (sampling-based ALS (Malik et al., 2022), TT-DMRG-cross (Bigoni et al., 2014)), and possibly extensions for nonlinearity (e.g., through polynomial or deep neural network feature maps (Saragadam et al., 2022)).
7. Broader Impact and Prospective Directions
TAPS is broadly applicable to engineering and scientific computation for:
- Additive manufacturing simulations
- Integrated circuit design
- Exascale physical systems modeling
- Parametric uncertainty quantification
- Sensitivity analysis (via compressed Sobol indices and ANOVA in TT format (Ballester-Ripoll et al., 2017))
- Surrogate-assisted optimization, real-time digital twins, and control
Future research directions include automation of rank and basis adaptation in highly nonlinear or non-smooth settings, further integration with deep learning–based surrogates, and hybridization with data-driven model reduction frameworks. For problems with statistical uncertainty or high-dimensional parameter spaces, TAPS can be combined with stochastic collocation in the tensor format or with Bayesian generative priors (e.g., VAE-CP (Liu et al., 2016)) to yield uncertainty-aware surrogates.
In summary, Tensor-Decomposition-Based A Priori Surrogates provide a mathematically rigorous, computationally scalable, and physically consistent approach to surrogate modeling in large-scale, high-dimensional simulation problems. By leveraging hierarchical AI-augmented interpolation and tensor compression, TAPS renders previously intractable predictive simulations feasible and paves the way for a new generation of physics-informed, data-free scientific AI models (Guo et al., 18 Mar 2025, Guo et al., 31 Aug 2024).