SVGP KANs: Scalable, Uncertainty-Aware Models
- SVGP KANs are scalable and interpretable machine learning models that blend the additive structure of Kolmogorov-Arnold Networks with Gaussian Process probabilistic inference.
- They leverage sparse variational methods and analytic moment matching to efficiently propagate uncertainty and reduce computational complexity in large-scale function regression.
- Their edge-wise functional mapping enables post-hoc structure discovery and precise feature importance analysis, facilitating rigorous model interpretability.
Sparse Variational Gaussian Process Kolmogorov-Arnold Networks (SVGP KANs) are a class of scalable, uncertainty-aware, and interpretable machine learning models that synthesize the additive structure of Kolmogorov-Arnold Networks (KANs) with the probabilistic inference of Gaussian Processes (GPs) realized via sparse variational methods. This architecture is designed for applications that demand both interpretability and rigorous uncertainty quantification, particularly in scientific discovery and large-scale function regression (Ju, 29 Nov 2025, Ju, 4 Dec 2025).
1. Architectural Foundations
SVGP KANs are constructed on the Kolmogorov-Arnold network formalism, which leverages the Kolmogorov–Arnold representation theorem to express any continuous multivariate function as finite compositions and summations of univariate functions. Each KAN layer maps an input vector $x \in \mathbb{R}^{d_{\text{in}}}$ to an output $z \in \mathbb{R}^{d_{\text{out}}}$ by placing independent, learnable univariate functions $f_{j,i}$ on every directed edge from input coordinate $i$ to output coordinate $j$:

$$z_j = \sum_{i=1}^{d_{\text{in}}} f_{j,i}(x_i), \qquad j = 1, \dots, d_{\text{out}}.$$

This edge-wise additive decomposition ensures that every $f_{j,i}$ is a single-input, single-output mapping, yielding direct interpretability by associating each edge with a unique univariate transformation (Ju, 29 Nov 2025).
Probabilistic inference is incorporated by endowing each edge function with a zero-mean GP prior:

$$f_{j,i} \sim \mathcal{GP}\bigl(0,\; k_{j,i}(x, x')\bigr).$$

Typically, $k_{j,i}$ is an RBF kernel with signal variance $\sigma_f^2$ and length-scale $\ell$,

$$k(x, x') = \sigma_f^2 \exp\!\left(-\frac{(x - x')^2}{2\ell^2}\right).$$

The network assumes mean-field independence of edge functions, resulting in a tractable factorized structure for Bayesian inference (Ju, 4 Dec 2025).
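As a concrete illustration of this additive, edge-wise structure, the following minimal NumPy sketch evaluates one KAN layer whose edges are arbitrary univariate callables. The function and variable names (e.g., `kan_layer_forward`) are illustrative rather than taken from the source; the GP treatment described above is layered on top of exactly this skeleton.

```python
import numpy as np

def kan_layer_forward(x, edge_fns):
    """Evaluate one KAN layer: z_j = sum_i f_{j,i}(x_i).

    x        : array of shape (batch, d_in)
    edge_fns : list of lists; edge_fns[j][i] is a univariate callable mapping
               a (batch,) array of x_i values to a (batch,) array.
    Returns  : array of shape (batch, d_out).
    """
    d_out = len(edge_fns)
    z = np.zeros((x.shape[0], d_out))
    for j in range(d_out):
        for i, f_ji in enumerate(edge_fns[j]):
            z[:, j] += f_ji(x[:, i])   # edge (i -> j) contributes additively
    return z

# Toy usage: 2 inputs -> 1 output with hand-picked univariate edge functions.
edges = [[np.sin, np.square]]          # f_{0,0}(x) = sin(x), f_{0,1}(x) = x^2
x = np.random.randn(5, 2)
print(kan_layer_forward(x, edges))     # shape (5, 1)
```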
2. Sparse Variational Gaussian Process Formulation
Traditional GP-KANs are limited by the computational cost of exact GP inference, which scales as $\mathcal{O}(N^3)$ per edge for $N$ training points, making them infeasible for large datasets. SVGP KANs circumvent this bottleneck by introducing $M \ll N$ inducing points per edge and employing sparse variational inference. For each edge $(j, i)$:
- Inducing inputs $Z_{j,i} \in \mathbb{R}^{M}$
- Inducing values $u_{j,i} = f_{j,i}(Z_{j,i}) \in \mathbb{R}^{M}$
- Variational posterior $q(u_{j,i}) = \mathcal{N}\bigl(u_{j,i} \mid m_{j,i}, S_{j,i}\bigr)$
The variational distribution over all edge functions factorizes as:

$$q\bigl(\{f_{j,i}\}\bigr) = \prod_{j,i} \int p\bigl(f_{j,i} \mid u_{j,i}\bigr)\, q\bigl(u_{j,i}\bigr)\, \mathrm{d}u_{j,i}.$$
Training maximizes the evidence lower bound (ELBO):

$$\mathcal{L} = \sum_{n=1}^{N} \mathbb{E}_{q}\!\left[\log p\bigl(y_n \mid f(x_n)\bigr)\right] - \sum_{j,i} \mathrm{KL}\!\bigl(q(u_{j,i}) \,\|\, p(u_{j,i})\bigr),$$
where the KL-divergence term is amenable to a closed form due to the Gaussianity of both prior and posterior over inducing points (Ju, 29 Nov 2025, Ju, 4 Dec 2025).
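To make these per-edge quantities concrete, here is a minimal NumPy sketch of the SVGP predictive moments under $q$ and the closed-form Gaussian KL term for a single edge. The parameterization (unwhitened inducing variables, direct linear solves) and names such as `svgp_edge_predict` are assumptions for illustration, not the reference implementation.

```python
import numpy as np

def rbf(a, b, sf2, ell):
    """RBF kernel matrix: k(a_p, b_q) = sf2 * exp(-(a_p - b_q)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sf2 * np.exp(-0.5 * d ** 2 / ell ** 2)

def svgp_edge_predict(x, Z, m, S, sf2, ell, jitter=1e-6):
    """Predictive mean and variance of q(f(x)) for one edge (x and Z are 1-D arrays)."""
    Kzz = rbf(Z, Z, sf2, ell) + jitter * np.eye(len(Z))
    Kxz = rbf(x, Z, sf2, ell)
    A = np.linalg.solve(Kzz, Kxz.T).T                  # A = Kxz Kzz^{-1}
    mean = A @ m
    var = sf2 * np.ones_like(x) - np.sum(A * Kxz, axis=1) + np.sum((A @ S) * A, axis=1)
    return mean, var

def gauss_kl(m, S, Kzz):
    """Closed-form KL( N(m, S) || N(0, Kzz) ) for one edge's inducing values."""
    M = len(m)
    trace_term = np.trace(np.linalg.solve(Kzz, S))
    mahal = m @ np.linalg.solve(Kzz, m)
    _, logdet_K = np.linalg.slogdet(Kzz)
    _, logdet_S = np.linalg.slogdet(S)
    return 0.5 * (trace_term + mahal - M + logdet_K - logdet_S)
```

These two ingredients, the expected log-likelihood evaluated at the predictive moments and the summed per-edge KL terms, are exactly what the ELBO above combines; a mini-batch training loop that assembles them is sketched in the training-procedure section below.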
3. Analytic Moment Matching and Uncertainty Propagation
A distinctive feature of SVGP KANs is analytic moment matching for propagating uncertainty through deep additive structures. When the input to a univariate GP edge is itself Gaussian distributed, as arises from aggregating uncertainty upstream, the predictive mean and variance can be computed in closed form for the RBF kernel. For an input $x \sim \mathcal{N}(\mu, s^2)$ and inducing point $z_m$, the expected kernel is

$$\mathbb{E}_{x \sim \mathcal{N}(\mu, s^2)}\!\left[k(x, z_m)\right] = \sigma_f^2 \sqrt{\frac{\ell^2}{\ell^2 + s^2}}\, \exp\!\left(-\frac{(\mu - z_m)^2}{2(\ell^2 + s^2)}\right).$$

Higher-order moments can be derived similarly, enabling efficient and exact marginalization over input uncertainties (Ju, 29 Nov 2025, Ju, 4 Dec 2025).
This mechanism supports rigorous propagation of both epistemic (model) and aleatoric (data) uncertainty, differentiating SVGP KANs from deterministic KANs and standard neural architectures.
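The closed-form kernel expectation above is easy to verify numerically. The sketch below (illustrative names, RBF kernel as defined earlier) computes the expected kernel vector under a Gaussian input, the resulting propagated predictive mean, and a Monte Carlo check of the expectation.

```python
import numpy as np

def expected_rbf(mu, s2, Z, sf2, ell):
    """Psi_1 statistic: E_{x ~ N(mu, s2)}[k(x, Z)] for the RBF kernel, in closed form."""
    denom = ell ** 2 + s2
    return sf2 * np.sqrt(ell ** 2 / denom) * np.exp(-0.5 * (mu - Z) ** 2 / denom)

def propagated_mean(mu, s2, Z, m_u, Kzz, sf2, ell):
    """E_x[ E_q[f(x)] ] = Psi_1^T Kzz^{-1} m under a Gaussian input distribution."""
    return expected_rbf(mu, s2, Z, sf2, ell) @ np.linalg.solve(Kzz, m_u)

# Monte Carlo sanity check of the closed-form kernel expectation.
rng = np.random.default_rng(0)
Z, sf2, ell = np.linspace(-2.0, 2.0, 8), 1.2, 0.7
mu, s2 = 0.3, 0.5
xs = rng.normal(mu, np.sqrt(s2), size=200_000)
mc = (sf2 * np.exp(-0.5 * (xs[:, None] - Z[None, :]) ** 2 / ell ** 2)).mean(axis=0)
print(np.allclose(mc, expected_rbf(mu, s2, Z, sf2, ell), atol=5e-3))   # True
```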
4. Computational Complexity and Scalability
By using sparse variational inference and mini-batching, SVGP KANs achieve a per-epoch computational complexity of $\mathcal{O}(N M^2 + M^3)$ per edge, where $M$ is the number of inducing points per edge and $N$ is the number of training samples. If $M$ and the mini-batch size $B$ are held fixed, the dependence on $N$ is linear, a substantial improvement over the cubic scaling seen in exact GP-KANs. For $E$ edges, the total per-batch computational cost is $\mathcal{O}\bigl(E(B M^2 + M^3)\bigr)$. Storage complexity per edge is $\mathcal{O}(M^2)$ for the variational covariance and $\mathcal{O}(M)$ for the inducing locations and means (Ju, 29 Nov 2025, Ju, 4 Dec 2025).
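For a rough sense of scale (the following numbers are purely illustrative and not reported in the source), consider a single edge with $N = 10^6$ training points and $M = 64$ inducing points:

$$\text{exact GP: } N^3 = (10^6)^3 = 10^{18} \ \text{ops}, \qquad \text{SVGP: } N M^2 + M^3 = 10^6 \cdot 64^2 + 64^3 \approx 4.1 \times 10^{9} \ \text{ops},$$

a reduction of roughly eight orders of magnitude before mini-batching is even applied.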
5. Training Procedure
SVGP KANs are trained by stochastic optimization of the ELBO using analytic gradients. Key training steps:
- Initialization: For each edge, inducing inputs $Z_{j,i}$ are selected (e.g., via K-means or a deterministic grid), and the variational parameters $m_{j,i}$ and $S_{j,i}$ are initialized.
- Mini-batch processing: For each mini-batch, compute all kernel matrices, aggregate predictive means/variances, and compute the batch ELBO.
- Backpropagation: Optimize all parameters, including inducing locations and kernel hyperparameters, with gradient-based optimizers such as Adam.
- Vectorization: Utilize batched linear algebra to accelerate operations across all edges and exploit GPU parallelism.
Analytic moment matching obviates the need for Monte Carlo sampling in forward uncertainty propagation, further enhancing scalability (Ju, 29 Nov 2025).
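The steps above can be assembled into a compact training loop. The following PyTorch sketch trains a single SVGP edge on a toy regression task with grid-initialized inducing points, mini-batched ELBO estimates, and Adam; all sizes, learning rates, and variable names are illustrative assumptions rather than settings reported in the source.

```python
import torch

torch.manual_seed(0)

# Toy data for a single SVGP edge: y = sin(x) + noise.
N, M, B = 2000, 16, 128
X = torch.rand(N) * 6 - 3
Y = torch.sin(X) + 0.05 ** 0.5 * torch.randn(N)

# Trainable quantities: inducing inputs (grid-initialized), variational mean,
# a Cholesky-style factor of S, and log hyperparameters [sf2, lengthscale, noise].
Z = torch.nn.Parameter(torch.linspace(-3, 3, M))
m = torch.nn.Parameter(torch.zeros(M))
L = torch.nn.Parameter(0.1 * torch.eye(M))
log_hyp = torch.nn.Parameter(torch.zeros(3))

def rbf(a, b, sf2, ls):
    return sf2 * torch.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def minibatch_elbo(xb, yb):
    sf2, ls, sn2 = log_hyp.exp()
    S = L.tril() @ L.tril().T + 1e-6 * torch.eye(M)
    Kzz = rbf(Z, Z, sf2, ls) + 1e-6 * torch.eye(M)
    Kxz = rbf(xb, Z, sf2, ls)
    A = torch.linalg.solve(Kzz, Kxz.T).T                      # Kxz Kzz^{-1}
    mean = A @ m
    var = sf2 - (A * Kxz).sum(1) + ((A @ S) * A).sum(1)
    # Expected Gaussian log-likelihood on the batch, rescaled to the full dataset,
    # minus the closed-form KL between q(u) and the GP prior at the inducing points.
    loglik = -0.5 * torch.log(2 * torch.pi * sn2) - 0.5 * ((yb - mean) ** 2 + var) / sn2
    kl = 0.5 * (torch.trace(torch.linalg.solve(Kzz, S))
                + m @ torch.linalg.solve(Kzz, m) - M
                + torch.logdet(Kzz) - torch.logdet(S))
    return (N / len(xb)) * loglik.sum() - kl

opt = torch.optim.Adam([Z, m, L, log_hyp], lr=0.01)
for step in range(2000):
    idx = torch.randint(0, N, (B,))
    opt.zero_grad()
    loss = -minibatch_elbo(X[idx], Y[idx])
    loss.backward()
    opt.step()
```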
6. Structural Discovery and Model Interpretability
SVGP KANs possess inherent interpretability due to their additive structure and edge-wise functional mapping. They can perform post-hoc structure discovery via permutation-based variable importance:
- Shuffle each input feature $x_i$ in a held-out dataset, measure the resulting increase in test MSE, and define the feature importance $I_i$ as this increase.
- Edges with importance below a predefined threshold are deemed irrelevant.
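A minimal sketch of this permutation-based importance measure (hypothetical function names, assuming a generic `predict_fn` that returns posterior-mean predictions):

```python
import numpy as np

def permutation_importance(predict_fn, X_val, y_val, seed=0):
    """Increase in held-out MSE when each input column is shuffled independently."""
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict_fn(X_val) - y_val) ** 2)
    importance = np.zeros(X_val.shape[1])
    for i in range(X_val.shape[1]):
        X_perm = X_val.copy()
        rng.shuffle(X_perm[:, i])                     # destroy only feature i
        importance[i] = np.mean((predict_fn(X_perm) - y_val) ** 2) - base_mse
    return importance

# Toy check: only the first feature matters, so only I_0 should be large.
rng = np.random.default_rng(1)
X_val = rng.normal(size=(500, 3))
y_val = np.sin(X_val[:, 0])
print(permutation_importance(lambda A: np.sin(A[:, 0]), X_val, y_val))
```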
Functional relationship classification is facilitated by inspecting the learned kernel length-scales:
- A large $\ell$ relative to the data range signifies linear or constant behavior.
- A small $\ell$ indicates high-frequency or nonlinear relationships.
Visualization of learned edge functions permits categorization into polynomials, periodicities, and other nonlinear behaviors (Ju, 29 Nov 2025).
7. Empirical Validation and Practical Applications
SVGP KANs have been validated across synthetic and real scientific machine learning tasks:
- Basic synthetic regression: Precisely recovers additive structure and prunes irrelevant features.
- 2D surface reconstruction: Demonstrates calibrated epistemic uncertainty outside the training domain.
- Friedman #1 benchmark: Achieves strong test RMSE and correctly identifies informative features, suppressing spurious signals.
- Heteroscedastic fluid flow reconstruction: Accurately infers spatially varying aleatoric noise fields and achieves coverage aligned with nominal error rates.
- Multi-step PDE forecasting: Predictive spread increases as epistemic uncertainty compounds over forecast steps, consistent with physical intuition.
- OOD detection in convolutional autoencoders: Predictive variance sharply distinguishes in-distribution from anomalous data with ROC–AUC ~0.8–0.9 (Ju, 29 Nov 2025, Ju, 4 Dec 2025).
SVGP KANs also support separation and quantification of aleatoric vs. epistemic uncertainty. In settings with measurement noise, distinct GPs are used for the latent predictive mean and input-dependent noise variance, ensuring principled uncertainty calibration.
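A minimal sketch of how such a heteroscedastic likelihood can be wired, assuming (as an illustration, not the source's exact construction) that the second GP models the log noise variance and that its predictive mean is plugged in directly rather than marginalized:

```python
import numpy as np

def hetero_gaussian_nll(y, f_mean, f_var, log_noise_mean):
    """Per-point negative log-likelihood with input-dependent (aleatoric) noise.

    f_mean, f_var  : predictive moments of the latent-mean GP (carry the epistemic part)
    log_noise_mean : predictive mean of a second GP modelling log sigma^2(x)
    """
    sn2 = np.exp(log_noise_mean)                  # aleatoric variance at each input
    return 0.5 * np.log(2 * np.pi * sn2) + 0.5 * ((y - f_mean) ** 2 + f_var) / sn2

def total_predictive_variance(f_var, log_noise_mean):
    """Decomposed predictive variance: epistemic (f_var) plus aleatoric (exp of log-noise)."""
    return f_var + np.exp(log_noise_mean)
```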
SVGP KANs enable interpretable scientific modeling at scale, blending universal function approximation, Bayesian inference, analytic uncertainty propagation, and variable discovery. Their analytical tractability and computational efficiency position them as a robust alternative to traditional deep learning and GP-based models for scientific machine learning (Ju, 29 Nov 2025, Ju, 4 Dec 2025).