Adaptive Radial Basis Function Fields (NeuRBF)
- Adaptive Radial Basis Function Fields (NeuRBF) are data-driven frameworks that extend classical RBFs by locally adapting kernel parameters, centers, and anisotropy.
- They employ neural networks and adaptive algorithms to optimize parameters for improved accuracy, stability, and computational efficiency in complex data regimes.
- NeuRBF methods are applied in scattered-data interpolation, PDE solvers, and neural implicit field modeling, effectively capturing spatially varying phenomena.
Adaptive Radial Basis Function Fields (NeuRBF) are data-driven and computational frameworks that extend classical radial basis function (RBF) representations through local adaptation of kernel parameters, center positions, and anisotropy. Originating from multiple, largely independent research trajectories—neural network-based parameter selection, neural implicit field modeling, adaptive regression for scattered data, and meshless PDE solvers—NeuRBF methodologies share the core principle of adapting RBF representations locally to encode spatially varying complexity, regularity, and data geometry. Recent developments have unified these threads, yielding state-of-the-art accuracy, stability, and computational efficiency in tasks ranging from scattered-data interpolation and PDE discretization to neural signal fields and operator learning (Mojarrad et al., 2022, Chen et al., 2023, Shao et al., 12 May 2025, Chiu et al., 12 Jan 2026, Rigutto et al., 25 Nov 2025, Li et al., 2020).
1. Mathematical and Algorithmic Foundations
In classical RBF approximation, a function is interpolated or regressed using a fixed set of radial kernels centered at prescribed positions, with a globally constant shape parameter (typically the kernel width or anisotropy). Adaptive Radial Basis Function Fields generalize this by introducing locality along several dimensions:
- Variable shape parameter: Each center or input may have its own shape parameter (e.g., width, scale, covariance), modeled as a function of local data statistics or predicted by neural networks (Mojarrad et al., 2022, Chen et al., 2023, Rigutto et al., 25 Nov 2025).
- Adaptive center locations: The RBF centers themselves can be optimized or distributed using task-specific clustering or learned grid mechanisms, including weighted K-Means for image or geometry fidelity (Chen et al., 2023, Chiu et al., 12 Jan 2026).
- Anisotropic and data-informed kernels: Local anisotropy can be introduced by equipping each RBF with a full positive-definite shape matrix, i.e., $\varphi_i(x) = \varphi\big((x - c_i)^\top A_i (x - c_i)\big)$, where $A_i \succ 0$ is informed by estimated data gradients (Rigutto et al., 25 Nov 2025).
- Sparse and adaptive selection: Kernel selection and placement are made adaptive via sparsity-promoting regularizers or boosting-style algorithms, yielding parsimonious representations in complex domains (Shao et al., 12 May 2025).
- Markovian or local neighborhoods: Instead of global sums, only a local neighborhood is aggregated for each evaluation point, supporting high-dimensional and meshless regimes (Chen et al., 2023, Li et al., 2020).
These mechanisms can be summarized by the generalized NeuRBF expansion
$$f(x) = g\Big(\sum_{i \in \mathcal{N}(x)} w_i \, \varphi\big((x - c_i)^\top A_i (x - c_i)\big)\Big),$$
with weights $w_i$, centers $c_i$, shape matrices $A_i$, neighborhood selection $\mathcal{N}(x)$, and possibly a nonlinear feature transform or decoder network $g$ applied to the aggregated representation.
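A minimal numpy sketch of evaluating such a locally aggregated, anisotropic expansion with Gaussian kernels (all names and the nearest-neighbor selection are illustrative, not a specific paper's implementation):

```python
import numpy as np

def neurbf_eval(x, centers, weights, shape_mats, k=8):
    """Evaluate a generalized NeuRBF expansion at query point x.

    Only the k nearest centers (the local neighborhood N(x)) contribute;
    each kernel carries its own positive-definite shape matrix A_i.
    """
    d = centers - x                       # offsets to all centers
    dist2 = np.einsum("ij,ij->i", d, d)   # squared Euclidean distances
    nbrs = np.argsort(dist2)[:k]          # local neighborhood N(x)
    out = 0.0
    for i in nbrs:
        r2 = d[i] @ shape_mats[i] @ d[i]  # anisotropic metric (x-c_i)^T A_i (x-c_i)
        out += weights[i] * np.exp(-r2)   # Gaussian kernel response
    return out

rng = np.random.default_rng(0)
C = rng.standard_normal((64, 2))          # centers c_i
w = rng.standard_normal(64)               # weights w_i
A = np.stack([np.eye(2)] * 64)            # isotropic shape matrices A_i (for simplicity)
y = neurbf_eval(np.zeros(2), C, w, A)
```

In a full NeuRBF model the decoder $g$ would be a small neural network applied to the aggregated features rather than the identity used here.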
2. Neural Parameterization and Training Strategies
A major advancement in the NeuRBF paradigm is the use of neural networks to learn RBF shape parameters and adaptive features. In the unsupervised shape parameter selection strategy (Mojarrad et al., 2022), a multilayer perceptron (MLP) predicts local shape parameters $\varepsilon$ from coordinate- or distance-based stencil features. The MLP is trained to keep the condition number of the interpolation/collocation matrix within a prescribed range via a custom loss function. This avoids direct regression of $\varepsilon$ against ground truth, instead indirectly optimizing for numerical stability and accuracy.
Key architecture and training details include:
- MLP Input features: Sorted local spacings, normalized coordinates, or 2D relative offsets.
- Network architecture: Several hidden layers with ReLU activation, final output linearly projected and regularized to favor positivity and stability.
- Loss formulation: Penalties depend on the condition number relative to the feasible window, plus regularization on the network parameters.
- Training protocol: Large batch size, Adam optimizer, early stopping on validation, pure offline training.
- Integration: The trained MLP is used at runtime for per-stencil or per-center shape prediction in RBF interpolation and finite-difference (FD) schemes.
This framework enables the automatic adaptation of local kernel width, circumventing hand-tuning and enabling stable operation across heterogeneous data (Mojarrad et al., 2022).
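The condition-number-windowed loss described above can be sketched as follows; the window bounds `kappa_lo`/`kappa_hi` and the quadratic penalty shape are illustrative placeholders, not the values or exact form used in the cited work:

```python
import numpy as np

def cond_window_loss(eps, stencil, kappa_lo=1e10, kappa_hi=1e13):
    """Loss that is zero while cond(A(eps)) stays inside a prescribed
    window and grows quadratically (in log10) outside it.

    eps     : scalar Gaussian shape parameter (in practice MLP-predicted)
    stencil : (n, d) array of local node coordinates
    Window bounds here are illustrative, not the paper's values.
    """
    d2 = ((stencil[:, None, :] - stencil[None, :, :]) ** 2).sum(-1)
    A = np.exp(-(eps ** 2) * d2)                 # Gaussian collocation matrix
    log_k = np.log10(np.linalg.cond(A))
    lo, hi = np.log10(kappa_lo), np.log10(kappa_hi)
    # quadratic penalty for leaving the feasible conditioning window
    return max(lo - log_k, 0.0) ** 2 + max(log_k - hi, 0.0) ** 2

stencil = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
loss = cond_window_loss(1.0, stencil)
```

In training, this scalar would be backpropagated through a differentiable condition-number surrogate to update the MLP that predicts `eps` per stencil.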
3. Adaptive Fields in Neural Signal Representations
NeuRBF methods have been generalized to represent continuous fields—such as images, 3D signed distance functions (SDFs), or radiance fields—in neural implicit frameworks (Chen et al., 2023). The core strategy leverages adaptive RBFs as local feature aggregators followed by neural decoders:
- Feature aggregation: For a query point $x$, adaptive kernels (inverse-quadratic or Gaussian) with data-driven centers $c_i$ and shape matrices $A_i$ aggregate features over a local stencil $\mathcal{N}(x)$. Partition-of-unity normalization is optionally enforced.
- Multi-frequency composition: Each RBF response is expanded into a vector by applying a bank of sinusoidal functions, enhancing representational expressivity across frequency bands.
- Hybridization: Adaptive RBFs are concatenated with grid-based features, allowing smooth interpolation and fast learning inherited from grid approaches, while maintaining adaptability.
- Adaptation via clustering: Weighted K-Means determines both RBF center placement and the local shape matrix (as sample covariance of the cluster). Weights are selected according to task: image gradients, proximity to SDF zero level, or NeRF per-point densities and feature gradients.
This yields neural fields with significantly improved accuracy, compactness, and flexibility, particularly when compared to fixed-grid or uniform RBF methods. Grid components employ tensor decompositions or multiresolution hash tables (e.g., Instant-NGP, TensoRF, K-Planes), supporting cross-method comparison and hybridization (Chen et al., 2023).
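The adaptation-via-clustering step can be sketched as a weighted K-Means that places the centers and then sets each shape matrix from the weighted sample covariance of its cluster; the update rule and helper names below are a simplified illustration, not the paper's exact procedure:

```python
import numpy as np

def weighted_kmeans_rbf_init(points, weights, k, iters=20, seed=0):
    """Place RBF centers by weighted K-Means and set each shape matrix
    to the weighted sample covariance of its cluster.

    weights would be task-dependent (image gradients, SDF proximity,
    NeRF densities); here they are arbitrary nonnegative importances.
    """
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest current center
        d2 = ((points[:, None] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            m = labels == j
            if m.any():  # weighted centroid update
                centers[j] = np.average(points[m], weights=weights[m], axis=0)
    covs = []
    for j in range(k):
        m = labels == j
        covs.append(np.cov(points[m].T, aweights=weights[m]) if m.sum() > 1
                    else np.eye(points.shape[1]))
    return centers, np.stack(covs)

rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 2))
C, S = weighted_kmeans_rbf_init(pts, np.ones(200), 4)
```

Upweighting high-gradient samples pulls centers (and elongates covariances) toward regions of fine detail, which is exactly the adaptivity the hybrid grid/RBF field exploits.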
4. Meshless PDE Solvers and Physics-Informed NeuRBF
NeuRBF fields underpin adaptive meshless numerical methods for PDEs, giving rise to high-order, data-local, and structure-preserving solvers (Mojarrad et al., 2022, Shao et al., 12 May 2025, Li et al., 2020). Two main classes are prominent:
- Neural shape-adaptive RBF-Finite-Difference (RBF-FD) (Mojarrad et al., 2022, Li et al., 2020):
- For each node, an MLP predicts the local shape parameter based on stencil features.
- The local interpolation matrix is assembled with the predicted shape, inverted to obtain differentiation weights, and repeated globally to assemble sparse operators.
- Polynomial augmentation and stencil size adaptation further refine local accuracy according to data density and desired global convergence rates.
- Empirically, adaptive RBF-FD attains target convergence rates with 15–30% fewer nonzeros and better error localization compared to non-adaptive counterparts.
- Sparse and adaptive RBF collocation for nonlinear PDEs (Shao et al., 12 May 2025):
- Solutions are parameterized as sums over adaptively selected RBFs, with kernel placement and shape determined via a boosting-style dual maximality criterion.
- Optimization employs sparsity promotion, semismooth Gauss-Newton updates, and aggressive pruning, yielding parsimonious representations robust to stiff nonlinearity and high dimensions.
- Compared to Gaussian process collocation or PINNs, NeuRBF significantly mitigates overparameterization, numerical instability, and training cost.
These methods extend to operator learning, physics-informed learning, and time-dependent problems, demonstrating adaptability at regime transitions, interfaces, and shocks.
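The core RBF-FD step of Section 4 (assemble the local interpolation matrix, solve for differentiation weights) can be sketched for the 2D Laplacian with Gaussian kernels; the fixed `eps` stands in for the MLP-predicted shape parameter, and polynomial augmentation is omitted for brevity:

```python
import numpy as np

def rbf_fd_laplacian_weights(stencil, center, eps):
    """RBF-FD weights approximating the 2D Laplacian at `center`
    from Gaussian RBFs on the stencil nodes.

    eps is fixed here; in the neural shape-adaptive scheme it would be
    predicted per stencil by the trained MLP. No polynomial augmentation.
    """
    d2 = ((stencil[:, None] - stencil[None]) ** 2).sum(-1)
    A = np.exp(-eps**2 * d2)                       # local interpolation matrix
    r2 = ((stencil - center) ** 2).sum(-1)
    # analytic Laplacian of the Gaussian kernel, evaluated at the center
    b = np.exp(-eps**2 * r2) * (4 * eps**4 * r2 - 4 * eps**2)
    return np.linalg.solve(A, b)                   # differentiation weights

h = 0.1
stencil = np.array([[0, 0], [h, 0], [-h, 0], [0, h], [0, -h]], float)
w = rbf_fd_laplacian_weights(stencil, np.zeros(2), eps=1.0)
u = (stencil ** 2).sum(-1)   # samples of u(x, y) = x^2 + y^2, exact Laplacian = 4
lap = w @ u
```

Repeating this per node and scattering the weights into a sparse matrix yields the global operator; in the flat limit ($\varepsilon h \to 0$) the weights approach the classical finite-difference stencil.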
5. Anisotropic and Gradient-Informed Adaptivity
Recent NeuRBF regressors incorporate physical or statistical gradient information and anisotropic kernels for scattered data problems (Rigutto et al., 25 Nov 2025). Essential innovations include:
- Local metric adaptation: Each RBF's effective metric is assigned based on the direction and magnitude of data gradients, estimated via local polynomial regression and smoothed at each "collocation" center. Aspect ratios are adjusted according to a user-prescribed maximum stretching factor and combined with isotropy for volume normalization.
- Gradient-informed adaptive resampling: The sampling density is biased toward regions of high variation using a gradient-based probability estimate, coupled with local data density to maximize sample efficiency.
- Soft constraints for physical consistency: Observed gradients can be embedded as soft quadratic penalties in linear regression, alongside Tikhonov and other regularization.
- Performance: For turbulent flows and experimental fluid data, gradient-informed and anisotropic NeuRBF achieves order-of-magnitude reductions in the number of required basis functions, removes artifacts near shear layers or boundaries, and provides consistent field reconstructions, with reduced solve time compared to isotropic schemes.
A plausible implication is that coupling physical priors (e.g., flow direction, anisotropy) with adaptive RBF representations is essential for high-fidelity reconstructions in complex or irregular domains.
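A gradient-informed, volume-normalized shape matrix of the kind described above can be sketched as follows; the stretching schedule `s = min(1 + |grad|, s_max)` is a hypothetical choice for illustration, not the cited paper's formula:

```python
import numpy as np

def aniso_shape_matrix(grad, s_max=5.0):
    """Build a positive-definite shape matrix that shortens the kernel
    along the local gradient direction, caps the aspect ratio at s_max,
    and normalizes so det(A) = 1 (constant kernel 'volume').

    Hypothetical construction in the spirit of the scheme above.
    """
    g = np.linalg.norm(grad)
    if g < 1e-12:
        return np.eye(2)                     # no preferred direction: isotropic
    u = grad / g                             # unit gradient direction
    s = min(1.0 + g, s_max)                  # stretching grows with |grad| (illustrative)
    # eigenvalue s along the gradient, 1/s across it -> det(A) = 1
    return s * np.outer(u, u) + (1.0 / s) * (np.eye(2) - np.outer(u, u))

A = aniso_shape_matrix(np.array([3.0, 0.0]))
```

Because $\det A = 1$, sharpening the kernel across a shear layer automatically widens it along the layer, which is what suppresses staircase artifacts without inflating the basis count.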
6. Universality, Expressiveness, and Operator Learning
Within the function-approximation and operator-learning community, the universality of adaptive RBF fields is established using Kolmogorov–Arnold-type superpositions and density theorems (Chiu et al., 12 Jan 2026). Architectural innovations include:
- Free-RBF-KAN (Chiu et al., 12 Jan 2026): An adaptive RBF-based Kolmogorov–Arnold Network with fully trainable RBF centers and smoothness parameters per layer and neuron, combining high expressivity with computational efficiency.
- Univariate RBF bases are optimized jointly with network weights via gradient descent, subject to positivity constraints on the smoothness parameters.
- Theoretical results establish density of the resulting function class for Gaussian and other non-polynomial kernels, ensuring universal approximation capability.
- Empirical findings indicate that Free-RBF-KAN matches or outperforms spline-based KANs and classical MLPs in PDE, operator learning, and regression tasks, with lower spectral bias (as measured in NTK eigenvalue spectra).
This establishes adaptive RBF fields as a theoretically sound and practically powerful backbone for high-dimensional function and operator approximation.
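A KAN-style layer with free RBF parameters can be sketched as below: each input-output edge carries a univariate function expressed in Gaussian RBFs whose centers and widths are free parameters. This is a forward-pass-only illustration with invented names, not the Free-RBF-KAN architecture itself; a real implementation would use an autodiff framework to train centers, widths, and coefficients jointly.

```python
import numpy as np

class FreeRBFLayer:
    """KAN-style layer: each of the d_in univariate inputs is expanded
    in m Gaussian RBFs with free centers and log-widths, and each output
    is a learned linear combination of all responses. Illustrative sketch.
    """
    def __init__(self, d_in, d_out, m=8, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(-1, 1, (d_in, m))   # free RBF centers
        self.log_w = np.zeros((d_in, m))               # widths kept positive via exp
        self.coef = rng.standard_normal((d_out, d_in, m)) / np.sqrt(d_in * m)

    def __call__(self, x):
        # phi[n, i, j] = exp(-((x_i - c_ij) / w_ij)^2) for batch element n
        z = (x[:, :, None] - self.centers) * np.exp(-self.log_w)
        phi = np.exp(-z ** 2)
        # combine all univariate RBF responses per output channel
        return np.einsum("oij,nij->no", self.coef, phi)

layer = FreeRBFLayer(d_in=2, d_out=3)
y = layer(np.random.default_rng(1).standard_normal((5, 2)))
```

Letting the centers drift during training is what lets the basis concentrate resolution where the target function varies fastest, mirroring the spatial adaptivity of the field-based NeuRBF methods.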
7. Limitations, Scalability, and Active Areas of Research
The current limitations of NeuRBF methods stem from both architecture and training data:
- Stencils with varying topology or dimensionality: Most neural shape-adaptive methods assume fixed-size, structured stencils; unstructured point clouds would require permutation- or rotation-invariant models, such as graph neural networks or higher-dimensional coordinate features (Mojarrad et al., 2022, Chen et al., 2023).
- Boundary and complex geometry coverage: Generating suitable training datasets that span both interior and boundary regions for arbitrary domains is nontrivial.
- Joint optimization challenges: Current practice often fixes RBF centers and shapes after an initialization phase, while weights alone are learned. Dynamically optimizing all components (centers, shapes, weights) in a fully end-to-end loop remains challenging.
- Kernel family generalization: Extension from Gaussian and inverse-quadratic to compactly supported or rational RBFs can be achieved, but may require task-specific retraining (Mojarrad et al., 2022).
- Computational bottlenecks: While runtime cost for MLP inference is negligible compared to solving RBF systems, large-scale, high-dimensional applications may still be bottlenecked by system assembly and inversion. There is active research in further reducing redundancy via pruning, sparsity, and more aggressive adaptivity (Shao et al., 12 May 2025, Chen et al., 2023).
- Operator learning and physical constraints: Embedding physical laws or constraints directly into loss formulations as soft or hard penalties is an emerging direction in regression and PDE contexts (Rigutto et al., 25 Nov 2025).
Continued development in these areas aims to produce robust, scalable, and physically informed NeuRBF frameworks for high-impact applications in scientific computing, computer vision, and learning theory.