
Adaptive Sampling Framework for PINNs

Updated 3 February 2026
  • The framework introduces RAR-D to concentrate collocation points in high-residual regions, reducing errors compared to uniform sampling.
  • Adaptive sampling dynamically adjusts loss weights via exponential moving averages, ensuring balanced accuracy across PDE, boundary, and initial conditions.
  • Empirical comparisons demonstrate that BO-SA-PINNs achieve lower L2 errors with fewer points, significantly boosting computational efficiency.

An Adaptive Sampling Framework for PINNs is a class of methodologies that automatically concentrates collocation points in regions of the domain where the neural PDE residual, or a related physics-based indicator, is largest. This technique addresses a major limitation of classic PINN workflows: the inefficiency of uniform or static sampling when applied to solutions with nonuniform regularity, sharp gradients, or localized features. By adaptively redistributing collocation points, these frameworks achieve higher accuracy and training efficiency with fewer samples, which is essential for complex or high-dimensional problems. The field has evolved rapidly, with the framework in “BO-SA-PINNs: Self-adaptive physics-informed neural networks based on Bayesian optimization for automatically designing PDE solvers” introducing a rigorous, multi-stage, and highly automated pipeline that combines Bayesian optimization, residual-driven adaptive refinement with distribution (RAR-D), and dynamic loss weighting using exponential moving averages (Zhang et al., 14 Apr 2025). This article systematically details the principles, algorithms, mathematical structures, and empirical findings underpinning adaptive sampling in PINNs, with an emphasis on the RAR-D scheme as implemented in BO-SA-PINNs.

1. Mathematical Structure and Loss Function Components

The adaptive sampling paradigm is built on the formalization of the PINN loss as a weighted sum of PDE residual, boundary, and initial losses. For a PDE

N[u](x) = f(x), \quad x \in \Omega,

the PINN approximation is $\hat{u}(x; \theta)$, and the pointwise residual is

r(x) := N[\hat{u}](x; \theta) - f(x).

With boundary and initial condition terms, the total loss is

L(θ)=ωRLR(θ)+ωBLB(θ)+ωILI(θ)+L(\theta) = \omega_R L_R(\theta) + \omega_B L_B(\theta) + \omega_I L_I(\theta) + \ldots

where each component (e.g., $L_R$) is a Monte Carlo average over a collocation set:

L_R(\theta) = \frac{1}{N_R} \sum_{k=1}^{N_R} |r(x_R^k)|.

Adaptive sampling frameworks modulate the distribution or density of the input set $\{x_R^k\}$ according to indicators of local error, with the overarching aim of reducing the global loss most efficiently (Zhang et al., 14 Apr 2025).
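As a concrete illustration of this composite loss, the following sketch evaluates a Monte Carlo estimate of $L(\theta)$ for a toy 1D Poisson problem. The polynomial surrogate, finite-difference residual, and all function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy 1D Poisson problem: -u''(x) = f(x) on (0, 1), with u(0) = u(1) = 0.
# A cubic polynomial stands in for the neural network, and a central
# finite difference stands in for automatic differentiation.

def u_hat(x, theta):
    # Surrogate "network": polynomial with trainable coefficients theta,
    # built to vanish at the boundary by construction.
    return theta[0] * x * (1 - x) + theta[1] * x**2 * (1 - x)

def residual(x, theta, f, h=1e-4):
    # Pointwise PDE residual r(x) = -u''(x) - f(x) via central differences.
    upp = (u_hat(x + h, theta) - 2 * u_hat(x, theta) + u_hat(x - h, theta)) / h**2
    return -upp - f(x)

def total_loss(theta, f, x_r, x_b, w_r=1.0, w_b=1.0):
    # Weighted sum of the interior-residual and boundary-condition losses,
    # each a Monte Carlo average over its collocation set.
    L_R = np.mean(np.abs(residual(x_r, theta, f)))
    L_B = np.mean(np.abs(u_hat(x_b, theta)))
    return w_r * L_R + w_b * L_B

rng = np.random.default_rng(0)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
x_r = rng.uniform(0.01, 0.99, size=256)   # interior collocation points
x_b = np.array([0.0, 1.0])                # boundary points
print(total_loss(np.array([2.0, 0.0]), f, x_r, x_b))
```

Adaptive sampling then amounts to choosing `x_r` nonuniformly, guided by where `residual` is large.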

2. Residual-Based Adaptive Refinement with Distribution (RAR-D)

RAR-D forms the core of the sampling protocol. At every adaptive iteration, it constructs a probability density function (PDF) over candidate points, proportional to the current residual field:

p(x) \propto |r(x)|^\alpha, \quad \alpha \geq 1.

BO-SA-PINNs employs the following RAR-D implementation:

  • Generate a set of $M$ candidate points $\{x_j\}_{j=1}^M$ uniformly in $\Omega$.
  • For each candidate, evaluate residuals:

r_j = |N[\hat{u}^*](x_j) - f(x_j)|^2,

where $\hat{u}^*$ comes from the current or pre-trained network.

  • Normalize to form the discrete PDF:

\hat{r}_j = r_j / (\max_k r_k + \varepsilon), \quad p_j = \hat{r}_j / (\sum_{k=1}^M \hat{r}_k + \varepsilon).

  • Draw $K$ new points by sampling with probabilities $\{p_j\}$.
  • Augment the collocation set with these new points; iterate as needed.

The exponent $\alpha$ provides a tunable focusing effect: $\alpha > 1$ increases concentration in the highest-residual regions.
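The focusing effect of $\alpha$ can be seen in a small numerical experiment: sampling candidates with probability proportional to $|r|^\alpha$ and measuring the fraction of draws landing near the residual peak. The synthetic residual field and function names below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=2000)       # candidate points in Omega = (0, 1)
r = np.exp(-((x - 0.5) / 0.2) ** 2)        # synthetic residual field, peaked at 0.5

def fraction_near_peak(alpha, k=200):
    # Draw k points with p_j proportional to |r_j|^alpha and report the
    # fraction that fall within 0.1 of the residual peak.
    w = np.abs(r) ** alpha
    p = w / w.sum()
    idx = rng.choice(len(x), size=k, replace=False, p=p)
    return float(np.mean(np.abs(x[idx] - 0.5) < 0.1))

for alpha in (1, 2, 4):
    print(alpha, fraction_near_peak(alpha))
```

Larger exponents concentrate the draws ever more tightly around the high-residual region, which is exactly the tunable focusing behavior described above.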

RAR-D Sampling Pseudocode (Zhang et al., 14 Apr 2025):

for _ in range(n_iter):
    x_cands = sample_uniform(omega, M)               # draw M candidate points in Omega
    r = np.abs(pde_residual(x_cands)) ** 2           # r_j = |N[u_hat*](x_j) - f(x_j)|^2
    r_norm = r / (r.max() + eps)                     # normalize by the maximum residual
    p = r_norm / (r_norm.sum() + eps)                # discrete PDF over candidates
    p = p / p.sum()                                  # exact normalization for np.random.choice
    idx = np.random.choice(len(x_cands), size=K, replace=False, p=p)
    X_coll = np.concatenate([X_coll, x_cands[idx]])  # augment the collocation set

Here `sample_uniform` and `pde_residual` stand in for the problem-specific uniform sampler and PDE residual evaluator.

Typical hyperparameters are $M = 1000$, $K = 50$, $n_{\text{iter}} = 20$.

3. Dynamic Loss Weighting via Exponential Moving Averages

To compensate for the evolving importance of different loss terms during training, BO-SA-PINNs updates the weights $\omega_R, \omega_B, \omega_I$ using exponential moving averages (EMA):

  • Maintain EMA of each loss term:

m_R^{(t)} = \beta m_R^{(t-1)} + (1 - \beta) L_R^{(t)}

Analogously for $m_B, m_I$; a typical value is $\beta = 0.999$.

  • Compute provisional weights:

\omega_R'^{(t)} = m_R^{(t)} / (m_R^{(t)} + m_B^{(t)} + m_I^{(t)})

Analogously for the other components.

  • Smoothly update actual weights:

\omega_R^{(t)} = \gamma \omega_R^{(t-1)} + (1 - \gamma) \omega_R'^{(t)}

Clamp each $\omega$ within $[\varepsilon, 1 - \varepsilon]$.

This self-adaptive weighting enables robust handling of PDEs and boundary/initial data with distinct loss dynamics, preventing any term from dominating or collapsing.
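A minimal sketch of this EMA-based weighting loop is given below; the constants and the toy loss trajectory are illustrative, not the paper's code:

```python
# EMA-based loss-weight update: each weight tracks the smoothed magnitude
# of its loss term, so persistently large terms receive larger weights.
beta, gamma, eps = 0.999, 0.999, 1e-3
m = {"R": 1.0, "B": 1.0, "I": 1.0}        # EMA of each loss term
w = {"R": 1 / 3, "B": 1 / 3, "I": 1 / 3}  # current loss weights

def update_weights(losses):
    # losses: dict with the latest L_R, L_B, L_I values.
    for k in m:
        m[k] = beta * m[k] + (1 - beta) * losses[k]     # EMA update
    total = sum(m.values())
    for k in w:
        w_prov = m[k] / total                           # provisional weight
        w[k] = gamma * w[k] + (1 - gamma) * w_prov      # smooth update
        w[k] = min(max(w[k], eps), 1 - eps)             # clamp to safe range
    return dict(w)

# Toy trajectory: the residual loss stays large while the boundary and
# initial losses shrink, so omega_R should grow relative to the others.
for t in range(5000):
    update_weights({"R": 1.0, "B": 0.1, "I": 0.1})
print(w)
```

After many steps the residual weight dominates, while the clamp keeps the boundary and initial weights from collapsing to zero.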

4. Comparative Performance and Computational Benefits

RAR-D possesses critical advantages over uniform or low-discrepancy sampling:

  • Efficiency: Adaptive sampling quickly refines regions with high residual, avoiding redundant evaluations in already accurate areas.
  • Compactness: Empirical comparison shows BO-SA-PINN using RAR-D achieves an $L_2$ error of $3.2 \times 10^{-4}$ on the 2D Helmholtz equation with only ${\sim}3{,}500$ interior points (including 500 adaptively added), whereas SA-PINN used ${\sim}100{,}400$ points for a $10\times$ larger error ($3.2 \times 10^{-3}$).
  • Cost: Fewer collocation points per iteration reduce total computational burden, particularly in high-dimensional or stiff regimes.

The RAR-D-driven collocation sets lead to faster loss convergence and better local resolution in high-error zones.

5. Multi-Stage Workflow in BO-SA-PINNs

BO-SA-PINNs combine RAR-D with a hierarchical approach for full automation and generality:

  1. Stage 1: Bayesian Optimization. Automatic selection of global hyperparameters (network depth, architecture, learning rate, initial sampling distribution, and initial loss weights), optimized for the specific PDE under consideration.
  2. Stage 2: Self-Adaptive Training. Alternation of Adam training, adaptive loss-weight EMA updates, and RAR-D collocation set refinement. This closed loop ensures that both the network and the sample set adapt synergistically to the error landscape.
  3. Stage 3: L-BFGS Finalization. Once adaptive sampling has converged, the optimized network undergoes further training with the full, adaptively selected dataset held fixed, using L-BFGS for enhanced precision and stability.

The entire workflow is designed to require minimal manual intervention for hyperparameters and point selection, improving robustness and general applicability (Zhang et al., 14 Apr 2025).
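The three-stage pipeline can be summarized as a skeleton in which every function is a named stub that merely records the stage order; a real implementation would plug in a Bayesian-optimization library, the Adam/EMA/RAR-D training loop, and an L-BFGS optimizer:

```python
# Skeleton of the three-stage BO-SA-PINN workflow. All functions are stubs
# with hypothetical names and return values, shown only to make the control
# flow between stages explicit.
log = []

def stage1_bayesian_optimization():
    log.append("BO")
    return {"depth": 4, "lr": 1e-3}          # hypothetical BO search result

def stage2_self_adaptive_training(hparams, n_rounds=3):
    for _ in range(n_rounds):                # alternate train / reweight / resample
        log.append("adam+ema+rar-d")
    return "adapted_model", "final_point_set"

def stage3_lbfgs_finalization(model, points):
    log.append("l-bfgs")                     # fine-tune on the fixed point set
    return model

hp = stage1_bayesian_optimization()
model, pts = stage2_self_adaptive_training(hp)
model = stage3_lbfgs_finalization(model, pts)
print(log)
```

The key structural point is that sampling and weighting adapt only in Stage 2; Stage 3 freezes the collocation set before switching optimizers.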

6. Algorithmic Best Practices and Parameter Selection

Key practical points for effective RAR-D deployment:

  • Candidate pool size ($M$) should exceed the number of new points ($K$) by at least an order of magnitude.
  • Resampling frequency: Iterations of RAR-D (with or without retraining between steps) should be frequent (every few hundred to thousand epochs) to enable the sampling distribution to track rapid solution evolution.
  • Exponent $\alpha$: Begin with $\alpha = 1$; increase it if over-concentration leads to training instability.
  • Retain global coverage: Residual normalization and inclusion of a small uniform sampling base can help avoid unpopulated regions in $\Omega$.
  • Loss weights: Use EMA as outlined, with $\beta, \gamma \sim 0.999$, clamping to a safe range to prevent any term from vanishing.
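The "retain global coverage" advice above can be sketched by mixing a small uniform base with residual-proportional draws, so that no region of $\Omega$ goes unsampled; the 90/10 split, the synthetic residual field, and the function names are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_sample(candidates, residuals, k, uniform_frac=0.1):
    # Draw k new points: (1 - uniform_frac) residual-proportional,
    # plus a uniform safety fraction for global coverage.
    k_uni = int(round(k * uniform_frac))
    k_res = k - k_uni
    p = np.abs(residuals) / np.abs(residuals).sum()
    idx_res = rng.choice(len(candidates), size=k_res, replace=False, p=p)
    idx_uni = rng.choice(len(candidates), size=k_uni, replace=False)
    return np.concatenate([candidates[idx_res], candidates[idx_uni]])

x = rng.uniform(0.0, 1.0, 1000)       # candidate pool in Omega = (0, 1)
r = 1e-6 + (x > 0.8)                  # residual nearly zero below x = 0.8
new_pts = mixed_sample(x, r, k=50)
print((new_pts > 0.8).mean())         # most, but not all, draws concentrate
```

Without the uniform base, regions where the residual is currently tiny would receive essentially no points, and errors emerging there later could go undetected.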

7. Limitations, Extensions, and Relationship to Other Frameworks

RAR-D is best suited to problems whose key features are localized in small subdomains, e.g., solutions with singularities, high gradients, or steep fronts.

Limitations include:

  • High-dimensional scalability: As the dimension grows, uniform candidate sampling becomes inefficient unless combined with density estimation (e.g., fitting a GMM or using importance sampling) or structural prior knowledge.
  • Dynamically evolving domains: For moving boundary or domain deformation problems, iterative re-evaluation of the full domain may become impractical.

Extensions:

  • Integration with self-adaptive weighting (as in BO-SA-PINNs) to handle stiff boundary-initial-PDE tradeoffs.
  • Coupling with Bayesian optimization for end-to-end hyperparameter and sampling design.
  • Potential fusion with moving-mesh techniques, energetic/physics-informed monitors, or reinforcement learning-based samplers.

Overall, the adaptive sampling framework in BO-SA-PINNs—epitomized by RAR-D adaptive sampling and dynamic loss weighting—constitutes a state-of-the-art approach for efficient, robust, and high-accuracy PINN solvers across a diversity of PDE classes (Zhang et al., 14 Apr 2025).

References

  1. Zhang et al., “BO-SA-PINNs: Self-adaptive physics-informed neural networks based on Bayesian optimization for automatically designing PDE solvers,” 14 Apr 2025.
