
Adaptive Sampling Methods

Updated 24 January 2026
  • Adaptive sampling methods are dynamic algorithms that update sample selection based on prior data to reduce uncertainty and improve inference.
  • They employ techniques such as greedy variance reduction, moment matching, and KL divergence minimization, effective in applications like MRI, PDE solvers, and streaming analytics.
  • These methods optimize the balance between computational cost and precision, achieving faster convergence and robust performance across varied scientific tasks.

Adaptive sampling methods encompass a class of algorithms in which sample acquisition is dynamically informed by data accrued during prior iterations, with the objective of optimizing accuracy, efficiency, or statistical power with respect to a practical computational or inferential goal. These schemes are central to modern approaches in sensing, numerical solution of PDEs, stochastic optimization, risk-averse design, large-scale matrix analysis, and streaming analytics. The hallmark of adaptive sampling is its deviation from static or preordained sampling schedules, often manifesting as recursive strategies that exploit posterior uncertainty, residual distribution, or dynamic control criteria to guide the placement and weighting of future samples.

1. Mathematical Foundations of Adaptive Sampling

A prototypical adaptive sampling framework is characterized by sequential decision-making: at each round k, sample locations (or subsets) are chosen using information from prior data. Within linear sensing (e.g., MRI), given the model y = Ax + ε (x the unknown image, A the sensing operator, ε Gaussian noise), adaptive sampling proceeds in rounds: after acquiring the stacked measurements y^(k), the next sample index ℓ is chosen to maximize a principled acquisition criterion, such as reduction of posterior variance in the measurement domain. Formally, with posterior samples x̂_1^(k), …, x̂_S^(k) drawn from p(x | y^(k)) and projected measurements ŷ_i^(k) = A x̂_i^(k), the greedy selection is ℓ = argmax_n Var_n, where Var_n = Var_i{[ŷ_i^(k)]_n} quantifies ensemble disagreement at index n (Wang et al., 2023).
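The greedy variance-reduction rule above can be sketched in a few lines. This is a simplified illustration, not the cited implementation: `posterior_samples` is assumed to be an (S, d) array of posterior draws, and `measured` the set of already-acquired indices.

```python
import numpy as np

def select_next_index(posterior_samples, A, measured):
    """Greedy variance-reduction selection (sketch).

    posterior_samples: (S, d) array of draws x_i ~ p(x | y^(k))
    A: (n, d) sensing operator
    measured: set of already-acquired measurement indices
    Returns the unmeasured index with largest ensemble variance.
    """
    yhat = posterior_samples @ A.T        # (S, n) projected measurements
    var_n = yhat.var(axis=0)              # ensemble disagreement per index
    var_n[list(measured)] = -np.inf       # exclude already-acquired indices
    return int(np.argmax(var_n))
```

The argmax directly implements ℓ = argmax_n Var_n over the candidate (unmeasured) indices.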

Similar principles govern adaptive importance sampling (AIS) for stochastic integration and optimization. Here, the proposal density q_k is iteratively refined—by moment-matching, optimization of KL divergence to the target density, or generative modeling—using existing weighted samples to reduce estimator variance and tail risk (Paananen et al., 2019, Wan et al., 2023, Pieraccini et al., 14 Feb 2025).

In adaptive sampling for stochastic optimization, sample sizes or batch sizes are dynamically adjusted via variance or inner-product control tests (e.g., ensuring the gradient estimate provides a descent direction with high probability), optimizing the tradeoff between computational cost and convergence rate (Bollapragada et al., 2017, Bahamou et al., 2019).
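A minimal sketch of one such variance-control ("norm") test, assuming per-example gradient estimates are available as rows of an array; the geometric batch growth is a common heuristic, not a prescription from the cited papers:

```python
import numpy as np

def batch_size_passes_norm_test(grads, theta=0.9):
    """Norm test (sketch): the current batch is large enough if the
    per-example gradient variance is bounded relative to the squared
    norm of the averaged gradient.

    grads: (b, d) array of per-example gradient estimates."""
    g_bar = grads.mean(axis=0)
    # trace of the sample covariance of the batch mean
    var_term = grads.var(axis=0, ddof=1).sum() / len(grads)
    return var_term <= theta**2 * np.dot(g_bar, g_bar)

def next_batch_size(grads, b, theta=0.9):
    """Grow the batch geometrically whenever the test fails."""
    return b if batch_size_passes_norm_test(grads, theta) else 2 * b
```

When the test passes, the averaged gradient is a descent direction with high probability, so the current batch size is retained; otherwise more samples are requested at the next iteration.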

2. Adaptive Sampling Algorithms Across Application Domains

Bayesian Adaptive Sampling for Sensing Systems

In dynamic sensing (e.g., in compressed sensing frameworks), candidate measurements are selected to maximally reduce the uncertainty—typically quantified via the variance—of the current posterior distribution. Posterior exploration is often executed using stochastic-gradient Langevin dynamics (SGLD) and may leverage either analytical image priors or learned neural score models (trained via denoising score matching) (Wang et al., 2023). In MRI, the adaptive method applied to k-space sampling yields an empirical gain in PSNR of 2–3 dB and improved restoration of subtle anatomical details compared to fixed sampling patterns.

Deep-Learning-Based Adaptive Importance Sampling

Recent advances in neural PDE solvers (PINNs, Deep Ritz, etc.) have led to adaptive sampling methodologies where collocation training points are placed to concentrate computational effort on regions of high residual or complexity. Sample distributions are iteratively modeled:

  • Using bounded KRnet flows to mimic the integrand in the Deep Ritz variational loss, updating via KL divergence minimization to approximate optimal importance densities (Wan et al., 2023).
  • Annealing adaptive importance sampling deploys a sequence of softened residual distributions π_β(x) ∝ [r(x)]^β (β ↑ 1), fitted via mixture models and EM-like updates, facilitating robust collocation even in high-dimensional or singular PDE problems (Zhang et al., 2024).
  • Gaussian mixture distribution-based strategies directly fit the residual landscape with a mixture model, placing adaptively sampled points in multiple “hot spot” regions and incrementally refining the collocation pool (Jiao et al., 2023).
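The common core of these strategies—concentrating new collocation points where the residual is large—can be illustrated with a deliberately simplified stand-in: resampling a candidate pool with probability proportional to r(x)^β, in place of the mixture-model or flow fits used in the cited methods. All names here are illustrative.

```python
import numpy as np

def resample_collocation(candidates, residual_fn, n_new, beta=1.0, rng=None):
    """Residual-driven collocation update (simplified sketch).

    Draws n_new points from the empirical distribution
    pi_beta(x) ∝ r(x)^beta over a candidate pool."""
    if rng is None:
        rng = np.random.default_rng()
    r = np.maximum(residual_fn(candidates), 0.0) ** beta
    p = r / r.sum()
    idx = rng.choice(len(candidates), size=n_new, replace=True, p=p)
    # small jitter so repeated draws do not collapse onto identical points
    return candidates[idx] + 0.01 * rng.standard_normal(candidates[idx].shape)
```

Lower β flattens the sampling distribution (the annealed regime), while β = 1 samples in direct proportion to the residual.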

Causality-Guided and Time-Marching Adaptive Sampling

For time-dependent PDEs, causality-guided methods integrate temporal weights into the selection of collocation points, ensuring adequate precision at earlier time slices before allocating resources to later times. The adaptive-sampling indicator combines weighted residuals with temporal alignment-driven updates (TADU) of a hyperparameter that steers selection toward high-residual regions while respecting causality constraints. Points added in each cycle can be released in the next, preserving a fixed budget and reducing computational cost (Lin et al., 2024, Guo et al., 2022).
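A minimal sketch of the causal-weighting idea, under the common formulation w_i = exp(−ε Σ_{j<i} L_j) for per-slice residual losses L_j; the indicator combining it with per-point residuals is an illustrative simplification, not the exact TADU rule:

```python
import numpy as np

def causal_weights(slice_losses, eps=1.0):
    """Causality weights (sketch): w_i = exp(-eps * sum_{j<i} L_j),
    so later time slices gain weight only once earlier residuals shrink."""
    cum = np.concatenate(([0.0], np.cumsum(slice_losses)[:-1]))
    return np.exp(-eps * cum)

def weighted_indicator(point_residuals, slice_of_point, slice_losses, eps=1.0):
    """Per-point selection score: residual scaled by its slice's causal weight."""
    w = causal_weights(slice_losses, eps)
    return point_residuals * w[slice_of_point]
```

With a large unresolved residual at an early slice, later slices receive exponentially small weight, so new collocation points are drawn from the earliest poorly-resolved region first.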

3. Control Criteria, Theoretical Properties, and Sampling Optimization

Adaptive sampling algorithms invoke quantitative control criteria for sample selection and adaptation:

  • Greedy variance-reduction: maximization of ensemble posterior variance in the measurement or prediction domain, analogous to Bayesian experimental design.
  • Moment-matching in AIS: iterative adjustment of the proposal q_k such that weighted moments under q_k match importance-weighted empirical moments, approaching the optimal density for importance estimation (Paananen et al., 2019).
  • KL divergence or cross-entropy minimization: deep generative models (flows, mixtures) are trained to minimize the KL divergence between target residual-driven densities and sampler proposals (Wan et al., 2023, Zhang et al., 2024).
  • Batch-size and learning-rate adaptation: stochastic optimization frameworks control sample size via norm or inner-product tests—ensuring descent direction, bounding variance—combined with curvature-based adaptive step size selection, yielding provable convergence rates (Bahamou et al., 2019, Bollapragada et al., 2017).
  • Markov decision processes: in change detection, the sequence of sampling actions is optimized via Bellman recursion, maximizing the expected log-likelihood reward, with sampling focused on components most likely to exhibit change (Yi et al., 17 Dec 2025).
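As one concrete instance of the criteria above, a single moment-matching AIS round with a Gaussian proposal can be sketched as follows (a simplified illustration with assumed names, not the cited algorithm): draw from the proposal, compute self-normalized importance weights, and refit the proposal to the weighted mean and covariance.

```python
import numpy as np

def ais_moment_match(log_target, mu, cov, n=1000, rng=None):
    """One adaptive importance sampling round (sketch): sample from the
    Gaussian proposal N(mu, cov), compute self-normalized importance
    weights, and update the proposal to match the weighted moments."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.multivariate_normal(mu, cov, size=n)
    # log density of the Gaussian proposal at each sample
    d = len(mu)
    diff = x - mu
    inv = np.linalg.inv(cov)
    log_q = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff) \
            - 0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov)))
    log_w = log_target(x) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                          # self-normalized weights
    new_mu = w @ x
    centered = x - new_mu
    new_cov = (centered * w[:, None]).T @ centered
    return new_mu, new_cov
```

Iterating this update drives the proposal toward the moments of the target, reducing estimator variance in subsequent rounds.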

Convergence guarantees—linear or sublinear rates—are established under strong convexity, smoothness, and boundedness assumptions, with adaptive sampling strictly outperforming fixed sampling in empirical and theoretical analyses across domains (Gower et al., 2019, Beiser et al., 2020).

4. Representative Algorithms and Pseudocode

The following table encapsulates canonical adaptive sampling workflows:

Domain              Core Adaptive Operation             Update/Selection Rule
Bayesian sensing    Sequential measurement allocation   ℓ = argmax_n Var_n
PINN / Deep Ritz    Collocation point update            Sample from fitted p(x; θ_f)
Stochastic opt.     Sample size adaptation              Increase size if variance test fails
Change detection    Sampling line allocation            Bellman optimal policy iteration
Matrix subset sel.  Column selection                    Sequential residual-based CSSP

Pseudocode for a typical round of Bayesian adaptive sampling (Wang et al., 2023):

for k in range(K):
    # Posterior sampling via SGLD under the current measurement mask
    samples = [sgld_update(current_mask, data, score_model, ...) for _ in range(S)]
    # Project posterior samples into the measurement domain
    yhat = [A @ xhat for xhat in samples]
    # Ensemble variance at each candidate (unmeasured) index
    var_n = compute_sample_variances(yhat, not_in_mask)
    # Greedy selection: map the argmax back to a candidate index
    l = not_in_mask[argmax(var_n)]
    current_mask.add(l)
    # Physically acquire the new measurement at index l
    y[l] = measure(x_true, l)

5. Empirical Performance and Computational Efficiency

Adaptive sampling methods empirically demonstrate dominant performance in both convergence rate and solution fidelity across a broad spectrum of problems:

  • Deep Ritz adaptive importance sampling yields markedly reduced L_2 errors in challenging PDEs, especially for multi-peak or high-dimensional cases (Wan et al., 2023).
  • Annealed adaptive sampling in PINNs achieves convergence rates and solution accuracies unattainable by uniform or residual-only strategies, particularly in singular/high-dimensional systems (Zhang et al., 2024).
  • Causality-guided adaptive sampling for PINNs shows order-of-magnitude improvements over non-adaptive and prior adaptive methods, with near machine precision in relative L_2 errors and superior preservation of temporal solution consistency (Lin et al., 2024).
  • Binary response adaptive sampling for change detection enhances statistical power and allocates larger sample fractions to components with probable change, outperforming equal randomization for moderate sample sizes (Yi et al., 17 Dec 2025).

Computational cost is managed by adaptive control over sample size/batch size, use of generative models for efficient importance sampling, and by constrained release or removal of previously chosen samples (e.g., in causality-guided methods).

6. Connections to Matrix Analysis, Streaming, and Partitioned Domains

Adaptive sampling principles underpin algorithms for column/row selection in high-dimensional matrices, notably in adaptive column subset selection. Sequential application of relative-error CSSP algorithms to current residuals, with rigorous rank-projection adjustment, yields tighter approximation bounds than non-adaptive schemes and can be implemented efficiently for large data matrices (Paul et al., 2015).
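The classical adaptive column-sampling loop behind such schemes can be sketched as follows (a simplified illustration in the spirit of residual-based CSSP, not the cited relative-error algorithm): sample a column with probability proportional to its squared norm in the current residual, project that direction out, and repeat.

```python
import numpy as np

def adaptive_column_sampling(A, k, rng=None):
    """Adaptive column selection (sketch). Repeatedly samples a column
    with probability proportional to its squared residual norm, then
    projects the residual onto the orthogonal complement of the chosen
    column's direction."""
    if rng is None:
        rng = np.random.default_rng()
    R = A.astype(float).copy()
    chosen = []
    for _ in range(k):
        probs = (R ** 2).sum(axis=0)
        probs /= probs.sum()
        j = int(rng.choice(A.shape[1], p=probs))
        chosen.append(j)
        v = R[:, j]
        nv = np.linalg.norm(v)
        if nv > 1e-12:
            v = v / nv
            R -= np.outer(v, v @ R)   # project out the chosen direction
    return chosen, R
```

Because sampling probabilities are recomputed from the *residual* after each selection, columns well-explained by earlier choices are rapidly de-emphasized, which is the source of the tighter bounds over non-adaptive norm sampling.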

In the streaming model (turnstile streams), adaptive sampling is rendered possible by maintaining sketch-based data structures and post-processing with projections, enabling one-pass implementations for column/row selection, subspace approximation, and related summarization tasks. Theoretical analyses demonstrate that such sketches accurately approximate the adaptive selection probabilities, with polylogarithmic space complexity and provable error bounds (Mahabadi et al., 2020).
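As a toy illustration of why sketches suffice, a random projection applied to the columns preserves their squared norms up to small distortion, so (non-adaptive, first-round) sampling probabilities can be estimated from the sketch alone. This is a simplified sketch under assumed names; turnstile-stream algorithms maintain S·A incrementally and use more refined structures for the adaptive rounds.

```python
import numpy as np

def sketch_column_probs(A, m=64, rng=None):
    """Sketch-based approximation of norm-sampling probabilities (sketch).
    A Gaussian sketch S @ A (m rows) preserves column norms up to small
    distortion; in a stream, S @ A would be maintained under updates
    without ever storing A."""
    if rng is None:
        rng = np.random.default_rng()
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    SA = S @ A                      # what the streaming algorithm stores
    est = (SA ** 2).sum(axis=0)     # estimated squared column norms
    return est / est.sum()
```

The sketch uses m rows regardless of the number of rows of A, which is the source of the polylogarithmic space bounds in the cited analyses.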

For uncertainty quantification and reduced basis model construction, adaptive selection of sampling points based on surrogate error indicators and radial basis function interpolation enables rapid CDF convergence and certified error control, outperforming non-adaptive collocation or surplus-driven schemes (Camporeale et al., 2016, Chellappa et al., 2019).

7. Theoretical Implications and Open Directions

Adaptive sampling strategies uniformly leverage current estimation uncertainty and error structure to dynamically focus computational resources. Theoretical frameworks (Bayesian design, MDPs, KL divergence minimization, empirical variance control) ensure that adaptive methods can, under mild regularity assumptions, attain faster or more robust convergence than non-adaptive protocols.

Contemporary research directions include the development of adaptive samplers for non-Euclidean and hierarchical domains, causal and multi-objective adaptive frameworks, streaming-compatible adaptive protocols, and the study of theoretical limits when operating under memory, time, or sample constraints in complex data environments.

Further, recent contributions have focused on providing unbiased Horvitz–Thompson estimation in adaptive-threshold sampling, expanding the applicability of adaptive sketches in streaming contexts, and integrating adaptive sampling with generative and variational models for enhanced scalability and precision (Ting, 2017).
