SLADS Dynamic Supervised Sampling

Updated 4 March 2026
  • SLADS Dynamic Supervised Sampling adapts sparse data acquisition by greedily choosing the next measurement location to maximize the expected reduction in distortion (ERD).
  • It employs both linear regression (SLADS-LS) and deep neural networks (SLADS-Net) to predict ERD from local features, enhancing real-time sampling decisions.
  • Applied in high-throughput imaging modalities, SLADS significantly reduces sample requirements while improving reconstruction fidelity and minimizing measurement damage.

A supervised learning approach for dynamic sampling (SLADS) is a principled framework for adaptive sparse data acquisition, first proposed for high-throughput imaging modalities where exhaustive measurement is expensive, slow, or damaging. SLADS operates by greedily selecting the next measurement location to maximize the expected reduction in distortion (ERD) of the reconstructed object, using a regression surrogate trained offline on representative data. Its methodology admits both linear (SLADS-LS) and nonlinear (SLADS-Net) variants, and has been applied in contexts ranging from scanning microscopy to multichannel mass spectrometry imaging (MSI), as well as spectroscopy-driven mapping. The framework generalizes to “SLADS Dynamic Supervised Sampling,” referring to this family of learned ERD-based adaptive sampling techniques.

1. Mathematical Foundation

SLADS formulates dynamic sampling as a sequential decision process on a discrete spatial grid. Let $X \in \mathbb{R}^N$ denote the ground-truth image (or, in the multichannel case, $X \in \mathbb{R}^{m \times n \times d}$), and let $\mathcal{S} \subseteq \Omega$ be the set of measured locations after $k$ samples. At every iteration, the system reconstructs the object $\hat X^{(k)}$ via fast interpolation of the current measurements $Y^{(k)} = \{(s^{(i)}, X_{s^{(i)}})\}_{i=1}^k$.

The central criterion is the reduction in distortion (RD)

$$R^{(k;s)} = D(X, \hat X^{(k)}) - D(X, \hat X^{(k;s)}),$$

where $D$ is a user-selected distortion metric (e.g., absolute error, squared error, or misclassification count), and $\hat X^{(k;s)}$ is the hypothetical reconstruction after additionally measuring $X_s$. Since $X$ is unknown at run time, the ERD at candidate $s$ is

$$\bar R^{(k;s)} = \mathbb{E}\left[R^{(k;s)} \mid Y^{(k)}\right],$$

and the next sample is chosen via

$$s^{(k+1)} = \arg\max_{s \in \Omega \setminus \mathcal{S}} \bar R^{(k;s)}.$$
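In code, the greedy rule above amounts to an argmax over the ERD map restricted to unmeasured locations. The following is a minimal numpy sketch, not a reference implementation; the ERD map itself would come from the trained surrogate described below:

```python
import numpy as np

def select_next_sample(erd_map, measured_mask):
    """Greedy SLADS step: pick the unmeasured location with the largest ERD.

    erd_map       -- 2-D array of predicted ERD values, one per pixel
    measured_mask -- boolean 2-D array, True where a pixel is already measured
    """
    candidates = erd_map.copy()
    candidates[measured_mask] = -np.inf        # exclude measured locations
    return np.unravel_index(np.argmax(candidates), candidates.shape)

erd = np.array([[0.1, 0.9],
                [0.4, 0.2]])
mask = np.array([[False, True],
                 [False, False]])              # (0, 1) is already measured
print(select_next_sample(erd, mask))           # -> (1, 0), largest unmeasured ERD
```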

In multichannel scenarios (e.g., MSI), RD and ERD are computed per channel and averaged across channels: for $z = 1, \dots, d$,

$$E_z(t) = \text{predicted ERD for channel } z; \qquad \bar{E}(t) = \frac{1}{d}\sum_{z=1}^{d} E_z(t).$$

The sampling policy generalizes naturally to pointwise and linewise acquisition modes (Helminiak et al., 2022, Zhang et al., 2018, Godaliyadda et al., 2017).

2. Feature Extraction and Representation

Accurate ERD prediction requires local features that summarize structural, statistical, and geometric context. For each unmeasured candidate $t$, a feature vector $v(t) \in \mathbb{R}^p$ is constructed. Hand-crafted features in canonical SLADS implementations include:

  • Local error statistics: mean, variance, and gradients of reconstruction residuals within a window centered at $t$.
  • Proximity metrics: distance to the nearest measured pixel and/or inverse-distance–weighted averages.
  • Sampling indicators: ratios or binary masks of measured points among $t$'s immediate neighbors.

Quadratic or higher-order interactions may be included by augmenting $v(t)$ with pairwise products. In phase-mapping applications, classifier confidence or entropy features may be incorporated as well (Helminiak et al., 2022, Zhang et al., 2017, Godaliyadda et al., 2017).
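The feature categories above can be illustrated with a much-reduced sketch (four features instead of the full canonical set; the function name and window size are choices for this example, not part of any published implementation):

```python
import numpy as np

def candidate_features(t, recon, measured_mask, coords_measured, win=2):
    """Toy feature vector for unmeasured location t: [distance to nearest
    measured pixel, local mean, local variance, local sampling density].

    recon           -- current reconstruction (2-D array)
    measured_mask   -- boolean mask of measured pixels
    coords_measured -- (M, 2) array of measured pixel coordinates
    """
    r, c = t
    # Proximity: Euclidean distance to the nearest measured pixel.
    d = np.sqrt(((coords_measured - np.array([r, c])) ** 2).sum(axis=1))
    nearest = d.min()
    # Local statistics of the reconstruction in a (2*win+1)^2 window.
    patch = recon[max(r - win, 0):r + win + 1, max(c - win, 0):c + win + 1]
    # Sampling density among the same neighborhood.
    local_mask = measured_mask[max(r - win, 0):r + win + 1,
                               max(c - win, 0):c + win + 1]
    return np.array([nearest, patch.mean(), patch.var(), local_mask.mean()])

recon = np.ones((5, 5))
mask = np.zeros((5, 5), dtype=bool)
mask[0, 0] = True
v = candidate_features((2, 2), recon, mask, np.argwhere(mask).astype(float))
print(v)   # [distance, local mean, local variance, local sampling density]
```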

3. Regression Models for ERD Estimation

SLADS employs supervised regression to approximate ERD as a function of the features $v(t)$, trained on simulated sampling trajectories from representative training images.

3.1 SLADS-LS (Least Squares):

A linear model predicts ERD:

$$\bar R^{(k;s)} \approx v(s)^\top \beta,$$

where $\beta \in \mathbb{R}^p$ is found via offline minimization of

$$\beta^\ast = \arg\min_\beta \sum_{i=1}^N \left(R_i - v_i^\top \beta\right)^2,$$

with closed-form solution $\beta^\ast = (V^\top V)^{-1} V^\top r$ (Helminiak et al., 2022, Zhang et al., 2018, Godaliyadda et al., 2017).
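The offline fit is ordinary least squares and can be sketched in a few lines on synthetic data (illustrative; `np.linalg.lstsq` is used rather than forming $(V^\top V)^{-1}$ explicitly, for numerical stability):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 6, 500
V = rng.normal(size=(N, p))                       # training features, rows v_i^T
beta_true = rng.normal(size=p)
r = V @ beta_true + 0.01 * rng.normal(size=N)     # simulated RD training targets

# Closed-form least-squares fit of the SLADS-LS coefficients.
beta_hat, *_ = np.linalg.lstsq(V, r, rcond=None)
print(np.allclose(beta_hat, beta_true, atol=0.01))
```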

3.2 SLADS-Net (Neural Network):

SLADS-Net replaces the linear predictor with a multi-layer perceptron $g_w(v)$, which maps $v(t) \in \mathbb{R}^p$ to a scalar ERD estimate:

  • 5 hidden layers of 50 units each, identity or leaky-ReLU activations.
  • Trained by mean-squared error loss:

$$L(w) = \frac{1}{N} \sum_{i=1}^N \left(R_i - g_w(v_i)\right)^2.$$

Support Vector Regression (SVR) has also been used, but is computationally heavier in practice (Zhang et al., 2018).
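A forward pass of the architecture described above (5 hidden layers of 50 units, leaky-ReLU) and its MSE objective can be sketched in plain numpy with random, untrained weights; this only illustrates shapes and the loss, not a trained model:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def mlp_erd(v, weights, biases):
    """SLADS-Net-style forward pass: 5 hidden layers of 50 units each,
    leaky-ReLU activations, linear scalar output (the ERD estimate)."""
    h = v
    for W, b in zip(weights[:-1], biases[:-1]):
        h = leaky_relu(h @ W + b)
    return h @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
p = 10                                             # feature dimension
sizes = [p] + [50] * 5 + [1]
Ws = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

V = rng.normal(size=(8, p))                        # batch of 8 feature vectors
R = rng.normal(size=(8, 1))                        # corresponding RD targets
pred = mlp_erd(V, Ws, bs)
loss = np.mean((R - pred) ** 2)                    # the MSE training objective
print(pred.shape, float(loss))
```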

4. Sampling Policy and Execution

Both SLADS-LS and SLADS-Net employ the trained regression (linear or NN) to produce, at each step, an ERD map over all unmeasured locations. In pointwise mode, the next measurement $t^\ast$ is

$$t^\ast = \arg\max_{t \in T} \bar{E}(t),$$

where $T$ is the set of unmeasured candidates and $\bar{E}(t)$ is the (channel-averaged) ERD. In linewise mode, applicable to hardware-constrained raster settings (e.g., nano-DESI MSI), the row $\ell$ with the highest cumulative ERD $\sum_{t \in \text{row } \ell} \bar{E}(t)$ is selected, and the top ERD-scoring fraction of its pixels is acquired (Helminiak et al., 2022).
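The linewise variant can be sketched as a row argmax followed by a within-row top-fraction selection (a minimal illustration; the function name and `frac` parameter are choices for this example):

```python
import numpy as np

def select_line(erd_map, measured_mask, frac=0.5):
    """Linewise SLADS step: choose the row with the highest cumulative ERD
    over unmeasured pixels, then the top `frac` ERD-scoring pixels in it."""
    masked = np.where(measured_mask, 0.0, erd_map)
    row = int(np.argmax(masked.sum(axis=1)))       # best row by cumulative ERD
    k = max(1, int(frac * masked.shape[1]))
    cols = np.argsort(masked[row])[::-1][:k]       # top-ERD columns in that row
    return row, sorted(cols.tolist())

erd = np.array([[0.1, 0.2, 0.1],
                [0.5, 0.6, 0.4],
                [0.3, 0.1, 0.2]])
mask = np.zeros_like(erd, dtype=bool)
print(select_line(erd, mask))  # -> (1, [1]): row 1 wins; its top pixel is col 1
```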

In burst (groupwise) sampling, a greedy surrogate is used: pseudo-measurements are hallucinated for previously chosen points within a burst, and ERD is re-estimated before each selection (Godaliyadda et al., 2017).
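The burst heuristic can be sketched as follows; `predict_erd` is a hypothetical stand-in for the trained surrogate, and for brevity the hallucination only updates the measurement mask (a full implementation would also insert the interpolated pseudo-value into the reconstruction):

```python
import numpy as np

def select_burst(recon, measured_mask, predict_erd, burst=3):
    """Greedy groupwise (burst) selection: after each pick, the point is
    treated as pseudo-measured and the ERD map is re-estimated before the
    next pick."""
    mask = measured_mask.copy()
    chosen = []
    for _ in range(burst):
        erd = np.where(mask, -np.inf, predict_erd(recon, mask))
        t = np.unravel_index(np.argmax(erd), erd.shape)
        chosen.append(t)
        mask[t] = True                  # hallucinated pseudo-measurement
    return chosen

# Toy surrogate for illustration only: ERD grows with the flat pixel index.
toy_erd = lambda est, mask: np.arange(est.size, dtype=float).reshape(est.shape)
picks = select_burst(np.zeros((2, 2)), np.zeros((2, 2), dtype=bool), toy_erd)
print(picks)  # -> [(1, 1), (1, 0), (0, 1)]
```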

Computational overhead per iteration remains low ($<0.6$ s for $10^5$ candidates on a modern GPU), supporting real-time operation (Helminiak et al., 2022).

5. Applications and Empirical Performance

SLADS and its variants have been extensively validated in high-throughput and damage-limited imaging contexts, notably:

  • Mass Spectrometry Imaging (MSI): Achieves a $\sim$70% reduction in required samples, with SLADS-Net yielding a 36.7% improvement in ERD accuracy and +7% in m/z–reconstruction PSNR (AUC metric) over single-channel SLADS-LS. Further gains are realized by deep CNN-based methods such as DLADS (U-Net), which improve ERD accuracy by 6.2% and reconstruction by 6.0% over SLADS-LS (Helminiak et al., 2022).
    Model           Configuration     m/z–PSNR AUC    ERD–PSNR AUC
    SLADS-LS        Single-channel    745.5           536.9
    SLADS-LS        Multichannel      756.5           684.0
    SLADS-Net       Multichannel      765.2           731.3
    DLADS (U-Net)   Multichannel      792.0           778.4
  • Scanning Electron Microscopy (SEM): Reduces required sampling to $<$20–40% of a full raster with high-fidelity reconstructions (e.g., PSNR $>33$ dB at 40% sampling for similar train/test; 20.88 dB for dissimilar train/test with SLADS-Net) (Zhang et al., 2018).
  • Energy Dispersive Spectroscopy (EDS): Coupled with a neural classifier, achieves near-perfect phase mapping with only 5–20% of full-raster acquisition, maintaining distortion $<10^{-2}$ and classification error $<0.5\%$ (Zhang et al., 2017).
  • Other high-dimensional imaging: The method generalizes to segmentation, spectral mapping, and high-content imaging (Godaliyadda et al., 2017, Zhang et al., 2018).

Pre-trained SLADS-Net on generic texture-rich images (“cameraman”) performs robustly even on previously unseen microstructure types, supporting rapid out-of-the-box deployment (Zhang et al., 2018).

6. Limitations, Assumptions, and Practical Guidance

SLADS assumes that the training data are representative of the sampling domain and that the selected features suffice to predict ERD. Mismatches in texture or noise between training and deployment can degrade performance, particularly for linear SLADS-LS and kernel SVR; deep nonlinear predictors (SLADS-Net), however, demonstrate greater robustness to such mismatch (Zhang et al., 2018). For generic or unknown samples, pretraining on diverse images mitigates the need for custom data (Zhang et al., 2018). The Gaussian-kernel approximation used for RD is tuned offline, and groupwise selection is a greedy surrogate that may introduce bias.

Lightweight interpolators (e.g., inverse-distance–weighted mean) are recommended for on-the-fly reconstruction (Helminiak et al., 2022, Zhang et al., 2018). Feature extraction, ERD computation, and sample selection are computationally efficient (1–100 ms per iteration for $512^2$ images) (Godaliyadda et al., 2017).
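An inverse-distance–weighted interpolator of the kind recommended above can be sketched as follows (illustrative; practical implementations typically restrict the weighting to the k nearest measured neighbors for speed):

```python
import numpy as np

def idw_reconstruct(shape, coords, values, power=2.0, eps=1e-9):
    """Inverse-distance-weighted reconstruction of a 2-D image from sparse
    measurements; measured pixels keep their exact values."""
    rows, cols = np.indices(shape)
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    d = np.sqrt(((grid[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2))
    w = 1.0 / (d ** power + eps)                   # inverse-distance weights
    recon = ((w @ values) / w.sum(axis=1)).reshape(shape)
    recon[tuple(coords.astype(int).T)] = values    # pin measured pixels exactly
    return recon

coords = np.array([[0.0, 0.0], [3.0, 3.0]])        # two measured locations
vals = np.array([0.0, 1.0])
img = idw_reconstruct((4, 4), coords, vals)
print(img[0, 0], img[3, 3])  # measured pixels reproduced exactly: 0.0 1.0
```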

7. Evolution and Comparison with Deep Learning Approaches

While SLADS-LS and SLADS-Net perform competitively and efficiently, the introduction of deeper convolutional models (such as DLADS with U-Net architectures) further improves ERD estimation and final reconstructions for challenging multichannel imaging (e.g., nano-DESI MSI), with an additional 3.4–6.0% gain in reconstruction quality over SLADS-Net (Helminiak et al., 2022). The rise of such models reflects a broader trend in sparse sampling and image reconstruction, wherein CNN-based estimators surpass shallow or hand-crafted regression-driven approaches in both representational power and generalization, particularly as the statistical structure of the data grows more complex.

