Alternating Sampling Framework

Updated 7 October 2025
  • The alternating sampling framework is a suite of methods that decomposes complex sampling and optimization problems into manageable subproblems by iteratively updating blocks of variables.
  • Key techniques include structured Gibbs sampling, proximal oracles, ADMM-based consensus, and leverage-score based ALS, enabling efficient handling of non-smooth and high-dimensional models.
  • Applications span distributed Bayesian inference, tensor decompositions, phase retrieval, and sensor-actuator design, offering robust convergence and computational efficiency.

The alternating sampling framework encompasses a suite of algorithmic techniques characterized by iterative, staged optimization or sampling updates, alternating between variable blocks or subproblems. These frameworks are widely used in stochastic modeling, optimization, data decomposition, and structured learning, particularly where direct or joint handling of all variables is computationally infeasible or analytically intractable. Recent research has advanced alternating sampling methods across domains such as non-smooth convex sampling, distributed Bayesian inference, tensor decompositions, networked control system design, phase retrieval, and more. Central features include structured Gibbs sampling in augmented spaces, dynamic programming with Bellman recursions, proximal oracles for composite potentials, leverage-score based randomized ALS for high-dimensional tensors, and ADMM-inspired consensus protocols in distributed settings.

1. Mathematical Foundations and General Formulation

The central paradigm in alternating sampling frameworks is the decomposition of a complex sampling or optimization problem into subproblems that can be solved efficiently by sequentially fixing and updating variable blocks. Often, Gibbs sampling is used for probabilistic models, where the target distribution $\pi(x) \propto \exp(-f(x))$ is augmented by auxiliary variables (e.g., $y$). A prototypical construction involves forming the joint density:

$$\pi(x, y) \propto \exp\left(-f(x) - \frac{1}{2\eta}\|x - y\|^2\right)$$

Since integrating out $y$ contributes only a constant factor, the $x$-marginal of $\pi(x, y)$ recovers $\pi(x)$, and the sampler alternates between the conditional updates:

  • y-update: $y_k \sim \mathcal{N}(x_k, \eta I)$ (Gaussian draw)
  • x-update: $x_{k+1} \sim p(x \mid y_k) \propto \exp\left(-f(x) - \frac{1}{2\eta}\|x - y_k\|^2\right)$ (proximal sampling)

This two-step procedure is generically referred to as the alternating sampling framework (ASF) (Liang et al., 2021, Liang et al., 2022, Liang et al., 2 Apr 2024). Proximal sampling oracle implementations often rely on Moreau regularization or bundle methods in the case of non-smooth $f$.
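For concreteness, the following minimal sketch (not taken from the cited papers) instantiates the two ASF conditionals for the toy potential $f(x) = \|x\|^2/2$, for which both conditional draws are exact Gaussians; the function name `asf_gaussian` and the choice $\eta = 0.5$ are illustrative assumptions.

```python
import numpy as np

def asf_gaussian(n_iters=1000, eta=0.5, d=2, rng=None):
    """Alternating sampling framework (ASF) for the toy potential
    f(x) = ||x||^2 / 2, i.e. a standard Gaussian target pi(x).

    With this f, the x-update density
        p(x | y) ~ exp(-||x||^2/2 - ||x - y||^2 / (2*eta))
    is itself Gaussian with mean y / (1 + eta) and covariance
    (eta / (1 + eta)) * I, so both conditionals can be drawn exactly.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(d)
    samples = []
    for _ in range(n_iters):
        # y-update: y_k ~ N(x_k, eta * I)
        y = x + np.sqrt(eta) * rng.standard_normal(d)
        # x-update: exact draw from the Gaussian conditional p(x | y)
        mean = y / (1.0 + eta)
        var = eta / (1.0 + eta)
        x = mean + np.sqrt(var) * rng.standard_normal(d)
        samples.append(x.copy())
    return np.array(samples)

# The empirical covariance of the x-samples should approach the identity.
print(np.cov(asf_gaussian(20000).T))
```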

In alternating minimization contexts (such as pose estimation (Campos et al., 2019) or sensor-actuator design in LQG systems (Yang et al., 25 Apr 2025)), variables, such as rotation $\mathbf{R}$ and translation $\mathbf{t}$ or matrices $B$ and $C$, are iteratively optimized by fixing one block and minimizing the overall objective with respect to the other.
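As a hedged illustration of this fix-one-block, minimize-the-other pattern, the sketch below alternates closed-form least-squares updates for the simple bilinear objective $\|A - BC\|_F^2$; the objective, shapes, and names are assumptions chosen for clarity rather than the pose or LQG formulations of the cited works.

```python
import numpy as np

def alternating_min(A, r=5, n_iters=50, rng=None):
    """Alternating minimization of ||A - B C||_F^2 over B (m x r) and C (r x n).

    Each block update fixes one factor and solves the resulting linear
    least-squares problem in closed form, so the objective is non-increasing
    after every block update.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    B = rng.standard_normal((m, r))
    C = rng.standard_normal((r, n))
    for _ in range(n_iters):
        # C-update: least squares with B fixed
        C, *_ = np.linalg.lstsq(B, A, rcond=None)
        # B-update: least squares with C fixed (solve C^T B^T = A^T)
        Bt, *_ = np.linalg.lstsq(C.T, A.T, rcond=None)
        B = Bt.T
    return B, C

A = np.random.default_rng(0).standard_normal((30, 20))
B, C = alternating_min(A, r=5)
print(np.linalg.norm(A - B @ C))  # residual of the final factorization
```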

2. Design of Alternating Steps and Oracles

Alternating frameworks hinge on efficient conditional update mechanisms:

  • Proximal Sampling Oracles/Restricted Gaussian Oracles (RGO): The x-update typically requires sampling from $p(x) \propto \exp(-f(x) - \frac{1}{2\eta}\|x - y\|^2)$. For composite or non-smooth $f$, direct sampling is nontrivial. Proposed algorithms employ:
    • Cutting-plane or bundle methods to approximate solutions to $\arg\min_x \{f(x) + \frac{1}{2\eta}\|x - y\|^2\}$
    • Rejection sampling based on sandwiching $f$ between quadratic surrogate functions $h_1$ and $h_2$ to ensure dimension-free acceptance rates (Liang et al., 2021, Liang et al., 2022, Liang et al., 2 Apr 2024); a minimal 1-D rejection-sampling sketch follows this list.
  • Dynamic Programming and Threshold Policies: In sequential selection, dynamic programming equations (Bellman recursions) specify optimal selection rules via value functions $v(s, r)$ and symmetrized threshold strategies (Arlotto et al., 2011).
  • ADMM-based Consensus Updates: In distributed sampling, local updates solve noisy proximal subproblems and dual variables are evolved to achieve consensus, with theoretical guarantees in Wasserstein distance (Tzikas et al., 29 Jan 2024).
  • Alternating Nonnegative Least Squares (NLS): For NMF, alternating updates solve NLS subproblems for $W$ and $H$, exploiting parallel matrix multiplication and localized updates (Kannan et al., 2016).
  • Alternating Least Squares (ALS) with Sampling: For tensor decompositions, innovation includes leverage-score sampling of design matrices with TN-contracted probabilities, yielding input sublinear cost (Malik et al., 2022); a simplified matrix-case sketch follows this list.
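The matrix-case sketch of the leverage-score sampling idea is shown below; it uses a dense thin QR to compute leverage scores for an ordinary least-squares subproblem, whereas the cited work obtains these probabilities from tensor-network contractions. All names and sizes are illustrative assumptions.

```python
import numpy as np

def leverage_sampled_lstsq(A, b, n_samples, rng=None):
    """Approximately solve min_x ||A x - b||_2 by sampling rows of A with
    probabilities proportional to their leverage scores.

    Leverage scores are the squared row norms of Q from a thin QR of A, and
    they sum to rank(A). Sampled rows are rescaled by 1/sqrt(n_samples * p_i)
    so the sketched least-squares problem is an unbiased surrogate.
    """
    rng = np.random.default_rng() if rng is None else rng
    Q, _ = np.linalg.qr(A)                      # thin QR, Q has orthonormal columns
    lev = np.sum(Q**2, axis=1)                  # leverage scores
    p = lev / lev.sum()                         # sampling distribution p(i)
    idx = rng.choice(A.shape[0], size=n_samples, replace=True, p=p)
    scale = 1.0 / np.sqrt(n_samples * p[idx])
    A_s = A[idx] * scale[:, None]               # rescaled sampled rows
    b_s = b[idx] * scale
    x, *_ = np.linalg.lstsq(A_s, b_s, rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 10))
b = A @ np.ones(10) + 0.01 * rng.standard_normal(5000)
print(leverage_sampled_lstsq(A, b, n_samples=200, rng=rng))  # close to the all-ones vector
```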

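The rejection-sampling RGO mentioned above can be illustrated in one dimension for the non-smooth potential $f(x) = |x|$; the sketch below uses a single linear lower bound $h_1$ built from the proximal point, a simplification of the sandwiching constructions in the cited papers, and all names and step sizes are illustrative.

```python
import numpy as np

def rgo_sample(y, eta, rng, max_tries=1000):
    """Restricted Gaussian Oracle via rejection sampling for f(x) = |x| (1-D).

    Draws X ~ p(x) proportional to exp(-|x| - (x - y)^2 / (2*eta)) by
      1. computing the proximal point x* = argmin |x| + (x - y)^2/(2*eta)
         (soft-thresholding),
      2. proposing from the Gaussian obtained by replacing |x| with its
         linear lower bound h1(x) = |x*| + g*(x - x*), with g* = (y - x*)/eta,
      3. accepting with probability exp(-(|X| - h1(X))) <= 1.
    """
    x_star = np.sign(y) * max(abs(y) - eta, 0.0)            # prox of |.| with step eta
    g_star = (y - x_star) / eta                              # subgradient of |.| at x_star
    for _ in range(max_tries):
        X = x_star + np.sqrt(eta) * rng.standard_normal()    # proposal N(x*, eta)
        h1 = abs(x_star) + g_star * (X - x_star)             # linear lower bound on |X|
        if rng.random() <= np.exp(-(abs(X) - h1)):           # |X| >= h1(X) by convexity
            return X
    raise RuntimeError("rejection sampling did not accept")

rng = np.random.default_rng(0)
print([rgo_sample(y=1.0, eta=0.25, rng=rng) for _ in range(5)])
```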
3. Complexity and Convergence Guarantees

Alternating frameworks are often designed to achieve strong theoretical and practical complexity bounds:

  • Sampling Complexity: For non-smooth convex potentials, alternating proximal sampling algorithms achieve $\tilde{\mathcal{O}}(d \epsilon^{-1})$ complexity (in total variation), outperforming gradient-based methods like Langevin Monte Carlo and offering non-asymptotic convergence in KL or $\chi^2$ divergence (Liang et al., 2021, Liang et al., 2022, Liang et al., 2 Apr 2024).
  • Distributed Convergence: D-ADMMS guarantees convergence in 2-Wasserstein distance, with a geometric contraction factor and error floor dictated by the noise level, outperforming decentralized Langevin and SGHMC in distributed Bayesian tasks (Tzikas et al., 29 Jan 2024).
  • ALS and Tensor Network Efficiency: Input sublinear per-iteration cost (in tensor size) is achieved through leverage-score sampling, with competitive decomposition error and feature extraction accuracy (Malik et al., 2022).
  • ADMM-based Sensor/Actuator Design: Explicit Riccati gradient formulas and closed-form proximal updates enable efficient convergence in structured control configuration problems (Yang et al., 25 Apr 2025).

4. Application Domains and Representative Scenarios

Alternating sampling frameworks have been empirically and theoretically validated across a spectrum of domains:

  • Sequential Selection of Alternating Subsequences: Online decision making with alternating minima/maxima achieves nearly optimal selection rates, incurring an explicit 12% penalty vs prophet (offline) selection (Arlotto et al., 2011).
  • Nonnegative Matrix Factorization: Large-scale NMF on distributed memory architectures with MPI framework, alternating between NLS subproblems for factor matrices (Kannan et al., 2016).
  • Phase Retrieval: Alternating phase inference with deep denoiser priors outperforms classical regularization for in- and out-of-distribution images (Agrawal et al., 2022).
  • Tensor Decomposition: Alternating sampling-ALS algorithms generalize to arbitrary tensor network formats, enabling efficient feature extraction in high-dimensional data (Malik et al., 2022).
  • Joint Sensor and Actuator Configuration: ADMM-based alternating minimization enables flexible LQG optimization under sparsity, rank, or structural constraints (Yang et al., 25 Apr 2025).
  • Distributed Bayesian Inference: ADMM-based sampling accommodates privacy and communication constraints in federated learning and sensor networks (Tzikas et al., 29 Jan 2024).
  • Pose Estimation: Alternating minimization between rotation and translation yields computationally efficient solvers for absolute/relative camera problems (Campos et al., 2019).

5. Extensions, Limitations, and Future Directions

Alternating sampling frameworks are evolving to address more challenging scenarios:

  • Semi-smooth and Composite Potentials: Universality is sought via adaptive bundle methods and proximal oracles, with complexity guarantees independent of hard-to-compute problem parameters (Liang et al., 2 Apr 2024).
  • Non-Cartesian and Variable Constraints: Extensions to non-uniform sampling (e.g., non-Cartesian MRI, structure-constrained actuators) require customized proximal operators and heuristic initialization strategies (Zibetti et al., 2021, Yang et al., 25 Apr 2025).
  • Implicit Priors in Inverse Problems: Integration with elaborate priors (e.g., learned by denoisers) in non-convex inverse problems offers robustness to out-of-distribution shifts and is a subject of ongoing research (Agrawal et al., 2022).
  • Algorithmic Acceleration: The incorporation of momentum, adaptive learning rates, or accelerated proximal schemes is anticipated as a future research direction (Liang et al., 2 Apr 2024).

6. Comparison with Classical and Contemporary Methods

Alternating sampling frameworks distinguish themselves from classical methodologies:

  • Compared to Standard Gibbs or MH: Alternating updates exploiting problem structure (augmentation, symmetry, or alternating blocks) produce rapid mixing and input-efficient sampling in structured models; examples include RBM sampling (alternating Gibbs sampling, AGS, vs. MH) (Roussel et al., 2021) and tensor ALS with leverage sampling (Malik et al., 2022). A minimal RBM Gibbs sketch follows this list.
  • Contrast with Gradient-Based MCMC: Proximal sampling techniques eliminate the need for smooth gradients, robustly treat composite/non-smooth potentials, and yield favorable complexity (Liang et al., 2021, Liang et al., 2022).
  • Against Joint Learning Methods: Decoupled, alternating learning frameworks with monotonicity checks and non-differentiable heuristics offer better stability and convergence than end-to-end joint learning (Zibetti et al., 2021).
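As a concrete instance of the AGS-vs-MH contrast in the first point, the sketch below implements plain alternating (block) Gibbs sampling for a binary RBM, exploiting the bipartite structure that makes both conditionals factorize; the weight shapes, biases, and step count are arbitrary illustrative choices, not those of the cited study.

```python
import numpy as np

def rbm_alternating_gibbs(W, b_v, b_h, n_steps, rng=None):
    """Block (alternating) Gibbs sampling for a binary RBM.

    The bipartite structure makes both conditionals factorize, so the sampler
    alternates exact draws of all hidden units given the visibles and all
    visible units given the hiddens.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    v = rng.integers(0, 2, size=b_v.shape[0]).astype(float)
    for _ in range(n_steps):
        h = (rng.random(b_h.shape[0]) < sigmoid(v @ W + b_h)).astype(float)    # h | v
        v = (rng.random(b_v.shape[0]) < sigmoid(h @ W.T + b_v)).astype(float)  # v | h
    return v, h

rng = np.random.default_rng(2)
W = 0.1 * rng.standard_normal((6, 4))   # 6 visible units, 4 hidden units
v, h = rbm_alternating_gibbs(W, b_v=np.zeros(6), b_h=np.zeros(4), n_steps=100, rng=rng)
print(v, h)
```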

7. Key Algorithms and Formulas

Alternating sampling frameworks often center on key mathematical constructs and formulaic implementations. Representative examples include:

| Algorithm Type | Main Update Formula / Principle | Domain |
| --- | --- | --- |
| Proximal Sampling (ASF) | $x_{k+1} \sim \exp\left(-f(x) - \frac{1}{2\eta}\|x - y_k\|^2\right)$ | Convex Sampling |
| ADMM-based Sampling | $x_i^{(k+1)} = \operatorname{prox}_{\gamma_i f_i}\{\ldots\}$ | Distributed Inference |
| ALS with Leverage Scores | $p(i) = \ell_i(A) / \operatorname{rank}(A)$ | Tensor Networks |
| Riccati-based Gradients | $\frac{\partial J_{LQG}}{\partial B} = -2P(G_1 + G_2)PBR^{-1}$ | Control Configuration |
| Threshold Selection | Reflection identity $v(s, 0) = v(1-s, 1)$ | Sequential Selection |
| Rejection Sampling (RGO) | Accept $X$ if $U \leq \exp(-g^\eta(X)) / \exp(-h_1(X))$ | Non-smooth Potentials |
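To make the ADMM-based sampling row concrete, the following sketch runs a generic consensus ADMM in which each agent's proximal update is perturbed with Gaussian noise; the quadratic local potentials, the noise scale, and the variable names are illustrative assumptions and this is not the D-ADMMS algorithm of the cited paper.

```python
import numpy as np

def noisy_consensus_admm(a, rho=1.0, noise=0.3, n_iters=200, rng=None):
    """Generic noisy consensus ADMM over N agents with local potentials
    f_i(x) = ||x - a_i||^2 / 2, so each proximal step has a closed form.

    Each agent perturbs its proximal update with Gaussian noise, the consensus
    variable z averages the local estimates, and dual variables u_i accumulate
    the consensus residuals.
    """
    rng = np.random.default_rng() if rng is None else rng
    N, d = a.shape
    x = np.zeros((N, d))
    u = np.zeros((N, d))
    z = np.zeros(d)
    z_trace = []
    for _ in range(n_iters):
        # noisy proximal step: argmin f_i(x) + (rho/2)||x - (z - u_i)||^2, plus noise
        v = z - u
        x = (a + rho * v) / (1.0 + rho) + noise * rng.standard_normal((N, d))
        z = np.mean(x + u, axis=0)     # consensus update
        u = u + x - z                  # dual update on the consensus constraint
        z_trace.append(z.copy())
    return np.array(z_trace)

rng = np.random.default_rng(3)
a = rng.standard_normal((5, 2)) + np.array([2.0, -1.0])   # local data means
trace = noisy_consensus_admm(a, rng=rng)
print(trace[-5:])   # z fluctuates around the average of the a_i
```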

Summary

Alternating sampling frameworks are characterized by staged updates exploiting problem structure (block-wise variables, augmented conditionals, consensus constraints, or data-dependent thresholds). They enable efficient sampling and optimization for non-smooth, high-dimensional, or distributed problems and offer robust convergence properties substantiated by recent theoretical and empirical research. The framework is highly generalizable, encompassing applications from statistical inference and machine learning to control systems and signal processing. Continued evolution in domain adaptation, composite optimization, and integration with learned priors is anticipated.
