Execution Feedback-Based Rejection Sampling
- Execution feedback-based rejection sampling is a framework that uses both accepted and rejected proposals as latent data to enhance sampling efficiency.
- It adaptively refines proposal distributions by treating rejected trials as informative feedback for tuning envelope quality and parameter updates.
- Applications include improved MCMC mixing rates, effective gradient-based parameter estimation, and tractable inference in doubly-intractable models.
Execution Feedback-Based Rejection Sampling is a methodological framework in which the outcomes of sampled proposals—specifically, the information from rejected and accepted trials—are systematically recorded and exploited in both analysis and inference. This paradigm enables tractable augmentation of probabilistic models, adaptive tightening of sampling envelopes, and efficient gradient-based parameter estimation even in the presence of intractable or black-box normalizing constants. The central innovation is to treat the trace of rejected proposals as latent execution feedback, thereby transforming discrete sampling steps into useful probabilistic or differentiable components for advanced Monte Carlo algorithms.
1. Formal Principles and Theoretical Foundations
Execution feedback-based rejection sampling generalizes the classical accept–reject procedure: to draw from a target density $f(x) = h(x)/Z$ with unnormalized form $h$, one samples $x \sim g$ from a proposal and accepts with probability $h(x)/(M g(x))$, where the envelope constant $M$ satisfies $h(x) \le M g(x)$ for all $x$. Each accepted data point arises as the first acceptance in an i.i.d. trial sequence, with the preceding rejected proposals viewed not as wasted computation but as latent random variables.
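As a concrete illustration, the accept–reject loop with its rejected trials retained might be sketched as follows (the half-normal target, Exp(1) proposal, and envelope constant $M = e^{1/2}$ are illustrative choices, not taken from the source):

```python
import math
import random

def rejection_sample(target, propose, proposal_pdf, M, rng=random.Random(0)):
    """Draw one point from the (unnormalized) density `target` by
    accept-reject, returning the accepted point together with the
    list of rejected trials (the execution feedback)."""
    rejects = []
    while True:
        y = propose(rng)
        # accept y with probability target(y) / (M * proposal_pdf(y))
        if rng.random() < target(y) / (M * proposal_pdf(y)):
            return y, rejects
        rejects.append(y)

# Illustrative choice: half-normal target h(x) = exp(-x^2/2) on [0, inf)
# with an Exp(1) proposal g(x) = exp(-x); M = sup h/g = exp(1/2) ensures
# h(x) <= M * g(x) everywhere.
h = lambda x: math.exp(-x * x / 2)
g = lambda x: math.exp(-x)
M = math.exp(0.5)
x, rejects = rejection_sample(h, lambda r: r.expovariate(1.0), g, M)
```

The returned `rejects` list is exactly the trace that the augmentation schemes below treat as latent data.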
The data augmentation scheme (Rao, Lin, Dunson, (Rao et al., 2014)) expresses the joint density over the accepted draw $x$ and its rejected proposals $y_1, \dots, y_r$ as:

$$p(x, y_1, \dots, y_r \mid \theta) \;=\; \frac{h(x \mid \theta)}{M} \prod_{j=1}^{r} \left[\, g(y_j \mid \theta) - \frac{h(y_j \mid \theta)}{M} \,\right].$$
Crucially, while the marginal $f(x \mid \theta) = h(x \mid \theta)/Z(\theta)$ depends on an intractable normalization $Z(\theta)$, the joint density of the full trial sequence is tractable. Consequently, Markov chain Monte Carlo (MCMC) algorithms can alternate between (i) regenerating execution paths via rejection sampling and (ii) updating $\theta$ based on the joint likelihood of $(x, y_1, \dots, y_r)$, sidestepping the need for $Z(\theta)$ entirely. The approach guarantees uniform ergodicity under mild conditions and achieves rapid mixing when envelope tightness is optimized.
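The tractable joint density described above can be evaluated directly from a recorded trial sequence. The following sketch assumes, for simplicity, a proposal $g$ that does not depend on $\theta$, and uses illustrative densities rather than those of the cited work:

```python
import math

def joint_log_density(x, rejects, log_h, g, M):
    """Tractable joint log-density of an accepted draw x and its rejected
    trials, for an unnormalized target h (supplied as log_h) and a fixed
    proposal density g with envelope constant M; the normalizing
    constant Z never appears."""
    lp = log_h(x) - math.log(M)  # accepted trial contributes h(x)/M
    for y in rejects:
        # each rejected trial contributes g(y) - h(y)/M
        lp += math.log(g(y) - math.exp(log_h(y)) / M)
    return lp

# Illustrative half-normal target with Exp(1) proposal and M = exp(1/2).
g = lambda y: math.exp(-y)
M = math.exp(0.5)
lp = joint_log_density(0.3, [1.5, 0.4], lambda x: -x * x / 2, g, M)
```

An MCMC update for $\theta$ would simply call this function inside a Metropolis-Hastings acceptance ratio, regenerating the rejected trials between parameter moves.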
2. Adaptive Proposal Construction via Execution Feedback
Adaptive refinement of proposal distributions is driven by execution feedback indicating envelope quality. In the self-tuned vertical weighted strips (VWS) methodology (Raim et al., 21 Sep 2025), the sequence of accepted and rejected proposals tightens the partitioning of the proposal, improving the match to the true target density $f$. The algorithm maintains a persistent set of “knots” defining piecewise envelopes for $f$ in the mixture

$$g(x) \;=\; \sum_{k=1}^{K} w_k\, g_k(x),$$

with the weights $w_k$ and components $g_k$ encoding local envelope bounds and masses. When rejection rates (feedback) exceed a threshold $\tau$, new knots are inserted where the proposal is too loose; strips with negligible contribution are pruned for computational efficiency. This results in sustained acceptance rates and near-exact draws for arbitrary univariate conditionals in Gibbs samplers, without reconstructing proposals from scratch at each iteration.
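A minimal sketch of feedback-driven envelope refinement, assuming a decreasing univariate target and a piecewise-constant envelope (a deliberate simplification, not the VWS algorithm itself):

```python
import bisect
import math
import random

def adaptive_envelope_sampler(h, a, b, n, rng=random.Random(0)):
    """Draw n points from a density proportional to h (assumed decreasing
    on [a, b]) under a piecewise-constant envelope whose knots are
    inserted at rejected points: a minimal analogue of feedback-driven
    strip refinement."""
    knots = [a, b]
    draws, n_rejects = [], 0
    while len(draws) < n:
        heights = [h(knots[k]) for k in range(len(knots) - 1)]
        masses = [heights[k] * (knots[k + 1] - knots[k]) for k in range(len(heights))]
        # pick a strip with probability proportional to its envelope mass
        r, k = rng.random() * sum(masses), 0
        while k < len(masses) - 1 and r > masses[k]:
            r -= masses[k]
            k += 1
        x = knots[k] + rng.random() * (knots[k + 1] - knots[k])
        if rng.random() < h(x) / heights[k]:
            draws.append(x)
        else:
            n_rejects += 1
            bisect.insort(knots, x)  # feedback: tighten the envelope here
    return draws, knots, n_rejects

draws, knots, n_rejects = adaptive_envelope_sampler(lambda x: math.exp(-x), 0.0, 5.0, 2000)
```

Each rejection inserts a knot exactly where the envelope was loosest relative to the target, so the acceptance rate rises as sampling proceeds.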
3. Feedback-Driven Inference in Doubly-Intractable Models
Inference in models with doubly-intractable normalizing constants—such as matrix Langevin distributions on Stiefel manifolds and nonparametric Gaussian process density estimation—benefits uniquely from execution feedback-based augmentation. By augmenting observed data $x$ with the rejections $y_1, \dots, y_r$, sampling and inference can be performed using only the joint $p(x, y_1, \dots, y_r \mid \theta)$, which is tractable and independent of the intractable $Z(\theta)$. MCMC moves are then executed using Metropolis-Hastings or Hamiltonian Monte Carlo (HMC) directly in the augmented space. This scheme yields effective sample sizes (ESS/sec) up to an order of magnitude larger than competing exchange samplers and enables the use of gradient-based methods in inference pipelines that previously relied on local, autocorrelated updates.
4. Bitwise Execution Feedback and Information Complexity
The bitwise feedback model (Langevin et al., 29 Sep 2025) studies decision complexity by analyzing how many random bits must be revealed to certify acceptance or rejection in rejection sampling from monotonic densities over the unit hypercube. The alternating strategy ALT cycles through the variables, revealing one bit at a time, until the feasible region defined by revealed bits enables a deterministic decision. Main results establish:
- Upper bound: an explicit worst-case bound on the number of bits ALT must reveal,
- Lower bound: a lower bound on the bits required, witnessed by a carefully constructed density $f$,
- Proposition: the number of “straddling” hypercubes determines tail probabilities, linking the complexity of decision regions to feedback-driven stopping times.
This theoretical perspective formalizes execution feedback as a resource in stochastic simulation, with implications for algorithmic optimality and adaptive bitwise comparison protocols.
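A one-dimensional illustration of the bit-revealing idea (a simplification, not the multivariate ALT strategy): to decide the Bernoulli accept event $U < p$, one reveals bits of $U$ only until the interval consistent with them lies entirely on one side of $p$:

```python
import random

def accept_with_bits(p, rng=random.Random(0), max_bits=64):
    """Decide the Bernoulli(p) accept event U < p by revealing the binary
    expansion of a uniform U one bit at a time, stopping once the
    interval of U values consistent with the revealed bits lies entirely
    on one side of p.  Returns (decision, bits_revealed)."""
    lo, hi = 0.0, 1.0  # interval of U still consistent with revealed bits
    for bits in range(1, max_bits + 1):
        mid = (lo + hi) / 2
        if rng.random() < 0.5:  # next bit of U is 0: lower half
            hi = mid
        else:                   # next bit of U is 1: upper half
            lo = mid
        if hi <= p:
            return True, bits   # certainly U < p: accept
        if lo >= p:
            return False, bits  # certainly U >= p: reject
    return lo < p, max_bits     # precision-limit fallback
```

The expected number of revealed bits is small (about two for generic $p$), which is the resource the bitwise model accounts for.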
5. Differentiable Rejection Sampling via Execution Trace Reweighting
Rejection Sampling with Autodifferentiation (RSA) (Heller et al., 4 Nov 2024) operationalizes execution feedback as a differentiable computation graph. By recording the full trial sequence (accepted and rejected proposals) at a base parameter point $\theta_0$, the model computes an exact reweighting for new parameterizations $\theta$:

$$w(\theta) \;=\; \frac{p(x, y_1, \dots, y_r \mid \theta)}{p(x, y_1, \dots, y_r \mid \theta_0)},$$
with smooth derivatives for use in gradient-based loss minimization. This enables efficient parameter fitting and model exploration in physics simulations and ML-driven MC pipelines, leveraging full event-level data rather than summary statistics. Practical scaling is maintained by storing only necessary trial information and employing multiple base parameterizations to ensure coverage.
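A plain-Python sketch of trace reweighting (without autodifferentiation, and assuming a fixed proposal and a common envelope constant valid for both parameterizations; the densities are illustrative, not those of the cited work):

```python
import math

def trace_weight(x, rejects, log_h_new, log_h_base, g, M):
    """Likelihood-ratio weight carrying a recorded rejection-sampling
    trace (accepted x plus rejected trials) from a base parameter point
    to a new one, assuming a fixed proposal g and a common envelope
    constant M valid for both parameterizations."""
    logw = log_h_new(x) - log_h_base(x)  # accepted trial: ratio of h(x)/M terms
    for y in rejects:
        # rejected trial: ratio of (g(y) - h(y)/M) terms
        logw += math.log(g(y) - math.exp(log_h_new(y)) / M)
        logw -= math.log(g(y) - math.exp(log_h_base(y)) / M)
    return math.exp(logw)

# Illustrative scale family h(x | s) = exp(-x^2 / (2 s^2)) with an Exp(1)
# proposal; M = 2 bounds h/g for both s = 1.0 and s = 1.1.
g = lambda y: math.exp(-y)
log_h = lambda s: (lambda x: -x * x / (2 * s * s))
w = trace_weight(0.8, [1.2, 0.5], log_h(1.1), log_h(1.0), g, 2.0)
```

Replacing the `math` calls with an autodiff framework's equivalents would make `w` differentiable in the new parameters, which is the step RSA automates.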
6. Applications, Algorithmic Performance, and Comparative Analysis
Execution feedback-based rejection sampling enables superior performance in several domains:
- Flow cytometry truncation handling, with efficient mixture-based augmentation yielding resolved densities near truncation boundaries.
- Matrix Langevin and GP-density models where previous algorithms suffered from poor mixing and long autocorrelation times; execution-feedback augmentation facilitated HMC transitions and rapid convergence.
- In Gibbs samplers for small area estimation (Raim et al., 21 Sep 2025), the self-tuned VWS approach achieved a substantially larger effective sample size in 1.8 minutes on the county-level model than Metropolis-within-Gibbs produced in 31 s.
- In high-dimensional optimization and sampling (OS* algorithm (Dymetman et al., 2012)), recycling rejected points as feedback for bound refinement markedly sped up acceptance compared to static or randomly refined bounding strategies.
The empirical studies consistently demonstrate that integrating execution feedback into envelope construction, proposal adaptation, and model reweighting leads to substantial gains in mixing rates, statistical efficiency, and computational tractability.
7. Limitations, Open Questions, and Future Directions
Despite its generality, the effectiveness of execution feedback-based rejection sampling is contingent on several factors:
- Envelope tightness and the efficiency of partitioning schemes critically influence mixing rates and computational burden.
- In adaptive reweighting applications, coverage limitations can result in loss of effective sample size far from the base parameterization, necessitating hierarchical or multi-base schemes.
- The information complexity of feedback-driven halting strategies remains partially open, with the gap between the ALT upper and lower bounds inviting further work to close it.
- Extension to high-dimensional or continuous distributions requires new acceptance-comparison protocols and careful management of memory and computational costs.
A plausible implication is that execution feedback principles may be extended to black-box models and settings lacking well-behaved proposal distributions, as evidenced by successful deployment in LLM-based verbalized rejection sampling (Xiao et al., 11 Jun 2025). Potential generalizations include embedding feedback-driven primitives in Monte Carlo chains, developing information-optimal bit-revealing schedules, and combining execution-feedback traces with surrogate or amortized inference models for large-scale applications.