Optimal Budgeted Rejection Sampling (OBRS)
- OBRS is a principled framework that maximizes sample quality under budget constraints by jointly optimizing proposal design and acceptance rates.
- Divide-and-conquer and adaptive envelope strategies reduce rejection costs from polynomial scaling in the problem size to constant order, with theoretical guarantees on f-divergence minimization.
- OBRS extends to applications in budgeted prediction and high-dimensional settings, enabling efficient sampling in both discrete and continuous distribution scenarios.
Optimal Budgeted Rejection Sampling (OBRS) is a principled framework for maximizing the number or quality of accepted samples from a target distribution given explicit constraints on computational, sample, or randomness budgets. OBRS methods optimize the interplay between proposal design, acceptance rates, and resource management, often with theoretical guarantees on optimality under stringent budget constraints.
1. Rejection Sampling and Budget Constraints
In classical rejection sampling, samples are drawn from a proposal $q$ and accepted with probability proportional to $p(x)/q(x)$, where $p$ is the target and a constant $M$ ensures $Mq$ dominates $p$ over the support. The efficiency is determined by the average acceptance rate, $1/M$. Traditional formulations ignore the possibility of limited resources, but practical settings often impose hard limits on the total number of candidate draws, random bits, or overall runtime. OBRS seeks to optimize sample quality against such constraints.
The critical insight is that, under budgeted conditions, the goal is to maximize the number of valid samples or minimize the divergence between the empirical post-rejection law and the true distribution $p$, subject to the allowed number of proposals or random bits. Rigorous analysis shows that naive rejection sampling often yields suboptimal behavior in these regimes, especially as the acceptance probability decays with problem size (see, e.g., the order $n^{-3/4}$ acceptance probability for integer partitions of $n$ (Arratia et al., 2011)).
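To make the budgeted setting concrete, the following minimal Python sketch caps the number of candidate draws rather than the number of accepted samples; the function names, the normal target/proposal pair, and the constant M = 2 are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def budgeted_rejection_sampling(target_pdf, proposal_sampler, proposal_pdf,
                                M, budget, rng=None):
    """Draw at most `budget` proposals; return whatever samples get accepted.

    Assumes target_pdf(x) <= M * proposal_pdf(x) on the support.
    """
    rng = np.random.default_rng(rng)
    accepted = []
    for _ in range(budget):                      # hard cap on candidate draws
        x = proposal_sampler(rng)
        u = rng.uniform()
        if u <= target_pdf(x) / (M * proposal_pdf(x)):
            accepted.append(x)
    return np.array(accepted)

# Illustrative use: target is a standard normal, proposal a wider normal.
if __name__ == "__main__":
    from scipy.stats import norm
    target = norm(0, 1).pdf
    proposal = norm(0, 2).pdf
    M = 2.0                                      # sup_x target(x)/proposal(x) = 2
    samples = budgeted_rejection_sampling(
        target, lambda rng: rng.normal(0, 2), proposal, M, budget=10_000)
    print(f"acceptance rate ~ {len(samples) / 10_000:.3f} (theory: 1/M = 0.5)")
```

With the budget fixed, the only levers left are the proposal and the acceptance rule, which is precisely what the strategies below optimize.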
2. Divide-and-Conquer Strategies
Probabilistic Divide-and-Conquer (PDC) frameworks split proposal variables into independent partitions (e.g., the part multiplicities in integer partitions) and match conditions deterministically or recursively across reduced subproblems (Arratia et al., 2011). For integer partitions of $n$, splitting the multiplicity vector into $A = (Z_1)$ and $B = (Z_2, Z_3, \dots)$ allows one to solve for $Z_1$ explicitly given $B$, increasing the acceptance probability. The acceptance cost drops from order $n^{3/4}$ (full vector rejection) to order $n^{1/4}$ (single-coordinate conditioning). Recursive PDC, exploiting generating function identities, further reduces the cost to a constant ($O(1)$ asymptotically), independent of $n$.
Mix-and-match strategies extend PDC to the multi-sample setting by decoupling the proposal and matching phases and reusing matches via the coupon collector's principle, delivering sublinear scaling in the number of samples, which is critical for budgeted multi-sample applications.
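A sketch of the single-coordinate (deterministic second half) PDC idea for uniform random partitions of $n$, in the spirit of Arratia et al.: multiplicities of parts $\geq 2$ are drawn from Fristedt's independent geometric measure, the multiplicity of part 1 is solved for deterministically, and acceptance depends on that one coordinate only. The tilt parameter and acceptance weight follow the standard construction; this is an illustrative sketch, not the paper's exact pseudocode.

```python
import numpy as np

def pdc_partition(n, rng=None):
    """Sample a uniform random partition of n via PDC (deterministic second half).

    Multiplicities Z_i of part i (i >= 2) are drawn from Fristedt's independent
    geometric measure; Z_1 is then forced to close the gap, and the proposal is
    accepted with probability P(Z_1 = r) / max_k P(Z_1 = k) = x**r.
    """
    rng = np.random.default_rng(rng)
    x = np.exp(-np.pi / np.sqrt(6 * n))          # Fristedt's tilt parameter
    while True:
        # Z_i ~ geometric on {0, 1, 2, ...} with success probability 1 - x**i
        Z = {i: rng.geometric(1 - x**i) - 1 for i in range(2, n + 1)}
        r = n - sum(i * z for i, z in Z.items())
        if r >= 0 and rng.uniform() < x**r:      # single-coordinate acceptance
            Z[1] = r                             # deterministic "second half"
            return sorted((part for part, mult in Z.items()
                           for _ in range(mult)), reverse=True)

print(pdc_partition(50, rng=0))
```

Compared with rejecting the full multiplicity vector, only one coordinate has to match, which is where the acceptance-cost improvement comes from.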
3. Adaptive and Optimized Envelope Construction
Adaptive rejection sampling methods update the proposal envelope in response to rejected samples, refining local bounds to increase acceptance on subsequent trials. For log-convex tails and multimodal densities, adaptive schemes partition the support and compute interval-wise majorants and minorants for the weight functions (Martino et al., 2011). Envelope refinement via addition of support points leads to improved matching and higher acceptance rates over time.
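A minimal sketch of envelope refinement for the simplest case of a monotone decreasing density on [0, 1] (a deliberately restricted setting, not the generalized majorant/minorant construction of Martino et al.): each interval's supremum gives a piecewise-constant envelope, and every rejected point is added as a new support point, tightening the envelope exactly where rejections occur.

```python
import bisect
import numpy as np

def adaptive_rejection_sample(f, n_samples, rng=None):
    """Adaptive rejection sampling for a decreasing (possibly unnormalized)
    density f on [0, 1].

    Keeps a piecewise-constant envelope f(a_i) on each interval [a_i, a_{i+1});
    every rejected point becomes a new support point, tightening the envelope.
    """
    rng = np.random.default_rng(rng)
    knots = [0.0, 1.0]                       # interval endpoints (support points)
    samples = []
    while len(samples) < n_samples:
        heights = np.array([f(a) for a in knots[:-1]])        # majorant per interval
        widths = np.diff(knots)
        masses = heights * widths
        i = rng.choice(len(masses), p=masses / masses.sum())  # pick an interval
        x = rng.uniform(knots[i], knots[i + 1])               # uniform within it
        if rng.uniform() < f(x) / heights[i]:
            samples.append(x)
        else:
            bisect.insort(knots, x)          # refine the envelope at the rejection
    return np.array(samples)

# Illustrative target: truncated exponential density on [0, 1] (unnormalized is fine).
draws = adaptive_rejection_sample(lambda x: np.exp(-3.0 * x), 1000, rng=0)
print(draws.mean())
```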
The Nearest Neighbor Adaptive Rejection Sampling (NNARS) algorithm establishes minimax near-optimality by iteratively constructing grid-based nonparametric density estimates with confidence radii under Hölder continuity assumptions (Achdou et al., 2018). NNARS demonstrates that, for a proposal budget $n$ and Hölder regularity $s$, the best achievable loss (number of rejected proposals) grows sublinearly in $n$ at a rate determined by $s$ and the dimension, and the algorithm attains this minimax rate up to logarithmic factors.
Optimization of the envelope, whether for a functional upper bound in high-dimensional graphical models (OS* algorithm (Dymetman et al., 2012)) or by gradient refinement of proposal parameters (Raff et al., 2023), seeks to minimize the rejection constant or the post-rejection $f$-divergence with respect to the target distribution.
4. Optimality with Respect to f-Divergences
A unifying principle of OBRS is provable optimality for minimizing general $f$-divergences between the target $p$ and the post-rejection law (Verine et al., 2023). The optimal acceptance function under a fixed budget constraint is
$$a_K(x) = \min\!\left(1, \frac{p(x)}{c_K\, q(x)}\right),$$
where $c_K \le M$ is chosen so the mean number of accepted samples matches the prescribed budget $K$ (taking $c_K = M$ recovers standard, unbudgeted rejection sampling), and $M$ is an upper bound on the density ratio $p(x)/q(x)$. This formulation is independent of the choice of $f$-divergence (KL, Jensen-Shannon, Rényi with parameter $\alpha$, etc.), guaranteeing that OBRS minimizes any desired divergence subject to budget constraints.
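A sketch of how this budgeted acceptance rule can be calibrated in practice, assuming normalized densities $p$ and $q$ and a Monte Carlo estimate of the acceptance rate under $q$; the bisection on $c_K$ and the helper names are illustrative assumptions rather than the reference implementation.

```python
import numpy as np

def calibrate_obrs(p, q, q_samples, K, M, tol=1e-4):
    """Find c_K in (0, M] such that E_q[min(1, p(X)/(c_K q(X)))] ~ 1/K.

    `q_samples` are draws from the proposal q used to estimate the expectation.
    """
    ratios = p(q_samples) / q(q_samples)
    def acc_rate(c):
        return np.mean(np.minimum(1.0, ratios / c))
    if acc_rate(M) >= 1.0 / K:             # full-quality RS already fits the budget
        return M
    lo, hi = 1e-12, M                      # acc_rate is decreasing in c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if acc_rate(mid) > 1.0 / K:
            lo = mid                       # acceptance still above budget: raise c
        else:
            hi = mid
    return 0.5 * (lo + hi)

def obrs_accept(x, p, q, c_K, rng):
    """Accept x ~ q with the optimal budgeted probability min(1, p(x)/(c_K q(x)))."""
    return rng.uniform() < min(1.0, p(x) / (c_K * q(x)))

# Illustrative usage with a normal target/proposal pair (placeholders for a real model):
from scipy.stats import norm
rng = np.random.default_rng(0)
p_pdf, q_pdf = norm(0, 1).pdf, norm(0, 2).pdf
c_K = calibrate_obrs(p_pdf, q_pdf, rng.normal(0, 2, 50_000), K=1.25, M=2.0)
accepted = [x for x in rng.normal(0, 2, 1000) if obrs_accept(x, p_pdf, q_pdf, c_K, rng)]
print(f"c_K = {c_K:.3f}, acceptance rate ~ {len(accepted) / 1000:.2f} (target 1/K = 0.8)")
```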
Sample complexity results for approximate rejection sampling show that, given a fixed number of proposals from $q$ and a bounded $f$-divergence between target and proposal, the optimal total variation error of the post-rejection law decays at a rate governed by the tails of the density ratio $p/q$, so the divergence between target and proposal dictates how the error scales with the budget (Block et al., 2023).
5. Budgeted Classification and Rejection in Structured Prediction
OBRS concepts extend to classification and regression under explicit reject options (Franc et al., 2021, Cheng et al., 2023). The Bayes-optimal rejector in cost-based models abstains when prediction risk or uncertainty exceeds the cost: for regression, predictions are rejected when estimated variance exceeds rejection cost; for classification, when conditional risk exceeds a set threshold.
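The two reject rules just described amount to simple threshold tests; the sketch below shows both, with placeholder estimates of predictive variance and class posteriors standing in for a trained model.

```python
import numpy as np

def regression_with_reject(mean, variance, reject_cost):
    """Predict with a squared-error model; abstain when the predictive variance
    (the conditional risk of predicting the mean) exceeds the rejection cost."""
    return ("reject", None) if variance > reject_cost else ("predict", mean)

def classification_with_reject(class_probs, reject_cost):
    """Cost-based reject option for 0/1 loss: predict the argmax class unless the
    conditional risk 1 - max_k P(y=k|x) exceeds the rejection cost."""
    class_probs = np.asarray(class_probs)
    risk = 1.0 - class_probs.max()
    return ("reject", None) if risk > reject_cost else ("predict", int(class_probs.argmax()))

print(regression_with_reject(mean=2.3, variance=0.4, reject_cost=0.1))   # abstains
print(classification_with_reject([0.55, 0.30, 0.15], reject_cost=0.3))   # abstains
print(classification_with_reject([0.92, 0.05, 0.03], reject_cost=0.3))   # predicts class 0
```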
Budgeted sequential classification systems (EMSCO algorithm (Hamilton et al., 2022)) optimize accuracy, cost, and coverage by integrating confidence-based rejection into evolutionary multi-stage design. Pareto efficient solutions balance resource expense and correct prediction under coverage constraints, informatively leveraging rejection sampling ideas for resource-constrained deployment.
6. Efficiency in Discrete and High-Dimensional Settings
In generating random samples from finite discrete distributions using coin flips, OBRS frameworks target entropy-optimality. The Amplified Loaded Dice Roller (ALDR) achieves an expected entropy cost of fewer than $H(P) + 2$ coin flips (where $H(P)$ is the entropy of the target distribution $P$), with space complexity only linearithmic in the problem size, improving substantially on previous alias-table methods (Draper et al., 5 Apr 2025).
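For contrast with ALDR, the following sketch implements the naive dyadic rejection sampler that the entropy-optimal line of work improves upon: draw $\lceil \log_2 m \rceil$ fair coin flips to get a uniform index, reject indices at or above the total integer weight $m$, and map accepted indices to outcomes through cumulative weights. Its expected flip count can far exceed the entropy-optimal range that ALDR attains; this baseline only illustrates the coin-flip cost model and is not the ALDR algorithm.

```python
import random
from bisect import bisect_right
from itertools import accumulate

def dyadic_loaded_die(weights, coin=lambda: random.getrandbits(1)):
    """Sample index i with probability weights[i]/sum(weights) using only fair
    coin flips: draw k = ceil(log2 m) bits, reject values >= m, and repeat."""
    m = sum(weights)                       # total integer weight
    k = (m - 1).bit_length()               # flips per attempt
    cum = list(accumulate(weights))        # cumulative weights for the lookup
    while True:
        u = 0
        for _ in range(k):                 # k coin flips -> uniform on [0, 2^k)
            u = (u << 1) | coin()
        if u < m:                          # accept: u is uniform on [0, m)
            return bisect_right(cum, u)    # map to the outcome owning slot u

counts = [0, 0, 0]
for _ in range(30_000):
    counts[dyadic_loaded_die([1, 2, 5])] += 1
print(counts)   # roughly proportional to 1 : 2 : 5
```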
For Markov chain Monte Carlo, kernels with tunable rejection rates (via cyclic shift of cumulative weights) yield exponential reductions in autocorrelation time as rejections decline (Suwa, 2022). This observation motivates OBRS designs emphasizing minimal rejection probability for optimal chain mixing in discrete variable models.
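A sketch of the cyclic-shift (geometric allocation) construction underlying such kernels, written in the style of the original Suwa–Todo rule rather than the 2022 variant cited above; the wraparound convention and the max-weight-first ordering are assumptions of this sketch. Packing probability flow into the shifted cumulative-weight stack drives the self-transition (rejection) probabilities to zero whenever no single weight exceeds half the total.

```python
import numpy as np

def cyclic_shift_kernel(weights):
    """Build a rejection-minimizing transition matrix P by cyclically shifting the
    cumulative weight stack (geometric allocation).  Rows sum to 1, and the flow
    columns reproduce the weights, so weights/sum(weights) is stationary."""
    w = np.asarray(weights, dtype=float)
    order = np.argsort(-w)                  # put the largest weight first
    w = w[order]
    n, total = len(w), w.sum()
    S = np.cumsum(w)                        # cumulative weights S_1, ..., S_n
    S_prev = np.roll(S, 1)                  # S_{j-1}, with S_0 := total (wraparound)
    v = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            delta = S[i] - S_prev[j] + w[0]                     # shifted-stack offset
            v[i, j] = max(0.0, min(delta, w[i] + w[j] - delta, w[i], w[j]))
    P = v / w[:, None]                      # transition probabilities P(i -> j)
    inv = np.argsort(order)                 # undo the max-first reordering
    return P[np.ix_(inv, inv)]

P = cyclic_shift_kernel([0.4, 0.35, 0.25])
print(np.round(P, 3), P.sum(axis=1))        # stochastic matrix with zero diagonal
```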
7. Applications and Future Directions
OBRS enables exact or approximate sampling in a wide range of settings: integer partitions, Bayesian computation with intractable likelihoods, generative model correction (with provable divergence guarantees), multi-objective resource-constrained prediction, and cryptographic random variate generation. Practical algorithms combine divide-and-conquer, adaptive envelope refinement, and sample allocation strategies (e.g. multilevel Monte Carlo (Warne et al., 2017)) to achieve efficiency under strict budget limitations.
Open questions include eliminating residual toll gaps for discrete distributions while maintaining linearithmic space, further characterization and empirical exploitation of mix-and-match strategies, and development of general partitioning and majorization principles for proposal design in arbitrary measure spaces (Raim et al., 18 Jan 2024).
Summary Table: OBRS Methodologies
| Principle | Key Mechanism | Effect on Acceptance Rate/Cost |
|---|---|---|
| Divide-and-Conquer | Recursive/conditional splitting; mix-and-match/coupon methods | Reduces cost from order $n^{3/4}$ to $O(1)$ |
| Adaptive Envelope | Support point, region, or grid refinement | Loss attains the minimax-optimal rate (up to log factors) |
| $f$-divergence Opt. | Acceptance threshold via divergence minimization | Provable optimality for any $f$-divergence |
| Budgeted Selection | Confidence-based or risk-based reject options | Optimal thresholding under budget |
| Entropy-Optimal Sampl. | Dyadic amplification; coin-flip complexity optimization | Expected cost below $H(P) + 2$ coin flips |
OBRS frameworks synthesize sampling efficiency and optimality under budgets, bridging algorithmic principles in probabilistic inference, resource-constrained prediction, and generative modeling. Rigorous analysis of acceptance functions, adaptive mechanisms, and envelope construction enables systematic improvements in diverse real-world and theoretical applications.