Accept-then-Stop (AtS) Algorithm
- Accept-then-Stop (AtS) is a distributed decision-making paradigm that separates local acceptance based on single-sample statistics from global stopping criteria to ensure correctness.
- The algorithm applies local acceptance and incremental tests across nodes, confirming global properties via recursive counter propagation (network consensus), patience-based stopping (variational inference), or a posteriori reweighting (path sampling).
- AtS achieves significant computational efficiency and convergence gains by enabling prompt local decision-making while employing post-processing reweighting for unbiased results.
The Accept–then–Stop (AtS) algorithm is a paradigm for distributed decision-making and stochastic optimization wherein iterations or proposals are accepted based on a local or incremental test, and then the procedure is terminated as soon as a global property or acceptance criterion can be certified. Originally conceptualized for local stopping in consensus algorithms over networks, AtS principles have been adapted in recent research to path sampling in rare event simulations and variational inference for probabilistic models. The unifying thread is the decoupling of acceptance (usually based on local or single-sample statistics) from the stopping (global or ensemble-correct, sometimes via weighting), yielding significant efficiency gains without compromising correctness.
1. Formal Definitions and Core Principles
AtS algorithms share the structure:
- Acceptance: Local criteria (e.g., neighborhood consensus, single-sample improvement, trajectory endpoint) are checked at each step or node.
- Stopping: The process halts only once a global or correctly reweighted property is guaranteed (e.g., network-wide accuracy, unbiased distributional invariance).
In the context of network consensus (Xie et al., 2017), agents on a strongly connected graph iteratively update states and propagate scalar counters, with each node "accepting" a local consensus event (its values near neighbors' values) and "stopping" only when it can infer, from local information and recursion over network diameter, that global consensus up to a specified tolerance is achieved.
For stochastic variational inference, AtS appears as a single-sample acceptance rule per iteration (based on ELBO improvement or an acceptance probability) and a halting condition once no improvements are detected for a patience budget (Dayta, 2024).
In transition path sampling (TPS), AtS refers to the always-accepting shooting protocol: new trajectories are generated so that they are always reactive (meet the rare event boundary), stopping segment integration as soon as an endpoint is reached, and applying a posteriori statistical weighting to restore ensemble correctness (Häupl et al., 13 Feb 2026).
2. Algorithmic Workflows and Pseudocode
Distributed Consensus Stopping
Each agent $i$ on a strongly connected directed graph of diameter $D$ maintains local counters:
- $y_i$: number of consecutive rounds of local $\epsilon$-consensus,
- $z_i$: recursive minimum of neighbor counters and $y_i$, incremented by 1.
The local logic per agent $i$ at each time $t$:
- Gather from in-neighbors $j$ their state $x_j$ and minimum counter $m_j = \min(y_j, z_j)$.
- If $\max_j |x_i - x_j| < \epsilon$, increment $y_i$ by 1; else reset $y_i$ to 0.
- Set $z_i = 1 + \min\big(\min_j m_j,\; m_i\big)$.
- If $z_i \ge D + 1$, declare consensus achieved and stop updating.
Pseudocode:
```python
# Per-agent AtS logic (pseudocode for agent i)
for i in range(N):
    y[i], z[i] = 0, 0
while not stopped:
    broadcast(x[i], m[i] := min(y[i], z[i]))
    for j in in_neighbors(i):
        receive(x[j], m[j])
    if max(abs(x[i] - x[j]) for j in in_neighbors(i)) < eps:
        y[i] += 1
    else:
        y[i] = 0
    z[i] = 1 + min([m[j] for j in in_neighbors(i)] + [m[i]])
    if z[i] >= D + 1:
        stopped = True   # declare consensus achieved; stop updating
```
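The per-agent logic can be exercised end to end. The following is a minimal runnable sketch (not from the cited work) that simulates synchronous averaging with the AtS stopping rule on a small directed ring; the graph, averaging weights, `eps`, and `D` are illustrative assumptions:

```python
import numpy as np

def simulate(adj, x0, eps, D, max_rounds=10_000):
    """adj[i] = in-neighbors of node i; D = (an upper bound on) the diameter."""
    n = len(x0)
    x = np.array(x0, dtype=float)
    y = np.zeros(n, dtype=int)   # consecutive rounds of local eps-consensus
    z = np.zeros(n, dtype=int)   # recursive minimum counter
    for t in range(max_rounds):
        m = np.minimum(y, z)     # counters broadcast at the start of the round
        x_new = np.empty(n)
        for i in range(n):
            nbrs = adj[i]
            x_new[i] = np.mean([x[i]] + [x[j] for j in nbrs])  # consensus update
            if max(abs(x[i] - x[j]) for j in nbrs) < eps:
                y[i] += 1        # local eps-consensus held this round
            else:
                y[i] = 0
            z[i] = 1 + min([m[j] for j in nbrs] + [m[i]])
        x = x_new
        if np.all(z >= D + 1):   # every node has certified global consensus
            return t + 1, x
    return max_rounds, x

# Directed ring of 4 nodes: i receives from (i - 1) mod 4; diameter D = 3.
adj = {i: [(i - 1) % 4] for i in range(4)}
rounds, x = simulate(adj, [0.0, 1.0, 2.0, 3.0], eps=1e-3, D=3)
print(rounds, x.round(4))
```

Because the update matrix is doubly stochastic, the final states cluster tightly around the initial average; the detected-consensus round lags the actual one by only a few counter-propagation rounds.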
YOASOVI Stochastic Variational Inference
Each iteration draws a single sample and performs an accept/reject step driven by the empirical ELBO improvement:
- Draw a single sample $z \sim q_{\lambda}$.
- Evaluate the single-sample gradient and the candidate ELBO estimate $\hat{\mathcal{L}}(\lambda')$.
- Acceptance ratio:
  - Naive: accept whenever the single-sample ELBO improves, i.e., $\Delta\hat{\mathcal{L}} = \hat{\mathcal{L}}(\lambda') - \hat{\mathcal{L}}(\lambda) > 0$.
  - Metropolis: $\alpha = \min\{1, \exp(\tau\, \Delta\hat{\mathcal{L}})\}$ with inverse temperature $\tau$.
- Accept the update if $u < \alpha$, $u \sim \mathrm{Uniform}(0,1)$; otherwise retain the previous $\lambda$.
- Terminate after $P$ consecutive non-improving rounds (the patience budget).
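A toy, self-contained version of this loop (an illustration, not the YOASOVI reference implementation) with a 1D Gaussian target, a random-walk proposal on the variational mean, a Metropolis-style single-sample test, and a patience-based stop:

```python
import math
import random

# Toy accept-then-stop VI loop. Target p = N(2, 1); family q_lam = N(lam, 1).
# The tau, step, and patience values below are assumptions for the demo.

def log_p(z):
    return -0.5 * (z - 2.0) ** 2          # log N(2, 1), up to a constant

def elbo_hat(lam, z):
    """Single-sample ELBO estimate log p(z) - log q_lam(z), with z ~ q_lam."""
    return log_p(z) + 0.5 * (z - lam) ** 2

def ats_vi(seed=0, tau=5.0, step=0.3, patience=20, max_iter=2000):
    rng = random.Random(seed)
    lam, best, bad, iters = 0.0, -math.inf, 0, 0
    while bad < patience and iters < max_iter:
        iters += 1
        z = rng.gauss(lam, 1.0)            # one sample per iteration
        cur = elbo_hat(lam, z)
        cand = lam + rng.gauss(0.0, step)  # random-walk candidate for lam
        z2 = rng.gauss(cand, 1.0)
        delta = elbo_hat(cand, z2) - cur
        # Metropolis-style ratio: certain acceptance for an improvement.
        alpha = 1.0 if delta >= 0 else math.exp(tau * delta)
        if rng.random() < alpha:           # accept: keep the candidate
            lam, cur = cand, elbo_hat(cand, z2)
        if cur > best:                     # track improvement for stopping
            best, bad = cur, 0
        else:
            bad += 1                       # stop after `patience` stalls
    return lam, iters

lam, iters = ats_vi()
print(round(lam, 3), iters)
```

The single-sample estimate is noisy, so the patience budget, not the ELBO value itself, is what ends the run.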
Path Sampling: Always-Accepting TPS
Steps to generate transition paths for overdamped dynamics:
- Select a shooting point $x_s$ on the current path $X$ using a selection weight $p_{\mathrm{sel}}(x_s \mid X)$.
- Integrate forward or backward from $x_s$ by stochastic steps, stopping as soon as the process hits a metastable state $A$ or $B$.
- Concatenate the path segments to produce a new reactive path $X'$; always accept.
- Assign a path-dependent weight $W(X')$ to the new path.
- Reweight observable statistics by $1/W(X')$ during analysis.
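The shoot-until-boundary step can be sketched for a 1D overdamped double well; the potential, the stopping boundaries, and the weight form below are illustrative assumptions rather than the expressions of Häupl et al.:

```python
import math
import random

# 1D overdamped double well V(x) = (x^2 - 1)^2, with states
# A = {x <= -1} and B = {x >= 1}.

def force(x):
    return -4.0 * x * (x * x - 1.0)   # -dV/dx

def shoot(x0, dt=1e-3, beta=3.0, rng=random):
    """Euler-Maruyama integration from x0 until the walker hits A or B."""
    x, path = x0, [x0]
    while -1.0 < x < 1.0:
        x += force(x) * dt + math.sqrt(2.0 * dt / beta) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

random.seed(1)
path = shoot(0.0)          # segment is guaranteed to end in A or B
weight = 1.0 / len(path)   # illustrative path-dependent weight W(X)
# Always accept; observables are later reweighted by 1/W(X).
print(len(path), path[-1])
```

Every shot terminates at a boundary by construction, which is exactly why acceptance can be made certain and correctness deferred to the reweighting stage.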
3. Theoretical Guarantees and Correctness
Consensus Stopping
- Lemma III.3: Uniformly local $\epsilon$-consensus over the network guarantees global $\epsilon$-consensus.
- Theorem III.5: When any node records $z_i \ge D + 1$, global $\epsilon$-consensus holds.
- Stopping is guaranteed to never overestimate the satisfaction of the consensus property; i.e., premature halting is impossible (Xie et al., 2017).
YOASOVI Convergence
- The acceptance ratio ensures certain acceptance for an improving ELBO ($\Delta\hat{\mathcal{L}} \ge 0$) and a preference toward improvement for stochastic proposals.
- Convergence is ensured (in the stochastic approximation sense) under classic diminishing step-size conditions (Robbins–Monro).
- Rejection chains terminate after a finite patience budget, ensuring the algorithm halts in practice (Dayta, 2024).
Always-Accepting TPS
- The algorithm samples an auxiliary path ensemble $\tilde{P}(X) \propto P(X)\,W(X)$, where $P(X)$ is the target transition path density and $W(X)$ is a path-dependent weight.
- Reweighting with $1/W(X)$ recovers unbiased estimators for all observables.
- The method is formally correct for overdamped stochastic dynamics due to preserved detailed balance in path space (Häupl et al., 13 Feb 2026).
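The reweighting argument can be checked numerically: drawing from a synthetic ensemble tilted by a weight W and correcting with 1/W recovers plain averages under the target. The distributions and W below are synthetic assumptions for illustration:

```python
import random

random.seed(0)

# Target P: uniform over {0, 1, 2, 3}; auxiliary ensemble tilts by W(x) = x + 1.
xs = [0, 1, 2, 3]
W = {x: x + 1 for x in xs}
probs = [W[x] for x in xs]              # unnormalized auxiliary probabilities

samples = random.choices(xs, weights=probs, k=200_000)   # draw from ~ P * W
w = [1.0 / W[x] for x in samples]                        # correction weights
est = sum(wi * x for wi, x in zip(w, samples)) / sum(w)  # self-normalized mean

print(round(est, 3))   # close to E_P[x] = 1.5 after reweighting
```

The uncorrected sample mean sits near 2.0 (the mean under the tilted ensemble), while the 1/W-weighted estimate returns to the target mean of 1.5.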
4. Efficiency, Computational Complexity, and Parameter Selection
Distributed Consensus
- The response time (the delay between actual and detected consensus) is bounded by a function of the network diameter $D$, the ergodic coefficient of the update dynamics, and the tolerance $\epsilon$.
- Empirical results indicate actual response time is much less than this worst-case bound.
- Communication per round is minimal: at most two small integers per node.
- Practical recommendation: choose the local tolerance $\epsilon$ conservatively relative to the desired global tolerance (Xie et al., 2017).
YOASOVI VI
- Single-sample gradient computation reduces each iteration's cost by a factor of $N$ relative to conventional MCVI with mini-batch size $N$.
- The parameter $\tau$ acts as an inverse temperature: smaller $\tau$ increases variance and exploration, while larger $\tau$ slows progress.
- Typical values: a moderate learning rate, $\tau$ chosen to target roughly 50% acceptance, and a patience parameter between 5 and 20.
- Warm-up or adaptive scheduling is suggested for stability.
- On synthetic and real clustering benchmarks, YOASOVI achieves substantially faster convergence and better ELBOs compared to both MCVI and QMCVI (Dayta, 2024).
Always-Accepting TPS
- The AtS approach yields acceptance probability 1 for every trajectory; the computational speed-up over standard TPS is a factor of 2 or more, dependent on system details.
- The reweighting cost for $1/W$ is negligible compared to integrator (force-evaluation) time.
- Effective sample size and decorrelation properties are improved, leading to higher throughput of statistically independent trajectories.
- For example, in CO clathrate hydrate simulations, AtS delivered a substantially faster sampling rate and significantly enhanced rare channel access (Häupl et al., 13 Feb 2026).
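The effective-sample-size claim can be monitored in practice with the standard Kish estimator (a generic diagnostic, not specific to the cited work): uniform weights give ESS equal to the sample count, while skewed reweighting shrinks it.

```python
def kish_ess(weights):
    """Kish effective sample size: (sum w)^2 / sum(w^2)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)

print(kish_ess([1.0] * 100))                        # uniform weights: 100.0
print(round(kish_ess([1.0] * 50 + [0.1] * 50), 1))  # skewed weights shrink ESS
```

Tracking this ratio per run gives a direct handle on how much statistical power the 1/W reweighting costs relative to the raw trajectory count.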
5. Applications and Empirical Results
| Domain | Reference | Key Result |
|---|---|---|
| Distributed Consensus | (Xie et al., 2017) | Correct local stopping, parameter-free |
| Stochastic VI (YOASOVI) | (Dayta, 2024) | 10–200× faster, competitive ELBO |
| Rare-Event Path Sampling (TPS) | (Häupl et al., 13 Feb 2026) | Acceptance = 1, ≥2× speed-up, unbiased |
Consensus on networks: AtS enables provably correct local stopping with negligible communication and parameter tuning, suitable for large-scale strongly connected architectures.
Variational inference: YOASOVI demonstrates that AtS-style single-sample acceptance with early stopping achieves near-order-of-magnitude improvements in wall-clock time and solution quality for hierarchical and mixture models.
Transition path sampling: Always-Accepting TPS with AtS methodology provides exact reweighted statistics for rare event ensembles, offering both computational gains and enhanced phase-space exploration in realistic molecular systems.
6. Practical Guidelines, Limitations, and Extensions
- Consensus applications: only an upper bound on the network diameter $D$ is needed; clipping counters at $D + 1$ suffices.
- VI/YOASOVI: Critical to adjust or adapt the learning rate for model complexity; intractable ELBOs may require batching; local-only acceptance can be isolated to global variables in hierarchical problems.
- Path sampling: uniform or state-dependent shooting-point selection can be used; post-processing reweighting is required for unbiased observables; the method is currently restricted to overdamped stochastic processes.
Limitations: The AtS approach may have additional evaluation cost for local statistics or ELBOs, especially in extremely large or deeply hierarchical models. For TPS, the methodology is established for overdamped (Langevin) dynamics; continuous-momentum systems may require further modification.
A plausible implication is that AtS strategies may extend to other domains where local acceptance events and global stopping/correctness can be coupled through recursion, a posteriori weighting, or graph-theoretic propagation.