
Interruptible Algorithms

Updated 9 December 2025
  • Interruptible algorithms are computational procedures that return meaningful outputs at any interruption point, improving monotonically with computational effort.
  • They employ scheduling metrics such as acceleration ratio and deficiency to measure performance in multiprocessor and resource-bounded scenarios.
  • These algorithms enhance real-time analytics, safe autonomy, and dynamic inference by integrating contract scheduling and anytime prediction techniques.

Interruptible algorithms are computational procedures designed to provide valid outputs at arbitrary interruption points, with the guarantee that intermediate results are meaningful and, typically, improve monotonically with computational effort. These algorithms underpin resource-bounded systems, real-time analytics, multiproblem scheduling, safe autonomous decision-making, and dynamic inference. The following exposition develops the formal models, evaluation metrics, architectural paradigms, and key results defining interruptible algorithms, drawing on canonical literature and recent advances across theory and applications.

1. Formal Definitions and Main Paradigms

Interruptibility is distinguished from related concepts by its requirement that the algorithm, upon being halted at any unknown time $t > 0$, returns an output whose quality is non-decreasing in $t$ and typically converges to an optimal or exact result for sufficiently large $t$. This applies across scheduling, reasoning, and prediction domains.

Classic paradigms distinguish contract algorithms, which must receive their computation budget in advance and offer no guarantee if halted early, from truly interruptible algorithms, which may be stopped at any unannounced moment; as discussed in Section 2, an interruptible algorithm can be obtained by scheduling repeated runs of a contract algorithm with increasing budgets.

A formal model for an anytime (interruptible) predictor is given as:

$$f: (\mathbb{R}_+, \mathcal{X}) \to \mathcal{Y}, \quad \text{with}$$

  • Interruptibility: $f$ returns a valid output $y$ at any $t$.
  • Bounded return: there exists $T$ such that $f(\tau, x) = f(T, x)$ for all $\tau \geq T$, and $\lim_{t\to\infty} f(t, x) = f(x)$ (asymptotic correctness).
  • Monotonicity: $\forall t_1 \geq t_2: \mathbb{E}_{x,y}[q(f(t_1, x), y)] \geq \mathbb{E}_{x,y}[q(f(t_2, x), y)]$ for a relevant quality function $q$ (Kuhse et al., 21 Mar 2025).

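As a concrete (illustrative) reading of this contract, an anytime predictor can be implemented as a wrapper that holds a valid answer from the first instant and refines it until the budget expires; the `AnytimePredictor` class and `refine_fn` callable below are invented for the sketch and are not drawn from the cited work.

```python
import time

class AnytimePredictor:
    """Sketch of an interruptible predictor: a valid output exists at any time t,
    and (assuming each refinement never degrades it) quality is non-decreasing in t."""

    def __init__(self, init_fn, refine_fn):
        self._init_fn = init_fn      # x -> initial valid output
        self._refine_fn = refine_fn  # (x, current) -> improved output, or None once converged

    def __call__(self, budget_s, x):
        deadline = time.monotonic() + budget_s
        y = self._init_fn(x)                  # interruptibility: valid from the start
        while time.monotonic() < deadline:
            improved = self._refine_fn(x, y)
            if improved is None:              # bounded return: a fixed point f(T, x) is reached
                break
            y = improved
        return y

# Toy usage: estimate sqrt(x) by Newton steps; more time yields a better estimate.
predictor = AnytimePredictor(
    init_fn=lambda x: max(x, 1.0),
    refine_fn=lambda x, y: None if abs(y * y - x) < 1e-12 else 0.5 * (y + x / y),
)
print(predictor(1e-6, 2.0))  # coarse output under a very tight budget
print(predictor(1e-2, 2.0))  # converged output under a generous budget
```
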
2. Interruptible Scheduling and Performance Metrics

Scheduling for interruptible algorithms encompasses single and multi-problem scenarios, often simulated by repeated invocation of contract algorithms. Two main performance measures have been formalized:

  • Acceleration ratio: in the classical single-problem setting, this is defined as $t/\ell(t)$, where $t$ is the interruption time and $\ell(t)$ is the largest contract length completed by time $t$.
  • Deficiency: for concurrent solutions to $n$ equally important problems on $m$ processors, the deficiency of a schedule $\sigma$ is

$$\text{def}(\sigma) = \sup_{t > 0}\, \max_{p \in P} \frac{Q^*_p(t)}{Q_{p,\sigma}(t)},$$

where $Q^*_p(t)$ is the offline optimal contract length for problem $p$ at time $t$, and $Q_{p,\sigma}(t)$ is the length achieved by schedule $\sigma$ (Angelopoulos et al., 2018). Deficiency represents the minimum uniform speed-up required to guarantee real-time output parity with the offline optimum across all instances.

Notable results include:

  • For $n \leq m$, the optimal deficiency is no greater than $4$, achieved by an exponential round-robin schedule (contract lengths grow as $b^i$ for some base $b > 1$; see the sketch after this list). For $n > m$, deficiency remains bounded and approaches $1$ as $n/m$ increases.
  • Lower bounds show that for any schedule, deficiency is at least $(n+1)/n$ for $m = 1$; for round-robin schedules, a tight lower bound is realized (e.g., $(n+1)^{(n+1)/n}/n$) (Angelopoulos et al., 2018).
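The exponential round-robin idea can be made concrete with a small simulation. The sketch below is illustrative only: it fixes one processor and base $b = 2$, generates contract lengths $b^i$ assigned round-robin to the $n$ problems, and compares the largest contract completed per problem at an interruption time against the rough offline benchmark $t/n$ (an offline scheduler that knows $t$ can run one contract of length about $t/n$ per problem). Names and constants are not taken from the cited paper.

```python
def exponential_round_robin(n_problems, base=2.0, n_contracts=40):
    """Contract i has length base**i and is assigned to problem i mod n_problems."""
    schedule, t = [], 0.0
    for i in range(n_contracts):
        length = base ** i
        t += length                              # contracts run back-to-back on one processor
        schedule.append((i % n_problems, length, t))
    return schedule                              # (problem, length, finish_time) triples

def largest_completed(schedule, problem, t):
    """Q_{p,sigma}(t): the largest contract for `problem` that has finished by time t."""
    done = [length for (p, length, finish) in schedule if p == problem and finish <= t]
    return max(done, default=0.0)

n, t = 3, 500.0
sched = exponential_round_robin(n)
for p in range(n):
    print(f"problem {p}: Q_sigma(t) = {largest_completed(sched, p, t):.0f}, offline benchmark t/n = {t / n:.1f}")
```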

3. Contract Scheduling with Predictions

Recent work advances the paradigm by incorporating predictions about interruption times, allowing improved scheduling under uncertainty.

  • Single-prediction model: the scheduler receives a (possibly erroneous) prediction $\tau$ of the interruption time $T$, and may buffer accordingly. Geometric schedules parameterized by a robustness $r$ and consistency $c_r = (r - \sqrt{r^2 - 4r})/2$ achieve Pareto-optimal trade-offs (Angelopoulos et al., 2020). Error tolerance is attained by buffering: contracts are tuned to complete at $\tau(1 - p)$, so that accuracy degrades linearly with prediction error $\eta$ up to $p$, with worst-case performance saturating at $r$ (see the sketch after this list).
  • Binary-queries model: the scheduler can query $n$ bits about the interruption. Information-theoretic bounds prove that $2^n$ schedules are necessary for optimal consistency, with robust-to-error constructions possible via partitioned schedule selection.
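Two ingredients of the single-prediction model are easy to illustrate: the consistency expression $c_r$ and the buffering of a contract so that it completes at $\tau(1 - p)$. The sketch below rescales a plain geometric schedule so that some contract finishes exactly at the buffered target; this rescaling is a simplification for illustration, not the construction of the cited paper, and all names are invented.

```python
import math

def consistency(r):
    """Consistency value quoted in the text: c_r = (r - sqrt(r^2 - 4r)) / 2, for r >= 4."""
    return (r - math.sqrt(r * r - 4.0 * r)) / 2.0

def buffered_geometric_schedule(tau, p=0.1, base=2.0, n_contracts=20):
    """Geometric contract lengths base**i, rescaled so that one contract ends exactly
    at the buffered target tau * (1 - p); predictions that are early by at most a
    fraction p still see that contract completed."""
    target = tau * (1.0 - p)
    finish = [(base ** (i + 1) - 1) / (base - 1) for i in range(n_contracts)]  # unscaled finish times
    k = max(i for i, f in enumerate(finish) if f <= target)   # last contract ending before the target
    scale = target / finish[k]                                # stretch so contract k ends at the target
    return [scale * base ** i for i in range(n_contracts)], k

lengths, k = buffered_geometric_schedule(tau=100.0, p=0.1)
print(f"contract {k} finishes at {sum(lengths[:k + 1]):.1f} (buffered target 90.0)")
print(f"consistency at robustness r = 5: {consistency(5.0):.3f}")
```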

Trade-off curves and impossibility results demonstrate that below certain resource multiples (e.g., cost $4B$ for anytime linear prediction (Hu et al., 2014)), no scheme can guarantee approximation of arbitrary budget-$B$ solutions at all interruption points.

4. Applications: Anytime Inference, Prediction, and Large Reasoning Models

Interruptible algorithms are pervasive in areas requiring incremental, resource-aware computation.

Anytime Inference in Valuation Algebras

A generic framework for interruptible inferential algorithms leverages ordered valuation algebras, defining time-bounded combination operators $\otimes_t$ and refinement passes (Dasgupta et al., 2016). Soundness and completeness theorems establish that:

  • The output strictly improves with each additional computation time allocation.
  • After finite total resource, the output converges to the exact value.

This covers probabilistic inference, DNF and constraint satisfaction, and belief functions, supporting semiring-induced instantiations (e.g., probability potentials, max-min lattices).
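Outside the valuation-algebra formalism, the flavor of these guarantees can be shown for a simple probability potential: process a bounded number of configurations per call and return lower/upper bounds on an event's probability that tighten monotonically and coincide once all mass has been processed. The function and argument names below are invented for the sketch.

```python
from itertools import product

def anytime_event_probability(var_domains, joint_prob, event, max_configs):
    """Anytime evaluation of P(event): enumerate at most `max_configs` configurations,
    keeping valid bounds at every interruption point."""
    lower, processed_mass = 0.0, 0.0
    for seen, values in enumerate(product(*var_domains.values())):
        if seen >= max_configs:                     # interruption point: bounds remain valid
            break
        config = dict(zip(var_domains, values))
        mass = joint_prob(config)
        processed_mass += mass
        if event(config):
            lower += mass
    upper = lower + (1.0 - processed_mass)          # unseen mass might all satisfy the event
    return lower, upper

# Toy usage: two independent fair coins, event "at least one head".
domains = {"c1": [0, 1], "c2": [0, 1]}
prob = lambda cfg: 0.25
heads = lambda cfg: cfg["c1"] == 1 or cfg["c2"] == 1
print(anytime_event_probability(domains, prob, heads, max_configs=2))  # partial bounds: (0.25, 0.75)
print(anytime_event_probability(domains, prob, heads, max_configs=4))  # exact: (0.75, 0.75)
```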

Interruptible Machine Learning and Linear Prediction

Efficient anytime linear prediction under groupwise feature costs is realized via greedy sequencing algorithms (cost-sensitive group OMP and FR) (Hu et al., 2014). These algorithms select feature groups maximizing marginal gain per cost, and guarantee near-optimal explained variance at selected budgets. A doubling-cost algorithm achieves a bi-criteria approximation: at any cost $B$, spending up to $4B$ guarantees a constant-factor approximation to the best possible at cost $B$.
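The greedy sequencing principle, always adding the feature group with the largest marginal gain in explained variance per unit cost, can be sketched with ordinary least squares on NumPy arrays. The uncentered $R^2$ gain and all names below are illustrative simplifications, not the exact cost-sensitive group OMP/FR procedures of the cited paper.

```python
import numpy as np

def greedy_group_sequence(X, y, groups, costs):
    """Order feature groups by marginal explained-variance gain per unit cost.
    `groups` maps a group name to column indices; `costs` maps it to its cost."""
    def r2(cols):
        if not cols:
            return 0.0
        A = X[:, cols]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - resid @ resid / (y @ y)

    chosen, order, base = [], [], 0.0
    remaining = dict(groups)
    while remaining:
        # Pick the group with the best gain-per-cost given what is already selected.
        best = max(remaining, key=lambda g: (r2(chosen + list(remaining[g])) - base) / costs[g])
        chosen += list(remaining.pop(best))
        base = r2(chosen)
        order.append((best, costs[best], base))
    return order  # a prefix of this order is what to compute first under a small budget

# Toy usage: group "a" is cheap and informative, group "b" expensive and less useful.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=200)
print(greedy_group_sequence(X, y, groups={"a": [0, 1], "b": [2, 3]}, costs={"a": 1.0, "b": 5.0}))
```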

Interruptible Large Reasoning Models

Interruptibility challenges the "frozen world" assumption of LLMs and analogous systems. In dynamic environments, the ability to deliver high-quality partial outputs under time constraints or evolving contexts is critical (Wu et al., 13 Oct 2025). Evaluation metrics include:

  • Interrupt-conditioned accuracy $A_i(X)$, quantifying correct answers after forced halts.
  • Post-interrupt token cost $L_i(X)$, tracking hidden reasoning emitted beyond the interruption point.

Empirically, performance degrades by up to 60% on late-stage interruptions, with failure modes such as reasoning leakage, panic (premature truncation), and self-doubt (update rejection).

Robust interruptibility requires explicit control tokens (e.g., ⟨end-thinking⟩), systematic evaluation under staged interruptions, and tight monitoring of output chains to cap excessive reasoning.
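A staged-interruption evaluation harness along these lines might look as follows; the `model_step` interface, the `<end-thinking>` token string, and the token cap are assumptions made for the sketch, not the cited benchmark's API.

```python
def evaluate_under_interruption(model_step, questions, interrupt_after,
                                end_token="<end-thinking>", max_tokens=4096):
    """Stream reasoning token-by-token, inject a control token after `interrupt_after`
    tokens, then measure interrupt-conditioned accuracy and post-interrupt token cost.
    `model_step(question, tokens_so_far, interrupted)` is assumed to return either the
    next reasoning token (a str) or a final answer dict {"answer": ...}."""
    correct, leaked = 0, 0
    for question, gold in questions:
        tokens, interrupted = [], False
        for _ in range(max_tokens):              # hard cap bounds runaway reasoning
            out = model_step(question, tokens, interrupted)
            if isinstance(out, dict):            # the model committed to a final answer
                correct += int(out["answer"] == gold)
                break
            tokens.append(out)
            if interrupted:
                leaked += 1                      # reasoning leakage: tokens emitted after the halt
            if not interrupted and len(tokens) >= interrupt_after:
                tokens.append(end_token)         # force the model to wrap up now
                interrupted = True
    n = len(questions)
    return {"interrupt_accuracy": correct / n, "avg_post_interrupt_tokens": leaked / n}
```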

5. Interruptible Algorithms in Autonomous and Reactive Systems

In safety-critical, reactive, and autonomous contexts, interruptibility enables external control (human or supervisory).

  • Virtualization for Safe Interruptibility: A reinforcement learning agent is made safely interruptible by routing its sensors and effectors to a virtual environment during interruptions, preserving the agent's internal perception of uninterrupted reward (Riedl et al., 2017). Restoration involves phased hand-over ensuring Q-value continuity and preventing disabling of the interrupt mechanism.
  • Concurrent Reactive Systems: Systems are formalized in event-based languages (e.g., Pi-Core), leveraging small-step operational semantics and event-based rely–guarantee proof systems to model and verify interruptibility in concurrent and multicore kernels (Zhao et al., 2018). Fine-grained interleaving and stack-based guards capture arbitrary depths of preemption, with preservation of key safety invariants machine-checked in Isabelle/HOL.
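A minimal sketch of the virtualization idea in the first bullet above (not the cited paper's implementation): while an interruption is active, the agent's step calls are transparently served by a simulated copy of the environment, so its reward stream, and hence its Q-values, never reflect the halt. The `real_env`/`virtual_env` objects are assumed to expose a `step(action)` method.

```python
class VirtualizedEnv:
    """Routes the agent to a virtual environment while an interruption is active,
    so its observed rewards never reveal the halt."""

    def __init__(self, real_env, virtual_env):
        self.real_env = real_env
        self.virtual_env = virtual_env
        self.interrupted = False

    def interrupt(self):
        # Hand-over: the operator takes control of the real system; the agent keeps
        # acting in the simulator and cannot perceive (or learn to resist) the switch.
        self.interrupted = True

    def resume(self):
        # A phased hand-back would re-synchronize virtual and real state here so that
        # the agent's value estimates stay continuous across the boundary.
        self.interrupted = False

    def step(self, action):
        env = self.virtual_env if self.interrupted else self.real_env
        return env.step(action)
```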

6. Architectural Techniques and Optimization

Architectural innovations underlie interruptible inference and prediction systems, as typified by AnytimeYOLO for object detection (Kuhse et al., 21 Mar 2025):

  • Multiple early-exit heads enable prediction at many interruption points; granularity is tuned versus overhead.
  • Transposed network variants re-order multi-scale stages for early and fused predictions.
  • Optimal path selection: A DAG-based approach selects execution orders and exit points to maximize area-under-the-curve quality metrics, normalized for comparison across models.
  • Deployment trade-offs: Balancing soft and hard anytime modes, kernel-level interrupt signaling, and inference pipeline synchronization challenges real-world deployment.
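The early-exit mechanism in the list above can be sketched independently of the specific detector: run backbone stages in order, refresh the current prediction at every attached head, and stop once the deadline passes (the "soft" anytime mode). The `stages`/`heads` lists of callables are assumptions for the sketch, not the AnytimeYOLO API.

```python
import time

def anytime_forward(x, stages, heads, deadline_s):
    """Soft anytime inference: advance the backbone stage by stage, updating the
    prediction at each early-exit head, and return the latest one when time runs out."""
    deadline = time.monotonic() + deadline_s
    prediction, features = None, x
    for stage, head in zip(stages, heads):
        features = stage(features)           # advance the backbone by one stage
        if head is not None:
            prediction = head(features)      # refresh the current best output
        if time.monotonic() >= deadline:     # interruption point between stages
            break
    return prediction
```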

7. Theoretical Foundations and Practical Limits

Key theorems and impossibility results form the backbone of interruptibility theory. Interruptible algorithms generalize across single-problem, multiproblem, scheduling-with-predictions, and approximation-inference domains, with guarantees anchored in monotonic output improvement, worst-case bounds, and resource optimality.


Interruptible algorithms constitute a foundational class of resource-adaptive, real-time, and fail-safe computational methods, blending theoretical rigor, architectural innovation, and practical deployment constraints. Their mathematical formalization and performance guarantees underpin applications spanning multiprocessor scheduling, dynamic inference, interruptible neural computation, safe autonomy, and verified concurrent control.
