Interruptible Algorithms
- Interruptible algorithms are computational procedures that return meaningful outputs at any interruption point, improving monotonically with computational effort.
- They employ scheduling metrics such as acceleration ratio and deficiency to measure performance in multiprocessor and resource-bounded scenarios.
- These algorithms enhance real-time analytics, safe autonomy, and dynamic inference by integrating contract scheduling and anytime prediction techniques.
Interruptible algorithms are computational procedures designed to provide valid, quality-improving outputs at arbitrary interruption points, with the guarantee that intermediate results are meaningful, and typically monotonic with computational effort. These algorithms underpin resource-bounded systems, real-time analytics, multiproblem scheduling, safe autonomous decision-making, and dynamic inference. The following exposition develops the formal models, evaluation metrics, architectural paradigms, and key results defining interruptible algorithms, drawing on canonical literature and recent advances across theory and applications.
1. Formal Definitions and Main Paradigms
Interruptibility is distinguished from related concepts by its requirement that the algorithm, upon being halted at any unknown time $t$, returns an output whose quality is non-decreasing in $t$ and typically converges to an optimal or exact result for sufficiently large $t$. This applies across scheduling, reasoning, and prediction domains.
Classic paradigms:
- Interruptible algorithms: Return useful results at any time, improving monotonically as more computational resources are consumed (Hu et al., 2014, Kuhse et al., 21 Mar 2025, Dasgupta et al., 2016).
- Contract algorithms: Require a pre-specified time budget to start and may produce no output if interrupted before that budget elapses; can be "simulated" to yield interruptible behavior via contract scheduling (Angelopoulos et al., 2018, Angelopoulos et al., 2020).
- Anytime algorithms: Used interchangeably with "interruptible" in most technical literature, particularly in inference and machine learning contexts (Dasgupta et al., 2016, Kuhse et al., 21 Mar 2025).
A formal model for an anytime (interruptible) predictor $f$ imposes three requirements:
- Interruptibility: $f$ returns a valid output at any time $t > 0$.
- Bounded return: there exists a finite $T$ such that $f(t) = f(T)$ for all $t \geq T$ (asymptotic correctness).
- Monotonicity: $q(f(t)) \leq q(f(t'))$ whenever $t \leq t'$, for a relevant quality function $q$ (Kuhse et al., 21 Mar 2025).
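These three requirements can be illustrated with a toy estimator, a minimal sketch (not drawn from the cited papers) in which querying the result is valid at any interruption point and accuracy improves, in expectation, with the number of samples consumed:

```python
import random

class AnytimeMean:
    """Anytime estimator of the mean of a stream: a valid output exists at
    any interruption point, and the error shrinks (in expectation) as more
    samples are consumed -- a toy instance of an interruptible algorithm."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def step(self, x):
        # One unit of computational effort: consume one sample.
        self.n += 1
        self.total += x

    def result(self):
        # Interruptibility: always returns a valid estimate (0.0 before data).
        return self.total / self.n if self.n else 0.0

random.seed(0)
est = AnytimeMean()
for _ in range(10_000):
    est.step(random.uniform(0.0, 1.0))
print(est.result())  # close to the true mean 0.5
```

Monotonicity holds here only in expectation; exact-value convergence (the "bounded return" property) would require a finite input, as in the inference settings discussed below.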
2. Interruptible Scheduling and Performance Metrics
Scheduling for interruptible algorithms encompasses single and multi-problem scenarios, often simulated by repeated invocation of contract algorithms. Two main performance measures have been formalized:
- Acceleration ratio: In the classical single-problem setting, the acceleration ratio of a schedule $X$ is defined as $\mathrm{acc}(X) = \sup_{t} \, t / \ell(X,t)$, where $t$ is the interruption time and $\ell(X,t)$ is the length of the largest contract completed by time $t$.
- Deficiency: For concurrent solutions to $n$ equally important problems on $m$ processors, the deficiency of a schedule $X$ is $\mathrm{def}(X) = \sup_{t} \max_{i} \, \ell^*_i(t) / \ell_i(X,t)$, where $\ell^*_i(t)$ is the offline optimum contract length for problem $i$ at time $t$, and $\ell_i(X,t)$ is the length achieved by schedule $X$ (Angelopoulos et al., 2018). Deficiency represents the minimum uniform speed-up required to guarantee real-time output parity with the offline optimum across all instances.
Notable results include:
- In the basic setting, the optimal deficiency is no greater than $4$, achieved by an exponential round-robin schedule (contract lengths grow as $b^i$ for some base $b > 1$). With more processors available, deficiency remains bounded and approaches $1$ as the processor count increases.
- Matching lower bounds show that the deficiency of any schedule is bounded away from $1$, and that for round-robin schedules the exponential construction is tight (Angelopoulos et al., 2018).
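The classical single-problem bound can be checked numerically. The sketch below (illustrative, not taken from the cited papers) computes the acceleration ratio of the doubling schedule, exploiting the fact that the supremum of $t/\ell(X,t)$ is attained just before a contract completes:

```python
def acceleration_ratio(lengths, eps=1e-9):
    """Acceleration ratio sup_t t / ell(X, t) of a sequential schedule of
    contract lengths on one processor.  The supremum is attained just before
    each contract finishes, so checking those points suffices.  Interruptions
    before the first completion are excluded, as is standard."""
    finish = 0.0      # completion time of the current prefix of contracts
    best_done = 0.0   # largest contract length completed so far
    worst = 0.0
    for L in lengths:
        finish += L
        if best_done > 0:
            # interruption just before this contract would have completed
            worst = max(worst, (finish - eps) / best_done)
        best_done = max(best_done, L)
    return worst

# doubling schedule: contract lengths 1, 2, 4, 8, ...
doubling = [2.0 ** i for i in range(30)]
print(acceleration_ratio(doubling))  # approaches 4 from below as the schedule grows
```

Just before contract $k{+}1$ finishes, time $2^{k+2}-1$ has elapsed while only a length-$2^k$ contract is done, giving a ratio that tends to $4$, matching the classical bound.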
3. Contract Scheduling with Predictions
Recent work advances the paradigm by incorporating predictions about interruption times, allowing improved scheduling under uncertainty.
- Single-prediction model: The scheduler receives a (possibly erroneous) prediction $\tau$ of the interruption time, and may buffer accordingly. Geometric schedules parameterized by robustness and consistency targets achieve Pareto-optimal trade-offs between the two (Angelopoulos et al., 2020). Error tolerance is attained by buffering: contracts are tuned to complete at or just before $\tau$, so that performance degrades gracefully with the prediction error and saturates at the prediction-free worst-case guarantee.
- Binary-queries model: The scheduler can ask binary queries about the interruption time. Information-theoretic lower bounds establish how many queries any schedule needs to achieve optimal consistency, and robust-to-error constructions are possible via partitioned schedule selection.
Trade-off curves and impossibility results demonstrate that below certain resource multiples (e.g., cost $4B$ for anytime linear prediction (Hu et al., 2014)), no scheme can guarantee a constant-factor approximation of the best budget-$B$ solution at all interruption points.
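The buffering idea can be sketched concretely: scale a geometric schedule so that one contract completes exactly at the predicted interruption time. This is a hypothetical simplification for illustration, not the Pareto-optimal construction of Angelopoulos et al. (2020):

```python
def completed_at(lengths, t):
    """Largest contract length fully completed by time t when the contracts
    in `lengths` run back-to-back on one processor."""
    finish, best = 0.0, 0.0
    for L in lengths:
        finish += L
        if finish > t + 1e-9:     # tolerance guards float round-off
            break
        best = max(best, L)
    return best

def geometric(n=60, base=2.0):
    return [base ** i for i in range(n)]

def buffered(tau):
    """Stretch a geometric schedule so that some contract finishes exactly at
    the predicted interruption time tau (hypothetical buffering sketch)."""
    plain = geometric()
    finishes, t = [], 0.0
    for L in plain:
        t += L
        finishes.append(t)
    k = max(i for i, f in enumerate(finishes) if f <= tau)
    s = tau / finishes[k]          # stretch factor in [1, base)
    return [s * L for L in plain]

tau = 1000.0
print(completed_at(geometric(), tau), completed_at(buffered(tau), tau))
```

With `tau = 1000`, the plain doubling schedule has only a length-256 contract completed, while the buffered variant finishes a contract roughly twice as long exactly at the predicted time, at the price of weaker guarantees when the prediction is wrong.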
4. Applications: Anytime Inference, Prediction, and Large Reasoning Models
Interruptible algorithms are pervasive in areas requiring incremental, resource-aware computation.
Anytime Inference in Valuation Algebras
A generic framework for interruptible inferential algorithms leverages ordered valuation algebras, defining time-bounded combination operators and refinement passes (Dasgupta et al., 2016). Soundness and completeness theorems establish that:
- The output strictly improves with each additional computation time allocation.
- After finite total resource, the output converges to the exact value.
This covers probabilistic inference, DNF and constraint satisfaction, and belief functions, supporting semiring-induced instantiations (e.g., probability potentials, max-min lattices).
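As a toy stand-in for this framework, consider anytime DNF probability computation: partial enumeration of assignments yields a lower bound that is monotone in the time budget and exact after finitely many steps. The encoding and function names below are illustrative, not from Dasgupta et al. (2016):

```python
from itertools import product

def anytime_dnf_prob(clauses, n_vars, p, budget):
    """Anytime lower bound on Pr[DNF formula is true] under independent
    Bernoulli(p) variables: enumerate assignments and accumulate the mass of
    satisfying ones.  The partial sum is monotone non-decreasing in `budget`
    and exact once budget >= 2**n_vars -- a toy instance of sound, complete,
    monotonically improving anytime inference."""
    total, seen = 0.0, 0
    for assign in product([False, True], repeat=n_vars):
        if seen >= budget:        # interruption point: return current bound
            break
        seen += 1
        if any(all(assign[v] == val for v, val in clause) for clause in clauses):
            mass = 1.0
            for bit in assign:
                mass *= p if bit else (1.0 - p)
            total += mass
    return total

# Formula: x0 OR (x1 AND NOT x2), with p = 0.5 -> exact probability 5/8.
clauses = [[(0, True)], [(1, True), (2, False)]]
exact = anytime_dnf_prob(clauses, 3, 0.5, budget=8)
print(exact)  # 0.625
```

Interrupting at any smaller budget returns a valid (if looser) lower bound, mirroring the soundness-plus-convergence guarantee of the valuation-algebra framework.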
Interruptible Machine Learning and Linear Prediction
Efficient anytime linear prediction under groupwise feature costs is realized via greedy sequencing algorithms (cost-sensitive group OMP and FR) (Hu et al., 2014). These algorithms select feature groups maximizing marginal gain per cost, and guarantee near-optimal explained variance at selected budgets. A doubling-cost algorithm achieves bi-criteria approximation: at any cost $B$, spending up to $4B$ guarantees a constant-factor approximation to the best possible at cost $B$.
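A simplified sketch of cost-greedy selection, in the spirit of (but not identical to) the CS-G-OMP/FR algorithms of Hu et al. (2014), using matching-pursuit gains so the example stays self-contained:

```python
import math

def greedy_cost_selection(features, y, costs, budget):
    """Cost-greedy forward feature selection via matching pursuit (simplified
    sketch, not the exact CS-G-OMP/FR procedure): repeatedly pick the feature
    with the best explained-variance gain per unit cost, deduct its cost,
    update the residual, and stop when nothing affordable remains."""
    r = list(y)                         # current residual
    spent, order = 0.0, []
    remaining = set(range(len(features)))
    while True:
        best, best_score = None, 0.0
        for j in remaining:
            if spent + costs[j] > budget:
                continue
            x = features[j]
            xx = sum(v * v for v in x)
            rx = sum(a * b for a, b in zip(r, x))
            score = (rx * rx / xx) / costs[j]   # variance explained per cost
            if score > best_score:
                best, best_score = j, score
        if best is None:
            break
        x = features[best]
        coef = sum(a * b for a, b in zip(r, x)) / sum(v * v for v in x)
        r = [a - coef * b for a, b in zip(r, x)]
        spent += costs[best]
        order.append(best)
        remaining.discard(best)
    return order

n = 64
f0 = [math.sin(0.7 * i) for i in range(n)]
f1 = [math.cos(1.3 * i) for i in range(n)]
f2 = [math.sin(2.1 * i + 1.0) for i in range(n)]
y  = [3.0 * a + 0.2 * b for a, b in zip(f0, f2)]
order = greedy_cost_selection([f0, f1, f2], y, costs=[1.0, 1.0, 4.0], budget=2.0)
print(order[0])  # the cheap, strongly correlated feature 0 goes first
```

The resulting sequencing is what makes the predictor anytime: truncating `order` at any budget yields a valid model using only the features paid for so far.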
Interruptible Large Reasoning Models
Interruptibility challenges the "frozen world" assumption of LLMs and analogous systems. In dynamic environments, the ability to deliver high-quality partial outputs under time constraints or evolving contexts is critical (Wu et al., 13 Oct 2025). Evaluation metrics include:
- Interrupt-conditioned accuracy, quantifying correct answers after forced halts.
- Post-interrupt token cost, tracking hidden reasoning beyond the interruption.
Empirically, performance degrades by up to 60% on late-stage interruptions, with failure modes such as reasoning leakage, panic (premature truncation), and self-doubt (update rejection).
Robust interruptibility requires explicit control tokens (e.g., ⟨end-thinking⟩), systematic evaluation under staged interruptions, and tight monitoring of output chains to cap excessive reasoning.
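These requirements can be sketched in a toy decoding loop. Here `step_fn`, the token strings, and the budget are illustrative stand-ins, not an actual large-reasoning-model API:

```python
def generate(step_fn, max_steps, interrupt_at=None, post_budget=8,
             end_thinking="<end-thinking>"):
    """Toy decoding loop illustrating interrupt handling for a reasoning
    model: when the interrupt fires, inject an explicit control token and
    allow only a small budget of wrap-up tokens, capping the post-interrupt
    token cost.  `step_fn(tokens) -> next_token` stands in for the model."""
    tokens, interrupted, post = [], False, 0
    for i in range(max_steps):
        if interrupt_at is not None and i == interrupt_at:
            tokens.append(end_thinking)   # explicit control token
            interrupted = True
        if interrupted:
            if post >= post_budget:
                break                     # hard cap on hidden reasoning
            post += 1
        tokens.append(step_fn(tokens))
    return tokens

out = generate(lambda ts: f"t{len(ts)}", max_steps=100, interrupt_at=5)
print(out.count("<end-thinking>"), len(out))
```

Monitoring the length of `out` after the control token is the mechanical analogue of the post-interrupt token-cost metric above.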
5. Interruptible Algorithms in Autonomous and Reactive Systems
In safety-critical, reactive, and autonomous contexts, interruptibility enables external control (human or supervisory).
- Virtualization for Safe Interruptibility: A reinforcement learning agent is made safely interruptible by routing its sensors and effectors to a virtual environment during interruptions, preserving the agent's internal perception of uninterrupted reward (Riedl et al., 2017). Restoration involves phased hand-over ensuring Q-value continuity and preventing disabling of the interrupt mechanism.
- Concurrent Reactive Systems: Systems are formalized in event-based languages (e.g., Pi-Core), leveraging small-step operational semantics and event-based rely–guarantee proof systems to model and verify interruptibility in concurrent and multicore kernels (Zhao et al., 2018). Fine-grained interleaving and stack-based guards capture arbitrary depths of preemption, with preservation of key safety invariants machine-checked in Isabelle/HOL.
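The virtualization idea can be sketched with a thin environment wrapper; the classes and dynamics below are toy stand-ins, not the actual construction of Riedl et al. (2017):

```python
class VirtualizedEnv:
    """Sketch of safe interruptibility via virtualization (simplified):
    while a human interrupt is active, the agent's actions are routed to a
    simulated copy of the environment, so from the agent's perspective the
    episode and its reward stream continue uninterrupted -- removing any
    incentive to learn to resist or disable the interrupt mechanism."""

    def __init__(self, real_step, sim_step):
        self.real_step = real_step    # acts on the real world
        self.sim_step = sim_step      # acts only on an internal model
        self.interrupted = False

    def step(self, state, action):
        # The agent always receives a (state, reward) pair of the same form,
        # whether it is acting in reality or inside the simulation.
        fn = self.sim_step if self.interrupted else self.real_step
        return fn(state, action)

# Toy dynamics: reward 1 per step; the simulator mirrors the real dynamics.
real = lambda s, a: (s + a, 1.0)
sim  = lambda s, a: (s + a, 1.0)
env = VirtualizedEnv(real, sim)
s, r1 = env.step(0, 1)
env.interrupted = True            # supervisor takes over the real effectors
s, r2 = env.step(s, 1)
print(r1 == r2)  # True: the reward signal is indistinguishable to the agent
```

The phased hand-over described above would correspond to toggling `interrupted` back while keeping the simulated and real states consistent, preserving Q-value continuity.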
6. Architectural Techniques and Optimization
Architectural innovations underpin interruptible inference and prediction, as typified by AnytimeYOLO for object detection (Kuhse et al., 21 Mar 2025):
- Multiple early-exit heads enable prediction at many interruption points; granularity is tuned versus overhead.
- Transposed network variants re-order multi-scale stages for early and fused predictions.
- Optimal path selection: A DAG-based approach selects execution orders and exit points to maximize area-under-the-curve quality metrics, normalized for comparison across models.
- Deployment trade-offs: Real-world deployment must balance soft versus hard anytime modes, kernel-level interrupt signaling, and inference-pipeline synchronization.
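Early-exit inference under a deadline-based interrupt can be sketched as follows; plain callables stand in for network stages and exit heads, and this is not the AnytimeYOLO implementation:

```python
import time

def anytime_forward(stages, heads, x, deadline):
    """Sketch of early-exit ('anytime') inference: run stages in order,
    record each exit head's prediction, and on interruption (here modeled
    as a wall-clock deadline) return the most recent prediction."""
    best = None
    for stage, head in zip(stages, heads):
        if time.monotonic() > deadline:
            break                  # interrupted: fall back to the last exit
        x = stage(x)
        best = head(x)             # every exit head yields a usable output
    return best

# Toy model: each stage refines a scalar estimate toward 1.0.
stages = [lambda v: (v + 1.0) / 2.0] * 4
heads  = [lambda v: round(v, 3)] * 4
print(anytime_forward(stages, heads, 0.0, time.monotonic() + 1.0))
```

Placing an exit head after every stage maximizes the number of interruption points at the cost of per-head overhead, which is exactly the granularity-versus-overhead trade-off noted above.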
7. Theoretical Foundations and Practical Limits
Key theorems and impossibility results form the backbone of interruptibility theory:
- Soundness and completeness of anytime inference schemes in ordered valuation algebras (Dasgupta et al., 2016).
- Tight bounds on deficiency and acceleration ratio given resource constraints and scheduling architectures (Angelopoulos et al., 2018, Hu et al., 2014).
- Necessity of resource multiplicity for bi-criteria approximation at arbitrary interruption points.
- Information-theoretic lower bounds for scheduling with advice (Angelopoulos et al., 2020).
Interruptible algorithms generalize across single-problem, multiproblem, scheduling with predictions, and approximation-inference domains, with guarantees anchored in monotonic output improvement, worst-case bounds, and resource-optimality.
Interruptible algorithms constitute a foundational class of resource-adaptive, real-time, and fail-safe computational methods, blending theoretical rigor, architectural innovation, and practical deployment constraints. Their mathematical formalization and performance guarantees underpin applications spanning multiprocessor scheduling, dynamic inference, interruptible neural computation, safe autonomy, and verified concurrent control.