OutsideInterval Mechanism
- OutsideInterval Mechanism is a differentially private algorithm that adapts traditional SPRT by monitoring privatized query values against dynamically calibrated thresholds.
- It integrates per-round and global noise injections to achieve robust privacy guarantees while tightly controlling type I and II errors.
- Its analytical threshold calibration and improved sample efficiency make it suitable for sensitive applications like clinical trials and online A/B testing.
The OutsideInterval Mechanism is a differentially private method central to the DP-SPRT sequential testing framework, designed to privatize classical SPRT-style stopping rules by monitoring when a privatized sequence of queries leaves a dynamically calibrated interval bounded by two thresholds. The construction delivers strong statistical guarantees (control of type I and type II errors) and improved privacy efficiency relative to naive adaptations of prior mechanisms, enabling practical deployment in privacy-sensitive sequential decision-making tasks.
1. Conceptual Basis and Functional Description
The OutsideInterval mechanism instantiates the stopping policy of Wald’s Sequential Probability Ratio Test (SPRT) in a differentially private regime. In traditional SPRT, the test statistic (e.g., cumulative log-likelihood ratio or empirical mean) is compared at each round to preset lower and upper thresholds. The process continues until the statistic exits this interval, at which point a decision is rendered.
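For reference, Wald's classical (non-private) stopping rule with target error rates $\alpha$ and $\beta$ uses the boundaries
$\Lambda_i \leq \log\frac{\beta}{1-\alpha} \quad \text{(accept } H_0\text{)} \qquad \text{or} \qquad \Lambda_i \geq \log\frac{1-\beta}{\alpha} \quad \text{(accept } H_1\text{)},$
where $\Lambda_i$ is the cumulative log-likelihood ratio; sampling continues while $\Lambda_i$ remains strictly between the two boundaries.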
In the private adaptation, each query $f_i(D)$ (typically the sum or average of the observed data up to time $i$) is obfuscated by noise $Y_i$ sampled independently for each round from a distribution tailored for privacy (Laplace or Gaussian). Additionally, a single global noise variable $Z$ is drawn per test run and applied symmetrically to both threshold comparisons. At iteration $i$, the mechanism examines whether
$f_i(D) + Y_i \leq T_0^i - Z \quad \text{(accept } H_0\text{)}$
or
$f_i(D) + Y_i \geq T_1^i + Z \quad \text{(accept } H_1\text{)}$
where $T_0^i$ and $T_1^i$ are the carefully corrected lower and upper thresholds at stage $i$. Otherwise, the result is the null output and the process continues. This scheme ensures that the released sequence remains private, and that both the decision time and the outcome depend only on movements of the privatized statistic outside the thresholded interval.
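The following minimal Python sketch illustrates this loop; it is our illustration rather than the authors' reference implementation, and it assumes Laplace noise for both $Y_i$ and $Z$ and takes the corrected threshold sequences as inputs.

```python
import numpy as np

def outside_interval(queries, T0, T1, scale_Y, scale_Z, rng=None):
    """Privatized interval-exit test (illustrative sketch).

    queries : iterable of query values f_i(D), e.g. running sums
    T0, T1  : corrected lower/upper threshold sequences
    scale_Y : Laplace scale of the fresh per-round noise Y_i
    scale_Z : Laplace scale of the single global noise Z
    Returns (decision, round): decision is 0 (accept H0), 1 (accept H1),
    or None if the stream ends while still inside the interval.
    """
    rng = rng or np.random.default_rng()
    Z = rng.laplace(scale=scale_Z)  # drawn once per test run
    for i, f_i in enumerate(queries):
        noisy = f_i + rng.laplace(scale=scale_Y)  # fresh Y_i each round
        if noisy <= T0[i] - Z:      # exits below the interval: accept H0
            return 0, i
        if noisy >= T1[i] + Z:      # exits above the interval: accept H1
            return 1, i
        # otherwise: null output, continue sampling
    return None, len(T0)
```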
2. Mathematical Formulation and Threshold Calibration
Threshold placement and noise calibration are derived analytically to guarantee prescribed type I ($\alpha$) and type II ($\beta$) error rates, as well as the desired privacy parameters. For exponential family models, the threshold expressions (for cumulative mean/proportion tests) combine Wald's classical boundaries with an explicit correction, schematically
$T_0^i = \log\frac{\beta}{1-\alpha} - c_i(\zeta) \quad \text{and} \quad T_1^i = \log\frac{1-\beta}{\alpha} + c_i(\zeta),$
rescaled to the statistic's scale through the means $\mu_0$ and $\mu_1$ under the respective simple hypotheses and the Kullback-Leibler divergence $\mathrm{KL}(\mu_0, \mu_1)$; the correction $c_i(\zeta)$ explicitly accounts for the noise's effect and the failure probability $\zeta$. The DP mechanism replaces the statistic $f_i(D)$ by $f_i(D) + Y_i$ (with $Y_i$ scaled appropriately) and shifts the lower and upper thresholds by $-Z$ and $+Z$, respectively.
Correctness hinges on ensuring a tail condition of the form $\mathbb{P}\left(\exists\, i : |Y_i| + |Z| > c_i(\zeta)\right) \leq \zeta$. This bound quantifies the excess probability of a spurious threshold crossing due to noise, and it directly calibrates $c_i(\zeta)$.
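One concrete way to satisfy such a condition (an assumption on our part, not the paper's exact recipe) is to allocate half of the budget $\zeta$ to the single global draw $Z$, spread the other half over the per-round draws $Y_i$ with convergent weights, and invert the Laplace tail bound $\mathbb{P}(|\mathrm{Lap}(b)| > t) = e^{-t/b}$:

```python
import numpy as np

def laplace_quantile(scale, tail):
    """Smallest t with P(|Lap(scale)| > t) <= tail."""
    return -scale * np.log(tail)

def corrections(n_rounds, scale_Y, scale_Z, zeta):
    """Illustrative corrections c_i with P(exists i: |Y_i| + |Z| > c_i) <= zeta.

    Union bound: zeta/2 covers the single global Z; the remaining zeta/2 is
    spread over rounds i = 1..n_rounds with weights 6 / (pi^2 i^2), which
    sum to at most 1.
    """
    t_Z = laplace_quantile(scale_Z, zeta / 2)
    i = np.arange(1, n_rounds + 1)
    t_Y = laplace_quantile(scale_Y, (zeta / 2) * 6.0 / (np.pi**2 * i**2))
    return t_Y + t_Z  # one correction per round
```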
3. Privacy Guarantees
Differential privacy is achieved through noise injection both on the sequence of outputs ($Y_i$) and on the global threshold shift ($Z$), leveraging their interaction for efficient privacy management. Specifically:
- If each $Y_i$ is drawn from a distribution guaranteeing $\varepsilon_1$-DP for queries of sensitivity $\Delta$, and $Z$ from a distribution guaranteeing $\varepsilon_2$-DP at the same sensitivity, the process is $(\varepsilon_1 + \varepsilon_2)$-DP overall.
- Under Rényi Differential Privacy (RDP), with analogous noise profiles $\varepsilon_Y(\lambda)$ and $\varepsilon_Z(\lambda)$, the composite privacy of the mechanism admits a bound of the form $\varepsilon(\lambda) \lesssim \varepsilon_Z(\lambda) + \tau\, \varepsilon_Y(\lambda)$, involving the random stopping time $\tau$ and the moments of the noise distributions. This integrated mechanism is strictly more privacy-efficient than independent AboveThreshold applications, with privacy loss roughly halved owing to the shared $Z$.
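For Gaussian noise, the textbook RDP profile of a sensitivity-$\Delta$ Gaussian mechanism with scale $\sigma$ is $\varepsilon(\lambda) = \lambda \Delta^2 / (2\sigma^2)$; the following sketch (our illustration, using only that standard formula and a worst-case horizon in place of the random stopping time) shows the shape of the composite accounting:

```python
def gaussian_rdp(order, sensitivity, sigma):
    """Standard RDP profile of the Gaussian mechanism at Renyi order `order`."""
    return order * sensitivity**2 / (2.0 * sigma**2)

def composite_rdp(order, sensitivity, sigma_Y, sigma_Z, max_rounds):
    """Crude composite bound of the shape eps_Z + tau * eps_Y, replacing the
    random stopping time tau by a worst-case horizon (the paper's treatment
    of the random tau is finer than this)."""
    eps_Z = gaussian_rdp(order, sensitivity, sigma_Z)  # single global draw
    eps_Y = gaussian_rdp(order, sensitivity, sigma_Y)  # one per round
    return eps_Z + max_rounds * eps_Y
```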
4. Error Control and Sample Complexity
Rigorous upper bounds are established for both error probabilities and expected stopping times. For Bernoulli testing, the expected stopping time satisfies a bound of the form
$\mathbb{E}[\tau] \leq i^* + R,$
where $i^*$ is the smallest round index for which the sum of the threshold corrections and noise effects falls below half the per-sample separation in KL-divergence, and $R$ is a function of the privacy/nuisance terms.
In the Laplace noise case (pure $\varepsilon$-DP), the additive sample complexity overhead is characterized as proportional to $1/\varepsilon$ (up to logarithmic factors), affirming near-optimality in difficult regimes (small error probabilities, small $\varepsilon$) compared to extant methods. Tight error control is achieved without reliance on ad hoc Monte Carlo simulations for calibration.
5. Empirical Evaluation and Application Contexts
Empirical results are provided for Bernoulli settings, demonstrating that the OutsideInterval-based DP-SPRT achieves superior average sample complexity compared to mechanisms based on independent AboveThreshold instances (e.g., PrivSPRT). Empirical type I error is reliably controlled and often falls below the nominal level, attesting to sound calibration of the correction terms.
The mechanism is demonstrated with both Laplace (pure DP) and Gaussian (Rényi-DP) noise. A subsampling extension is also proposed, affording further improvements under stringent privacy requirements. These results are immediately relevant in sequential clinical trials, online A/B testing, and quality control, where privacy and statistical efficiency are both critical.
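As a usage illustration tying the sketches above together (all parameter values here are made up for demonstration, not the paper's experimental settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n_max = 5000
data = rng.binomial(1, 0.6, size=n_max)   # Bernoulli stream, made-up true mean
sums = np.cumsum(data)                    # running-sum queries f_i(D)

# Hypothetical linear boundaries around two assumed means (0.45 vs 0.55),
# widened by the corrections c_i from the calibration sketch above.
i = np.arange(1, n_max + 1)
c = corrections(n_max, scale_Y=2.0, scale_Z=1.0, zeta=0.05)
T0 = 0.45 * i - c                         # lower boundary (toward H0)
T1 = 0.55 * i + c                         # upper boundary (toward H1)

decision, tau = outside_interval(sums, T0, T1, scale_Y=2.0, scale_Z=1.0, rng=rng)
print("decision:", decision, "stopping round:", tau)
```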
6. Comparative Advantages and Theoretical Significance
The principal innovation over previous privatized SPRT mechanisms (notably, PrivSPRT) is the simultaneous, symmetric use of the global noise $Z$ for both boundaries, enabling:
- An approximate halving of cumulative privacy loss relative to two independent AboveThreshold applications.
- Analytical threshold calibration without reliance on Monte Carlo tuning.
- Lower empirical variance and improved sample efficiency, especially pronounced when hypotheses are close or privacy budgets are tight.
The mechanism’s generic formulation also allows adaptation to broader sequential analysis and monitoring tasks that require robust privacy management and timely stopping rules.
7. Extensions and Potential Generalizations
While the mechanism is formalized for binary hypothesis testing under exponential family models, the general theory provides a template for broader sequential and online settings, including multi-armed bandits and other sequential analyses where a decision is triggered by the privatized statistic crossing an interval. The efficiency gains in privacy and sample complexity realized by the OutsideInterval construction suggest that analogous wrappers may be beneficial wherever symmetric threshold checking and sequential privatization are needed.
In conclusion, the OutsideInterval mechanism is an analytically grounded, privacy-efficient module for privatizing interval-exit type sequential tests, combining theoretical guarantees, empirical soundness, and flexibility for a range of sensitive sequential decision-making applications (Michel et al., 8 Aug 2025).