
OutsideInterval Mechanism

Updated 11 August 2025
  • OutsideInterval Mechanism is a differentially private algorithm that adapts traditional SPRT by monitoring privatized query values against dynamically calibrated thresholds.
  • It integrates per-round and global noise injections to achieve robust privacy guarantees while tightly controlling type I and II errors.
  • Its analytical threshold calibration and improved sample efficiency make it suitable for sensitive applications like clinical trials and online A/B testing.

The OutsideInterval Mechanism is a differentially private method central to the DP-SPRT sequential testing framework, designed to privatize classical SPRT-style stopping rules by monitoring when a privatized sequence of queries leaves a dynamically calibrated interval bounded by two thresholds. This construction ensures strong statistical guarantees (type I and II errors) and improved privacy efficiency relative to naive adaptations of prior mechanisms, enabling practical deployment in privacy-sensitive sequential decision-making tasks.

1. Conceptual Basis and Functional Description

The OutsideInterval mechanism instantiates the stopping policy of Wald’s Sequential Probability Ratio Test (SPRT) in a differentially private regime. In traditional SPRT, the test statistic (e.g., cumulative log-likelihood ratio or empirical mean) is compared at each round to preset lower and upper thresholds. The process continues until the statistic exits this interval, at which point a decision is rendered.

In the private adaptation, each query $f_i$ (typically the sum or average of observed data up to time $i$) is obfuscated by noise $Y_i$ sampled independently for each round from a distribution tailored for privacy (Laplace or Gaussian). Additionally, a single global noise variable $Z$ is drawn per test run and applied symmetrically to both threshold comparisons. At iteration $i$, the mechanism examines whether

$f_i(D) + Y_i \leq T_0^i - Z \quad \text{(accept } H_0\text{)}$

or

$f_i(D) + Y_i \geq T_1^i + Z \quad \text{(accept } H_1\text{)}$

where $T_0^i, T_1^i$ are the carefully corrected lower and upper thresholds at stage $i$. Otherwise, the result is the null output $\perp$ and the process continues. This schema ensures that the output sequence remains private, and that both the decision time and the outcome depend only on privatized statistic movements outside the thresholded interval.
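A minimal sketch of this stopping rule with Laplace noise (the function name and calling convention are ours; `t0`/`t1` are assumed to be caller-supplied corrected threshold sequences):

```python
import numpy as np

def outside_interval(stream, t0, t1, eps_y, eps_z, sens=1.0, seed=None):
    """Sketch of an OutsideInterval-style private stopping rule.

    stream : iterable of query values f_1(D), f_2(D), ...
    t0, t1 : callables i -> corrected thresholds T_0^i, T_1^i
    The global shift Z is drawn once per run; Y_i is fresh each round.
    """
    rng = np.random.default_rng(seed)
    z = rng.laplace(scale=sens / eps_z)            # Z: sensitivity Delta
    i = 0
    for i, f_i in enumerate(stream, start=1):
        y_i = rng.laplace(scale=2 * sens / eps_y)  # Y_i: sensitivity 2*Delta
        if f_i + y_i <= t0(i) - z:
            return ("H0", i)     # exited below the interval: accept H_0
        if f_i + y_i >= t1(i) + z:
            return ("H1", i)     # exited above the interval: accept H_1
        # otherwise: output null for this round and continue
    return (None, i)             # stream exhausted without a decision
```

With the privacy budget taken very large (so the noise is negligible), the rule reduces to the classical interval-exit test.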

2. Mathematical Formulation and Threshold Calibration

Threshold placement and noise calibration are derived analytically to guarantee prescribed type I ($\alpha$) and type II ($\beta$) error rates, as well as the desired privacy parameters. For exponential family models, the threshold expressions (for cumulative mean/proportion tests) take the form:

$$\begin{align*} T_0^n &= \mu_0 + \frac{KL(\nu_{\theta_0}, \nu_{\theta_1}) - \frac{1}{n}\log\frac{1}{\gamma \beta}}{\theta_1 - \theta_0} - C(n, (1 - \gamma)\beta), \\ T_1^n &= \mu_1 - \frac{KL(\nu_{\theta_1}, \nu_{\theta_0}) - \frac{1}{n}\log\frac{1}{\gamma \alpha}}{\theta_1 - \theta_0} + C(n, (1 - \gamma)\alpha), \end{align*}$$

where $\mu_0, \mu_1$ are the means under the respective simple hypotheses, $KL(\cdot,\cdot)$ is the Kullback-Leibler divergence, and $C(n, \delta)$ is an explicit correction for the noise's effect at failure probability $\delta$. The DP mechanism replaces the statistic by $f_n(D) + Y_n$ (with $Y_n$ scaled appropriately) and shifts the thresholds by $-Z/n$ and $+Z/n$ respectively.

Correctness hinges on ensuring $\sum_n P\left( Y_n/n - Z/n > C(n, \delta) \right) \leq \delta$. This bound quantifies the excess probability of a spurious threshold crossing due to noise, and directly calibrates $C$.
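For concreteness, the Bernoulli instantiation of these expressions can be evaluated directly. The sketch below takes $\theta$ to be the natural parameter $\mathrm{logit}(p)$ (so $\mu_j = p_j$), and leaves the correction $C(n,\delta)$ as a caller-supplied function since its exact form depends on the noise distribution; function names are ours:

```python
import math

def bern_kl(p, q):
    """KL(Ber(p) || Ber(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def bern_thresholds(n, p0, p1, alpha, beta, gamma, C=lambda n, d: 0.0):
    """Corrected thresholds T_0^n, T_1^n for a Bernoulli mean test.

    Assumes theta_j = logit(p_j) (natural parameter) and mu_j = p_j;
    C(n, delta) is the noise/failure-probability correction term.
    """
    logit = lambda p: math.log(p / (1 - p))
    gap = logit(p1) - logit(p0)   # theta_1 - theta_0
    t0 = p0 + (bern_kl(p0, p1) - math.log(1 / (gamma * beta)) / n) / gap - C(n, (1 - gamma) * beta)
    t1 = p1 - (bern_kl(p1, p0) - math.log(1 / (gamma * alpha)) / n) / gap + C(n, (1 - gamma) * alpha)
    return t0, t1
```

Note that as $n \to \infty$ the $\frac{1}{n}\log\frac{1}{\gamma\beta}$ terms vanish; with $C \equiv 0$ and the symmetric pair $p_0 = 0.3, p_1 = 0.7$, both thresholds tend to the midpoint $1/2$, so the continuation interval narrows as evidence accumulates.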

3. Privacy Guarantees

Differential privacy is achieved through noise injection both at the sequence of outputs ($Y_i$) and at the global threshold ($Z$), leveraging their interaction for efficient privacy management. Specifically:

  • If $Z$ is drawn from a distribution guaranteeing $\epsilon_Z$-DP (sensitivity $\Delta$) and $Y_i$ from a distribution guaranteeing $\epsilon_Y$-DP (sensitivity $2\Delta$), the process is $(\epsilon_Z + \epsilon_Y)$-DP overall.
  • Under Rényi Differential Privacy (RDP), with analogous profiles $\epsilon_Z(\alpha)$ and $\epsilon_Y(\alpha)$, the composite privacy of the mechanism admits a bound involving the random stopping time $\tau$ and the moments of the noise distributions: $$D_\alpha(\mathcal{A}(D) \| \mathcal{A}(D')) \leq \frac{\alpha - 1/2}{\alpha - 1}\epsilon_Z(2\alpha) + \epsilon_Y(\alpha) + \frac{1}{2(\alpha-1)}\log\left(2\,\mathbb{E}[\tau^2]\right)$$ This integrated mechanism is strictly more privacy-efficient than independent AboveThreshold applications, with privacy loss roughly halved due to the shared $Z$.
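In the pure-DP (Laplace) case, the first bullet translates into a one-line noise calibration; the helper name and the return convention below are our own:

```python
def laplace_scales(eps_z, eps_y, sensitivity=1.0):
    """Laplace scales for the global shift Z (sensitivity Delta) and the
    per-round noise Y_i (sensitivity 2*Delta); total budget is eps_z + eps_y."""
    return sensitivity / eps_z, 2 * sensitivity / eps_y, eps_z + eps_y
```

Because $Z$ is drawn once and shared by both boundary checks, only one threshold-noise budget is spent per run, whereas two independent AboveThreshold instances would each pay their own.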

4. Error Control and Sample Complexity

Rigorous upper bounds for both error probabilities and expected stopping times are established. For Bernoulli testing,

$$\mathbb{E}_{\theta_0}[\tau] \leq 1 + (1-\gamma)\beta + \frac{1}{1-\exp(-c)} + N(\theta_0, \theta_1, \beta, \gamma),$$

where $N$ is the smallest $n$ for which the sum of the threshold corrections and noise effects falls below half the per-sample separation in $KL$-divergence, and $c$ is a function of the privacy/nuisance terms.

In the Laplace noise case (pure $\epsilon$-DP), the additive sample-complexity overhead is proportional to $(\theta_1 - \theta_0)/(\epsilon \cdot KL)$, affirming near-optimality in difficult regimes (small error probabilities, small $KL$) relative to extant methods. Tight error control is achieved without reliance on ad hoc Monte Carlo simulations for calibration.

5. Empirical Evaluation and Application Contexts

Empirical results are provided for Bernoulli settings (e.g., $p_0 = 0.3$, $p_1 = 0.7$, with $\alpha=\beta=0.05$), demonstrating that the OutsideInterval-based DP-SPRT achieves superior average sample complexity compared to mechanisms based on independent AboveThreshold instances (e.g., PrivSPRT). Empirical type I error is reliably controlled and often below nominal levels, attesting to sound calibration of the correction terms.

The mechanism is demonstrated with both Laplace (pure DP) and Gaussian (Rényi-DP) noise. A subsampling extension is also proposed, affording further improvements under stringent privacy requirements. These results are immediately relevant in sequential clinical trials, online A/B testing, and quality control, where privacy and statistical efficiency are both critical.
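This Bernoulli experiment can be replicated in miniature. The sketch below is ours, not the paper's code: it uses the classical (uncorrected) Wald bounds on the log-likelihood ratio rather than the corrected thresholds $T_0^n, T_1^n$, and splits $\epsilon$ evenly between $Y_i$ and $Z$:

```python
import numpy as np

def dp_sprt_trial(p_true, p0=0.3, p1=0.7, alpha=0.05, beta=0.05,
                  eps=1.0, max_n=10_000, rng=None):
    """One private SPRT run on Bernoulli data; returns (decision, stopping time)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)  # Wald bounds
    up, down = np.log(p1 / p0), np.log((1 - p1) / (1 - p0))  # LLR increments
    sens = abs(up - down)                      # per-sample LLR sensitivity
    z = rng.laplace(scale=sens / (eps / 2))    # global threshold shift Z
    llr = 0.0
    for n in range(1, max_n + 1):
        x = rng.random() < p_true
        llr += up if x else down
        y = rng.laplace(scale=2 * sens / (eps / 2))  # fresh per-round noise Y_n
        if llr + y <= lo - z:
            return "H0", n
        if llr + y >= hi + z:
            return "H1", n
    return None, max_n
```

Averaging the stopping times over many trials, and comparing against a run with $\epsilon \to \infty$, qualitatively reproduces the sample-complexity overhead discussed above.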

6. Comparative Advantages and Theoretical Significance

The principal innovation over previous privatized SPRT mechanisms (notably, PrivSPRT) is the simultaneous, symmetric use of the global noise $Z$ for both boundaries, enabling:

  • An approximate halving of cumulative privacy loss relative to two independent AboveThreshold applications.
  • Analytical threshold calibration without reliance on Monte Carlo tuning.
  • Lower empirical variance and improved sample efficiency, especially pronounced when hypotheses are close or privacy budgets are tight.

The mechanism’s generic formulation also allows adaptation to broader sequential analysis and monitoring tasks that require robust privacy management and timely stopping rules.

7. Extensions and Potential Generalizations

While the mechanism is formalized for binary hypothesis testing under exponential family models, the general theory provides a template for broader sequential and online settings, including multi-armed bandits and other sequential analyses where a decision is triggered by the privatized statistic crossing an interval. The efficiency gains in privacy and sample complexity realized by the OutsideInterval construction suggest that analogous wrappers may be beneficial wherever symmetric threshold checking and sequential privatization are needed.

In conclusion, the OutsideInterval mechanism is an analytically grounded, privacy-efficient module for privatizing interval-exit type sequential tests, combining theoretical guarantees, empirical soundness, and flexibility for a range of sensitive sequential decision-making applications (Michel et al., 8 Aug 2025).
