
Error Rate-Based Rejection

Updated 26 January 2026
  • Error Rate-Based Rejection is a method that minimizes misclassification risk by abstaining from decisions when uncertainty is high.
  • It employs calibrated likelihood ratios in Bayesian settings or p-value thresholds in conformal prediction to ensure the error rate remains below a user-defined threshold.
  • The approach balances the trade-off between reject rates and predictive accuracy, providing a transparent, risk-controlled framework for decision systems.

Error Rate-Based Rejection (ERR) is a principled methodology for constraining the probability of erroneous predictions in statistical decision making and machine learning classifiers. ERR achieves risk minimization by systematically abstaining from decisions in cases of uncertainty, bounding the error rate below a user-specified threshold. In contrast to ad hoc score thresholding, ERR utilizes model calibration and formal error rate analysis, underpinned by both Bayesian and distribution-free frameworks. Prominent operationalizations include Bayes error-rate minimization for speaker verification (Brümmer et al., 2021) and conformal prediction–based reject option for binary classification (Szabadváry et al., 26 Jun 2025).

1. Foundational Principles of Error Rate-Based Rejection

ERR centers on the explicit quantification and control of prediction errors through abstention. Classical models without a reject option must emit a label for every input, leaving them vulnerable to high error rates on ambiguous or low-confidence cases. ERR circumvents this by introducing a third action: rejection (abstention).

The central objective is to guarantee that the classifier's error probability, conditioned on acceptance, remains below a user-defined level $\alpha$. This is formalized in various settings by:

  • Calibrated likelihood ratios and Bayes error-rate minimization: the decision threshold $\tau$ is chosen such that the expected error rate $r^\ast(\tau; \pi)$ does not exceed a bound determined by the prior probability $\pi$ and system calibration accuracy (Brümmer et al., 2021).
  • Conformal prediction singleton acceptance: accept only singleton conformal prediction sets $\Gamma_\alpha(x)$; abstain when these are empty or ambiguous, yielding a classifier $m_\alpha(x)$ whose accepted error rate is provably $\leq \alpha$ (Szabadváry et al., 26 Jun 2025).

This framework shifts focus from ROC/DET conditional error curves to holistic, user-facing error rate guarantees.

2. Bayesian Formulation for Error Control

Bayesian ERR is formally defined through calibrated likelihood-ratio outputs. For an input trial $x$ with score $s$, the calibrated likelihood ratio is $\ell(x) = P(s \mid H_1) / P(s \mid H_0)$, where $H_1$ and $H_0$ denote competing hypotheses (e.g., same vs. different speaker) (Brümmer et al., 2021).

Given a prior $\pi = P(H_1)$, the Bayes error-rate for threshold $\tau$ is

$$r^\ast(\tau; \pi) = P_{H_1}[\ell(x) < \tau] \cdot \pi + P_{H_0}[\ell(x) \ge \tau] \cdot (1-\pi)$$

where $P_{H_1}[\ell < \tau]$ and $P_{H_0}[\ell \ge \tau]$ denote the miss and false-accept rates, respectively.

Optimal error is achieved at the threshold

$$\tau^\ast = \frac{1-\pi}{\pi}$$

yielding

$$r^\ast(\pi) = \pi \cdot \text{miss}(\tau^\ast) + (1-\pi) \cdot \text{fa}(\tau^\ast)$$

The trapezium bound encapsulates the minimal achievable error:

$$r^\ast(\pi) \le \min\{\pi,\, 1-\pi,\, \text{EER}\}$$

where EER denotes the equal-error rate; the term $\min(\pi, 1-\pi)$ reflects task hardness due to class imbalance.
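As a concrete illustration, the quantities above can be estimated from counts. A minimal sketch in Python, where the lists of calibrated likelihood ratios are toy values introduced purely for illustration:

```python
# Toy calibrated likelihood ratios for trials under each hypothesis
# (illustrative values, not from any real system).
lr_h1 = [4.0, 2.5, 9.0, 0.8, 6.0]   # trials where H1 is true
lr_h0 = [0.2, 0.5, 1.5, 0.1, 0.3]   # trials where H0 is true

def bayes_error_rate(pi, lr_h1, lr_h0):
    tau = (1 - pi) / pi                               # optimal threshold
    miss = sum(l < tau for l in lr_h1) / len(lr_h1)   # P_H1[l < tau]
    fa = sum(l >= tau for l in lr_h0) / len(lr_h0)    # P_H0[l >= tau]
    return pi * miss + (1 - pi) * fa

pi = 0.5
r = bayes_error_rate(pi, lr_h1, lr_h0)
# The trapezium bound guarantees r <= min(pi, 1 - pi, EER).
```

With $\pi = 0.5$ the threshold is $\tau^\ast = 1$, giving one miss and one false accept out of five trials each, so $r^\ast = 0.2$, comfortably inside the $\min(\pi, 1-\pi) = 0.5$ bound.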

Extension to expected cost introduces costs $C_{\text{miss}}$ and $C_{\text{fa}}$ for each error type, generalizing the operating threshold and risk (Brümmer et al., 2021).
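Under the standard Bayes decision rule (not spelled out explicitly above), the cost-generalized threshold takes the form $\tau^\ast = (1-\pi)\,C_{\text{fa}} / (\pi\, C_{\text{miss}})$, which recovers $(1-\pi)/\pi$ when costs are equal. A hedged sketch, with illustrative cost values:

```python
# Cost-generalized Bayes threshold (standard decision-theoretic form;
# the cost values below are illustrative).
def bayes_threshold(pi, c_miss=1.0, c_fa=1.0):
    # Accept H1 when l(x) >= tau; equal costs recover tau = (1 - pi) / pi.
    return ((1 - pi) * c_fa) / (pi * c_miss)

tau_equal = bayes_threshold(0.5)                              # 1.0
tau_costly_fa = bayes_threshold(0.5, c_miss=1.0, c_fa=10.0)   # 10.0
```

Raising $C_{\text{fa}}$ pushes the threshold up, so the system accepts $H_1$ only on stronger evidence.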

3. Distribution-Free Guarantees via Conformal Prediction

ERR in binary classification can be realized with distribution-free validity by employing conformal prediction (CP). Under exchangeability, CP assigns each candidate label $y \in \{0,1\}$ a p-value $p_y(x)$ measuring how well that label conforms to the observed data; small p-values constitute evidence against the label (Szabadváry et al., 26 Jun 2025).

For each test input $x$, the prediction set at reject level $\alpha$ is

$$\Gamma_\alpha(x) = \{\, y \in \{0,1\} : p_y(x) > \alpha \,\}$$

ERR is instantiated by accepting only singleton sets ($|\Gamma_\alpha(x)| = 1$); both empty and two-label sets are rejected. The resulting classifier

$$m_\alpha(x) = \begin{cases} \hat{y} & \text{if } |\Gamma_\alpha(x)| = 1 \\ \mathcal{R} & \text{otherwise} \end{cases}$$

has the key property

$$P\big(y \neq \hat{y} \,\wedge\, |\Gamma_\alpha(x)| = 1\big) \leq \alpha$$

which holds exactly in full/online CP, and conservatively in split/inductive CP with optional training-conditional tightening.
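The singleton-acceptance rule can be sketched directly from the two p-values. In this minimal illustration the function name is hypothetical, and the p-values are assumed to come from a valid conformal predictor:

```python
# ERR acceptance rule given conformal p-values for the two labels
# (p0, p1 assumed to come from a valid conformal predictor).
def err_classify(p0, p1, alpha):
    gamma = [y for y, p in ((0, p0), (1, p1)) if p > alpha]
    if len(gamma) == 1:
        return gamma[0]   # accept: singleton prediction set
    return None           # reject: empty (novelty) or both labels (ambiguity)

err_classify(0.40, 0.02, 0.05)  # -> 0 (accepted)
err_classify(0.40, 0.30, 0.05)  # -> None (two-label set, rejected)
err_classify(0.01, 0.02, 0.05)  # -> None (empty set, rejected)
```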

4. Algorithms and Empirical Evaluation

Bayesian ERR implementation proceeds as:

  1. Obtain a calibration set with trials and raw scores.
  2. Fit a calibration function (e.g., logistic regression on scores).
  3. Fix the prior $\pi$ and error/cost preferences.
  4. Compute the optimal threshold $\tau^\ast$.
  5. On an independent test set, compute miss, false-accept, and error or cost via counts.
  6. For a fixed target error rate $\alpha$, invert $r(\tau)$ to find the $\tau$ yielding the desired rejection fraction.
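The steps above can be sketched end-to-end. In this minimal Python illustration the calibration coefficients `a, b` stand in for a fitted logistic-regression calibration, and all scores are toy values:

```python
import math

# Hypothetical calibration parameters, standing in for a logistic-regression
# fit on a calibration set (step 2); all scores below are toy values.
a, b = 1.0, 0.0

def calibrated_llr(score):
    # Affine calibration of a raw score into a log-likelihood-ratio.
    return a * score + b

def evaluate(scores_h1, scores_h0, pi):
    tau = math.log((1 - pi) / pi)    # optimal threshold in the log-LR domain
    miss = sum(calibrated_llr(s) < tau for s in scores_h1) / len(scores_h1)
    fa = sum(calibrated_llr(s) >= tau for s in scores_h0) / len(scores_h0)
    return pi * miss + (1 - pi) * fa  # empirical Bayes error-rate (step 5)

r = evaluate([2.1, 1.5, -0.3, 3.0], [-1.2, -0.4, 0.5, -2.0], pi=0.5)
```

With $\pi = 0.5$ the log-domain threshold is $0$; one miss and one false accept out of four trials each give an empirical error rate of $0.25$.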

Conformal prediction ERR (full or inductive) requires:

  1. Compute nonconformity scores (online or split protocol).
  2. For each test input $x$, calculate $p_0(x)$ and $p_1(x)$.
  3. Form the prediction set $\Gamma_\alpha(x)$.
  4. Accept only if $|\Gamma_\alpha(x)| = 1$; otherwise, reject.
  5. Empirically estimate error and reject rates on held-out data.
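A split-conformal instance of these steps might look as follows, using a simple distance-to-class-mean nonconformity score; the calibration data, score choice, and smoothing are all illustrative:

```python
# Toy calibration set of (feature, label) pairs (illustrative values).
cal = [(0.2, 0), (0.4, 0), (0.3, 0), (1.8, 1), (2.1, 1), (1.6, 1)]

def class_mean(y):
    xs = [x for x, lbl in cal if lbl == y]
    return sum(xs) / len(xs)

mean = {y: class_mean(y) for y in (0, 1)}

def nonconformity(x, y):
    # Far from the candidate class mean = nonconforming (step 1).
    return abs(x - mean[y])

def p_value(x, y):
    # Fraction of calibration scores at least as nonconforming, +1 smoothing
    # (step 2).
    a = nonconformity(x, y)
    n_ge = sum(nonconformity(xc, yc) >= a for xc, yc in cal)
    return (n_ge + 1) / (len(cal) + 1)

def predict(x, alpha):
    # Steps 3-4: form the prediction set, accept only singletons.
    gamma = [y for y in (0, 1) if p_value(x, y) > alpha]
    return gamma[0] if len(gamma) == 1 else None
```

A point near the class-0 cluster (e.g., $x = 0.25$) is accepted as label 0, while a point between the clusters (e.g., $x = 1.0$) yields an empty set and is rejected.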

Practitioners plot empirical error–reject curves, $R(\alpha)$ vs. $\rho(\alpha)$, to visualize the trade-off between the rigorously guaranteed error rate and the operational abstention rate (Szabadváry et al., 26 Jun 2025).

5. Theoretical Guarantees and Bounds

In Bayesian ERR, the error rate is upper-bounded by the trapezium bound

$$r^\ast(\pi) \le \min(\pi,\, 1-\pi,\, \text{EER})$$

ensuring that the risk never exceeds the smallest of the bounds set by class imbalance and system discrimination.

For conformal prediction ERR, distribution-free validity ensures that for any (exchangeable) data sequence:

  • Full/online CP yields $P[\text{error at acceptance}] = \alpha$ exactly, with errors independent across trials.
  • Inductive CP guarantees $P[\text{error at acceptance}] \leq \alpha$ conservatively, with an optional refinement for training-conditional validity:

$$\alpha' = \alpha - \sqrt{\frac{\ln(1/\delta)}{2h}}$$

where $h$ is the calibration set size, yielding $P[\text{error}] \leq \alpha$ with probability $\geq 1-\delta$ over the choice of calibration set (Szabadváry et al., 26 Jun 2025).
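The tightening can be evaluated directly; here $h = 10{,}000$ and $\delta = 0.05$ are illustrative values:

```python
import math

# Tightened significance level for training-conditional validity
# (h and delta are illustrative).
def tightened_alpha(alpha, delta, h):
    return alpha - math.sqrt(math.log(1 / delta) / (2 * h))

a = tightened_alpha(0.05, 0.05, 10_000)  # slightly stricter than alpha
```

With these values the operational level drops from $0.05$ to roughly $0.038$: the price of a guarantee that holds with probability $1-\delta$ over the draw of the calibration set.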

6. Critique of Direct Score Thresholding and Practical Considerations

Direct thresholding of raw scores ignores explicit modeling of prior probabilities and error/cost profiles, and may not retain validity on independent data. Fixing false-accept rates on calibration sets does not imply generalization. Best practice entails

  • Calibrating model outputs to likelihood ratios (Bayesian) or nonconformity scores (CP).
  • Choosing thresholds with Bayes rule (Bayesian) or significance levels (CP) in accordance with desired error rate or cost.
  • Evaluating operating characteristics empirically on independent (held-out) data.

For any target error rate $\alpha$ or cost, ERR provides transparent, predictable abstention strategies with formally justified risk bounds. Well-calibrated systems demonstrate empirical error–reject curves that adhere tightly to theoretical bounds, while poorly calibrated systems exhibit excess risk (Brümmer et al., 2021, Szabadváry et al., 26 Jun 2025).

7. Connections, Limitations, and Trade-offs

ERR unifies Bayesian risk minimization and distribution-free conformal prediction as dual approaches to error rate control with rejection. Bayesian ERR presumes calibrated likelihood ratios and known priors; CP-based ERR guarantees are distribution-free given exchangeability.

Full/online CP offers exact, independent guarantees but is computationally intensive ($O(n^2)$ per test point), whereas inductive CP is efficient ($O(h \log h)$) with conservative validity and dependence among per-trial errors, since all trials share one calibration set.

Error–reject curves succinctly encode the trade-off: higher reject rates enable lower guaranteed error rates, and vice versa. A plausible implication is that ERR can be tuned to meet stringent regulatory or operational criteria by adjusting acceptance thresholds or significance levels.

Limitations include reliance on calibration quality (Bayesian) and exchangeability (CP). In both frameworks, rejection is strategic and interpretable—empty prediction sets denote novelty, dual-label sets denote ambiguity—but coverage on rare or adversarial cases remains a subject for empirical exploration.

In sum, Error Rate-Based Rejection offers a rigorous, principled methodology for controlling error probability in classification and decision systems, with both Bayesian and distribution-free instantiations yielding transparent, predictable abstention and risk profiles (Brümmer et al., 2021, Szabadváry et al., 26 Jun 2025).
