Conformal Risk Control (CRC)
Conformal Risk Control (CRC) is a distribution-free uncertainty quantification framework that extends classical conformal prediction to rigorously control user-specified risk metrics—such as average loss, false negative rate, or other bounded monotone losses—at finite sample sizes. CRC unifies score-based post-hoc calibration procedures for set-valued, thresholded, or abstaining predictors, providing formal non-asymptotic guarantees beyond mere coverage probabilities. It is widely applicable in safety-sensitive applications, model alignment, selective prediction, and under domain shifts.
1. Foundational Principles of Conformal Risk Control
CRC generalizes the split conformal prediction paradigm from controlling miscoverage rates to controlling the expected value of arbitrary bounded, monotone losses. Given exchangeable calibration data $(X_i, Y_i)_{i=1}^{n}$, a pre-trained predictive model, a (possibly vector-valued) "conservativeness" parameter $\lambda$ that expands prediction sets or thresholds, and a loss function $\ell(C_\lambda(X), Y)$ that is non-increasing in $\lambda$, CRC constructs a calibrated predictor $C_{\hat{\lambda}}$ such that
$$\mathbb{E}\big[\ell\big(C_{\hat{\lambda}}(X_{n+1}), Y_{n+1}\big)\big] \le \alpha,$$
where $\alpha$ is a user-chosen target risk level and the expectation is over the calibration and test samples (Angelopoulos et al., 2022; Andéol et al., 2023).
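The monotonicity requirement can be made concrete with a small sketch. The predictor below is hypothetical (a softmax classifier whose prediction set $C_\lambda(x)$ collects all classes with score at least $1-\lambda$, so sets grow as $\lambda$ increases); the point is that the miscoverage loss is then non-increasing in $\lambda$, as CRC requires.

```python
import numpy as np

def prediction_set(softmax_scores, lam):
    """C_lambda(x): classes whose softmax score is >= 1 - lam.
    Larger lam -> larger (more conservative) set."""
    return set(np.flatnonzero(softmax_scores >= 1.0 - lam))

def miscoverage_loss(softmax_scores, y, lam):
    """1 if the true label y is excluded from C_lambda(x).
    Because sets only grow with lam, this loss is non-increasing in lam."""
    return float(y not in prediction_set(softmax_scores, lam))

# Toy softmax output for one example; true label is class 2.
scores = np.array([0.1, 0.6, 0.3])
losses = [miscoverage_loss(scores, y=2, lam=l) for l in (0.0, 0.5, 0.8)]
print(losses)  # non-increasing as lam grows
```

Any bounded loss with this monotone structure (false negative rate, exclusion of a ground-truth segment, etc.) fits the same template.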
The core CRC calibration rule (for scalar $\lambda$) is to select
$$\hat{\lambda} = \inf\left\{\lambda : \frac{n}{n+1}\,\widehat{R}_n(\lambda) + \frac{B}{n+1} \le \alpha\right\},$$
where $\widehat{R}_n(\lambda) = \frac{1}{n}\sum_{i=1}^{n} \ell\big(C_\lambda(X_i), Y_i\big)$ is the empirical risk on the calibration set and $B$ is an upper bound on the loss.
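The calibration rule can be sketched in a few lines. This is a minimal illustration, not the reference implementation: the loss function, the 1-D interval predictor, and the fixed $\lambda$ grid are all assumptions made for the example, and the grid search simply returns the first grid point satisfying the inflated empirical-risk condition.

```python
import numpy as np

def crc_calibrate(loss_fn, calib_data, lam_grid, alpha, B=1.0):
    """Scalar CRC rule: smallest lambda on the grid with
    (n * Rhat_n(lambda) + B) / (n + 1) <= alpha.
    Assumes loss_fn(x, y, lam) is bounded above by B and non-increasing in lam."""
    n = len(calib_data)
    for lam in sorted(lam_grid):
        r_hat = np.mean([loss_fn(x, y, lam) for x, y in calib_data])
        if (n * r_hat + B) / (n + 1) <= alpha:
            return lam
    return max(lam_grid)  # fall back to the most conservative value

# Toy usage: miscoverage of the interval [x - lam, x + lam] around a
# point prediction x, with Gaussian observation noise (illustrative setup).
rng = np.random.default_rng(0)
data = [(x, x + rng.normal(scale=0.1)) for x in rng.normal(size=200)]
loss = lambda x, y, lam: float(abs(y - x) > lam)  # non-increasing in lam
lam_hat = crc_calibrate(loss, data, np.linspace(0.0, 1.0, 101), alpha=0.1)
print(lam_hat)  # calibrated half-width controlling miscoverage at level 0.1
```

Note the conservative correction $B/(n+1)$: it accounts for the unseen test point, which is what upgrades the empirical guarantee to the exact finite-sample bound above.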