Conditional validity and a fast approximation formula of full conformal prediction sets (2508.05272v1)

Published 7 Aug 2025 in math.ST and stat.TH

Abstract: Prediction sets based on full conformal prediction have seen increasing interest in statistical learning due to their universal marginal coverage guarantees. However, practitioners have refrained from using them in applications for two reasons: firstly, full conformal prediction comes at very high computational cost, exceeding even that of cross-validation; secondly, a practitioner is typically not interested in a marginal coverage guarantee, which averages over all possible (but not available) training data sets, but rather in a guarantee conditional on the specific training data at hand. This work tackles both problems. Firstly, we show that full conformal prediction sets are conditionally conservative given the training data if the conformity score is stochastically bounded and satisfies a stability condition. Secondly, we propose an approximation of the full conformal prediction set that has asymptotically the same training-conditional coverage as full conformal prediction under the stability assumption derived before, and that can be computed far more easily. Furthermore, we show that under the stability assumption, $n$-fold cross-conformal prediction also has the same asymptotic training-conditional coverage guarantees as full conformal prediction. If the conformity score is defined as the out-of-sample prediction error, our approximation of the full conformal set coincides with the symmetrized Jackknife. We conclude that for this conformity score, if based on a stable prediction algorithm, full conformal, $n$-fold cross-conformal, the Jackknife+, our approximation formula, and hence also the Jackknife, all yield the same asymptotic training-conditional coverage guarantees.
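For intuition, here is a minimal sketch (not the paper's code) contrasting the two procedures the abstract compares: brute-force full conformal prediction, which refits the model on the augmented sample for every candidate label, and the symmetrized jackknife interval that the paper's approximation reduces to when the conformity score is the absolute out-of-sample prediction error. The ridge regressor, synthetic data, and candidate-label grid below are illustrative assumptions made here, not choices taken from the paper.

# Illustrative sketch only: full conformal vs. symmetrized jackknife.
# Ridge model, synthetic data, and label grid are assumptions for demo purposes.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=n)
x_new = rng.normal(size=(1, 3))
alpha = 0.1  # target miscoverage level


def full_conformal_set(X, y, x_new, alpha, grid):
    """Full conformal: one model refit on the augmented sample per candidate label."""
    accepted = []
    for y_cand in grid:
        X_aug = np.vstack([X, x_new])
        y_aug = np.append(y, y_cand)
        model = Ridge(alpha=1.0).fit(X_aug, y_aug)
        scores = np.abs(y_aug - model.predict(X_aug))  # conformity scores
        p_value = np.mean(scores >= scores[-1])        # rank-based conformal p-value
        if p_value > alpha:
            accepted.append(y_cand)
    return (min(accepted), max(accepted)) if accepted else None


def symmetrized_jackknife_interval(X, y, x_new, alpha):
    """Symmetrized jackknife: point prediction +/- a quantile of |LOO residuals|."""
    n = len(y)
    y_hat = Ridge(alpha=1.0).fit(X, y).predict(x_new)[0]
    loo_res = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        model_i = Ridge(alpha=1.0).fit(X[mask], y[mask])
        loo_res[i] = abs(y[i] - model_i.predict(X[i : i + 1])[0])
    q = np.quantile(loo_res, 1 - alpha)  # simplified (1 - alpha) residual quantile
    return y_hat - q, y_hat + q


grid = np.linspace(y.min() - 3.0, y.max() + 3.0, 400)
print("full conformal set    :", full_conformal_set(X, y, x_new, alpha, grid))
print("symmetrized jackknife :", symmetrized_jackknife_interval(X, y, x_new, alpha))

Note the cost asymmetry the abstract points to: the full conformal set requires one refit per grid point, while the jackknife interval needs only the n leave-one-out fits. For a stable algorithm such as ridge regression, the two outputs should be close, which illustrates the paper's asymptotic equivalence result.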


