
Conditional Regret Bounds in Learning

Updated 20 December 2025
  • Conditional regret bounds are advanced measures that quantify the excess risk of prediction and decision-making algorithms by conditioning on auxiliary variables like data batches and internal randomness.
  • They connect performance analysis with information measures—using conditional mutual information and Sibson’s measures—to yield sharper, instance-adaptive guarantees.
  • Applications of conditional regret bounds span universal prediction, online learning, reinforcement learning, and risk-sensitive optimization, enabling refined and data-adaptive analyses.

Conditional regret bounds are a sophisticated tool for characterizing the excess risk or suboptimality of prediction, decision-making, or learning algorithms, subject to conditioning on auxiliary random variables, data batch histories, or aspects of the problem structure. These bounds quantify algorithmic performance not only in the classic minimax or expectation sense but often with respect to an explicit or implicit conditioning variable—history, batch, internal randomness, or auxiliary filtration. The conditional viewpoint enables sharper and more data-adaptive assessments of regret, integrates problem-dependent statistical complexity, and connects regret minimization to conditional mutual information, Sibson’s information measures, and law-of-the-iterated-logarithm arguments across universal prediction, bandits, and reinforcement learning.

1. Formal Definitions and Setup

The central quantity of interest is the conditional regret, generally defined by

$$R(\hat p, \theta) = D(Y \| \hat Y \mid X)$$

where $\hat p$ is a predictor, $\theta$ parameterizes a statistical model, $Y$ the target variable, $X$ a conditioning random variable (e.g., training batches or prior observations), and $D(\cdot \| \cdot \mid X)$ is a conditional divergence, typically conditional Kullback-Leibler or Rényi divergence. In batch universal prediction, the regret against $p_\theta$ is measured over the test batch $Y$ given training corpus $X^n$, resulting in

$$R(\hat p, \theta) = \sum_{x^n} p_\theta(x^n) \sum_y p_\theta(y) \log \frac{p_\theta(y)}{\hat p(y \mid x^n)}$$

which coincides with the conditional KL divergence $D(Y \| \hat Y \mid X^n)$ (Bondaschi et al., 14 Aug 2025).
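A minimal numerical sketch of this definition, assuming a Bernoulli($\theta$) source, a single test symbol, and an add-one (Laplace) conditional predictor as a stand-in for the optimal conditional mixture; the predictor choice and parameter values are illustrative, not taken from the cited paper.

```python
import itertools
import math

def conditional_regret(theta, n, predictor):
    """Exact conditional regret R(p_hat, theta) = D(Y || Y_hat | X^n)
    for a Bernoulli(theta) source with one test symbol Y, by enumeration."""
    total = 0.0
    for xn in itertools.product([0, 1], repeat=n):
        k = sum(xn)
        p_xn = theta ** k * (1 - theta) ** (n - k)      # p_theta(x^n)
        for y in (0, 1):
            p_y = theta if y == 1 else 1 - theta        # p_theta(y)
            q_y = predictor(xn, y)                      # p_hat(y | x^n)
            if p_xn > 0 and p_y > 0:
                total += p_xn * p_y * math.log(p_y / q_y)
    return total

def laplace(xn, y):
    """Add-one (Laplace) conditional predictor, used here as a stand-in."""
    p1 = (sum(xn) + 1) / (len(xn) + 2)
    return p1 if y == 1 else 1 - p1

print(conditional_regret(theta=0.3, n=5, predictor=laplace))
```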

Further, conditional regret bounds may also arise as conditional expected regret in Bayesian optimization:

$$E_\mathcal{F}[R_T \mid \mathcal{A}] = E_{f,\{\epsilon_t\}} \left[ \sum_{t=1}^T \big(f(x^*) - f(x_t)\big) \mid \{\zeta_t\}_{t \geq 1} \right]$$

where the conditioning is on the algorithm’s internal randomization $\mathcal{A}$ (Takeno et al., 2 Sep 2024).

In online betting, conditional regret refers to path-wise regret under a Ville event (high-confidence or almost-sure set of sequences), quantifying the regret for each realization with respect to the best fixed strategy in hindsight, with

$$R_t = L_t^* - \ln Z_t$$

where $L_t^*$ is the best log-wealth attainable, and $Z_t$ is the mixture martingale (Agrawal et al., 13 Dec 2025).
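A brief path-wise illustration of $R_t = L_t^* - \ln Z_t$, assuming $\pm 1$ outcomes and Krichevsky-Trofimov betting fractions as a simple stand-in for the mixture martingale; the outcome distribution and the grid search for the best fixed bet in hindsight are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=1000, p=[0.4, 0.6])   # outcome sequence

# Mixture-style bettor: KT betting fractions, a common stand-in for Z_t.
wealth_log = 0.0
S = 0.0
for t, xt in enumerate(x, start=1):
    b = S / t                        # KT betting fraction in (-1, 1)
    wealth_log += np.log1p(b * xt)   # log-wealth update ln(1 + b * x_t)
    S += xt

# Best fixed betting fraction in hindsight, approximated by a grid search.
grid = np.linspace(-0.99, 0.99, 1999)
L_star = max(np.sum(np.log1p(b * x)) for b in grid)

print("path-wise regret R_t =", L_star - wealth_log)
```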

2. Conditional Regret Bounds in Universal Prediction

The Conditional Regret-Capacity Theorem for batch universal prediction provides a sharp identification of minimax conditional regret with a conditional mutual information:

$$\min_{\hat p} \max_\theta R(\hat p, \theta) = \sup_w I_w(\theta; Y \mid X^n)$$

where $I_w(\theta; Y \mid X^n)$ is the conditional mutual information between the model parameter and the test data, given the observed batch, optimized over all priors $w$ on $\theta$ (Bondaschi et al., 14 Aug 2025). The optimal predictor is the conditional mixture with prior $w^*$.
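A sketch of the right-hand side for a discretized Bernoulli family, assuming a finite parameter grid and one candidate prior $w$, with $I_w(\theta; Y \mid X^n)$ computed by exact enumeration; taking the supremum over $w$ (e.g., via a Blahut-Arimoto-type iteration) is omitted.

```python
import itertools
import numpy as np

thetas = np.linspace(0.01, 0.99, 50)   # discretized Bernoulli parameter grid

def cond_mutual_info(w, n):
    """I_w(theta; Y | X^n) for the Bernoulli family, by enumeration over x^n and y."""
    total = 0.0
    for xn in itertools.product([0, 1], repeat=n):
        k = sum(xn)
        p_xn = thetas ** k * (1 - thetas) ** (n - k)    # p_theta(x^n), one entry per theta
        for y in (0, 1):
            p_y = thetas if y == 1 else 1 - thetas      # p_theta(y)
            q_y = float((w * p_xn) @ p_y) / float((w * p_xn).sum())  # mixture predictive p_w(y | x^n)
            total += float(np.sum(w * p_xn * p_y * np.log(p_y / q_y)))
    return total

uniform = np.ones_like(thetas) / len(thetas)
# For any fixed prior w, I_w lower-bounds the minimax conditional regret of this family.
print(cond_mutual_info(uniform, n=4))
```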

For Rényi-type regret, the theorem generalizes:

$$\min_{\hat p} \max_\theta R_\alpha(\hat p, \theta) = \sup_w I_\alpha^w(\theta; Y \mid X^n)$$

where $R_\alpha$ is the conditional Rényi divergence and $I_\alpha^w$ is the conditional Sibson mutual information of order $\alpha$, with the minimizer given by the conditional $\alpha$-NML predictor (Bondaschi et al., 14 Aug 2025). This establishes a deep connection between regret minimization and conditional information measures.

Batch regret bounds in binary memoryless sources yield tight asymptotics:

$$\min_{\hat p} \max_{\theta \in [0,1]} R(\hat p, \theta) \geq \frac{1}{2} \log\left(1+\frac{1}{n}\right) + O\left(\frac{\log(n\ell)}{n\ell}\right)$$

demonstrating the penalty per batch for optimal universal predictors (Bondaschi et al., 14 Aug 2025).

3. Conditional Regret in Online Learning and Betting

The conditional regret framework in online betting and learning connects high-probability and almost-sure concentration via Ville events. For a path-wise (adversarial) regret process with variance proxy $V_t$, the mixture martingale strategy obeys

$$R_t \leq C\left(1 + \frac{1+\ln^2(1/\alpha)}{V_t} + \ln(1/\alpha) + \ln\ln\big(c\sqrt{1+V_t}\big)\right)$$

on the Ville event $\mathcal{E}_\alpha: \sup_t \ln Z_t \leq \ln(1/\alpha)$ (Agrawal et al., 13 Dec 2025). As $\alpha \to 0$, the almost-sure iterated logarithm form emerges,

$$R_t \leq (1+o(1)) \ln\ln V_t$$

for all but finitely many $t$ with probability one under stochastic assumptions, thus bridging adversarial and stochastic analyses.
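A minimal simulation of the underlying Ville event, assuming a simple Bernoulli likelihood-ratio martingale as a stand-in for the mixture martingale $Z_t$; it illustrates that $\mathcal{E}_\alpha$ holds with probability at least $1-\alpha$, not the specific regret bound above.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05
n_paths, T = 20_000, 200

# Nonnegative martingale with Z_0 = 1: likelihood ratio of Bernoulli(0.6)
# against the true Bernoulli(0.5) data generating the observations.
x = rng.integers(0, 2, size=(n_paths, T))
lr_step = np.where(x == 1, 0.6 / 0.5, 0.4 / 0.5)
log_Z = np.cumsum(np.log(lr_step), axis=1)

# Ville's inequality: P(sup_t Z_t >= 1/alpha) <= alpha, so the Ville event
# E_alpha = {sup_t ln Z_t <= ln(1/alpha)} holds with probability >= 1 - alpha.
crossed = (log_Z.max(axis=1) >= np.log(1 / alpha)).mean()
print("empirical crossing frequency:", crossed, "<= alpha =", alpha)
```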

4. Instance-Dependent and Conditional Regret in Reinforcement Learning

Conditional regret bounds in RL and bandits exploit the problem structure, conditioning on histories or specific state-action pairs. In tabular MDPs, gap-dependent, variance-aware conditional regret bounds take the form

$$\mathrm{Regret}(K) \leq \tilde O\left( \sum_{\Delta_h(s,a)>0} \frac{H^2 \wedge \mathrm{Var}_{\max}^c}{\Delta_h(s,a)} \log K \right)$$

where $\mathrm{Var}_{\max}^c$ denotes the maximum conditional total variance conditioned on visiting any $(s,h)$; this refines classical bounds depending only on unconditional total variance, yielding much sharper guarantees when the MDP has a few rare high-variance decision points (Chen et al., 6 Jun 2025).
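A small sketch of how the $H^2 \wedge \mathrm{Var}_{\max}^c$ clipping changes the bound, assuming illustrative gap and variance values and dropping constants; the function and numbers are hypothetical, not taken from the cited paper.

```python
import numpy as np

def gap_bound(gaps, var_max_c, H, K):
    """Shape of the bound sum_{Delta>0} min(H^2, Var_max^c) / Delta * log K (constants dropped)."""
    gaps = np.asarray(gaps, dtype=float)
    return np.sum(np.minimum(H ** 2, var_max_c) / gaps) * np.log(K)

H, K = 20, 10 ** 5
gaps = np.full(100, 0.1)   # 100 suboptimal (s, a, h) triples, each with gap 0.1
print(gap_bound(gaps, var_max_c=H ** 2, H=H, K=K))   # unconditional-variance regime
print(gap_bound(gaps, var_max_c=4.0, H=H, K=K))      # small conditional variance: much smaller bound
```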

In risk-sensitive RL, conditional recommendation regret for CVaR-type or quantile-integral objectives scales as

$$\widetilde O\left( H^{3/2} L_G |\mathcal{S}| \sqrt{|\mathcal{S}| |\mathcal{A}| K} \right)$$

where $L_G$ is the Lipschitz constant of the quantile/CDF measure. This can be interpreted as conditional regret for tail-optimized objectives (Bastani et al., 2022).
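For concreteness, a lower-tail CVaR estimator of the kind such quantile-integral objectives target; this is a standard empirical estimator applied to assumed sampled returns, not the cited algorithm.

```python
import numpy as np

def cvar(returns, tau):
    """Lower-tail CVaR_tau as a quantile integral, (1/tau) * int_0^tau F^{-1}(u) du,
    approximated by averaging the worst tau-fraction of sampled returns."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(tau * len(r))))
    return r[:k].mean()

returns = np.random.default_rng(2).normal(loc=1.0, scale=2.0, size=100_000)
print("CVaR_0.1 of sampled returns:", cvar(returns, 0.1))
```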

5. Conditional Expected Regret in Bayesian Optimization

Regret analyses for randomized BO algorithms condition on internal randomness, yielding high-probability bounds for the conditional expected regret. For IRGP-UCB, the bound is

$$\Pr_\mathcal{A}\left\{ \forall T: E_{\mathcal{F}}[R_T \mid \mathcal{A}] \leq U(T, \delta) \right\} \geq 1-\delta$$

where $U(T,\delta)$ matches classical rates in $O(\sqrt{T \gamma_T \ln |\mathcal{X}|})$ but avoids time-dependent scaling in the confidence parameter by conditioning on algorithmic randomness (Takeno et al., 2 Sep 2024).
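A toy Monte Carlo sketch of $E_\mathcal{F}[R_T \mid \mathcal{A}]$, assuming a random-search stand-in whose fixed query sequence plays the role of the conditioned internal randomness $\mathcal{A}$, and a hypothetical random objective; it illustrates the conditioning structure only, not IRGP-UCB itself.

```python
import numpy as np

X = np.linspace(0.0, 1.0, 200)            # finite search domain
T = 30
alg_rng = np.random.default_rng(123)      # "A": internal randomness, held fixed
queries = alg_rng.integers(0, len(X), size=T)

def sample_f(rng):
    """Hypothetical random objective: random quadratic plus a sinusoid."""
    a, b, c = rng.normal(size=3)
    return a * (X - 0.5) ** 2 + b * np.sin(6 * X) + c

f_rng = np.random.default_rng(0)
regrets = []
for _ in range(2000):                     # Monte Carlo over F = (objective, noise)
    f = sample_f(f_rng)
    regrets.append(np.sum(f.max() - f[queries]))   # cumulative regret for this draw of f
print("estimated E_F[R_T | A]:", np.mean(regrets))
```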

Similarly, Bayesian simple-regret bounds in large-domain GP optimization express regret as a conditional fraction of the optimal achievable value, controlled by the domain size and fixed evaluation budget, rather than assuming exhaustive exploration (Wüthrich et al., 2021).

6. Enhanced H-Consistency Bounds

Enhanced $\mathcal{H}$-consistency bounds leverage conditional regret inequalities between surrogate and target losses. By introducing instance-dependent scaling factors $\alpha(h,x)$ and $\beta(h,x)$, these results allow inequalities of the type

$$\Psi\left( \frac{ \Delta_{\ell_2, \mathcal{H}}(h,x)\, \mathbb{E}_X[\beta(h,X)] }{ \beta(h,x) } \right) \leq \alpha(h,x)\, \Delta_{\ell_1, \mathcal{H}}(h,x)$$

which imply, after marginalization,

$$\Psi\left( R_{\ell_2}(h) - R_{\ell_2}^*(\mathcal{H}) \right) \leq \gamma(h) \left( R_{\ell_1}(h) - R_{\ell_1}^*(\mathcal{H}) \right)$$

yielding strictly sharper finite-sample error bounds by accounting for conditional regret at each instance (Mao et al., 18 Jul 2024).
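As a concrete baseline, a numerical check of the classical (unenhanced, $\alpha \equiv \beta \equiv 1$) conditional regret comparison for binary classification with the hinge surrogate, which the enhanced bounds above refine; the grid and loss choices are illustrative.

```python
import numpy as np

def zero_one_cond_regret(eta, f):
    """Conditional zero-one regret of predicting sign(f) when P(Y=+1|x) = eta."""
    risk = eta * (f <= 0) + (1 - eta) * (f > 0)
    return risk - min(eta, 1 - eta)              # Bayes conditional risk is min(eta, 1-eta)

def hinge_cond_regret(eta, f):
    """Conditional hinge regret; the pointwise minimum risk is 2*min(eta, 1-eta)."""
    risk = eta * max(0.0, 1 - f) + (1 - eta) * max(0.0, 1 + f)
    return risk - 2 * min(eta, 1 - eta)

etas = np.linspace(0.0, 1.0, 101)     # conditional label probabilities
scores = np.linspace(-3.0, 3.0, 121)  # candidate scores h(x)
gap = min(hinge_cond_regret(e, f) - zero_one_cond_regret(e, f)
          for e in etas for f in scores)
print("min(hinge regret - zero-one regret) over the grid:", gap)  # expected >= 0
```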

Applications span multi-class classification, estimation under low-noise Tsybakov conditions, and bipartite ranking. Notably, these techniques recover conventional $\mathcal{H}$-consistency as a special case when $\alpha \equiv \beta \equiv 1$.

7. Connections and Implications

  • Conditional regret bounds allow nuanced quantification of algorithmic performance: rates can be much tighter and more adaptive than unconditional minimax bounds.
  • Information-theoretic characterizations via conditional mutual information and conditional Sibson mutual information serve as sharp lower bounds; conditional $\alpha$-NML predictors are saddle-point optimal (Bondaschi et al., 14 Aug 2025).
  • Gap- and variance-conditional bounds in RL precisely capture how local structure can sharply reduce total regret mass; in many practical settings, the conditional total variance is parametrically smaller than unconditional alternatives (Chen et al., 6 Jun 2025, Zanette et al., 2019).
  • Path-wise, Ville-event conditional regret bounds provide a robust bridge between adversarial and stochastic approaches in online learning, including for unbounded data, yielding law-of-the-iterated-logarithm rates (Agrawal et al., 13 Dec 2025).
  • Enhanced H-consistency bounds rigorously separate instance-dependent effects, yielding sharper sample complexity and risk bounds in statistical learning (Mao et al., 18 Jul 2024).

These frameworks suggest richer, more data-adaptive analyses of regret, and connect deeply with modern developments in universal prediction, bandit theory, and reinforcement learning.
