Tradeoffs between Mistakes and ERM Oracle Calls in Online and Transductive Online Learning (2506.00135v1)
Abstract: We study online and transductive online learning when the learner interacts with the concept class only via Empirical Risk Minimization (ERM) or weak consistency oracles on arbitrary instance subsets. This contrasts with standard online models, where the learner knows the entire class. The ERM oracle returns a hypothesis minimizing loss on a given subset, while the weak consistency oracle returns a binary signal indicating whether the subset is realizable by some concept. The learner is evaluated by the number of mistakes and oracle calls. In the standard online setting with ERM access, we prove tight lower bounds in both the realizable and agnostic cases: $\Omega(2^{d_{VC}})$ mistakes and $\Omega(\sqrt{T 2^{d_{LD}}})$ regret, where $T$ is the number of timesteps, $d_{VC}$ is the VC dimension, and $d_{LD}$ is the Littlestone dimension. We further show that existing online learning results with ERM access carry over to the weak consistency setting, incurring an additional $O(T)$ oracle calls. We then consider the transductive online model, where the instance sequence is known in advance but labels are revealed sequentially. For general Littlestone classes, we show that the optimal realizable and agnostic mistake bounds can be achieved using $O(T^{d_{VC}+1})$ weak consistency oracle calls. On the negative side, we show that $\Omega(T)$ weak consistency queries are necessary for transductive online learnability, and that $\Omega(T)$ ERM queries are necessary to avoid exponential dependence on the Littlestone dimension. Finally, for certain concept classes, we reduce the number of oracle calls via randomized algorithms while maintaining similar mistake bounds. In particular, for Thresholds on an unknown ordering, $O(\log T)$ ERM queries suffice; for $k$-Intervals, $O(T^3 2^{2k})$ weak consistency queries suffice.
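To make the oracle model concrete, here is a minimal Python sketch of the two oracle interfaces the abstract describes, under the simplifying assumption of a finite concept class given as an explicit list of hypotheses. The function names (`erm_oracle`, `weak_consistency_oracle`) and the brute-force implementations are illustrative only and not from the paper; the point of the oracle model is precisely that the learner sees nothing but these two interfaces.

```python
# Illustrative sketch (not from the paper): the two oracle interfaces,
# assuming a finite concept class represented as a list of hypotheses,
# each a callable mapping an instance to a {0, 1} label.
from typing import Callable, Iterable, Tuple

Hypothesis = Callable[[int], int]  # instance -> {0, 1} label
Sample = Tuple[int, int]           # (instance, label) pair

def erm_oracle(concept_class: Iterable[Hypothesis],
               subset: Iterable[Sample]) -> Hypothesis:
    """Return a hypothesis minimizing empirical 0-1 loss on `subset`."""
    data = list(subset)
    return min(concept_class,
               key=lambda h: sum(h(x) != y for x, y in data))

def weak_consistency_oracle(concept_class: Iterable[Hypothesis],
                            subset: Iterable[Sample]) -> bool:
    """Return True iff `subset` is realizable, i.e. some concept in the
    class labels every instance in `subset` correctly."""
    data = list(subset)
    return any(all(h(x) == y for x, y in data) for h in concept_class)

# Example: thresholds over {0, ..., 9}, where h_t(x) = 1 iff x >= t.
thresholds = [lambda x, t=t: int(x >= t) for t in range(10)]
print(weak_consistency_oracle(thresholds, [(2, 0), (7, 1)]))  # True: any t in 3..7 fits
print(erm_oracle(thresholds, [(2, 0), (7, 1)])(5))            # 1: ERM picks such a t
```

Note that the weak consistency oracle returns strictly less information than the ERM oracle (a single bit rather than a hypothesis), which is why, per the abstract, simulating ERM-based learners with weak consistency access costs an additional $O(T)$ oracle calls.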