Develop an adaptive FTRL algorithm that achieves the best of both regret bounds without prior smoothness knowledge
Investigate whether there exists a single version of follow-the-regularised-leader with ellipsoidal smoothing that, by adaptively tuning the learning rates (and smoothing parameters), simultaneously achieves the stronger regret bound for smooth losses and the regret bound for general bounded convex losses, without requiring prior knowledge of smoothness.
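To make the notion of "adaptively tuning the learning rates" concrete, here is a minimal, hypothetical sketch of FTRL with a quadratic regulariser on a one-dimensional interval, where the learning rate is set from the observed gradients in an AdaGrad-style fashion rather than from a known horizon or smoothness constant. This is only an illustration of adaptive tuning in the full-information setting; it is not the ellipsoidal-smoothing bandit algorithm the open problem concerns, and the function name and parameters are invented for this example.

```python
import math

def ftrl_adaptive(grads, radius=1.0):
    """FTRL with quadratic regularisation on [-radius, radius].

    The learning rate eta is tuned adaptively from past gradients
    (AdaGrad-style), so no horizon or smoothness knowledge is needed.
    Illustrative sketch only, not the bandit method from the text.
    """
    g_sum = 0.0   # running sum of observed gradients
    g_sq = 0.0    # running sum of squared gradients
    plays = []
    for g in grads:
        # Play based on past gradients, then observe the new one.
        eta = radius / math.sqrt(g_sq) if g_sq > 0 else radius
        x = max(-radius, min(radius, -eta * g_sum))  # project onto interval
        plays.append(x)
        g_sum += g
        g_sq += g * g
    return plays
```

The point of the sketch is that `eta` shrinks automatically as gradient mass accumulates; the open problem asks whether an analogous self-tuning scheme (for both learning rates and smoothing parameters) can interpolate between the smooth and general regret bounds in the bandit setting.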
References
You should wonder if it is possible to obtain the best of both bounds with a single algorithm by adaptively tuning the learning rates. At present this is not known as far as we know.
                — Bandit Convex Optimisation (arXiv:2402.06535, Lattimore, 9 Feb 2024), Chapter "Self-concordant regularisation", Notes, item 6