
Provable Regret Bounds for Deep Online Learning and Control (2110.07807v3)

Published 15 Oct 2021 in cs.LG

Abstract: The theory of deep learning focuses almost exclusively on supervised learning, non-convex optimization using stochastic gradient descent, and overparametrized neural networks. It is a common belief that the optimizer dynamics, network architecture, initialization procedure, and other factors tie together and are all components of its success. This presents theoretical challenges for analyzing state-based and/or online deep learning. Motivated by applications in control, we give a general black-box reduction from deep learning to online convex optimization. This allows us to decouple optimization, regret, and expressiveness, and to derive agnostic online learning guarantees for fully-connected deep neural networks with ReLU activations. We quantify convergence and regret guarantees for any range of parameters and allow any optimization procedure, such as adaptive gradient methods and second-order methods. As an application, we derive provable algorithms for deep control in the online episodic setting.
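
The regret notion in the abstract compares the learner's cumulative loss to the best fixed network in hindsight, Regret_T = Σ_{t=1}^T ℓ_t(f_{θ_t}) − min_θ Σ_{t=1}^T ℓ_t(f_θ). Below is a minimal sketch of this online protocol for a fully-connected ReLU network, assuming a squared loss, a synthetic data stream, and plain online SGD as the optimizer; these choices are illustrative assumptions, not the authors' exact construction, and the paper's black-box reduction permits any online convex optimization procedure in place of the update step. The loop only tracks loss empirically rather than certifying a bound.

```python
# Sketch of the online learning protocol: at each round t the learner
# predicts with a fully-connected ReLU network, suffers a loss on that
# prediction, and takes one gradient step. Squared loss, synthetic data,
# and plain SGD are assumptions for illustration only.
import torch

torch.manual_seed(0)

d, width, T, lr = 8, 64, 200, 0.01

# Fully-connected depth-2 ReLU network (the paper covers general depth).
net = torch.nn.Sequential(
    torch.nn.Linear(d, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(net.parameters(), lr=lr)

cumulative_loss = 0.0
for t in range(T):
    # The environment reveals the round-t example (here: a synthetic stream).
    x_t = torch.randn(1, d)
    y_t = torch.sin(x_t.sum(dim=1, keepdim=True))  # hypothetical target

    # Learner predicts, then observes the loss and applies one update.
    pred = net(x_t)
    loss = (pred - y_t).pow(2).mean()
    cumulative_loss += loss.item()

    opt.zero_grad()
    loss.backward()
    opt.step()

# Regret would compare this cumulative loss to the best network in
# hindsight; the paper bounds that gap, while this loop only measures loss.
print(f"average per-round loss after {T} rounds: {cumulative_loss / T:.4f}")
```

Swapping torch.optim.SGD for an adaptive method such as torch.optim.Adam is consistent with the abstract's claim that the reduction accommodates any optimization procedure, including adaptive gradient and second-order methods.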

Citations (6)
