A Differential Game Theoretic Neural Optimizer for Training Residual Networks

Published 17 Jul 2020 in cs.LG, math.OC, and stat.ML | arXiv:2007.08880v1

Abstract: Connections between Deep Neural Network (DNN) training and optimal control theory have attracted considerable attention as a principled tool for algorithmic design. The Differential Dynamic Programming (DDP) neural optimizer is a recently proposed method along this line. Despite its empirical success, its applicability has been limited to feedforward networks, and whether such a trajectory-optimization-inspired framework can be extended to modern architectures remains unclear. In this work, we derive a generalized DDP optimizer that accepts both residual connections and convolution layers. The resulting optimal control representation admits a game-theoretic perspective, in which training residual networks can be interpreted as cooperative trajectory optimization on state-augmented dynamical systems. This Game Theoretic DDP (GT-DDP) optimizer enjoys the same theoretical connection established in previous work, yet generates a more complex update rule that better leverages the information available during network propagation. Evaluation on image classification datasets (e.g., MNIST and CIFAR-100) shows improved training convergence and reduced variance over existing methods. Our approach highlights the benefit gained from architecture-aware optimization.
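
To make the optimal-control viewpoint concrete, here is a minimal sketch (under assumed notation, not the paper's own) of how training a T-block residual network can be posed as a discrete-time optimal control problem, with x_t the hidden state, u_t the parameters of block t, \phi the terminal training loss, and \ell_t an optional per-layer cost such as weight decay:

\begin{aligned}
\min_{u_0,\dots,u_{T-1}} \;\; & \phi(x_T) + \sum_{t=0}^{T-1} \ell_t(x_t, u_t) \\
\text{s.t.} \;\; & x_{t+1} = x_t + f_t(x_t, u_t), \qquad t = 0,\dots,T-1,
\end{aligned}

where x_0 is the network input and f_t is the residual branch of block t. DDP-style optimizers expand the value function of this problem to second order around the current trajectory and derive layer-wise feedback updates for u_t, rather than applying only back-propagated gradients; per the abstract, GT-DDP extends this treatment to the state-augmented dynamics induced by residual connections.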
