A Unified Framework for Training Neural Networks (1805.09214v1)

Published 23 May 2018 in cs.LG and stat.ML

Abstract: The lack of mathematical tractability of Deep Neural Networks (DNNs) has hindered progress towards a unified convergence analysis of training algorithms in the general setting. We propose a unified optimization framework for training different types of DNNs, and establish its convergence for arbitrary smooth loss, activation, and regularization functions. We show that the framework generalizes well-known first- and second-order training methods, and thus allows us to establish the convergence of these methods for various DNN architectures and learning tasks as special cases of our approach. We discuss applications of the framework to training various DNN architectures (e.g., feed-forward, convolutional, and linear networks) for regression and classification tasks.
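To make "generalizes well-known first- and second-order training methods" concrete, here is a minimal sketch (not the paper's actual framework) of a generic preconditioned update x_{k+1} = x_k - lr * P_k ∇f(x_k) on a smooth toy loss: choosing P_k as the identity recovers gradient descent (first-order), while choosing P_k as the inverse Hessian recovers a Newton step (second-order). The function names `quadratic_loss` and `preconditioned_step` are hypothetical and chosen for illustration.

```python
import numpy as np

def quadratic_loss(x, A, b):
    """Smooth toy loss f(x) = 0.5 x^T A x - b^T x (illustrative stand-in
    for a smooth training objective)."""
    return 0.5 * x @ A @ x - b @ x

def gradient(x, A, b):
    """Gradient of the toy loss: grad f(x) = A x - b."""
    return A @ x - b

def preconditioned_step(x, A, b, P, lr=1.0):
    """Generic update x <- x - lr * P @ grad f(x); the choice of P
    selects the optimization method."""
    return x - lr * P @ gradient(x, A, b)

A = np.array([[3.0, 0.5], [0.5, 2.0]])  # symmetric positive definite Hessian
b = np.array([1.0, -1.0])
x = np.zeros(2)

# First-order special case: P = identity gives a gradient-descent step.
x_gd = preconditioned_step(x, A, b, np.eye(2), lr=0.1)

# Second-order special case: P = A^{-1} (the Hessian inverse) gives a
# Newton step, which on this quadratic reaches the minimizer in one step.
x_newton = preconditioned_step(x, A, b, np.linalg.inv(A), lr=1.0)

print("gradient step:", x_gd)
print("Newton step:  ", x_newton)
```

In the paper's setting, such updates are applied to DNN parameters with arbitrary smooth loss, activation, and regularization functions; the sketch above only illustrates how one update rule can specialize to both families of methods.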

Authors (4)
  1. Hadi Ghauch (22 papers)
  2. Hossein Shokri-Ghadikolaei (23 papers)
  3. Carlo Fischione (97 papers)
  4. Mikael Skoglund (211 papers)

Summary

We haven't generated a summary for this paper yet.