The Computational Complexity of Training ReLU(s) (1810.04207v2)

Published 9 Oct 2018 in cs.CC

Abstract: We consider the computational complexity of training depth-2 neural networks composed of rectified linear units (ReLUs). We show that, even for the case of a single ReLU, finding a set of weights that minimizes the squared error (even approximately) for a given training set is NP-hard. We also show that for a simple network consisting of two ReLUs, the error minimization problem is NP-hard, even in the realizable case. We complement these hardness results by showing that, when the weights and samples belong to the unit ball, one can (agnostically) properly and reliably learn depth-2 ReLUs with $k$ units and error at most $\epsilon$ in time $2^{(k/\epsilon)^{O(1)}} n^{O(1)}$; this extends a previous work of Goel, Kanade, Klivans and Thaler (2017), which provided efficient improper learning algorithms for ReLUs.
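For concreteness, the single-ReLU training problem that the abstract states is NP-hard can be written as the following squared-error minimization over a training set $\{(x_i, y_i)\}_{i=1}^n$; this is a minimal formalization inferred from the abstract, and the inclusion of a bias term $b$ is an assumption rather than a detail given there:

$$\min_{w \in \mathbb{R}^d,\ b \in \mathbb{R}} \ \sum_{i=1}^{n} \bigl(\max(0,\ w^\top x_i + b) - y_i\bigr)^2$$

The hardness result says that even approximately minimizing this objective is NP-hard, while the positive result gives a proper, agnostic learner for depth-2 networks of $k$ such units (with weights and samples in the unit ball) running in time $2^{(k/\epsilon)^{O(1)}} n^{O(1)}$.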

Authors (2)
  1. Pasin Manurangsi (127 papers)
  2. Daniel Reichman (39 papers)
Citations (42)
