Addressing common misinterpretations of KART and UAT in neural network literature (2408.16389v4)
Published 29 Aug 2024 in cs.LG and cs.NE
Abstract: This note addresses the Kolmogorov-Arnold Representation Theorem (KART) and the Universal Approximation Theorem (UAT), focusing on frequent misinterpretations of these results in papers on neural network approximation. Our remarks aim to support a more accurate understanding of KART and UAT among neural network specialists. In addition, we explore the minimal number of neurons required for universal approximation, showing that KART's lower bounds extend to standard multilayer perceptrons, even with smooth activation functions.
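For reference (this is a standard textbook statement, not a quotation from the note itself), KART asserts that every continuous function $f\colon [0,1]^n \to \mathbb{R}$ admits a representation

\[
f(x_1, \ldots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
\]

where the outer functions $\Phi_q$ and inner functions $\phi_{q,p}$ are continuous univariate functions, and the $\phi_{q,p}$ can be chosen independently of $f$. A standard form of UAT states that single-hidden-layer feedforward networks with a continuous nonpolynomial activation are dense in $C(K)$ for any compact $K \subset \mathbb{R}^n$. The note examines how these two statements are often conflated or misquoted in the neural network literature.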