Operator theory, kernels, and Feedforward Neural Networks (2301.01327v2)
Published 3 Jan 2023 in cs.LG, math.FA, and math.OA
Abstract: In this paper we show how specific families of positive definite kernels serve as powerful tools in the analysis of iteration algorithms for multilayer feedforward neural network models. Our focus is on particular kernels that adapt well to learning algorithms for data sets/features which display intrinsic self-similarities under feedforward iterations of scaling.
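The paper's specific kernel families are not reproduced in this abstract. As a minimal, hedged illustration of the central object, a positive definite kernel, the sketch below builds a Gaussian (RBF) Gram matrix on sample points and checks positive semi-definiteness numerically; the kernel choice and parameters here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).

    The Gaussian kernel is a standard example of a positive
    definite kernel: every Gram matrix it produces is symmetric
    positive semi-definite.
    """
    # Pairwise squared Euclidean distances via broadcasting
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

# Sample feature vectors (rows are data points)
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))

K = gaussian_kernel_matrix(X)

# Positive definiteness in practice: the symmetric Gram matrix has
# no eigenvalue below zero (up to floating-point tolerance).
eigvals = np.linalg.eigvalsh(K)
print(bool(eigvals.min() >= -1e-10))
```

The eigenvalue check is the finite-sample form of Mercer's condition: for any choice of points, the induced Gram matrix must be positive semi-definite.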