Activation functions enabling the addition of neurons and layers without altering outcomes (2410.12625v2)
Published 16 Oct 2024 in math.NA and cs.NA
Abstract: In this work, we propose activation functions for neural networks that are refinable and sum to the identity. This new class of activation functions allows the insertion of new layers between existing ones and/or an increase in the number of neurons in a layer, both without altering the network outputs. Our approach is grounded in subdivision theory. The proposed activation functions are constructed from basic limit functions of convergent subdivision schemes. As a showcase of our results, we introduce a family of spline activation functions and provide comprehensive details for their practical implementation.
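The two properties the abstract names, refinability and identity reproduction, can be illustrated with the linear B-spline (hat function), a standard basic limit function of a convergent subdivision scheme. The sketch below is a generic numerical check of these two properties, not the paper's actual construction; the function names and the shift range are assumptions for illustration.

```python
def hat(x):
    """Linear B-spline (hat function), supported on [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

def refined(x):
    """Right-hand side of the refinement equation for the hat function:
    B(x) = 0.5*B(2x+1) + B(2x) + 0.5*B(2x-1)."""
    return 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)

def identity_from_shifts(x, shifts=range(-5, 6)):
    """Integer shifts of the hat function reproduce the identity:
    sum_k k * B(x - k) = x, for x well inside the shift range."""
    return sum(k * hat(x - k) for k in shifts)

# Verify both properties on a grid of sample points in [-3, 3].
xs = [i / 10 for i in range(-30, 31)]
assert all(abs(hat(x) - refined(x)) < 1e-12 for x in xs)
assert all(abs(identity_from_shifts(x) - x) < 1e-12 for x in xs)
```

Refinability lets the function be rewritten on a finer grid (which is what permits adding neurons), while identity reproduction lets an inserted layer act as the identity map, leaving the network output unchanged.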