Activation functions enabling the addition of neurons and layers without altering outcomes (2410.12625v2)

Published 16 Oct 2024 in math.NA and cs.NA

Abstract: In this work, we propose activation functions for neural networks that are refinable and sum to the identity. This new class of activation functions allows the insertion of new layers between existing ones and/or the addition of neurons to a layer, both without altering the network outputs. Our approach is grounded in subdivision theory. The proposed activation functions are constructed from basic limit functions of convergent subdivision schemes. As a showcase of our results, we introduce a family of spline activation functions and provide comprehensive details for their practical implementation.
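
The two properties named in the abstract can be illustrated concretely with the linear B-spline ("hat") function, the basic limit function of a standard convergent subdivision scheme: it is refinable, satisfying the two-scale relation B(x) = 0.5·B(2x) + B(2x−1) + 0.5·B(2x−2), and its integer shifts reproduce the identity, sum_k (k+1)·B(x−k) = x. The sketch below checks both facts numerically; it is an illustrative example of how such properties look in practice, not code or coefficients taken from the paper.

```python
import numpy as np

def hat(x):
    # Linear B-spline ("hat") activation: supported on [0, 2], peak value 1 at x = 1.
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

x = np.linspace(0.0, 3.0, 301)

# Refinability (two-scale relation): B(x) = 0.5*B(2x) + B(2x - 1) + 0.5*B(2x - 2).
# This is what allows a layer using B to be rewritten at a finer "scale".
refined = 0.5 * hat(2 * x) + hat(2 * x - 1) + 0.5 * hat(2 * x - 2)
assert np.allclose(hat(x), refined)

# Identity reproduction: sum_k (k + 1) * B(x - k) = x on [0, 3].
# A layer built from shifted copies of B can therefore act as the identity map,
# so inserting it between existing layers leaves the network output unchanged.
identity = sum((k + 1) * hat(x - k) for k in range(-2, 5))
assert np.allclose(identity, x)

print("refinement and identity-sum properties verified numerically")
```

The same two checks would apply to any candidate activation built from a convergent subdivision scheme; only the refinement mask and the reproduction weights would change.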
