Bias-learning with discontinuous activations (e.g., Heaviside)
Investigate whether the bias-learning framework extends to discontinuous activation functions, in particular the Heaviside step function. Concretely, determine whether neural networks with fixed random weights and learned biases retain their universal approximation guarantees when the activation is discontinuous and therefore fails the continuity assumption in the definition of γ-bias-learning activations.
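To make the object of study concrete, below is a minimal sketch of such a network: all weights are frozen at random initialization and only the biases are optimized, with a Heaviside hidden activation. This is our own PyTorch illustration, not code from the paper. Since the Heaviside function has zero gradient almost everywhere, the sketch uses a boxcar straight-through surrogate gradient purely so that gradient-based bias training runs at all; that trainability workaround is an assumption on our part and is separate from the approximation-theoretic question posed above.

```python
import torch
import torch.nn as nn

class HeavisideSTE(torch.autograd.Function):
    """Heaviside step in the forward pass; boxcar (straight-through)
    surrogate gradient in the backward pass, since the true gradient
    is zero almost everywhere (assumed workaround, not from the paper)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only near the threshold.
        return grad_out * (x.abs() < 1.0).float()

class Heaviside(nn.Module):
    def forward(self, x):
        return HeavisideSTE.apply(x)

class BiasOnlyLinear(nn.Module):
    """Affine map with weights frozen at random initialization;
    the bias is the only trainable parameter."""
    def __init__(self, d_in, d_out):
        super().__init__()
        # Buffers are saved with the module but excluded from parameters().
        self.register_buffer("weight", torch.randn(d_out, d_in) / d_in ** 0.5)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return x @ self.weight.T + self.bias

# Bias-learning network: frozen random weights throughout, learned biases,
# Heaviside hidden activation, frozen linear readout.
net = nn.Sequential(
    BiasOnlyLinear(1, 512), Heaviside(),
    BiasOnlyLinear(512, 1),
)

# Toy check: only the biases receive gradients.
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(3.0 * x)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)  # parameters() = biases only
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```

In this sketch, learning the hidden biases amounts to shifting the thresholds of the random Heaviside units; the open question is whether such threshold shifts alone preserve universal approximation once the continuity assumption on the activation is dropped.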
References
We leave the study of discontinuous functions like the Heaviside to future work.
— Expressivity of Neural Networks with Random Weights and Learned Biases
(arXiv:2407.00957, Williams et al., 1 Jul 2024), Section 2.1 (Feed-forward neural networks), after Definition \ref{def:bias-learning}