Extend the functional LDP to linear-growth activations
Establish a large deviation principle (LDP) for the vector of random covariance kernels (K^{2}_{N_1}, …, K^{L+1}_{N_L}) of a fully connected Gaussian deep neural network, on the space C^{+,s} of continuous, symmetric, positive-definite kernels, under the linear-growth condition σ(x)^2 ≤ A(1 + |x|^2) on the activation. This would remove the current restriction to sub-linear growth and identify the good rate function in the infinite-dimensional setting for linear-growth activations such as ReLU, which satisfies the bound with A = 1.
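For orientation, the random kernels in question admit a simple recursive simulation: conditionally on the layer-ℓ kernel, the N_ℓ pre-activations at the next layer are i.i.d. centered Gaussian vectors with that covariance, and the next kernel is the empirical covariance of their post-activations. The sketch below illustrates this finite-dimensional projection (the kernels evaluated on finitely many inputs) for the ReLU activation; it assumes standard Gaussian weights with 1/N variance scaling, and all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def relu(x):
    # ReLU has exactly linear growth: relu(x)**2 <= 1 + x**2, i.e. A = 1.
    return np.maximum(0.0, x)

def covariance_process(xs, widths, sigma=relu, seed=None):
    """Sample one realization of (K^2_{N_1}, ..., K^{L+1}_{N_L})
    restricted to the finite input set xs (shape (n, d))."""
    rng = np.random.default_rng(seed)
    n, d = xs.shape
    # Input-layer kernel: K^1(x, x') = <x, x'> / d (deterministic).
    K = xs @ xs.T / d
    kernels = []
    for N in widths:
        # Given K, the N neurons' pre-activations on the n inputs are
        # i.i.d. centered Gaussian vectors with covariance matrix K.
        Z = rng.multivariate_normal(np.zeros(n), K, size=N)
        # Next kernel: empirical covariance of the post-activations,
        # K(x_a, x_b) = (1/N) * sum_i sigma(z_i(x_a)) * sigma(z_i(x_b)).
        S = sigma(Z)
        K = S.T @ S / N
        kernels.append(K)
    return kernels

# Three hidden layers of width 500, evaluated on 4 inputs in R^10.
xs = np.random.default_rng(0).standard_normal((4, 10))
Ks = covariance_process(xs, widths=[500, 500, 500], seed=1)
print([K.shape for K in Ks])  # three symmetric PSD 4x4 matrices
```

The open problem concerns large deviations for the full kernel-valued vector on C^{+,s} as the widths grow, not merely for such finite-dimensional projections.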
References
However, in the present work, we do not yet cover the case of activation functions with linear growth in this infinite-dimensional setting—a task we leave for future research.
                — LDP for the covariance process in fully connected neural networks (arXiv:2505.08062, Andreis et al., 12 May 2025), Section 3.4 (Literature review and comparison)