
Extend the functional LDP to linear-growth activations

Establish a large deviation principle for the vector of random covariance functions (K^2_{N_1}, …, K^{L+1}_{N_L}) of fully connected Gaussian deep neural networks on the space C^{+,s} of continuous, symmetric, positive-definite kernels, under the activation growth condition σ(x)^2 ≤ A(1 + |x|^2) (i.e., linear growth). This would remove the current restriction to sub-linear growth and yield the good rate function in the infinite-dimensional setting for linear-growth activations such as ReLU.
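
For orientation, a minimal sketch of the object involved, assuming the standard Gaussian setup (the recursion below and the symbols C_b, C_W, f^ℓ_i are not spelled out in the excerpt and are supplied here for illustration): conditionally on layer ℓ, the units f^{ℓ+1}_i are i.i.d. centered Gaussians with covariance K^{ℓ+1}_{N_ℓ}, where

    K^{ℓ+1}_{N_ℓ}(x, x') = C_b + (C_W / N_ℓ) ∑_{i=1}^{N_ℓ} σ(f^ℓ_i(x)) σ(f^ℓ_i(x')),   ℓ = 1, …, L.

The sought functional LDP would state, informally, that as the widths N_1, …, N_L grow proportionally to a common parameter N, the law of (K^2_{N_1}, …, K^{L+1}_{N_L}) on C^{+,s} satisfies

    P((K^2_{N_1}, …, K^{L+1}_{N_L}) ≈ (k^2, …, k^{L+1})) ≈ exp(−N · I(k^2, …, k^{L+1}))

for a good rate function I, now under σ(x)^2 ≤ A(1 + |x|^2) rather than sub-linear growth.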


Background

The paper proves functional large deviation principles (LDPs) for the covariance process of fully connected Gaussian deep neural networks, both in the operator space L_1^{+,s} and in the kernel space C^{+,s}. These results currently require a polynomial growth condition on the activation function with exponent r < 2, excluding linear-growth activations such as ReLU.
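
Concretely, the gap between the covered and open regimes can be written out (A > 0 and r are the constants from the growth conditions; the ReLU check is an elementary computation, not a claim from the paper):

    covered:  σ(x)^2 ≤ A(1 + |x|^r) for some r < 2   (sub-linear growth),
    open:     σ(x)^2 ≤ A(1 + |x|^2)                  (linear growth).

ReLU, σ(x) = max(x, 0), satisfies σ(x)^2 ≤ x^2 ≤ 1 + |x|^2, so A = 1 suffices in the linear-growth condition, while no choice of A and r < 2 can work, since σ(x)^2 / (1 + |x|^r) = x^2 / (1 + x^r) → ∞ as x → ∞.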

Recent finite-dimensional results have treated linear-growth activations, but extending these to the infinite-dimensional functional setting considered here requires additional technical work. The authors explicitly note that covering the linear-growth case in their functional framework is left for future research.

References

"However, in the present work, we do not yet cover the case of activation functions with linear growth in this infinite-dimensional setting—a task we leave for future research."

LDP for the covariance process in fully connected neural networks (Andreis et al., arXiv:2505.08062, 12 May 2025), Section 3.4 (Literature review and comparison).