
On the Generalization Properties of Learning the Random Feature Models with Learnable Activation Functions (2510.15327v1)

Published 17 Oct 2025 in cs.LG

Abstract: This paper studies the generalization properties of a recently proposed kernel method, the Random Feature models with Learnable Activation Functions (RFLAF). By applying a data-dependent sampling scheme for generating features, we provide the sharpest bounds to date on the number of features required to learn RFLAF in both regression and classification tasks. We present a unified theorem describing the complexity of the feature number $s$, and discuss its consequences under the plain sampling scheme and the data-dependent leverage-weighted scheme. Through weighted sampling, the bound on $s$ in the MSE loss case improves from $\Omega(1/\epsilon^2)$ to $\tilde{\Omega}((1/\epsilon)^{1/t})$ in general ($t \geq 1$), and even to $\Omega(1)$ when the Gram matrix has finite rank. For the Lipschitz loss case, the bound improves from $\Omega(1/\epsilon^2)$ to $\tilde{\Omega}((1/\epsilon^2)^{1/t})$. To learn the weighted RFLAF, we also propose an algorithm that finds an approximate kernel and then applies leverage-weighted sampling. Empirical results show that the weighted RFLAF matches the performance of the plainly sampled RFLAF with significantly fewer features, validating our theory and the effectiveness of the method.
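
To make the weighted-sampling pipeline concrete, below is a minimal sketch of how an RFLAF-style model and the approximate-kernel-then-leverage-sampling step could look. The Gaussian RBF expansion of the learnable activation, the ridge parameter `lam`, and the helper names (`RFLAF`, `ridge_leverage_weights`, `resample_features`) are illustrative assumptions rather than the paper's exact construction.

```python
# A minimal, illustrative sketch of an RFLAF-style model and leverage-weighted
# feature resampling. The RBF parameterization of the activation, the ridge
# parameter `lam`, and all helper names are assumptions for illustration,
# not the paper's exact construction.
import torch


class RFLAF(torch.nn.Module):
    """f(x) = sum_i a_i * phi(w_i^T x), with a learnable activation phi."""

    def __init__(self, d, s, n_basis=16, z_range=(-3.0, 3.0)):
        super().__init__()
        # Frozen random feature directions w_i (plain sampling scheme).
        self.W = torch.nn.Parameter(torch.randn(d, s), requires_grad=False)
        # Trainable outer weights a_i.
        self.a = torch.nn.Parameter(torch.randn(s) / s ** 0.5)
        # Learnable activation phi(z) = sum_j c_j * exp(-(z - t_j)^2 / (2 h^2)),
        # a linear combination of fixed Gaussian radial basis functions.
        self.c = torch.nn.Parameter(torch.randn(n_basis) / n_basis ** 0.5)
        self.register_buffer("centers", torch.linspace(*z_range, n_basis))
        self.h = (z_range[1] - z_range[0]) / n_basis

    def phi(self, z):
        basis = torch.exp(-0.5 * ((z.unsqueeze(-1) - self.centers) / self.h) ** 2)
        return basis @ self.c

    def forward(self, x):
        return self.phi(x @ self.W) @ self.a


def ridge_leverage_weights(Phi, lam=1e-3):
    """Approximate ridge leverage scores of the feature columns of Phi (n x s):
    the diagonal of (G + n*lam*I)^{-1} G with G = Phi^T Phi, normalized into
    a sampling distribution over the s features."""
    n, s = Phi.shape
    G = Phi.T @ Phi
    scores = torch.diagonal(torch.linalg.solve(G + n * lam * torch.eye(s), G))
    return scores.clamp(min=0.0) / scores.clamp(min=0.0).sum()


def resample_features(model, X, s_new, lam=1e-3):
    """Draw s_new feature indices with probability proportional to their
    approximate leverage scores; return the resampled directions together
    with the 1/sqrt(s_new * q_i) rescaling that keeps the implicit kernel
    estimate approximately unbiased."""
    with torch.no_grad():
        Phi = model.phi(X @ model.W)          # plain features on the data
        q = ridge_leverage_weights(Phi, lam)
        idx = torch.multinomial(q, s_new, replacement=True)
        return model.W[:, idx], 1.0 / torch.sqrt(s_new * q[idx])
```

In this reading, one would first train a plain RFLAF to obtain an approximate kernel, then call `resample_features` to build a smaller weighted model whose frozen feature matrix is the resampled `W`, with each feature's output multiplied by its returned scale.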
