Optimal test loss at the interpolation threshold for grid-extended KANs
Prove that for Kolmogorov–Arnold Networks trained via grid extension on supervised function-fitting tasks, the test loss is minimized near the interpolation threshold where the number of trainable parameters equals the number of training samples.
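To make the threshold concrete, here is a minimal sketch of how one might locate it numerically. It assumes a simplified parameter count in which each edge of a KAN with grid size G and spline order k carries (G + k) trainable B-spline coefficients (consistent with the O(N²L(G + k)) scaling stated in the KAN paper, ignoring per-edge extras such as base-function weights); the widths and sample count below are hypothetical.

```python
def kan_param_count(widths, G, k=3):
    """Approximate trainable-parameter count of a KAN with layer
    widths `widths`, grid size G, and spline order k, assuming
    (G + k) spline coefficients per edge."""
    edges = sum(a * b for a, b in zip(widths, widths[1:]))
    return edges * (G + k)

def interpolation_grid(widths, n_samples, k=3):
    """Smallest grid size G at which the parameter count reaches
    the number of training samples (the interpolation threshold)."""
    G = 1
    while kan_param_count(widths, G, k) < n_samples:
        G += 1
    return G

# Hypothetical example: a [2, 5, 1] KAN fit to 1000 training samples.
widths = [2, 5, 1]            # 15 edges in total
n_samples = 1000
G_star = interpolation_grid(widths, n_samples)
print(G_star, kan_param_count(widths, G_star))  # → 64 1005
```

Under grid extension, G is increased stepwise during training, so the conjecture predicts that the test loss bottoms out around G_star and degrades (or plateaus) as the grid is refined further.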
References
We conjecture that the optimal test loss is achieved at the interpolation threshold when the number of parameters match the number of data points.
                — KAN: Kolmogorov–Arnold Networks (Liu et al., arXiv:2404.19756, 30 Apr 2024), Subsection 2.4, "For accuracy: Grid Extension" (toy example: staircase-like loss curves)