Causal relationship between KAG and neural collapse

Determine whether Kolmogorov-Arnold geometry (KAG) in the early layers of neural networks enables neural collapse in later layers or, conversely, whether the formation of collapsed final-layer representations induces KAG in earlier layers, by analyzing the joint evolution of Jacobian-based geometric structure across all layers during training.

Background

Neural collapse is a phenomenon observed late in training in which class means align into a symmetric simplex (a simplex equiangular tight frame) and within-class variability diminishes. The paper studies Kolmogorov-Arnold geometry (KAG) via the Jacobians of hidden layers in shallow MLPs trained on MNIST and finds that KAG emerges robustly and is scale-agnostic. The authors raise the possibility that KAG and neural collapse are connected and explicitly state that clarifying this relationship remains open.
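Quantifying the "within-class variability diminishes" part is standard: the usual NC1 statistic from the neural-collapse literature measures within-class scatter relative to between-class scatter and tends toward zero as collapse progresses. The sketch below is a minimal, hypothetical helper for that statistic; it is not a metric defined in this paper, and the function name and tensor layout are assumptions.

```python
import torch

def nc1_within_class_collapse(features: torch.Tensor, labels: torch.Tensor) -> float:
    """NC1 statistic: tr(Sigma_W @ pinv(Sigma_B)) / C.

    Hypothetical helper following the standard neural-collapse definition,
    not a quantity from the paper itself.
    features: (N, d) penultimate-layer activations; labels: (N,) class ids.
    Values near zero indicate collapsed within-class variability.
    """
    classes = labels.unique()
    C = len(classes)
    d = features.shape[1]
    global_mean = features.mean(dim=0)
    sigma_w = torch.zeros(d, d, dtype=features.dtype, device=features.device)
    sigma_b = torch.zeros_like(sigma_w)
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(dim=0)
        centered = fc - mu_c                      # within-class deviations
        sigma_w += centered.T @ centered / len(features)
        diff = (mu_c - global_mean).unsqueeze(1)  # between-class deviation
        sigma_b += (diff @ diff.T) / C
    return (sigma_w @ torch.linalg.pinv(sigma_b)).trace().item() / C
```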

The open question concerns the direction and mechanism of the link between the emergence of KAG in early layers and neural collapse in later layers. Resolving it requires careful layer-wise, temporal analysis of the geometry during training, to distinguish causal from merely correlational pathways; a sketch of such a per-layer probe follows.
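One way to operationalize the layer-wise temporal analysis is to log, at each training epoch, a per-layer summary of Jacobian structure alongside the NC1 statistic above. The sketch below is an assumption-laden proxy, not the paper's exact KAG statistic: it summarizes each layer's input-output Jacobian by the fraction of squared singular values captured by the top-k directions, as a rough indicator of low-dimensional geometric structure. The argument `model_layers` is assumed to be an ordered list of modules whose composition is the full network.

```python
import torch
from torch import nn
from torch.func import jacrev

def layer_jacobian_concentration(model_layers, x, k=10):
    """Per-layer Jacobian spectral concentration at a single input x.

    A minimal sketch: for each prefix of the network, compute the Jacobian
    of the prefix output with respect to the (flattened) input and report
    the fraction of squared singular values in the top-k directions.
    This is a proxy for structured geometry, not the paper's KAG measure.
    """
    stats = []
    for depth in range(1, len(model_layers) + 1):
        prefix = nn.Sequential(*model_layers[:depth])  # input -> layer `depth`
        J = jacrev(lambda z: prefix(z))(x)             # shape (out_dim, in_dim)
        energy = torch.linalg.svdvals(J).pow(2)
        stats.append((energy[:k].sum() / energy.sum()).item())
    return stats  # one concentration value per layer
```

Tracking these per-layer concentrations together with NC1 over epochs yields the kind of lead/lag evidence the question calls for, e.g., whether early-layer Jacobian structure stabilizes before or only after final-layer collapse; intervention-style experiments (say, freezing early layers once KAG-like structure appears) could then probe causality rather than mere timing.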

References

It's tempting to speculate that these phenomena might be connected: that KAG in early layers provides the structured substrate that enables neural collapse in later layers, or, conversely, that the pull toward collapsed final representations induces geometric organization earlier in the network. But these remain open questions requiring careful analysis of the joint evolution of geometry across all layers during training.

Scale-Agnostic Kolmogorov-Arnold Geometry in Neural Networks (arXiv:2511.21626, Vanherreweghe et al., 26 Nov 2025), Section 4 (Discussion).