Scale-invariance of star-convex basin volume in higher-dimensional neural networks

Establish whether star-convex basin volume estimates of minima are invariant under layer-wise rescaling (scale invariance) in higher-dimensional neural network parameter spaces, beyond the demonstrated two-parameter toy model; either prove invariance generally or construct explicit counterexamples showing dependence on scale.
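Here, "layer-wise rescaling" refers to the positive-homogeneity symmetry of ReLU networks: scaling one layer's weights up by a factor and the next layer's down by the same factor leaves the network function, and hence the loss, unchanged. A minimal sketch of this symmetry (the layer shapes and the `forward` helper are illustrative, not taken from the paper):

```python
import numpy as np

def forward(x, W1, W2):
    # Hypothetical two-layer ReLU network f(x) = W2 @ relu(W1 @ x).
    return W2 @ np.maximum(W1 @ x, 0.0)

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((3, 8))
x, alpha = rng.standard_normal(4), 5.0

# ReLU positive homogeneity: relu(a * z) = a * relu(z) for a > 0, so
# (W1, W2) -> (a * W1, W2 / a) realizes exactly the same function.
assert np.allclose(forward(x, alpha * W1, W2 / alpha), forward(x, W1, W2))
```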

Background

The paper highlights scale invariance as a major challenge for flatness measures, noting that many such measures can be manipulated by reparameterization. In a two-parameter toy model, the authors show that the star-convex basin volume is invariant under rescaling, suggesting it may be more robust than Hessian-based sharpness measures.
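Since this excerpt does not reproduce the paper's toy model, the sketch below uses an illustrative stand-in to make the claim concrete: a two-parameter deep linear model with loss L(w1, w2) = (w1*w2 - 1)^2, whose minima form the hyperbola w1*w2 = 1 and whose loss is exactly invariant under (w1, w2) -> (alpha*w1, w2/alpha). The star-convex basin volume about a minimum is V = (1/2) * integral of r(theta)^2 over directions, where r(theta) is the largest radius such that the entire ray segment from the minimum stays below the loss threshold; the estimator samples directions uniformly and finds r(theta) by bisection.

```python
import numpy as np

def loss(w1, w2):
    # Illustrative stand-in for the paper's toy model: f(x) = w2 * w1 * x
    # fit to f(1) = 1, so L = (w1 * w2 - 1)^2 and minima satisfy w1 * w2 = 1.
    return (w1 * w2 - 1.0) ** 2

def ray_radius(center, direction, eps, r_max=10.0, n_grid=256, n_bisect=40):
    # Largest r such that loss <= eps along the WHOLE segment
    # center + t * direction, 0 <= t <= r (the star-convexity requirement).
    def inside(r):
        ts = np.linspace(0.0, r, n_grid)
        pts = center + ts[:, None] * direction
        return bool(np.all(loss(pts[:, 0], pts[:, 1]) <= eps))
    if inside(r_max):
        return r_max
    lo, hi = 0.0, r_max
    for _ in range(n_bisect):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(mid) else (lo, mid)
    return lo

def star_volume(center, eps=0.1, n_rays=4000, seed=0):
    # Star-convex area about `center`: V = (1/2) * integral r(theta)^2 dtheta,
    # estimated by Monte Carlo over uniformly sampled ray directions.
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    radii = np.array([ray_radius(center, d, eps) for d in dirs])
    return np.pi / n_rays * np.sum(radii ** 2)

alpha = 3.0
v_orig = star_volume(np.array([1.0, 1.0]))
v_resc = star_volume(np.array([alpha, 1.0 / alpha]))
print(f"volume at (1, 1):   {v_orig:.4f}")
print(f"volume at (a, 1/a): {v_resc:.4f}")  # agrees up to Monte Carlo error
```

In two parameters the agreement is expected: the rescaling diag(alpha, 1/alpha) has unit determinant, so it carries the star-convex basin of one minimum onto that of the rescaled minimum without changing its area.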

However, the authors explicitly state that they are uncertain whether this invariance holds in higher dimensions or in more general settings, leaving open how basin volume behaves under scale transformations in realistic neural networks.
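One reason the higher-dimensional case is not automatic, stated here as an observation rather than as the paper's argument: on a two-layer network with p1 and p2 parameters in the respective layers, the rescaling (W1, W2) -> (alpha*W1, W2/alpha) is a linear map on parameter space with Jacobian determinant alpha^(p1 - p2). When the loss is exactly invariant under this map, it carries the basin of a minimum onto the basin of the rescaled minimum, so plain Lebesgue basin volume would scale by alpha^(p1 - p2). That factor equals 1 in the two-parameter toy model (p1 = p2 = 1) but generally not for realistic layer shapes, and whether the paper's star-convex estimate inherits or avoids it is precisely what is open. A sketch of the bookkeeping, with hypothetical layer widths:

```python
# Jacobian determinant of the layer-wise rescaling (W1, W2) -> (a*W1, W2/a),
# viewed as a linear map on parameter space: each of the p1 entries of W1 is
# scaled by a and each of the p2 entries of W2 by 1/a, so det = a**(p1 - p2).
d_in, n_hidden, d_out = 784, 256, 10  # hypothetical layer widths
p1 = d_in * n_hidden                  # parameters in W1
p2 = n_hidden * d_out                 # parameters in W2
print(p1 - p2)  # 198144: far from zero, so the map is far from volume-preserving
```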

References

"It is unclear if this holds in higher dimensions or more generally, but this toy model is sufficient to stop eigenvalue-based sharpness metrics."

Fan et al., "Sharp Minima Can Generalize: A Loss Landscape Perspective On Data," arXiv:2511.04808, 6 Nov 2025, Appendix A (Analytical Example of Basin Volume Scale Invariance).