From Low Intrinsic Dimensionality to Non-Vacuous Generalization Bounds in Deep Multi-Task Learning (2501.19067v2)
Abstract: Deep learning methods are known to generalize well from training to future data, even in an overparametrized regime where they could easily overfit. One explanation for this phenomenon is that even when their ambient dimensionality (i.e., the number of parameters) is large, the models' intrinsic dimensionality is small; specifically, their learning takes place in a small subspace of all possible weight configurations. In this work, we confirm this phenomenon in the setting of deep multi-task learning. We introduce a method to parametrize multi-task networks directly in the low-dimensional space, facilitated by the use of random expansion techniques. We then show that high-accuracy multi-task solutions can be found with much smaller intrinsic dimensionality (fewer free parameters) than what single-task learning requires. Subsequently, we show that the low-dimensional representations, in combination with weight compression and PAC-Bayesian reasoning, lead to the first non-vacuous generalization bounds for deep multi-task networks.
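To make the "parametrize directly in the low-dimensional space" idea concrete, below is a minimal sketch of a random-expansion reparametrization, assuming it follows the standard intrinsic-dimension construction (train only a small vector z, with the full weights given by a frozen random initialization plus a frozen random projection of z). The class name `SubspaceLinear`, the layer sizes, and the choice of a single shared z are illustrative assumptions, not the paper's actual implementation or its multi-task sharing scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceLinear(nn.Module):
    """Linear layer whose weights live in a random low-dimensional subspace.

    Instead of optimizing the full weight matrix (ambient dimension D), we fix
    a random initialization w0 and a random projection P, and only optimize a
    small vector z of intrinsic dimension d:  W = reshape(w0 + P @ z).
    """

    def __init__(self, in_features, out_features, intrinsic_dim, z):
        super().__init__()
        D = in_features * out_features
        # Frozen random anchors: initialization, expansion matrix, and bias.
        self.register_buffer("w0", torch.randn(D) / in_features ** 0.5)
        self.register_buffer("P", torch.randn(D, intrinsic_dim) / D ** 0.5)
        self.register_buffer("b", torch.zeros(out_features))
        self.z = z  # shared trainable low-dimensional parameter vector
        self.shape = (out_features, in_features)

    def forward(self, x):
        # Expand the d-dimensional z back into the ambient weight space.
        w = (self.w0 + self.P @ self.z).view(self.shape)
        return F.linear(x, w, self.b)

# Hypothetical usage: a small network whose only free parameters are the
# d entries of z, regardless of the ambient parameter count.
d = 64
z = nn.Parameter(torch.zeros(d))
net = nn.Sequential(SubspaceLinear(784, 256, d, z),
                    nn.ReLU(),
                    SubspaceLinear(256, 10, d, z))
optimizer = torch.optim.Adam([z], lr=1e-3)
```

In this sketch, the generalization-bound argument would then reason about the d trainable entries of z (e.g., via compression and a PAC-Bayesian prior over the subspace) rather than the full ambient weight vector; how the paper extends this to multiple tasks is not reproduced here.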