
From Low Intrinsic Dimensionality to Non-Vacuous Generalization Bounds in Deep Multi-Task Learning (2501.19067v2)

Published 31 Jan 2025 in cs.LG and stat.ML

Abstract: Deep learning methods are known to generalize well from training to future data, even in an overparametrized regime where they could easily overfit. One explanation for this phenomenon is that even when their ambient dimensionality (i.e., the number of parameters) is large, the models' intrinsic dimensionality is small; specifically, their learning takes place in a small subspace of all possible weight configurations. In this work, we confirm this phenomenon in the setting of deep multi-task learning. We introduce a method to parametrize multi-task networks directly in a low-dimensional space, facilitated by the use of random expansion techniques. We then show that high-accuracy multi-task solutions can be found with much smaller intrinsic dimensionality (fewer free parameters) than single-task learning requires. Subsequently, we show that the low-dimensional representations, in combination with weight compression and PAC-Bayesian reasoning, lead to the first non-vacuous generalization bounds for deep multi-task networks.
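
The abstract's central construction, training only a low-dimensional coordinate vector that a fixed random matrix expands into the full weight space, can be illustrated with a minimal PyTorch sketch of the generic random-subspace ("intrinsic dimension") parametrization theta = theta_0 + P z. The class name `LowDimWrapper`, the dense Gaussian expansion matrix, and the 1/sqrt(d) scaling below are illustrative assumptions; the paper's actual multi-task parametrization and memory-efficient random expansions are not reproduced here.

```python
# Minimal sketch (assumption: PyTorch >= 2.0 for torch.func.functional_call).
# Illustrates the generic low-intrinsic-dimension idea, not the paper's exact method.
import torch
import torch.nn as nn
from torch.func import functional_call


class LowDimWrapper(nn.Module):
    """Train only a d-dimensional vector z; full weights are theta_0 + P @ z."""

    def __init__(self, base: nn.Module, intrinsic_dim: int):
        super().__init__()
        self.base = base
        # Frozen initialization theta_0, kept per parameter name.
        self.theta0 = {name: p.detach().clone() for name, p in base.named_parameters()}
        for p in base.parameters():
            p.requires_grad_(False)
        total = sum(p.numel() for p in self.theta0.values())
        # Fixed random expansion matrix P: R^d -> R^D (dense here for simplicity;
        # structured/sparse random expansions would be used for large networks).
        self.register_buffer("P", torch.randn(total, intrinsic_dim) / intrinsic_dim ** 0.5)
        # The only trainable parameters: the low-dimensional coordinates z.
        self.z = nn.Parameter(torch.zeros(intrinsic_dim))

    def forward(self, x):
        delta = self.P @ self.z  # expand z into the full weight space
        params, offset = {}, 0
        for name, p0 in self.theta0.items():
            n = p0.numel()
            params[name] = p0 + delta[offset:offset + n].view_as(p0)
            offset += n
        # Run the base network with the reconstructed weights theta_0 + P z,
        # so gradients flow back to z only.
        return functional_call(self.base, params, (x,))


# Example usage: an MLP with thousands of weights, but only 200 free parameters.
net = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
model = LowDimWrapper(net, intrinsic_dim=200)
optimizer = torch.optim.Adam([model.z], lr=1e-2)
```

In a multi-task variant one might share most coordinates of z across tasks and keep a small task-specific component, which is roughly the regime in which the abstract reports finding high-accuracy solutions with fewer free parameters than single-task learning needs.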
