Effect of source–target similarity on transfer learning efficiency

Determine how the similarity between source and target tasks influences transfer learning efficiency. In particular, characterize the quantitative relationship between task relatedness and the improvement in generalization performance on the target task obtained by leveraging information from the source task.

Background

The paper develops a single-instance Franz–Parisi formalism in the proportional limit to analyze transfer learning in fully connected neural networks. Within this framework, the authors introduce a renormalized source–target kernel that quantifies task relatedness and show empirically and analytically that transfer effectiveness depends on data structure and the degree of source–target correlation.
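For intuition, one standard scalar proxy for task relatedness (not the paper's renormalized kernel, whose definition arises from the Franz–Parisi computation itself) is the kernel alignment between the source and target Gram matrices K_s and K_t evaluated on a common set of inputs:

    A(K_s, K_t) = \frac{\langle K_s, K_t \rangle_F}{\|K_s\|_F \, \|K_t\|_F},

where \langle \cdot, \cdot \rangle_F is the Frobenius inner product. A value of 1 indicates identical kernels up to scale, while values near 0 indicate unrelated tasks; the open question is how transfer gains scale with such a similarity measure.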

Despite these advances, the authors explicitly state that fundamental questions remain unresolved, including the precise way in which source–target similarity governs transfer learning efficiency. Clarifying this dependence would establish principled criteria for when transfer is beneficial and how to design source–target pairs to maximize performance gains.
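To make the question concrete, the following minimal sketch (Python with NumPy) simulates transfer between two linear teacher tasks whose teacher vectors have a tunable correlation rho, with transfer modeled as ridge regression on the target shrunk toward the source solution. The teacher–student setup, the parameter rho, and the prior-based fine-tuning are illustrative assumptions, not the paper's Franz–Parisi analysis; the sketch only exhibits the kind of similarity–efficiency curve the question asks to characterize.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_src, n_tgt, n_test, lam = 100, 2000, 50, 2000, 1.0
    noise = 0.1

    def make_task(w, n):
        # Gaussian inputs scaled so labels have O(1) variance.
        X = rng.standard_normal((n, d)) / np.sqrt(d)
        return X, X @ w + noise * rng.standard_normal(n)

    def ridge(X, y, lam, prior=None):
        # Minimize ||Xw - y||^2 + lam * ||w - prior||^2 (prior defaults to 0).
        b = np.zeros(X.shape[1]) if prior is None else prior
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                               X.T @ y + lam * b)

    w_src = rng.standard_normal(d)
    for rho in (0.0, 0.5, 0.9, 1.0):
        # Target teacher correlated with the source teacher at level rho.
        w_tgt = rho * w_src + np.sqrt(1.0 - rho**2) * rng.standard_normal(d)
        Xs, ys = make_task(w_src, n_src)
        Xt, yt = make_task(w_tgt, n_tgt)
        Xte, yte = make_task(w_tgt, n_test)
        w_pre = ridge(Xs, ys, lam)               # learned on source
        w_scr = ridge(Xt, yt, lam)               # target from scratch
        w_tr = ridge(Xt, yt, lam, prior=w_pre)   # target with source prior
        mse = lambda w: float(np.mean((Xte @ w - yte) ** 2))
        print(f"rho={rho:.1f}  scratch={mse(w_scr):.3f}  transfer={mse(w_tr):.3f}")

In this toy setting the transfer advantage grows monotonically with rho and vanishes (or turns slightly negative) at rho = 0; deriving the analogous dependence for deep networks in the proportional limit is precisely the unresolved question.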

References

Despite being among the dominating paradigms in deep learning applications, TL remains poorly understood from a theoretical perspective, with several fundamental questions still open. For instance, (i) how does the source-target similarity affect TL efficiency?

Statistical mechanics of transfer learning in fully-connected networks in the proportional limit (arXiv:2407.07168, Ingrosso et al., 9 Jul 2024), Introduction (Section 1)