A Survey on Negative Transfer (2009.00909v4)

Published 2 Sep 2020 in cs.LG, cs.CV, and stat.ML

Abstract: Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate the learning in a target domain. It is particularly useful when the target domain has very few or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., leveraging source domain data/knowledge undesirably reduces the learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches have been proposed in the literature to handle it. However, there does not exist a systematic survey on the formulation of NT, the factors leading to NT, and the algorithms that mitigate NT. This paper fills this gap, by first introducing the definition of NT and its factors, then reviewing about fifty representative approaches for overcoming NT, according to four categories: secure transfer, domain similarity estimation, distant transfer, and NT mitigation. NT in related fields, e.g., multi-task learning, lifelong learning, and adversarial attacks, are also discussed.

Citations (178)

Summary

  • The paper provides a comprehensive analysis of negative transfer, categorizing methodologies to mitigate its adverse effects in transfer learning.
  • It details secure transfer, domain similarity estimation, and distant transfer approaches to safeguard model performance.
  • The paper underscores practical implications and future research directions for enhancing transfer learning across diverse domains.

A Comprehensive Survey on Negative Transfer

The paper "A Survey on Negative Transfer" presents a thorough examination of the phenomenon known as negative transfer (NT) within the domain of transfer learning (TL). The authors delve into various aspects of NT, a situation where leveraging data or knowledge from one or more source domains undesirably reduces the learning performance in a target domain, and discuss approaches to counteract this issue. The paper is structured to provide a foundational understanding of NT, categorize methodologies for its mitigation, and explore related fields where NT might manifest.

Overview

Transfer learning is a widely adopted machine learning paradigm aimed at improving model performance by transferring knowledge from related domains. However, the effectiveness of transfer learning is contingent upon certain assumptions, such as the relatedness of learning tasks and similar data distributions across domains. Violation of these assumptions often results in NT, adversely affecting target domain learning performance. The paper identifies NT as a critical and challenging problem that warrants systematic investigation. The authors categorize strategies to address NT into the following areas: secure transfer, domain similarity estimation, distant transfer, and NT mitigation.

Secure Transfer

Secure transfer approaches are designed to avoid NT by construction, regardless of how similar the source and target domains are. The paper reviews several methods that achieve this through theoretical guarantees built into their objective functions. Key approaches in this category, such as adaptive learning and performance-gain-based strategies, ensure that including source-domain data does not degrade the target-domain model's performance.
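
A minimal sketch of the performance-gain idea behind secure transfer is given below; it is an illustration of the general principle, not a specific algorithm from the survey. A target-only baseline and a source-augmented model are both evaluated on held-out target data, and the source data is discarded whenever it yields no gain. The function name gain_guarded_transfer and the use of scikit-learn classifiers are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def gain_guarded_transfer(Xs, ys, Xt, yt, seed=0):
    """Keep source data only if it does not hurt held-out target accuracy."""
    Xt_tr, Xt_val, yt_tr, yt_val = train_test_split(
        Xt, yt, test_size=0.3, random_state=seed)

    # Target-only baseline.
    base = LogisticRegression(max_iter=1000).fit(Xt_tr, yt_tr)
    base_acc = accuracy_score(yt_val, base.predict(Xt_val))

    # Naive transfer: pool source and target training data.
    pooled = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xs, Xt_tr]), np.concatenate([ys, yt_tr]))
    pooled_acc = accuracy_score(yt_val, pooled.predict(Xt_val))

    # Fall back to the target-only model when transfer shows no gain,
    # which is the safeguard against negative transfer.
    return (pooled, pooled_acc) if pooled_acc >= base_acc else (base, base_acc)
```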

Domain Similarity Estimation

Estimating domain similarity, or transferability, plays a vital role in applying transfer learning effectively and in mitigating NT. The authors group these estimation techniques into feature statistics-based methods, test performance-based evaluations, and fine-tuning-based assessments. Measures such as Maximum Mean Discrepancy (MMD) and KL divergence provide a quantitative basis for judging domain relatedness, which in turn can guide the choice of transfer strategy.
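
As an illustration of the feature statistics-based category, the sketch below estimates the squared MMD between source and target samples with an RBF kernel; a small value suggests similar feature distributions. The helper names and the choice of the biased estimator are assumptions made for brevity, not the survey's own implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    # Pairwise squared Euclidean distances, then a Gaussian kernel.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def mmd2_rbf(Xs, Xt, gamma=1.0):
    """Biased estimate of squared MMD between source and target samples."""
    k_ss = rbf_kernel(Xs, Xs, gamma).mean()
    k_tt = rbf_kernel(Xt, Xt, gamma).mean()
    k_st = rbf_kernel(Xs, Xt, gamma).mean()
    return k_ss + k_tt - 2 * k_st  # small value => similar domains
```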

Distant Transfer

Distant transfer, also called transitive transfer, uses intermediate domains to bridge large discrepancies between the source and target domains. By stepping through intermediate domains, this approach aims to avoid NT when direct transfer between the source and target is infeasible. The paper illustrates how this methodology can be particularly useful when target training data is scarce or when the two domains share little apparent similarity.
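
A hedged sketch of the transitive idea, assuming a PyTorch classifier and a data loader per domain: the model is fine-tuned through a chain of intermediate domains instead of jumping straight from a distant source to the target. The function names and the plain training loop are illustrative assumptions, not the specific distant-transfer algorithms reviewed in the paper.

```python
import copy
import torch
import torch.nn as nn

def finetune(model, loader, epochs=3, lr=1e-3):
    """Run a few epochs of supervised fine-tuning on one domain's data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def transitive_transfer(model, source_loader, intermediate_loaders, target_loader):
    """Chain fine-tuning through intermediate (bridge) domains, ordered
    roughly from most source-like to most target-like."""
    model = finetune(copy.deepcopy(model), source_loader)
    for loader in intermediate_loaders:
        model = finetune(model, loader)
    return finetune(model, target_loader)
```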

NT Mitigation

The authors present a variety of strategies tailored to mitigate NT, focusing on improving the transferability of data, models, and target predictions. Methods such as feature enhancement, model transferability improvement, and selective pseudo-labeling are discussed in detail. These approaches emphasize boosting the compatibility of transferred knowledge with target domain characteristics to enhance learning outcomes.
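
For instance, selective pseudo-labeling can be sketched as a simple confidence filter over a model's predicted class probabilities: only confident target predictions are kept as training labels, since noisy pseudo-labels are a common source of NT. The threshold value and function name below are illustrative assumptions.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only target samples whose top predicted class probability
    exceeds a confidence threshold; return their indices and labels."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.where(keep)[0], probs.argmax(axis=1)[keep]
```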

Implications and Future Directions

The implications of this research extend beyond theoretical advancements: the survey offers practical insight into applying transfer learning in related settings where NT also arises, such as multi-task learning, lifelong learning, and adversarial attacks. The paper suggests several future research directions, including developing secure transfer methods applicable to a wider range of TL paradigms and investigating NT mitigation strategies for regression problems.

Overall, this survey paper serves as a substantial contribution to the understanding of negative transfer in transfer learning. It provides a structured framework for researchers to explore NT, thereby advancing the development of more robust and effective transfer learning models.