Locality Preserving Joint Transfer for Domain Adaptation (1906.07441v1)

Published 18 Jun 2019 in cs.CV

Abstract: Domain adaptation aims to transfer knowledge from a well-labeled source domain to a poorly labeled target domain. A majority of existing works transfer knowledge at either the feature level or the sample level. Recent research reveals that both paradigms are essential and that optimizing one can reinforce the other. Inspired by this, we propose a novel approach that jointly exploits feature adaptation with distribution matching and sample adaptation with landmark selection. During knowledge transfer, we also take the local consistency between samples into consideration, so that the manifold structure of the samples is preserved. Finally, we deploy label propagation to predict the categories of new instances. Notably, our approach is suitable for both homogeneous and heterogeneous domain adaptation by learning domain-specific projections. Extensive experiments on five open benchmarks, consisting of both standard and large-scale datasets, verify that our approach significantly outperforms not only conventional approaches but also end-to-end deep models. The experiments also demonstrate that handcrafted features can be leveraged to improve accuracy on deep features via heterogeneous adaptation.

Citations (201)

Summary

  • The paper introduces a novel domain adaptation method that jointly optimizes feature and sample transfers while preserving local manifold structures.
  • The method employs joint optimization, graph Laplacians, and label propagation to align domain distributions and maintain discriminative feature properties.
  • Experiments on benchmark datasets show superior performance, validating its effectiveness for homogeneous and heterogeneous domain adaptation tasks.

An Overview of "Locality Preserving Joint Transfer for Domain Adaptation"

The paper "Locality Preserving Joint Transfer for Domain Adaptation" presented by Li et al. introduces a novel technique for domain adaptation, which aims to transfer knowledge from a well-labeled source domain to a poorly labeled target domain. This method improves upon prior approaches by jointly considering both feature-level and sample-level adaptations, while integrating locality preservation within the transfer process. The dual focus on feature alignment and sample weighting, combined with the consideration of local manifold structures, marks a notable advance in optimization strategies for transfer learning.

Key Contributions

  1. Joint Optimization of Feature and Sample Levels: The proposed method performs feature adaptation via distribution matching and sample adaptation via landmark selection. This dual strategy aligns both the marginal and conditional distributions of the source and target domains (a minimal distribution-matching sketch follows this list).
  2. Locality Preservation: Recognizing the importance of local sample consistency, the researchers incorporate manifold learning techniques. This ensures that samples maintain their locality structure during transfer, thereby enhancing the discriminative power of the learned features.
  3. Flexible Application to Heterogeneous Domains: Unlike many conventional domain adaptation methods that are limited to homogeneous settings, this approach learns distinct mappings for each domain, allowing it to accommodate heterogeneous feature spaces and varying dimensionalities.
  4. Effective Use of Handcrafted and Deep Features: The work demonstrates that the adaptation model can leverage handcrafted features to improve the performance of deep features, and vice versa, showcasing the versatility and strength of the proposed method across different types of features.
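
To make the distribution-matching idea in item 1 concrete, the sketch below computes the empirical (linear-kernel) Maximum Mean Discrepancy between source and target features after projection into a shared subspace, the quantity this family of methods drives toward zero. It is a minimal illustration, not the authors' implementation; the projection matrices `A_s` and `A_t` and the helper `mmd_distance` are hypothetical names introduced here.

```python
import numpy as np

def mmd_distance(Xs, Xt):
    """Empirical (linear-kernel) MMD between two samples.

    Computes || mean(Xs) - mean(Xt) ||^2, the squared distance between
    the domain means in the shared subspace. Methods of this family
    minimize this quantity with respect to the projections.
    """
    return np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2)

# Toy example: project heterogeneous source/target features into a common
# d-dimensional space with domain-specific projections A_s, A_t (learned
# in the actual method; random here purely for illustration).
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 800))   # e.g. handcrafted source features
Xt = rng.normal(size=(80, 4096))   # e.g. deep target features
d = 30
A_s = rng.normal(size=(800, d))
A_t = rng.normal(size=(4096, d))
print(mmd_distance(Xs @ A_s, Xt @ A_t))
```

A per-class version of the same statistic, computed over samples sharing a label (or pseudo-label) in each domain, is the usual way such methods also align the conditional distributions mentioned in item 1.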

Methodology

The algorithm optimizes a joint objective that minimizes domain shift by aligning distributions and by learning landmark weights that select the samples most useful for bridging the domains. Graph Laplacians are integrated to preserve the intrinsic local structure of the samples and to enhance the discriminability of the learned representation. Additionally, label propagation within the model iteratively refines the predictions, making them robust to initial estimation errors.
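
As a rough, generic illustration of the two components named above, the following sketch builds a k-nearest-neighbor graph Laplacian (the locality-preserving ingredient) and runs a standard label-propagation iteration in the style of Zhou et al.'s local-and-global-consistency scheme. All function names are illustrative, and the paper's exact formulation may differ.

```python
import numpy as np

def knn_laplacian(X, k=5, sigma=1.0):
    """Graph Laplacian L = D - W from a k-NN affinity graph.

    L typically enters the objective as a locality-preserving
    regularizer of the form tr(A^T X^T L X A).
    """
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]            # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                            # symmetrize
    return np.diag(W.sum(axis=1)) - W, W

def label_propagation(W, Y0, alpha=0.9, iters=50):
    """Standard propagation F <- alpha*S*F + (1-alpha)*Y0, where S is the
    symmetrically normalized affinity matrix D^{-1/2} W D^{-1/2}.

    Y0 is n x C: one-hot rows for labeled source samples, all-zero rows
    for unlabeled target samples. Returns the predicted class indices.
    """
    d = W.sum(axis=1)
    d[d == 0] = 1e-12
    S = W / np.sqrt(np.outer(d, d))
    F = Y0.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y0
    return F.argmax(axis=1)
```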

The optimization process involves a balance between maximizing locality-preserving properties and minimizing domain discrepancies, formulated as a trace-ratio criterion. The authors propose a computationally efficient solution, ensuring that the algorithm converges in a reasonable time frame, as evidenced by the experimental results.
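
For reference, objectives of the form max_A tr(AᵀS_p A) / tr(AᵀS_d A) are commonly handled either with an iterative trace-ratio solver or with the simpler ratio-trace relaxation sketched below, which reduces to a single generalized eigendecomposition. This is a generic solver sketch under that assumption, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.linalg import eigh

def ratio_trace_projection(S_num, S_den, d, reg=1e-6):
    """Ratio-trace relaxation of max_A tr(A^T S_num A) / tr(A^T S_den A).

    Solves the generalized eigenproblem S_num v = lambda (S_den + reg*I) v
    and keeps the eigenvectors with the d largest eigenvalues. The small
    ridge `reg` keeps the denominator matrix positive definite, which the
    generalized form of scipy.linalg.eigh requires.
    """
    n = S_den.shape[0]
    w, V = eigh(S_num, S_den + reg * np.eye(n))  # eigenvalues ascending
    return V[:, -d:]                              # top-d eigenvectors
```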

Experimental Evaluation

Li et al. validate their method on several benchmark datasets, including Office+Caltech, CMU PIE, and VisDA 2017, demonstrating superior performance over established baselines and even some end-to-end deep learning models. The accuracy improvements, notably on large-scale datasets, underline the practical applicability of the method in complex real-world scenarios. The experiments also investigate heterogeneous cases, showing that domain adaptation can succeed even when the source and target data lie in entirely different feature spaces.

Implications and Future Work

The implications of this research are twofold: practically, it offers a robust transfer learning framework applicable to diverse real-world domains; theoretically, it provides a pathway to further explore the interplay between feature and sample adaptation under the umbrella of manifold learning. Future work might extend the algorithm through kernelization to handle non-linear domain adaptation, or explore multi-source scenarios to leverage greater diversity across source domains.

Overall, the "Locality Preserving Joint Transfer" model stands out as a significant contribution to the field of domain adaptation, particularly in its well-rounded approach to optimizing both feature and sample-level transfer while preserving the underlying data structure essential for effective learning.