Continual Domain Adaptation through Pruning-aided Domain-specific Weight Modulation (2304.07560v1)

Published 15 Apr 2023 in cs.CV

Abstract: In this paper, we propose a method to address unsupervised domain adaptation (UDA) in the practical setting of continual learning (CL). The goal is to update the model on continually changing domains while preserving domain-specific knowledge to prevent catastrophic forgetting of previously seen domains. To this end, we build a framework that preserves domain-specific features by exploiting the model's inherent capacity via pruning. We also perform effective inference using a novel batch-norm based metric to accurately select the final model parameters to use. Our approach not only achieves state-of-the-art performance but also significantly reduces catastrophic forgetting of past domains. Our code is made publicly available.
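The abstract names two mechanisms: pruning to reserve part of the model's capacity for each domain, and a batch-norm based metric used at inference to decide which domain-specific parameters to apply. The exact metric isn't spelled out here, so the following is a minimal sketch of one plausible instantiation, not the authors' method: it compares a test batch's per-channel feature statistics against each domain's stored BatchNorm running statistics and picks the closest-matching domain. All names (`bn_domain_score`, `select_domain`, `domain_bn_stats`) are hypothetical.

```python
# Hypothetical sketch of batch-norm based domain selection at inference
# (an assumed instantiation; not the authors' released code).
import torch

def bn_domain_score(feats: torch.Tensor, running_mean: torch.Tensor,
                    running_var: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Discrepancy between a batch's channel statistics and one domain's
    stored BN statistics (lower = better match). feats has shape (N, C)."""
    batch_mean = feats.mean(dim=0)
    batch_var = feats.var(dim=0, unbiased=False)
    # Per-channel mean shift and variance ratio, normalized by the
    # domain's stored variance, summed over channels.
    return ((batch_mean - running_mean) ** 2 / (running_var + eps)
            + batch_var / (running_var + eps)).sum()

def select_domain(feats: torch.Tensor,
                  domain_bn_stats: list[tuple[torch.Tensor, torch.Tensor]]) -> int:
    """Pick the stored domain whose BN statistics best explain the batch."""
    scores = [bn_domain_score(feats, mean, var) for mean, var in domain_bn_stats]
    return int(torch.argmin(torch.stack(scores)))
```

In use, `domain_bn_stats` would hold the `(running_mean, running_var)` pairs saved from a BN layer after adapting to each domain; the selected index would then determine which pruned, domain-specific subnetwork's weights to activate.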

Authors (3)
  1. Prasanna B (1 paper)
  2. Sunandini Sanyal (4 papers)
  3. R. Venkatesh Babu (108 papers)
Citations (3)
