Not all domains are equally complex: Adaptive Multi-Domain Learning (2003.11504v1)

Published 25 Mar 2020 in cs.LG, cs.CV, and stat.ML

Abstract: Deep learning approaches are highly specialized and require training separate models for different tasks. Multi-domain learning instead aims to learn many tasks, each coming from a different domain, at once. The most common approach is to form a domain-agnostic model whose parameters are shared among all domains, and to learn a small number of extra domain-specific parameters for each new domain. However, different domains come with different levels of difficulty; parameterizing the models of all domains with the same augmented version of the domain-agnostic model leads to unnecessarily inefficient solutions, especially for easy-to-solve tasks. We propose an adaptive parameterization approach to deep neural networks for multi-domain learning. The proposed approach performs on par with the original approach while greatly reducing the number of parameters, leading to efficient multi-domain learning solutions.
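
The abstract describes the general pattern (a shared, domain-agnostic backbone plus a small set of domain-specific parameters, with capacity adapted to each domain's difficulty) but not its exact implementation. As a rough illustration only, the PyTorch sketch below follows the common residual-adapter pattern and lets each domain choose its adapter bottleneck width; all names here (AdaptiveDomainAdapter, MultiDomainNet, the example domains and widths) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class AdaptiveDomainAdapter(nn.Module):
    """Hypothetical per-domain adapter: a bottlenecked 1x1-conv residual
    correction whose width can be chosen per domain."""

    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        # A smaller bottleneck means fewer domain-specific parameters,
        # which suits easier domains.
        self.down = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.up = nn.Conv2d(bottleneck, channels, kernel_size=1)

    def forward(self, x):
        # Residual correction applied on top of the shared features.
        return x + self.up(torch.relu(self.down(x)))


class MultiDomainNet(nn.Module):
    """Shared domain-agnostic backbone plus one adapter per domain."""

    def __init__(self, backbone: nn.Module, channels: int,
                 domain_bottlenecks: dict):
        super().__init__()
        self.backbone = backbone  # shared across all domains
        for p in self.backbone.parameters():
            p.requires_grad = False  # only the adapters are trained
        self.adapters = nn.ModuleDict({
            name: AdaptiveDomainAdapter(channels, width)
            for name, width in domain_bottlenecks.items()
        })

    def forward(self, x, domain: str):
        feats = self.backbone(x)
        return self.adapters[domain](feats)


# Illustrative usage: a harder domain gets a wider adapter than an easy one.
net = MultiDomainNet(
    backbone=nn.Conv2d(3, 64, kernel_size=3, padding=1),
    channels=64,
    domain_bottlenecks={"aircraft": 32, "svhn": 4},
)
out = net(torch.randn(1, 3, 32, 32), domain="svhn")
```

In this sketch the easy domain ("svhn") gets a 4-channel bottleneck while the harder one ("aircraft") gets 32, mirroring the abstract's point that domain-specific capacity should track domain difficulty rather than being fixed for all domains.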

Authors (4)
  1. Ali Senhaji (1 paper)
  2. Jenni Raitoharju (50 papers)
  3. Moncef Gabbouj (167 papers)
  4. Alexandros Iosifidis (153 papers)
Citations (10)
