Flexible Multi-task Networks by Learning Parameter Allocation (1910.04915v2)

Published 10 Oct 2019 in cs.LG and stat.ML

Abstract: This paper proposes a novel learning method for multi-task applications. Multi-task neural networks can learn to transfer knowledge across different tasks by using parameter sharing. However, sharing parameters between unrelated tasks can hurt performance. To address this issue, we propose a framework to learn fine-grained patterns of parameter sharing. Assuming that the network is composed of several components across layers, our framework uses learned binary variables to allocate components to tasks in order to encourage more parameter sharing between related tasks, and discourage parameter sharing otherwise. The binary allocation variables are learned jointly with the model parameters by standard back-propagation thanks to the Gumbel-Softmax reparametrization method. When applied to the Omniglot benchmark, the proposed method achieves a 17% relative reduction of the error rate compared to state-of-the-art.
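The abstract mentions learning binary allocation variables jointly with model parameters via the Gumbel-Softmax reparametrization. As a rough illustration of that relaxation (not the paper's actual implementation), the sketch below draws a differentiable sample over a two-way use/skip decision for one (task, component) pair; the logits and the two-category framing are assumptions for the example.

```python
import numpy as np

def gumbel_softmax(logits, temperature=1.0, rng=None):
    """Relaxed categorical sample: softmax((logits + Gumbel noise) / temperature)."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF trick
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

# Hypothetical allocation logits for one (task, component) decision: [use, skip]
logits = np.array([2.0, -1.0])
probs = gumbel_softmax(logits, temperature=0.5)
```

As the temperature approaches zero the sample approaches a one-hot (hard binary) allocation, while remaining differentiable with respect to the logits, which is what lets the allocation be trained by standard back-propagation.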

Authors (6)
  1. Krzysztof Maziarz (15 papers)
  2. Efi Kokiopoulou (12 papers)
  3. Andrea Gesmundo (20 papers)
  4. Luciano Sbaiz (7 papers)
  5. Gabor Bartok (10 papers)
  6. Jesse Berent (18 papers)
Citations (12)