Learning Sparse Sharing Architectures for Multiple Tasks (1911.05034v2)

Published 12 Nov 2019 in cs.CL and cs.LG

Abstract: Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing. Choosing a suitable sharing mechanism depends on the relations among the tasks, which is not easy since the underlying shared factors among these tasks are difficult to identify. In this paper, we propose a novel parameter sharing mechanism, named \emph{Sparse Sharing}. Given multiple tasks, our approach automatically finds a sparse sharing structure. We start with an over-parameterized base network, from which each task extracts a subnetwork. The subnetworks of multiple tasks are partially overlapped and trained in parallel. We show that both hard sharing and hierarchical sharing can be formulated as particular instances of the sparse sharing framework. We conduct extensive experiments on three sequence labeling tasks. Compared with single-task models and three typical multi-task learning baselines, our proposed approach achieves consistent improvements while requiring fewer parameters.
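
The abstract describes per-task subnetworks extracted from a single over-parameterized base network, with overlapping parameters shared across tasks. Below is a minimal sketch of that idea using per-task binary masks over one shared weight matrix; the class name, the fixed random masks, and the alternating training loop are illustrative assumptions, not the authors' implementation (in the paper, each task's mask is obtained by pruning its subnetwork).

```python
# Minimal sketch (assumptions, not the authors' code): sparse sharing via
# per-task binary masks over one over-parameterized base layer. Masks are
# fixed and random here purely for illustration; overlapping mask entries
# correspond to parameters shared between tasks.
import torch
import torch.nn as nn

class SparseSharedLinear(nn.Module):
    """One shared weight matrix; each task uses only its masked subnetwork."""
    def __init__(self, in_dim, out_dim, num_tasks, keep_prob=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        # Fixed binary masks, one per task (hypothetical stand-in for pruned masks).
        masks = (torch.rand(num_tasks, out_dim, in_dim) < keep_prob).float()
        self.register_buffer("masks", masks)

    def forward(self, x, task_id):
        # Masking the weight also masks its gradient, so each task only
        # updates the parameters belonging to its own subnetwork.
        w = self.weight * self.masks[task_id]
        return x @ w.t() + self.bias

# Parallel multi-task training sketch: alternate tasks across steps, with
# per-task output heads on top of the shared, masked base layer.
layer = SparseSharedLinear(in_dim=8, out_dim=4, num_tasks=3)
heads = nn.ModuleList([nn.Linear(4, 2) for _ in range(3)])
opt = torch.optim.Adam(list(layer.parameters()) + list(heads.parameters()), lr=1e-3)

for step in range(10):
    task = step % 3
    x = torch.randn(16, 8)
    y = torch.randint(0, 2, (16,))
    logits = heads[task](torch.relu(layer(x, task)))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```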

Authors (7)
  1. Tianxiang Sun (35 papers)
  2. Yunfan Shao (19 papers)
  3. Xiaonan Li (48 papers)
  4. Pengfei Liu (191 papers)
  5. Hang Yan (86 papers)
  6. Xipeng Qiu (257 papers)
  7. Xuanjing Huang (287 papers)
Citations (125)