SkillNet-NLU: A Sparsely Activated Model for General-Purpose Natural Language Understanding (2203.03312v4)

Published 7 Mar 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Prevailing deep models are single-purpose and overspecialize at individual tasks. However, when extended to new tasks, they typically forget previously learned skills and learn from scratch. We address this issue by introducing SkillNet-NLU, a general-purpose model that stitches together existing skills to learn new tasks more effectively. The key feature of our approach is that it is sparsely activated, guided by predefined skills. Different from traditional dense models that always activate all the model parameters, SkillNet-NLU only activates the parts of the model parameters whose skills are relevant to the target task. When learning a new task, our approach precisely activates the required skills and also provides an option to add new skills. We evaluate on natural language understanding tasks and have the following findings. First, with only one model checkpoint, SkillNet-NLU performs better than task-specific fine-tuning and two multi-task learning baselines (i.e., a dense model and a Mixture-of-Experts model) on six tasks. Second, sparsely activated pre-training further improves the overall performance. Third, SkillNet-NLU significantly outperforms baseline systems when extended to new tasks.
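To make the skill-guided sparse activation concrete, below is a minimal sketch (not the authors' code) of the idea described in the abstract: each predefined skill owns its own feed-forward module, and a task only activates the modules for its relevant skills while all other parameters stay untouched. The module layout, skill names, and task-to-skill mapping here are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of skill-guided sparse activation; skill names and shapes are
# hypothetical and chosen only for illustration.
import torch
import torch.nn as nn

class SkillSparseFFN(nn.Module):
    def __init__(self, hidden_size: int, skills: list[str]):
        super().__init__()
        # One feed-forward "expert" per predefined skill.
        self.experts = nn.ModuleDict({
            s: nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for s in skills
        })

    def forward(self, hidden: torch.Tensor, active_skills: list[str]) -> torch.Tensor:
        # Only the experts whose skills are relevant to the task run;
        # their outputs are averaged. Inactive experts are never computed.
        outputs = [self.experts[s](hidden) for s in active_skills]
        return torch.stack(outputs, dim=0).mean(dim=0)

# Hypothetical skill inventory and task-to-skill assignment.
skills = ["semantics", "sentiment", "nli", "similarity", "keyword", "generic"]
layer = SkillSparseFFN(hidden_size=768, skills=skills)
hidden = torch.randn(2, 16, 768)                     # (batch, seq_len, hidden)
out = layer(hidden, active_skills=["generic", "sentiment"])
```

Extending to a new task under this scheme would mean either reusing a subset of existing skill modules or adding a new entry to the expert dictionary, which matches the abstract's claim that new skills can optionally be added.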

Authors (6)
  1. Fan Zhang (686 papers)
  2. Duyu Tang (65 papers)
  3. Yong Dai (33 papers)
  4. Cong Zhou (39 papers)
  5. Shuangzhi Wu (29 papers)
  6. Shuming Shi (126 papers)
Citations (12)
