
Empirical Evaluation of Multi-task Learning in Deep Neural Networks for Natural Language Processing (1908.07820v2)

Published 16 Aug 2019 in cs.CL and cs.LG

Abstract: Multi-Task Learning (MTL) aims at boosting the overall performance of each individual task by leveraging useful information contained in multiple related tasks, and it has shown great success in NLP. A number of MTL architectures and learning mechanisms have been proposed for various NLP tasks, but there has been no systematic, in-depth exploration and comparison of why different architectures and mechanisms perform well. In this paper, we conduct a thorough examination of typical MTL methods on a broad range of representative NLP tasks. Our primary goal is to understand the merits and demerits of existing MTL methods in NLP tasks, and thereby to devise new hybrid architectures that combine their strengths.
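
For readers unfamiliar with the architectures being compared, below is a minimal, hypothetical sketch of hard parameter sharing, one common MTL setup in NLP: a shared encoder feeds lightweight task-specific output heads. It is written in PyTorch, is not the authors' implementation, and the layer sizes and task names are illustrative assumptions only.

    import torch
    import torch.nn as nn

    class HardSharingMTL(nn.Module):
        """Shared encoder with one classification head per task (hard parameter sharing)."""
        def __init__(self, vocab_size, embed_dim, hidden_dim, task_num_classes):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim)
            # Shared BiLSTM encoder reused by every task.
            self.encoder = nn.LSTM(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            # One task-specific linear head per task.
            self.heads = nn.ModuleDict({
                task: nn.Linear(2 * hidden_dim, n_classes)
                for task, n_classes in task_num_classes.items()
            })

        def forward(self, token_ids, task):
            embedded = self.embedding(token_ids)            # (batch, seq, embed_dim)
            _, (h_n, _) = self.encoder(embedded)            # h_n: (2, batch, hidden_dim)
            sentence = torch.cat([h_n[0], h_n[1]], dim=-1)  # concat both directions
            return self.heads[task](sentence)               # task-specific logits

    # Illustrative usage with hypothetical task names; training would alternate
    # mini-batches across tasks so the shared encoder sees all of them.
    model = HardSharingMTL(vocab_size=30000, embed_dim=128, hidden_dim=256,
                           task_num_classes={"sst2": 2, "mnli": 3})
    logits = model(torch.randint(0, 30000, (8, 20)), task="sst2")

The paper compares setups like this against alternatives with more task-specific capacity or different sharing/learning mechanisms; the sketch only fixes the vocabulary of the discussion.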

Authors (6)
  1. Jianquan Li (18 papers)
  2. Xiaokang Liu (28 papers)
  3. Wenpeng Yin (69 papers)
  4. Min Yang (239 papers)
  5. Liqun Ma (8 papers)
  6. Yaohong Jin (2 papers)
Citations (13)