Analyzing Multi-Task Learning for Abstractive Text Summarization (2210.14606v2)

Published 26 Oct 2022 in cs.CL and cs.AI

Abstract: Despite the recent success of multi-task learning and pre-finetuning for natural language understanding, few works have studied the effects of task families on abstractive text summarization. Task families are a form of task grouping during the pre-finetuning stage to learn common skills, such as reading comprehension. To close this gap, we analyze the influence of multi-task learning strategies using task families for the English abstractive text summarization task. We group tasks into one of three strategies, i.e., sequential, simultaneous, and continual multi-task learning, and evaluate trained models through two downstream tasks. We find that certain combinations of task families (e.g., advanced reading comprehension and natural language inference) positively impact downstream performance. Further, we find that choice and combinations of task families influence downstream performance more than the training scheme, supporting the use of task families for abstractive text summarization.
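The abstract distinguishes three pre-finetuning schemes over task families. Below is a minimal, framework-agnostic sketch of how those schemes differ in how training batches are ordered. Everything here is an illustrative assumption: the task-family groupings, the `train_step` placeholder, and the reading of "continual" as progressively adding families while replaying earlier ones are not taken from the paper's code.

```python
# Sketch of the three multi-task learning strategies named in the abstract:
# sequential, simultaneous, and continual. All names and groupings below are
# hypothetical placeholders, not the authors' implementation.
import random

# Hypothetical task families (examples only; the paper defines its own).
TASK_FAMILIES = {
    "reading_comprehension": ["squad", "race"],
    "natural_language_inference": ["mnli", "rte"],
    "summarization": ["cnn_dailymail"],
}

def train_step(model_state, task, step):
    """Placeholder for one gradient update; records which task was trained."""
    return model_state + [(task, step)]

def sequential(model_state, families, steps_per_family=2):
    # Train on one task family at a time, in a fixed order.
    for family, tasks in families.items():
        for step in range(steps_per_family):
            model_state = train_step(model_state, random.choice(tasks), step)
    return model_state

def simultaneous(model_state, families, total_steps=6):
    # Mix batches from all task families within a single training run.
    all_tasks = [t for tasks in families.values() for t in tasks]
    for step in range(total_steps):
        model_state = train_step(model_state, random.choice(all_tasks), step)
    return model_state

def continual(model_state, families, steps_per_stage=2):
    # One plausible reading of continual MTL: add families stage by stage,
    # replaying earlier families so previously learned skills persist.
    seen = []
    for family, tasks in families.items():
        seen.extend(tasks)
        for step in range(steps_per_stage):
            model_state = train_step(model_state, random.choice(seen), step)
    return model_state

if __name__ == "__main__":
    for strategy in (sequential, simultaneous, continual):
        print(strategy.__name__, strategy([], TASK_FAMILIES))
```

The three functions differ only in which pool of tasks each update samples from, which is the distinction the abstract draws before concluding that the choice of task families matters more than the training scheme itself.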

Authors (4)
  1. Frederic Kirstein
  2. Jan Philip Wahle
  3. Terry Ruas
  4. Bela Gipp
Citations (4)
