Explaining the Effectiveness of Multi-Task Learning for Efficient Knowledge Extraction from Spine MRI Reports (2205.02979v1)

Published 6 May 2022 in cs.LG, cs.AI, and cs.CL

Abstract: Pretrained Transformer-based models fine-tuned on domain-specific corpora have changed the landscape of NLP. However, training or fine-tuning these models for individual tasks can be time-consuming and resource-intensive. Thus, much current research focuses on using transformers for multi-task learning (Raffel et al., 2020) and on how to group tasks so that a multi-task model learns effective representations that can be shared across tasks (Standley et al., 2020; Fifty et al., 2021). In this work, we show that a single multi-task model can match the performance of task-specific models when the task-specific models show similar representations across all of their hidden layers and their gradients are aligned, i.e. their gradients follow the same direction. We hypothesize that these observations explain the effectiveness of multi-task learning. We validate our observations on our internal radiologist-annotated datasets on the cervical and lumbar spine. Our method is simple and intuitive, and can be used in a wide range of NLP problems.
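
The abstract names two diagnostics, similar hidden-layer representations across task-specific models and aligned (same-direction) task gradients, without specifying the metrics used. The sketch below is one minimal way to compute such diagnostics, assuming linear CKA as the representation-similarity measure and cosine similarity of flattened task gradients over a shared encoder as the alignment measure; `encoder`, `head_a`, and `head_b` are hypothetical stand-ins, not the authors' models.

```python
import torch
import torch.nn as nn


def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear Centered Kernel Alignment between two activation matrices of
    shape (n_samples, n_features). Values near 1 indicate similar representations."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    hsic = (x.T @ y).norm() ** 2
    return hsic / ((x.T @ x).norm() * (y.T @ y).norm())


def gradient_cosine(shared: nn.Module, loss_a: torch.Tensor, loss_b: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the flattened gradients of two task losses
    w.r.t. the shared parameters; near 1 means the gradients point the same way."""
    params = [p for p in shared.parameters() if p.requires_grad]
    grads_a = torch.autograd.grad(loss_a, params, retain_graph=True, allow_unused=True)
    grads_b = torch.autograd.grad(loss_b, params, retain_graph=True, allow_unused=True)
    pairs = [(ga, gb) for ga, gb in zip(grads_a, grads_b) if ga is not None and gb is not None]
    flat_a = torch.cat([ga.reshape(-1) for ga, _ in pairs])
    flat_b = torch.cat([gb.reshape(-1) for _, gb in pairs])
    return nn.functional.cosine_similarity(flat_a, flat_b, dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Hypothetical shared encoder and two task heads -- for illustration only.
    encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
    head_a, head_b = nn.Linear(32, 2), nn.Linear(32, 3)

    x = torch.randn(8, 16)
    h = encoder(x)
    loss_a = nn.functional.cross_entropy(head_a(h), torch.randint(0, 2, (8,)))
    loss_b = nn.functional.cross_entropy(head_b(h), torch.randint(0, 3, (8,)))
    print("gradient cosine:", gradient_cosine(encoder, loss_a, loss_b).item())

    # Representation similarity between two (here randomly initialised) encoders,
    # standing in for the hidden layers of two task-specific models.
    other = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 32))
    with torch.no_grad():
        print("linear CKA:", linear_cka(encoder(x), other(x)).item())
```

A gradient cosine near 1 means the two task losses pull the shared parameters in the same direction, which is the intuition behind the claim that a single multi-task model can match the corresponding task-specific models.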

Authors (10)
  1. Arijit Sehanobish (20 papers)
  2. McCullen Sandora (30 papers)
  3. Nabila Abraham (5 papers)
  4. Jayashri Pawar (3 papers)
  5. Danielle Torres (2 papers)
  6. Anasuya Das (3 papers)
  7. Murray Becker (2 papers)
  8. Richard Herzog (3 papers)
  9. Benjamin Odry (5 papers)
  10. Ron Vianu (2 papers)
Citations (3)