DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines (2311.10418v1)

Published 17 Nov 2023 in cs.DC and cs.LG

Abstract: Multi-task model training has been adopted to enable a single deep neural network model (often an LLM) to handle multiple tasks (e.g., question answering and text summarization). Multi-task training commonly receives input sequences of highly different lengths due to the diverse contexts of different tasks. Padding (to the same sequence length) or packing (short examples into long sequences of the same length) is usually adopted to prepare input samples for model training, but neither is space- or computation-efficient. This paper proposes a dynamic micro-batching approach to tackle sequence length variation and enable efficient multi-task model training. We advocate pipeline-parallel training of the large model with variable-length micro-batches, each of which potentially comprises a different number of samples. We optimize micro-batch construction using a dynamic programming-based approach, and handle micro-batch execution time variation through dynamic pipeline and communication scheduling, enabling highly efficient pipeline training. Extensive evaluation on the FLANv2 dataset demonstrates up to 4.39x higher training throughput when training T5, and 3.25x when training GPT, as compared with packing-based baselines. DynaPipe's source code is publicly available at https://github.com/awslabs/optimizing-multitask-training-through-dynamic-pipelines.
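To illustrate the dynamic programming idea behind micro-batch construction mentioned in the abstract, here is a minimal sketch. It assumes a simplified cost model in which each micro-batch is padded to its longest sequence and must respect a per-micro-batch token budget; the DP then picks split points over sorted sequence lengths to minimize total padded tokens. The function name `plan_micro_batches`, the `max_tokens` parameter, and the cost model are illustrative assumptions, not the actual DynaPipe implementation.

```python
# Hypothetical sketch of dynamic-programming micro-batch construction.
# Not the DynaPipe code: it only illustrates partitioning variable-length
# samples into micro-batches so that padding waste is minimized.
from functools import lru_cache
from typing import List, Tuple


def plan_micro_batches(seq_lens: List[int], max_tokens: int) -> List[Tuple[int, int]]:
    """Split sequence lengths into contiguous micro-batches after sorting.

    Each micro-batch is padded to its longest sequence; the DP minimizes the
    total number of padded tokens subject to a per-micro-batch token budget.
    Returns (start, end) index pairs into the sorted length list.
    """
    lens = sorted(seq_lens)
    n = len(lens)

    @lru_cache(maxsize=None)
    def best(i: int) -> Tuple[float, Tuple[Tuple[int, int], ...]]:
        # Minimum padded-token cost for lens[i:], plus the chosen splits.
        if i == n:
            return 0, ()
        best_cost, best_plan = float("inf"), ()
        for j in range(i + 1, n + 1):
            bucket = lens[i:j]
            padded = len(bucket) * bucket[-1]  # pad to the longest sequence
            if padded > max_tokens:            # budget exceeded; larger j only grows cost
                break
            rest_cost, rest_plan = best(j)
            cost = padded + rest_cost
            if cost < best_cost:
                best_cost, best_plan = cost, ((i, j),) + rest_plan
        return best_cost, best_plan

    return list(best(0)[1])


if __name__ == "__main__":
    lengths = [32, 48, 64, 500, 510, 1020]
    print(plan_micro_batches(lengths, max_tokens=2048))
```

In practice the paper's planner also accounts for per-micro-batch execution time variation and pipeline scheduling, which this toy cost model omits.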

Authors (5)
  1. Chenyu Jiang (6 papers)
  2. Zhen Jia (34 papers)
  3. Shuai Zheng (67 papers)
  4. Yida Wang (62 papers)
  5. Chuan Wu (68 papers)
Citations (2)
