
CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models

Published 9 Oct 2024 in cs.CL and cs.LG (arXiv:2410.06741v2)

Abstract: Multi-task learning (MTL) benefits the fine-tuning of LLMs by providing a single model with improved performance and generalization ability across tasks, presenting a resource-efficient alternative to developing separate models for each task. Yet, existing MTL strategies for LLMs often fall short by either being computationally intensive or failing to ensure simultaneous task convergence. This paper presents CoBa, a new MTL approach designed to effectively manage task convergence balance with minimal computational overhead. Utilizing Relative Convergence Scores (RCS), Absolute Convergence Scores (ACS), and a Divergence Factor (DF), CoBa dynamically adjusts task weights during the training process, ensuring that the validation loss of every task progresses toward convergence at an even pace while mitigating the issue of individual task divergence. The results of our experiments involving three disparate datasets underscore that this approach not only fosters equilibrium in task convergence but also enhances the LLMs' performance by up to 13% relative to the second-best baselines. Code is open-sourced at https://github.com/codefuse-ai/MFTCoder.

Summary

  • The paper introduces CoBa, a novel approach that dynamically adjusts task weights to ensure balanced convergence across multiple tasks during finetuning.
  • It leverages Relative and Absolute Convergence Scores along with a Divergence Factor to mitigate early task divergence while maintaining computational efficiency.
  • Experimental results across diverse datasets show up to a 4% improvement in code completion and enhanced performance for low-resource languages.

Convergence Balancer for Multitask Finetuning of LLMs

The paper presents CoBa (Convergence Balancer), a novel method for the multitask learning (MTL) of LLMs. The focus is on achieving balanced convergence across tasks while maintaining computational efficiency. The authors address the limitations of existing MTL strategies, which often involve high computational cost or fail to ensure simultaneous task convergence.

Methodology

CoBa dynamically adjusts task weights during training, leveraging Relative Convergence Scores (RCS), Absolute Convergence Scores (ACS), and a Divergence Factor (DF). This ensures that all tasks progress toward convergence at an even pace while minimizing individual task divergence.

Key components of CoBa include:

  1. Relative Convergence Scores (RCS): Used to assess the relative convergence speed among tasks. Tasks that converge faster are assigned smaller weights, while those converging slower receive larger weights.
  2. Absolute Convergence Scores (ACS): Focuses on individual task performance, reducing weights for diverging tasks while increasing them for consistently converging ones.
  3. Divergence Factor (DF): Balances the influence of RCS and ACS, emphasizing RCS when all tasks are converging and ACS when divergences are detected.

The paper details efficient computation methods for these scores and factors, incurring minimal computational overhead while remaining compatible with parallel training architectures.
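The paper defines its own formulas for RCS, ACS, and DF; as a rough illustration of the idea (not the authors' exact equations), the weighting scheme can be sketched as follows. The least-squares slope estimation, the softmax scaling, and the gating rule for DF below are all assumptions made for this sketch:

```python
import numpy as np

def convergence_slope(val_losses, window=8):
    """Least-squares slope of the recent (normalized) validation-loss curve.

    Negative slope -> the task is still converging; positive -> diverging.
    """
    recent = np.asarray(val_losses[-window:], dtype=float)
    recent = recent / recent[0]          # normalize so loss scales are comparable
    steps = np.arange(len(recent), dtype=float)
    slope, _ = np.polyfit(steps, recent, 1)
    return slope

def softmax(x):
    z = np.asarray(x, dtype=float)
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def coba_style_weights(val_loss_histories, window=8):
    """Illustrative CoBa-style task weights from per-task validation losses."""
    n = len(val_loss_histories)
    slopes = np.array([convergence_slope(h, window) for h in val_loss_histories])

    # RCS: fast-converging tasks (more negative slope) get smaller weights,
    # slower ones larger weights -> softmax over the raw slopes.
    rcs = softmax(slopes * n)

    # ACS: diverging tasks (positive slope) get smaller weights, steadily
    # converging ones larger -> softmax over the negated slopes.
    acs = softmax(-slopes * n)

    # DF: emphasize RCS while every task is converging; shift toward ACS
    # as soon as some task starts to diverge (hypothetical gating rule).
    if slopes.max() < 0:
        df = 1.0
    else:
        df = max(0.0, 1.0 - slopes.max() / (np.abs(slopes).max() + 1e-8))

    w = df * rcs + (1.0 - df) * acs
    return w / w.sum()
```

At each validation step, the resulting weights would scale the per-task training losses, e.g. `total_loss = sum(w[i] * task_losses[i] for i in range(n))`, so that slow or diverging tasks receive proportionally more (or, when diverging, less) optimization pressure.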

Experimental Results

The CoBa method was tested on four datasets: the Code Completion (CC) Dataset, the Code-Related Task (CRT) Dataset, XTREME-UP, and the Multi-Domain QA Dataset. Across all datasets, CoBa demonstrated superior performance compared to existing methods.

  • Code Completion Dataset: CoBa achieved up to a 4% improvement in average Pass@1 scores, effectively balancing convergence and mitigating task divergence issues, such as the divergence observed on the Python task.
  • Code-Related Tasks Dataset: CoBa showed notable improvements in code completion and unit test generation tasks. The method prevented early divergence in certain tasks, highlighting its efficacy in ensuring convergence balance.
  • XTREME-UP: CoBa outperformed baselines in all task configurations (3, 6, and 9 tasks) and significantly improved performance on low-resource languages, demonstrating robust adaptability.
  • Multi-Domain QA Dataset: CoBa achieved the lowest perplexity across diverse QA tasks, ensuring performance consistency across different domains.

Implications and Future Work

CoBa's ability to balance convergence across multiple tasks with low computational complexity is significant for the advancement of MTL in LLMs. It offers a practical solution for deploying LLMs in diverse applications where tasks have varying complexities and resource requirements. The method also provides a framework adaptable to other modalities beyond NLP.

Future work could involve extending CoBa to integrate with Mixture of Experts frameworks, ensuring task-specific parameter optimization while mitigating task interference. Another promising area is enhancing CoBa to adapt dynamically in curriculum learning scenarios, potentially prioritizing tasks based on evolving training stages.

In summary, CoBa represents a meaningful step forward in efficient multitask finetuning of LLMs, offering a harmonious balance between task performance and computational demands.
