Boosting Tool Use of Large Language Models via Iterative Reinforced Fine-Tuning (2501.09766v1)

Published 15 Jan 2025 in cs.CL, cs.AI, and cs.LG

Abstract: Augmenting LLMs with external tools is a promising approach to enhancing their capabilities, and effectively exploiting this potential on complex tasks hinges on improving their ability to use tools. Synthesizing tool-use data by simulating the real world is an effective approach; nevertheless, our investigation reveals that training gains decay significantly as the scale of such data increases. The primary factor is the model's poor performance (i.e., its deficiency) in complex scenarios, which hinders learning from the data via SFT. To address this, we propose an iterative reinforced fine-tuning strategy that continually guides the model to alleviate its deficiencies. Specifically, we first identify deficiency-related data based on feedback from the policy model, then perform Monte Carlo Tree Search to collect fine-grained preference pairs that pinpoint the deficiencies. We then update the policy model with preference optimization, aligning it with the ground truth and pushing it away from the deficient behavior. This process can be iterated. Moreover, before the iterations, we propose an easy-to-hard warm-up SFT strategy to facilitate learning from challenging data. Experiments demonstrate that our models surpass models of the same parameter scale and outperform many larger open-source and closed-source models, achieving notable training gains in complex tool-use scenarios.
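To make the shape of the method concrete, below is a minimal Python sketch of the loop the abstract describes, based only on its high-level description. Every helper name (rollout_policy, mcts_preference_pairs, sft_update, dpo_update, difficulty) is a hypothetical stub, and the preference step is rendered as a generic DPO-style update rather than the authors' exact objective.

```python
import random
from dataclasses import dataclass

# Illustrative sketch of the training loop described in the abstract.
# All helpers below are hypothetical stand-ins, not the authors' code.

@dataclass
class PreferencePair:
    prompt: str    # tool-use task
    chosen: str    # step consistent with the ground-truth trajectory
    rejected: str  # step exhibiting the model's deficiency

def rollout_policy(model, task):
    """Sample a tool-use attempt from the current policy (stubbed)."""
    return random.random() > 0.5  # True = task solved

def mcts_preference_pairs(model, task):
    """Stand-in for the paper's MCTS step: expand branches from a failed
    task and pair higher-value against lower-value intermediate steps."""
    return [PreferencePair(task, chosen="tool_call(valid_args)",
                           rejected="tool_call(invalid_args)")]

def sft_update(model, tasks):
    """Stand-in for supervised fine-tuning on ground-truth trajectories."""
    return model

def dpo_update(model, pairs):
    """Stand-in for a preference-optimization update (e.g. DPO)."""
    return model

def difficulty(task):
    """Hypothetical difficulty score; the paper orders warm-up data
    from easy to hard."""
    return len(task)

def iterative_reinforced_finetune(model, tasks, iterations=3):
    # Easy-to-hard warm-up SFT before the reinforcement iterations.
    model = sft_update(model, sorted(tasks, key=difficulty))
    for _ in range(iterations):
        # 1. Policy feedback: keep the tasks the current model fails,
        #    i.e. the deficiency-related data.
        hard_tasks = [t for t in tasks if not rollout_policy(model, t)]
        # 2. MCTS over the failures yields fine-grained preference pairs.
        pairs = [p for t in hard_tasks
                 for p in mcts_preference_pairs(model, t)]
        # 3. Preference optimization aligns the policy with ground truth
        #    and pushes it away from the deficient behavior.
        model = dpo_update(model, pairs)
    return model
```

The design point the abstract emphasizes is that preference pairs are mined at the step level via search, so each update targets the specific deficient decisions rather than whole trajectories.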

Authors (9)
  1. Yirong Zeng (3 papers)
  2. Xiao Ding (38 papers)
  3. Yuxian Wang (5 papers)
  4. Weiwen Liu (59 papers)
  5. Wu Ning (5 papers)
  6. Yutai Hou (23 papers)
  7. Xu Huang (56 papers)
  8. Bing Qin (186 papers)
  9. Ting Liu (329 papers)