Less Data Less Tokens: Multilingual Unification Learning for Efficient Test-Time Reasoning in LLMs (2506.18341v1)
Abstract: This paper explores the challenges of test-time scaling of LLMs with respect to both data and inference efficiency. We highlight the diversity of multilingual reasoning based on our pilot studies, and then introduce a novel approach, (L2) multilingual unification learning, together with a decoding intervention strategy for further investigation. The basic idea of (L2) is that the reasoning process varies across languages, and these variations may be mutually beneficial for enhancing both model performance and efficiency. Specifically, we consider two types of multilingual data: entire long chain-of-thought annotations in different languages and step-wise mixtures of languages. By further tuning on these data, we show that even small amounts of data can significantly improve reasoning capabilities. Our findings suggest that multilingual learning reduces both the required data and the number of inference tokens while maintaining comparable performance. Furthermore, (L2) is orthogonal to other data-efficient methods, so we also emphasize the importance of diverse data selection. The (L2) method offers a promising solution to the challenges of data collection and test-time compute efficiency in LLMs.
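To make the two data formats mentioned in the abstract concrete, here is a minimal, hedged sketch of how such training examples could be assembled from parallel step-level chain-of-thought annotations. This is an illustration only, not the authors' pipeline; the function names, data layout, and toy example are all assumptions.

```python
# Illustrative sketch (not the authors' code): building the two multilingual
# CoT formats described in the abstract from parallel step-level annotations.
# All names (make_whole_cot_example, make_stepwise_mixture_example) are hypothetical.
import random

def make_whole_cot_example(question, steps_by_lang, lang):
    """Variant 1: the entire long chain-of-thought kept in a single language."""
    steps = steps_by_lang[lang]
    return {"prompt": question, "response": "\n".join(steps)}

def make_stepwise_mixture_example(question, steps_by_lang, langs, seed=0):
    """Variant 2: a step-wise mixture -- each reasoning step is drawn from a
    (possibly different) language, so languages are interleaved within one trace."""
    rng = random.Random(seed)
    n_steps = len(next(iter(steps_by_lang.values())))
    mixed = [steps_by_lang[rng.choice(langs)][i] for i in range(n_steps)]
    return {"prompt": question, "response": "\n".join(mixed)}

# Toy parallel annotations: the same two reasoning steps in English and Chinese.
question = "If a train travels 60 km in 1.5 hours, what is its speed?"
steps_by_lang = {
    "en": ["Step 1: speed = distance / time.", "Step 2: 60 / 1.5 = 40 km/h."],
    "zh": ["步骤1：速度 = 距离 / 时间。", "步骤2：60 / 1.5 = 40 千米/小时。"],
}

print(make_whole_cot_example(question, steps_by_lang, "en"))
print(make_stepwise_mixture_example(question, steps_by_lang, ["en", "zh"]))
```

Under this reading, the first variant keeps each fine-tuning example monolingual while the corpus as a whole spans languages, whereas the second mixes languages step by step within a single trace; the abstract reports that further tuning on small amounts of such data improves reasoning while reducing inference tokens.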