
Less Data Less Tokens: Multilingual Unification Learning for Efficient Test-Time Reasoning in LLMs (2506.18341v1)

Published 23 Jun 2025 in cs.CL

Abstract: This paper explores the challenges of test-time scaling of LLMs with respect to both data and inference efficiency. We highlight the diversity of multilingual reasoning based on our pilot studies, and then introduce a novel approach, (L2) multilingual unification learning with a decoding intervention strategy, for further investigation. The basic idea of (L2) is that the reasoning process varies across languages, and these variations may be mutually beneficial for enhancing both model performance and efficiency. Specifically, there are two types of multilingual data: entire long chain-of-thought annotations in different languages, and step-wise mixtures of languages within a single chain. By further fine-tuning on such data, we show that even small amounts of data can significantly improve reasoning capabilities. Our findings suggest that multilingual learning reduces both the required data and the number of inference tokens while maintaining comparable performance. Furthermore, (L2) is orthogonal to other data-efficient methods, so we also emphasize the importance of diverse data selection. The (L2) method offers a promising solution to the challenges of data collection and test-time compute efficiency in LLMs.
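The two multilingual data types described in the abstract can be sketched as follows. This is a minimal illustration only: the field names, structure, and example content are assumptions for clarity, not the paper's actual data format.

```python
# Hypothetical sketch of the two multilingual data formats from the abstract.
# All keys and structures here are illustrative assumptions.

def whole_trace_example(question, cot_steps, language):
    """Type 1: an entire chain-of-thought annotated in a single language."""
    return {
        "question": question,
        "language": language,
        "reasoning": "\n".join(cot_steps),
    }

def stepwise_mixture_example(question, steps_with_languages):
    """Type 2: a step-wise mixture, where reasoning steps alternate languages."""
    return {
        "question": question,
        "languages": [lang for _step, lang in steps_with_languages],
        "reasoning": "\n".join(step for step, _lang in steps_with_languages),
    }

# A toy step-wise mixture: English and German steps in one reasoning chain.
sample = stepwise_mixture_example(
    "What is 12 * 13?",
    [
        ("First, 12 * 10 = 120.", "en"),
        ("Dann 12 * 3 = 36.", "de"),   # German: "Then 12 * 3 = 36."
        ("120 + 36 = 156.", "en"),
    ],
)
print(sample["languages"])  # ['en', 'de', 'en']
```

Under the paper's framing, a small set of examples in either format would be used for further fine-tuning of the base model.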
