Reverse That Number! Decoding Order Matters in Arithmetic Learning (2403.05845v1)
Abstract: Recent advances in pretraining have demonstrated that modern LLMs can effectively learn arithmetic operations. However, despite acknowledging the significance of digit order in arithmetic computation, current methodologies predominantly teach LLMs arithmetic through sequential, step-by-step decompositions, leading to the conclusion that better performance requires ever finer-grained step-by-step supervision. Diverging from this conventional path, our work introduces a novel strategy that not only reevaluates digit order by emitting the output starting from the least significant digit, but also incorporates a step-by-step methodology that substantially reduces complexity. We have developed and applied this method in a comprehensive set of experiments. Compared to the previous state-of-the-art (SOTA) method, our findings reveal an overall improvement in accuracy while requiring only a third of the tokens typically used during training. To facilitate replication and further research, we have made our code and dataset publicly available at \url{https://anonymous.4open.science/r/RAIT-9FB7/}.
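The least-significant-digit-first idea is easiest to see with a small formatting example. The Python sketch below is illustrative only: the function names and the prompt/target format are assumptions for exposition, not the paper's released data pipeline. It shows how an addition target can be serialized with its digits reversed, mirroring the order in which carries are actually produced during column-wise addition.

```python
# Minimal sketch of reversed-answer formatting for arithmetic training data.
# The answer is written least-significant digit first, so each output digit
# depends only on digits (and carries) that have already been produced.
# Helper names and the "a + b = ..." format are illustrative assumptions.

def reverse_digits(n: int) -> str:
    """Render an integer with its digits in least-significant-first order."""
    sign = "-" if n < 0 else ""
    return sign + str(abs(n))[::-1]

def make_example(a: int, b: int) -> str:
    """Build one addition example whose target digits are reversed."""
    return f"{a} + {b} = {reverse_digits(a + b)}"

if __name__ == "__main__":
    # 128 + 367 = 495, emitted as "594": units digit first, hundreds digit last.
    print(make_example(128, 367))
```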
Authors: Daniel Zhang-Li, Nianyi Lin, Jifan Yu, Zheyuan Zhang, Zijun Yao, Xiaokang Zhang, Lei Hou, Jing Zhang, Juanzi Li