Large Language Models Are Unconscious of Unreasonability in Math Problems (2403.19346v3)
Abstract: LLMs demonstrate substantial capabilities in solving math problems. However, they tend to produce hallucinations when given questions containing unreasonable errors. In this paper, we study the behavior of LLMs when faced with unreasonable math problems and further explore their potential to address such problems. We construct the Unreasonable Math Problem (UMP) benchmark to examine the error-detection ability of LLMs. Experiments show that LLMs are able to detect unreasonable errors but still fail to generate non-hallucinatory content. To improve their ability to detect and correct errors, we further design a strategic prompt template called Critical Calculation and Conclusion (CCC). With CCC, LLMs can better self-evaluate and detect unreasonable errors in math questions, making them more reliable and safe in practical application scenarios.
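The abstract describes CCC as a strategic prompt template that asks the model to critique a question before answering it. The sketch below is a hedged illustration of how such a critique-first template might be wrapped around a math question; the wording of `CCC_TEMPLATE` and the helper name `build_ccc_prompt` are assumptions for illustration, not the authors' exact prompt.

```python
# Hypothetical sketch of a Critical Calculation and Conclusion (CCC)-style
# prompt wrapper. The template wording below is an assumption; the paper's
# exact prompt may differ.
CCC_TEMPLATE = (
    "Before solving, critically examine the problem statement.\n"
    "1. Critical: check every condition for unreasonable or contradictory premises.\n"
    "2. Calculation: if the problem is reasonable, solve it step by step.\n"
    "3. Conclusion: state the final answer, or explain why the problem is unreasonable.\n\n"
    "Problem: {question}"
)

def build_ccc_prompt(question: str) -> str:
    """Wrap a math question in the critique-first CCC-style template."""
    return CCC_TEMPLATE.format(question=question)

if __name__ == "__main__":
    # An unreasonable problem: a negative count of apples is impossible.
    print(build_ccc_prompt("Tom has -3 apples and eats 5 of them. How many are left?"))
```

In practice the returned string would be sent to an LLM; the intent of the template is to make the model flag the contradictory premise in its "Critical" step rather than hallucinate a numeric answer.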
Authors: Jingyuan Ma, Damai Dai, Zhifang Sui, Lei Sha