
Reflection of Thought: Inversely Eliciting Numerical Reasoning in Language Models via Solving Linear Systems (2210.05075v1)

Published 11 Oct 2022 in cs.CL, cs.IR, cs.NA, and math.NA

Abstract: Numerical reasoning over natural language has been a long-standing goal for the research community. However, while cutting-edge LLMs have shown proficiency in reasoning over common and simple numbers, they struggle to generalize reliably to a broad range of numbers. In this paper, we propose a novel method to elicit and exploit the numerical reasoning knowledge hidden in pre-trained LLMs using simple anchor numbers. Concretely, we first leverage simple numbers as anchors to probe the arithmetic expressions implicitly inferred by LLMs, and then explicitly apply those expressions to complex numbers to obtain the corresponding answers. To inversely elicit the arithmetic expressions, we transform and formulate the task as an analytically solvable linear system. Experimental results on several numerical reasoning benchmarks demonstrate that our approach significantly improves the numerical reasoning capabilities of existing LMs. More importantly, our approach is training-free and works purely at inference time, making it highly portable and yielding consistent performance gains across a variety of LLMs (GPT-3, T5, BART, etc.) in zero-shot, few-shot, and fine-tuning scenarios.
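The core idea can be sketched in a few lines. Assuming, as the abstract suggests, that the model's implicit expression is linear in the question's numbers, one can probe the model with simple anchor values, collect its answers, and solve the resulting square linear system for the expression's coefficients. The sketch below is illustrative only: `query_lm` and `question_template` are hypothetical stand-ins for however the question is filled in and the model's numeric answer is parsed, and the paper's actual formulation may differ in details such as the assumed expression family and how probes are chosen.

```python
import numpy as np

def elicit_linear_expression(query_lm, question_template, n_vars, seed=0):
    """Recover the coefficients of the arithmetic expression a language
    model implicitly applies, under the (assumed) linear form
        answer = w1*x1 + ... + wn*xn + b.

    `query_lm(template, numbers)` is a hypothetical helper that fills
    the numbers into the question, runs the LM, and parses a float answer.
    """
    rng = np.random.default_rng(seed)
    # n_vars + 1 probes with small "anchor" numbers give a square,
    # generically invertible system (re-sample if a draw is singular).
    anchors = rng.integers(1, 10, size=(n_vars + 1, n_vars))

    # Each probe contributes one equation: [x1 ... xn 1] @ [w; b] = y.
    A = np.hstack([anchors, np.ones((n_vars + 1, 1))])
    y = np.array([query_lm(question_template, x) for x in anchors],
                 dtype=float)

    coeffs = np.linalg.solve(A, y)   # solve the linear system analytically
    return coeffs[:-1], coeffs[-1]   # weights w, bias b

def apply_expression(weights, bias, numbers):
    # Apply the recovered expression to the original (complex) numbers,
    # bypassing the model's weak arithmetic on unfamiliar values.
    return float(np.dot(weights, numbers) + bias)
```

With the coefficients in hand, the expression is applied directly to the original, harder numbers, so the model itself never has to perform arithmetic on values it handles poorly.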

Citations (5)
