
Learning by Analogy: Enhancing Few-Shot Prompting for Math Word Problem Solving with Computational Graph-Based Retrieval (2411.16454v1)

Published 25 Nov 2024 in cs.CL

Abstract: LLMs are known to struggle with complicated reasoning tasks such as math word problems (MWPs). In this paper, we present how analogy from similarly structured questions can improve LLMs' problem-solving capabilities for MWPs. Specifically, we rely on the retrieval of problems with similar computational graphs to the given question to serve as exemplars in the prompt, providing the correct reasoning path for the generation model to refer to. Empirical results across six math word problem datasets demonstrate the effectiveness of our proposed method, which achieves a significant improvement of up to 6.7 percent on average in absolute value, compared to baseline methods. These results highlight our method's potential in addressing the reasoning challenges in current LLMs.

Enhancing Few-Shot Prompting for Math Word Problems with Computational Graph-Based Retrieval

The paper "Learning by Analogy: Enhancing Few-Shot Prompting for Math Word Problem Solving with Computational Graph-Based Retrieval" offers a novel approach to the reasoning challenges LLMs face in solving math word problems (MWPs). The authors propose a method inspired by a human problem-solving technique: reasoning by analogy. They recognize that existing LLMs struggle with MWPs because these problems demand tight integration of language comprehension and mathematical reasoning.

Methodological Innovations

The core contribution of this work is a computational graph-based retrieval system for enhancing few-shot prompting in LLMs. The method retrieves exemplars whose computational graphs are structurally similar to that of the target problem, providing the LLM with relevant reasoning paths to follow. This departs from the commonly used random or semantic similarity-based retrieval schemes, which often lack structural alignment with the mathematical content of MWPs. Instead, the authors train a retriever model with contrastive learning to identify structural analogies between problems. The retriever is integrated into the LLM's inference workflow without altering the model's parameters, making it a modular addition that requires no fine-tuning of the LLM itself.
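To make the retrieval idea concrete, the following is a minimal sketch, not the paper's actual retriever: it replaces the learned contrastive model with a simple structural-overlap heuristic. Computational graphs are assumed to be annotated as nested operator tuples (an encoding chosen here for illustration), and all function names and the example pool are hypothetical.

```python
from difflib import SequenceMatcher

def linearize(graph):
    """Linearize a computational graph (nested tuples like ('-', ('*', 'n0', 'n1'), 'n2'))
    into a prefix token sequence, collapsing operands so only structure remains."""
    if isinstance(graph, tuple):
        op, *args = graph
        return [op] + [tok for a in args for tok in linearize(a)]
    return ["num"]  # leaf: any concrete operand becomes a placeholder

def graph_similarity(g1, g2):
    """Structural similarity of two graphs via overlap of their linearizations."""
    return SequenceMatcher(None, linearize(g1), linearize(g2)).ratio()

def retrieve_exemplars(query_graph, pool, k=2):
    """Return the k pool problems whose computational graphs best match the query's,
    to be placed as few-shot exemplars in the prompt."""
    ranked = sorted(pool, key=lambda p: graph_similarity(query_graph, p["graph"]),
                    reverse=True)
    return ranked[:k]

# Hypothetical annotated pool: each entry pairs a question with its graph.
pool = [
    {"question": "Tom buys 3 bags of 5 apples and eats 2. How many remain?",
     "graph": ("-", ("*", "n0", "n1"), "n2")},
    {"question": "A shop sells 4 pens for 6 dollars. What is the unit price?",
     "graph": ("/", "n0", "n1")},
    {"question": "Sue buys 2 boxes of 6 eggs and breaks 1. How many are left?",
     "graph": ("-", ("*", "n0", "n1"), "n2")},
]
query = ("-", ("*", "n0", "n1"), "n2")  # an "a * b - c" style problem
top = retrieve_exemplars(query, pool, k=2)
```

Under this heuristic, the two multiplication-then-subtraction problems outrank the division problem, mirroring how the trained retriever favors structural over surface similarity.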

Empirical Results and Insights

Empirical evaluations were conducted across six MWP datasets, demonstrating compelling performance improvements. The authors report average exact-match gains of up to 6.7% over semantic-based retrieval and 19.5% over random selection. These findings underline the method's ability to align the retrieval process with the mathematical reasoning the problem requires, marking a significant step towards refining LLMs' problem-solving abilities on MWPs. Notably, the improvements are larger for smaller LLMs than for their larger counterparts, suggesting that larger models possess inherent reasoning capacity that can partially compensate for structural mismatches in retrieved exemplars.

Implications and Future Prospects

The implications of this paper are twofold. Practically, the enhanced problem-solving capabilities translate into potential applications in educational technologies and automated tutoring systems. Theoretically, the approach challenges existing methodologies by emphasizing structural analogies over surface-level semantic similarities, inviting further exploration into structurally informed learning paradigms for other complex reasoning tasks beyond MWPs.

The authors also acknowledge the importance of computational graph annotations in training their retrieval model, which could be a bottleneck for widespread application. They address this by demonstrating a method to generate training data without human intervention, utilizing LLMs to synthetically create structurally equivalent problem pairs. This opens pathways for further research into automated data generation techniques for training retrieval models, which would be essential in scaling such methodologies across diverse domains.
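The pairing step that such synthetic data feeds into can be sketched as follows. This is an illustrative assumption about the training setup, not the paper's pipeline: it skips the LLM rewriting stage entirely and simply groups already-annotated problems by operator skeleton, so that problems sharing a skeleton form positive pairs for contrastive retriever training. All names and data are hypothetical.

```python
from itertools import combinations

def structure_key(graph):
    """Abstract a computational graph (nested operator tuples) to its operator
    skeleton, so structurally equivalent problems share the same key."""
    if isinstance(graph, tuple):
        op, *args = graph
        return (op,) + tuple(structure_key(a) for a in args)
    return "num"

def make_contrastive_pairs(problems):
    """Bucket problems by operator skeleton; every within-bucket pair is a
    positive pair for contrastive training (cross-bucket pairs serve as negatives)."""
    buckets = {}
    for p in problems:
        buckets.setdefault(structure_key(p["graph"]), []).append(p["question"])
    positives = []
    for questions in buckets.values():
        positives.extend(combinations(questions, 2))
    return positives

# Hypothetical annotated problems: q1 and q2 share a skeleton, q3 does not.
problems = [
    {"question": "q1", "graph": ("-", ("*", "n0", "n1"), "n2")},
    {"question": "q2", "graph": ("-", ("*", "n0", "n1"), "n2")},
    {"question": "q3", "graph": ("/", "n0", "n1")},
]
pairs = make_contrastive_pairs(problems)
```

In the paper's fully automated variant, the structurally equivalent partner of each problem would instead be generated by an LLM, removing the need for human graph annotation.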

Conclusion

This paper makes a significant contribution to the field of artificial intelligence, particularly to improving the MWP-solving capabilities of LLMs. By aligning retrieval with structural similarity in mathematical reasoning, the work presents a promising direction for advancing both the practical utility and the theoretical understanding of LLMs on complex reasoning tasks. Future research can expand on this foundation, potentially exploring its applicability to a broader set of reasoning-intensive problems and improving automated methods for constructing computational graphs.

Authors (4)
  1. Xiaocong Yang (6 papers)
  2. Jiacheng Lin (22 papers)
  3. Ziqi Wang (92 papers)
  4. ChengXiang Zhai (64 papers)