This paper introduces Hint of Thought (HoT) prompting, a novel method designed to enhance the reasoning capabilities of LLMs in zero-shot settings.
- The HoT prompting approach decomposes a complex problem into explainable sub-questions and prompts the LLM to answer them with pseudocode, making the logical reasoning process more interpretable.
- Experimental results on datasets such as GSM8K, AQUA, SVAMP, ADDSUB, and StrategyQA show that HoT prompting significantly outperforms zero-shot CoT, improving accuracy from 40.50% to 70.65% on GSM8K and from 52.3% to 82.96% on StrategyQA.
- Ablation studies indicate that both the sub-question decomposition and pseudocode generation components of HoT contribute to its performance, with sub-questions enhancing interpretability and pseudocode providing a more precise logical reasoning process.
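The two components above (sub-question decomposition and pseudocode generation) can be combined into a single zero-shot prompt. The sketch below is a hypothetical illustration of such a prompt builder; the exact prompt wording, the function name `build_hot_prompt`, and the hint phrasing are assumptions for illustration, not taken verbatim from the paper.

```python
def build_hot_prompt(question: str) -> str:
    """Assemble a zero-shot HoT-style prompt (hypothetical wording):
    the model is hinted to (1) decompose the problem into explainable
    sub-questions and (2) answer them with step-by-step pseudocode
    before stating the final answer."""
    return (
        f"Question: {question}\n"
        "Hint: First, break the problem into explainable sub-questions.\n"
        "Then, write pseudocode that solves each sub-question step by step.\n"
        "Finally, follow the pseudocode and give the final answer.\n"
        "Answer:"
    )

# Example: a GSM8K-style arithmetic word problem.
prompt = build_hot_prompt(
    "Tom has 3 apples and buys 2 more. How many apples does he have?"
)
print(prompt)
```

The prompt string would then be sent to an LLM in a single zero-shot call, with no in-context exemplars, matching the zero-shot setting the paper evaluates.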