Hint of Thought prompting: an explainable and zero-shot approach to reasoning tasks with LLMs (2305.11461v7)

Published 19 May 2023 in cs.AI

Abstract: Prompting has become an increasingly important research topic for better utilization of LLMs. Although simple prompting performs well on single-step questions, it cannot reliably activate the correct knowledge path for multi-step reasoning tasks. Chain-of-thought (CoT) prompting, which includes zero-shot CoT and few-shot CoT variants, is a recently developed method that exposes the reasoning process to the LLM and outperforms simple prompting on three challenging classes of reasoning tasks: arithmetic, symbolic, and commonsense reasoning. Inspired by zero-shot CoT, and further extending its zero-shot ability, this paper proposes a novel hint of thought (HoT) prompting with explainability and zero-shot generalization. It decomposes reasoning into three steps: explainable sub-questions, logical reasoning, and answering. These three steps are sequentially ordered as step-by-step hints, which can be easily adjusted and explained for different tasks. Experimental results demonstrate that HoT prompting has a significant advantage over existing zero-shot CoT on zero-shot reasoning tasks. We ran zero-shot experiments on math tasks such as GSM8K, ADDSUB, AQUA, and SVAMP, and on commonsense tasks such as StrategyQA. In particular, HoT prompting improves accuracy on GSM8K from 40.50% to 70.65%, on AQUA from 31.9% to 46.4%, on SVAMP from 63.7% to 76.9%, and on ADDSUB from 74.7% to 87.34%, even outperforming the competitive PoT approach on GSM8K, AQUA, and SVAMP.

This paper introduces Hint of Thought (HoT) prompting, a novel method designed to enhance the reasoning capabilities of LLMs in zero-shot settings.

  • The HoT prompting approach decomposes complex problems into explainable sub-questions and encourages LLMs to generate pseudocode for logical reasoning, thereby improving the interpretability of the reasoning process (a minimal prompt sketch follows this list).
  • Experimental results on datasets such as GSM8K, AQUA, SVAMP, ADDSUB, and StrategyQA reveal that HoT prompting significantly outperforms zero-shot CoT, achieving accuracy improvements from 40.50% to 70.65% on GSM8K and from 52.3% to 82.96% on StrategyQA.
  • Ablation studies indicate that both the sub-question decomposition and pseudocode generation components of HoT contribute to its performance, with sub-questions enhancing interpretability and pseudocode providing a more precise logical reasoning process.
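As a rough illustration of how the three HoT steps (sub-questions, pseudocode reasoning, answering) might be assembled into a single zero-shot prompt, the sketch below builds a hint-laden prompt and parses the final answer. The hint wording, the `build_hot_prompt` and `extract_answer` helpers, and the mocked completion are assumptions for illustration, not the paper's verbatim template.

```python
# Minimal sketch of a Hint-of-Thought (HoT) style prompt.
# Assumes some external LLM completion endpoint; the template text here
# is illustrative, not the exact prompt published in the paper.

HOT_TEMPLATE = """Q: {question}

Hint 1 (explainable sub-questions): break the problem into the smallest
sub-questions needed to answer it.
Hint 2 (logical reasoning): solve the sub-questions with pseudocode,
assigning each intermediate result to a named variable.
Hint 3 (answering): state the final result on a line "Answer: <value>".
"""


def build_hot_prompt(question: str) -> str:
    """Assemble the three sequential HoT hints around a question."""
    return HOT_TEMPLATE.format(question=question)


def extract_answer(completion: str) -> str:
    """Pull the value after the 'Answer:' marker requested by Hint 3."""
    for line in reversed(completion.splitlines()):
        if line.strip().lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    return ""


if __name__ == "__main__":
    prompt = build_hot_prompt(
        "A farm has 15 cows and buys 8 more, then sells 5. How many remain?"
    )
    print(prompt)  # send this string to any LLM completion endpoint

    # Parsing a mocked model completion (hypothetical output shape):
    mocked = "start = 15\nafter_buy = start + 8\nafter_sell = after_buy - 5\nAnswer: 18"
    print(extract_answer(mocked))  # -> 18
```

Note the design echoed by Hint 2: unlike PoT, which executes generated code externally, the HoT pseudocode is a reasoning scaffold that the LLM itself follows to the final answer.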
Authors (2)
  1. Ioktong Lei (1 paper)
  2. Zhidong Deng (22 papers)
Citations (3)