Self-Convinced Prompting: Few-Shot Question Answering with Repeated Introspection (2310.05035v2)

Published 8 Oct 2023 in cs.CL and cs.AI

Abstract: While LLMs such as ChatGPT and PaLM have demonstrated remarkable performance in various language understanding and generation tasks, their capabilities in complex reasoning and intricate knowledge utilization still fall short of human-level proficiency. Recent studies have established the effectiveness of prompts in steering LLMs towards generating desired outputs. Building on these insights, we introduce a novel framework that harnesses the potential of large-scale pre-trained LLMs to iteratively enhance their performance. Our framework incorporates three components: a Normal CoT, a Convincer, and an Answerer. It processes the output of a typical few-shot chain-of-thought prompt, assesses the correctness of the response, scrutinizes the answer, refines the reasoning, and ultimately produces a new solution. Experimental results on seven datasets of miscellaneous problems validate the efficacy of the Self-Convince framework, achieving substantial improvements compared to the baselines. This study contributes to the burgeoning body of research focused on integrating pre-trained LLMs with tailored prompts and iterative refinement processes to augment their performance in complex tasks.
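
The abstract describes an iterative loop: a few-shot CoT answer is critiqued and then re-answered until the model is "convinced." Below is a minimal sketch of that loop under stated assumptions; `call_llm` is a hypothetical stand-in for any chat/completion API, and the prompt wording and stopping rule are illustrative rather than taken from the paper.

```python
# Illustrative sketch of a Self-Convinced-style prompting loop.
# Assumptions: `call_llm` is a placeholder for your LLM provider's API;
# prompts and the convergence check are simplified examples.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError


def self_convinced_answer(question: str, few_shot_cot: str, max_rounds: int = 3) -> str:
    # 1. Normal CoT: standard few-shot chain-of-thought prompt.
    response = call_llm(
        f"{few_shot_cot}\n\nQ: {question}\nA: Let's think step by step."
    )

    for _ in range(max_rounds):
        # 2. Convincer: ask the model to scrutinize its own reasoning and answer.
        critique = call_llm(
            "Review the following solution and point out any mistakes in the "
            f"reasoning or the final answer.\n\nQuestion: {question}\n"
            f"Solution: {response}\nCritique:"
        )

        # Illustrative stopping rule: stop once the critique reports no issues.
        if "no mistake" in critique.lower():
            break

        # 3. Answerer: refine the reasoning using the critique and answer again.
        response = call_llm(
            f"Question: {question}\nPrevious solution: {response}\n"
            f"Critique: {critique}\n"
            "Rewrite the reasoning to address the critique and give a final answer:"
        )

    return response
```

The design choice to keep the Convincer and Answerer as separate calls mirrors the three-component structure named in the abstract; in practice the number of introspection rounds and the convergence check would be tuned per task.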

Authors (6)
  1. Haodi Zhang (6 papers)
  2. Min Cai (14 papers)
  3. Xinhe Zhang (4 papers)
  4. Chen Jason Zhang (25 papers)
  5. Rui Mao (54 papers)
  6. Kaishun Wu (23 papers)
Citations (7)