
Active Prompting with Chain-of-Thought for Large Language Models (2302.12246v5)

Published 23 Feb 2023 in cs.CL

Abstract: The increasing scale of LLMs brings emergent abilities to various complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is known that the effective design of task-specific prompts is critical for LLMs' ability to produce high-quality answers. In particular, an effective approach for complex question-and-answer tasks is example-based prompting with chain-of-thought (CoT) reasoning, which significantly improves the performance of LLMs. However, current CoT methods rely on a fixed set of human-annotated exemplars, which are not necessarily the most effective examples for different tasks. This paper proposes a new method, Active-Prompt, to adapt LLMs to different tasks with task-specific example prompts (annotated with human-designed CoT reasoning). For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful ones to annotate from a pool of task-specific queries. By borrowing ideas from the related problem of uncertainty-based active learning, we introduce several metrics to characterize the uncertainty so as to select the most uncertain questions for annotation. Experimental results demonstrate the superiority of our proposed method, achieving state-of-the-art on eight complex reasoning tasks. Further analyses of different uncertainty metrics, pool sizes, zero-shot learning, and accuracy-uncertainty relationship demonstrate the effectiveness of our method. Our code will be available at https://github.com/shizhediao/active-prompt.

Active Prompting with Chain-of-Thought for LLMs

LLMs have exhibited remarkable abilities across a range of complex reasoning tasks, including arithmetic, commonsense reasoning, and symbolic reasoning. A crucial factor in harnessing these capabilities is the design of task-specific prompts that effectively guide the LLMs to produce accurate outputs. This paper introduces "Active-Prompt," a novel method that leverages task-specific prompting augmented with human-designed chain-of-thought (CoT) reasoning, aiming to adapt LLMs more efficiently to various complex reasoning tasks.
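
For concreteness, a chain-of-thought exemplar pairs a question with a human-written reasoning chain and a final answer; a minimal few-shot prompt along these lines might look like the sketch below. The exemplar shown is the widely used illustrative example from the CoT literature, not necessarily one used in this paper.

```python
# A minimal few-shot CoT prompt: each exemplar contains a question, a
# human-written reasoning chain, and the final answer; the test question
# is appended at the end for the LLM to complete.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls.
5 + 6 = 11. The answer is 11.

Q: {question}
A:"""

# Usage: fill in the test question before querying the model.
prompt = cot_prompt.format(
    question="If there are 3 cars and each car has 4 wheels, how many wheels are there?"
)
```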

The traditional approach to chain-of-thought prompting relies on a fixed set of human-annotated exemplars. However, these exemplars are not necessarily optimal for every task, since they are curated once and not tailored to task-specific nuances. To address this limitation, the authors propose Active-Prompt, which dynamically selects the most informative questions to annotate from a pool of task-specific queries, thereby optimizing the examples used in the prompt.

The core of the Active-Prompt method lies in the selection mechanism for annotating questions, which borrows ideas from uncertainty-based active learning. The method introduces several uncertainty metrics to guide the selection process, including disagreement, entropy, variance, and self-confidence. These metrics assess the uncertainty in LLM predictions and identify the most uncertain questions for subsequent human annotation. Once annotated, these examples provide tailored exemplary reasoning chains for LLMs during inference.
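
As a rough illustration of this selection step, the sketch below (Python) shows how disagreement- and entropy-based uncertainty could be estimated from k sampled answers per question, returning the most uncertain questions for human CoT annotation. The helper `sample_answers` and the values k=10 and n=8 are illustrative assumptions, not the paper's released code; variance- and self-confidence-based selection follow the same pattern with a different scoring function.

```python
import math
from collections import Counter

def disagreement(answers):
    """Fraction of distinct answers among the k samples (u = h / k)."""
    return len(set(answers)) / len(answers)

def entropy(answers):
    """Entropy of the empirical distribution over the k sampled answers."""
    k = len(answers)
    return -sum((c / k) * math.log(c / k) for c in Counter(answers).values())

def select_uncertain(questions, sample_answers, metric=entropy, k=10, n=8):
    """Rank questions by uncertainty and return the top-n for human annotation.

    `sample_answers(question, k)` is a hypothetical helper that queries the LLM
    k times (e.g., with zero-shot CoT prompting and temperature sampling) and
    returns the k predicted final answers.
    """
    scored = [(metric(sample_answers(q, k)), q) for q in questions]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [q for _, q in scored[:n]]
```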

Experimental evaluations on eight complex reasoning tasks demonstrated the efficacy of Active-Prompt. The method achieved state-of-the-art results, significantly outperforming existing practices, including baseline chain-of-thought and self-consistency approaches. The assessments covered various reasoning challenges, with particular emphasis on arithmetic and commonsense reasoning.

For instance, in arithmetic reasoning tasks such as GSM8K and AQuA, Active-Prompt improved results by judiciously selecting uncertain, contextually relevant examples for annotation. In commonsense tasks, where answer variability and ambiguity are higher, entropy-based selection proved particularly advantageous.

The implications of this research are multifaceted:

  • Practically, the Active-Prompt method reduces reliance on extensive human-curated exemplars by employing a structured strategy to identify and annotate only the most impactful questions, optimizing both time and resources.
  • Theoretically, the paper enriches the understanding of integrating active learning principles with in-context learning paradigms in large models, paving the way for intelligent prompting mechanisms that dynamically adapt to the evolving capacities of LLMs.

Future research could further explore the interplay between uncertainty and diversity in exemplar selection, and investigate whether the active selection step can be combined with learned models or meta-learning frameworks, pushing the boundaries of LLM efficiency and performance in new domains.

In conclusion, Active-Prompt represents a significant advance in optimizing LLM reasoning through intelligent prompting, offering fresh insights and techniques that could be instrumental in the continued development of adaptive natural language understanding systems.

Authors (6)
  1. Shizhe Diao
  2. Pengcheng Wang
  3. Yong Lin
  4. Tong Zhang
  5. Rui Pan
  6. Xiang Liu
Citations (100)