
Dynamic Least-to-Most Prompting

Updated 7 November 2025
  • Dynamic least-to-most prompting is a compositional approach that decomposes complex tasks into ordered, interdependent subproblems.
  • It employs adaptive exemplar selection and stage-wise prompt construction to boost data efficiency and improve generalization across tasks.
  • Empirical benchmarks such as SCAN, CFQ, and COGS demonstrate its superior performance compared to standard chain-of-thought methods.

Dynamic least-to-most prompting is a compositional prompting paradigm for LLMs in which a complex task is systematically decomposed into a structured sequence of simpler subproblems. The LLM processes these subproblems incrementally—starting from the least complex and proceeding to the most complex—often using the outputs of preceding subproblems as context for solving subsequent ones. Dynamic variants of least-to-most prompting further incorporate instance-adaptive decomposition and tailored in-context exemplar selection, improving data efficiency and compositional generalization, particularly in semantic parsing and reasoning tasks.

1. Core Principles of Least-to-Most Prompting

Least-to-most prompting is distinct from standard chain-of-thought (CoT) prompting in that it requires explicit problem decomposition followed by sequential subproblem solving, rather than flat, single-pass rationale generation. In the canonical framework:

  1. Decomposition Stage: The complex input task is broken down (usually via prompt-based exemplars or model-in-the-loop parsing) into an ordered, dependency-respecting sequence of subproblems, typically reflecting compositional or hierarchical structure.
  2. Sequential Solution Stage: The LLM solves each subproblem in order, with each solution optionally appended to the prompt context for subsequent subproblems. Dependence across steps is crucial: earlier answers are often explicitly referenced when later subproblems are resolved, as in the sketch after this list.
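
A minimal sketch of this two-stage flow on the slide word problem used in Zhou et al. (2022); the prompt wording and the llm() helper are illustrative assumptions, not the exact templates from the cited papers:

def llm(prompt: str) -> str:
    # Stand-in for any text-completion API; replace with a real model call.
    raise NotImplementedError

# Stage 1: decompose the problem into ordered subproblems. In practice the
# subproblems below would be returned by the LLM from a decomposition prompt.
question = ("It takes Amy 4 minutes to climb to the top of a slide and 1 minute "
            "to slide down. The slide closes in 15 minutes. How many times can "
            "she slide before it closes?")
subproblems = [
    "How long does one trip up and down the slide take?",
    "How many times can Amy slide before the slide closes?",
]

# Stage 2: solve subproblems in order, appending each answer to the context so
# later subproblems can reference earlier results.
history = f"Q: {question}\n"
for sp in subproblems:
    answer = llm(history + f"Subquestion: {sp}\nAnswer:")
    history += f"Subquestion: {sp}\nAnswer: {answer}\n"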

Unlike static decomposition, dynamic least-to-most prompting adapts the decomposition and in-context example selection to the structure of each input, leveraging model-driven or algorithmic parsing strategies to tailor problem breakdown per instance (Drozdov et al., 2022).

This approach directly addresses the weaknesses of CoT prompting in tasks demanding generalization to input complexities or compositions not observed in prompt exemplars (‘easy-to-hard generalization’) (Zhou et al., 2022).

2. Methodological Advances: Dynamic Problem Decomposition

Dynamic least-to-most prompting incorporates adaptive, often model-in-the-loop, mechanisms for decomposition and support selection. Key design elements include:

  • Tree-Structured Decomposition: For linguistic tasks such as semantic parsing, inputs are parsed into syntactic/semantic trees (e.g., via LM-prompted clause, phrase, and relation identification). Nodes of the tree correspond to subproblems, which are then linearized into a “least-to-most” sequence for model prediction (Drozdov et al., 2022).
  • Dynamic Exemplar Selection: Given a pre-collected candidate pool, in-context exemplars are selected per instance according to their structural coverage of decomposition subtrees or local subproblem types, maximizing context relevance within the prompt window (Drozdov et al., 2022); a selection sketch follows the pseudocode below.
  • Stage-wise Prompt Construction: At each solution step, both static exemplars (for grounding) and dynamically matched in-context examples are incorporated, along with a running history of already-solved subproblems and their solutions.

A typical dynamic least-to-most loop for semantic parsing, in Python-style pseudocode (helper names are illustrative), is:

for input_x in dataset:
    tree = lm_decompose(input_x)           # model-in-the-loop parse of the input
    exemplars = dynamic_select(tree)       # exemplars matched to the tree's structure
    subproblems = linearize(tree)          # order subproblems least-to-most
    context = static_examples + exemplars  # grounding plus instance-matched support
    solved = []                            # running history of solved subproblems
    for sp in subproblems:
        prompt = context + solved + [sp]
        sol = LM_generate(prompt)
        solved.append((sp, sol))           # later steps condition on earlier answers
    final_solution = sol                   # answer to the full (most complex) problem
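
The dynamic_select step above can be realized as coverage-based retrieval over the candidate pool. A minimal self-contained sketch, assuming each exemplar carries a precomputed set of decomposition-subtree signatures (the Exemplar class and its fields are illustrative assumptions, not the cited papers' data structures):

from dataclasses import dataclass

@dataclass
class Exemplar:
    text: str            # the exemplar's input/output demonstration
    subtrees: frozenset  # structural signatures of its decomposition

def dynamic_select(input_subtrees: frozenset, pool: list, k: int = 4) -> list:
    # Rank candidates by how many of the input's decomposition subtrees they
    # cover, then keep the k most structurally relevant ones for the prompt.
    ranked = sorted(pool, key=lambda ex: len(ex.subtrees & input_subtrees),
                    reverse=True)
    return [ex for ex in ranked[:k] if ex.subtrees & input_subtrees]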

In symbolic, compositional, or mathematical reasoning tasks, the decomposition can be shallower or domain-specific (e.g., recursion over lists, or compositional commands in SCAN: "walk twice and jump" decomposes into "walk", "walk twice", and the full command, whose translations compose to WALK WALK JUMP).

3. Empirical Findings and Benchmark Performance

Dynamic least-to-most prompting has been empirically validated on challenging compositional semantic parsing and reasoning benchmarks:

  • SCAN Benchmark: On the length split, least-to-most prompting achieves ≥99% accuracy using only 14 exemplars, compared to ~16% for CoT and standard prompting, matching the performance of neural-symbolic systems trained on >15,000 samples (Zhou et al., 2022).
  • CFQ (Compositional Freebase Questions): Dynamic least-to-most prompting achieves 95.0% average accuracy (across MCD splits), reducing error by 45% over the previous best grammar-induction-driven model, while using only 1% of the training data as an exemplar pool (Drozdov et al., 2022).
  • COGS: With instance-adaptive decomposition and exemplars, achieves 99.2% accuracy using only 0.4% of exemplars (Drozdov et al., 2022).
  • Mathematical Reasoning (GSM8K, DROP): Outperforms CoT for problems requiring ≥5 reasoning steps, highlighting robustness for “hard” cases (Zhou et al., 2022).

Component-wise accuracy evaluation in downstream Text-to-SQL translation demonstrates superior performance in SQL clause and operation correctness relative to iterative “least-to-most” and CoT variants (Tai et al., 2023).

4. Implementation Workflow and Practical Considerations

Dynamic least-to-most prompting for real-world semantic parsing involves:

  • Developing exemplar pools covering relevant subproblem and solution patterns, possibly pre-filtered for coverage and diversity (Arora et al., 2023); a greedy pool-construction sketch follows this list.
  • Constructing dynamic decomposition procedures, either through model-in-the-loop prompting (for tree parsing, subproblem labeling) or deterministic syntactic heuristics.
  • Designing selection heuristics or model-driven matchers to dynamically retrieve in-context exemplars congruent with the decomposed problem’s tree structure.
  • Supporting error-tolerant prompt linearization: conditioning each solution not just on input, but also on cumulative prior subproblem outputs.
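
A greedy, coverage-driven pool-construction sketch (extract_atoms, which maps an exemplar to the set of primitives or subtrees it demonstrates, is an assumed hook; the cited papers define their own coverage criteria):

def build_pool(candidates, extract_atoms, budget):
    # Greedily add the exemplar that covers the most not-yet-covered atoms,
    # stopping once the budget is reached or no candidate adds new coverage.
    pool, covered = [], set()
    remaining = list(candidates)
    while remaining and len(pool) < budget:
        best = max(remaining, key=lambda ex: len(extract_atoms(ex) - covered))
        gain = extract_atoms(best) - covered
        if not gain:
            break
        pool.append(best)
        covered |= gain
        remaining.remove(best)
    return pool

Because the pool is built once per domain, the same construction supports the offline setting discussed in Section 6.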

For Text-to-SQL, additional adaptations such as domain adaptation (constructing schema-aligned generic prompts for new domains) and staging (NL decomposition → intermediate representations → SQL generation) further augment cross-domain and cross-compositional robustness (Arora et al., 2023).
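
A sketch of such a staged pipeline with hypothetical prompt templates (llm is any completion-API callable; the actual templates and intermediate representation are those of Arora et al., 2023, not reproduced here):

def text_to_sql(question: str, schema: str, llm) -> str:
    # Stage 1: decompose the natural-language question into sub-questions.
    subqs = llm(f"Schema:\n{schema}\nDecompose into sub-questions:\nQ: {question}")
    # Stage 2: map each sub-question onto an intermediate, clause-level form.
    clauses = llm(f"Schema:\n{schema}\nSub-questions:\n{subqs}\n"
                  "Write one clause per sub-question:")
    # Stage 3: assemble the final SQL query from the intermediate clauses.
    return llm(f"Schema:\n{schema}\nClauses:\n{clauses}\nFinal SQL:")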

Resource requirements remain low compared to full supervised retraining or large exemplar sets: dynamic least-to-most prompting uses only tens to hundreds of exemplars, even on data-rich tasks, while achieving substantial performance gains.

5. Strengths, Limitations, and Error Modes

Strengths

  • Enables generalization to novel compositions and longer input sequences than those seen in prompt exemplars (“easy-to-hard” generalization).
  • Provides fine-grained control over in-context support, improving data efficiency.
  • Outperforms both static and chain-of-thought prompting, and in certain benchmarks surpasses data-augmented supervised systems (Drozdov et al., 2022).
  • Remains model-agnostic: requires only prompt engineering, not model architecture changes or fine-tuning.

Limitations

  • Quality of decomposition is task dependent; failure in adaptive parsing or poor decomposition reduces effectiveness.
  • Decomposition/exemplar selection modules, while prompt-based, can become brittle for highly noisy or less-structured domains.
  • Error propagation remains a concern in strictly iterative settings, where early subproblem errors bias subsequent solutions; "one-pass" decomposition, in which the full reasoning chain is constructed before final solution generation, helps mitigate this (Tai et al., 2023). See the sketch after this list.
  • Performance saturates with limited exemplar diversity or insufficient primitive coverage in the pool.
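
A minimal sketch of the contrast, reusing the llm() stand-in and question variable from the sketch in Section 1 (the prompt wording is an illustrative assumption):

# Iterative LtM solves each subquestion with a separate model call, so an early
# wrong answer contaminates every later prompt. A one-pass variant instead asks
# for the decomposition and the final answer in a single generation:
output = llm(f"{question}\n"
             "First list the subquestions in order, then give the final answer:")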

6. Adaptations and Extensions: Integration with Other Prompting Strategies

Attempts to layer explicit hinting (e.g., Hint-before-Solving Prompting, HSP) onto each subproblem in least-to-most prompting yield limited or inconsistent benefit, and may even prove detrimental due to redundancy with the decomposition’s intrinsic planning (Fu et al., 2024). Naive dynamic hint insertion is therefore not generally beneficial; however, adaptive injection of hints in cases of model uncertainty or subproblem difficulty remains a plausible avenue for future research.

Offline dynamic least-to-most prompting—where coverage-optimal, domain-adapted prompt pools are pre-computed for an entire target database or domain—enables fast adaptation and inference-time efficiency, outperforming both dynamic query-time retrieval and supervised baselines in cross-domain settings such as KaggleDBQA (Arora et al., 2023).

7. Summary Table: Dynamic Least-to-Most Prompting vs. Alternatives

Property              | Dynamic LtM Prompting    | Static/CoT Prompting      | Iterative LtM Prompting
Problem decomposition | Adaptive, input-specific | Flat or fixed             | Often fixed
Exemplar selection    | Dynamic, per-instance    | Static                    | Static
Generalization        | Strong (compositional)   | Weak (length/complexity)  | Moderate
Data efficiency       | High                     | Low                       | Moderate
Error propagation     | Mitigated via context    | N/A                       | Can be severe
Inference cost        | Moderate                 | Low                       | High (if fully iterative)

Dynamic least-to-most prompting constitutes a robust, data-efficient approach for compositional generalization in LLMs, especially for tasks exhibiting strong hierarchical or recursive structure. Its applicability to realistic semantic parsing domains, and its ability to match or exceed prior state-of-the-art results with an order of magnitude less data, underscore its utility in modern LLM-based systems (Drozdov et al., 2022; Zhou et al., 2022; Arora et al., 2023).

