Decomposed Prompting: A Modular Approach for Solving Complex Tasks (2210.02406v2)

Published 5 Oct 2022 in cs.CL

Abstract: Few-shot prompting is a surprisingly powerful way to use LLMs to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn, especially when embedded in more complex tasks. To address this, we propose Decomposed Prompting, a new approach to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks that can be delegated to a library of prompting-based LLMs dedicated to these sub-tasks. This modular structure allows each prompt to be optimized for its specific sub-task, further decomposed if necessary, and even easily replaced with more effective prompts, trained models, or symbolic functions if desired. We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting using GPT3. On symbolic reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into even simpler solvable sub-tasks. When the complexity comes from the input length, we can recursively decompose the task into the same task but with smaller inputs. We also evaluate our approach on textual multi-step reasoning tasks: on long-context multi-hop QA task, we can more effectively teach the sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA, we can incorporate a symbolic information retrieval within our decomposition framework, leading to improved performance on both tasks. Datasets, Code and Prompts available at https://github.com/allenai/DecomP.

Overview

The paper "Decomposed Prompting: A Modular Approach for Solving Complex Tasks" introduces a sophisticated method to address the limitations of few-shot prompting in complex tasks, particularly in leveraging LLMs like GPT-3. The approach, known as Decomposed Prompting (DecomP), proposes an innovative modular framework, treating complex problems as a series of simpler sub-tasks, with each delegated to specialized sub-task handlers.

Few-shot prompting, despite its proficiency on a variety of tasks, struggles with intricate or layered reasoning. Existing methods like Chain-of-Thought (CoT) prompting mitigate this by eliciting step-by-step reasoning, yet they fall short when a task is more complex than a handful of demonstrations can convey, or when the sub-problems themselves are non-trivial for the model to learn.

Core Methodology

DecomP fundamentally restructures task solving by decomposing a primary task into simpler, manageable sub-tasks. Each sub-task is handled by a dedicated prompt-based handler, which can be optimized independently and further decomposed if necessary. This architecture allows any sub-task handler to be replaced with an alternative prompt, a trained model, or a symbolic function, embracing a modular approach reminiscent of software engineering principles.

Using a 'decomposer prompt,' the framework explicitly defines the sequence of sub-task calls needed to resolve the overarching task. These sub-tasks can range from standard prompts to further decomposed structures or even symbolic functions, introducing both flexibility and precision. The authors argue that this design allows each component to be optimized independently, enhancing the overall problem-solving capability.
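
To make the control flow concrete, the following is a minimal sketch of a DecomP-style controller for the paper's running example of extracting and concatenating letters from words. The `llm` stub, the handler names, and the `[EOQ]`-terminated plan format are illustrative assumptions, not the authors' exact implementation; their actual prompts and code are at https://github.com/allenai/DecomP.

```python
# Minimal sketch of a DecomP-style controller (hypothetical names).
# A decomposer decides the next sub-task call; a library of handlers
# executes it; answers are fed back until the decomposer signals
# completion with "[EOQ]".

from typing import Callable, Dict, List


def llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API (assumption)."""
    raise NotImplementedError


# Sub-task library: each handler solves exactly one sub-task. Any entry
# can be swapped for a better prompt, a trained model, or plain symbolic
# code without touching the controller below.
def split_words(text: str) -> List[str]:
    return text.split()                    # symbolic handler


def first_letters(words: List[str]) -> List[str]:
    return [w[0] for w in words]           # symbolic handler


def concat(letters: List[str]) -> str:
    return "".join(letters)                # symbolic handler


HANDLERS: Dict[str, Callable] = {
    "split": split_words,
    "first_letters": first_letters,
    "concat": concat,
}


def solve(question: str, text: str) -> str:
    """Repeatedly ask the decomposer for the next sub-task, dispatch it
    to its handler, and append the answer to the running history."""
    history = f"Q: {question}\nInput: {text}"
    value = text
    while True:
        # In DecomP the next sub-task name comes from a few-shot
        # decomposer prompt; the LLM stub stands in for it here.
        step = llm(history).strip()
        if step == "[EOQ]":                # decomposer says we're done
            return value
        value = HANDLERS[step](value)
        history += f"\n{step} -> {value}"
```

Because the controller only dispatches by name, replacing `first_letters` with a few-shot prompt, or with an entire nested decomposition, is a purely local change; this locality is the modularity the paper emphasizes.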

Empirical Evaluation

The research empirically shows DecomP outperforming prior methodologies across eight challenging datasets, using GPT-3 as the underlying model. The paper demonstrates its efficacy along three principal dimensions:

  1. Hierarchical Decomposition: Tasks whose individual reasoning steps are themselves hard (e.g., identifying specific letters in a string) benefit from being broken down hierarchically into more granular sub-tasks.
  2. Recursive Decomposition: For tasks like list reversal, DecomP recursively divides the input into smaller chunks, keeping solutions reliable regardless of sequence length (see the sketch after this list).
  3. Integration of External APIs: Demonstrated through open-domain question answering, DecomP seamlessly incorporates external knowledge retrieval systems such as Elasticsearch, improving retrieval and comprehension beyond what standard few-shot models achieve.
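
As a toy illustration of the recursive case in (2), the sketch below reduces "reverse a long list" to "reverse short lists." In the paper the base case is handled by a few-shot prompt that is reliable on short inputs; here `reverse_short` is a hypothetical stand-in written in plain Python.

```python
# Toy illustration of recursive decomposition: reverse a long list by
# splitting it, recursively reversing each half with the *same* task,
# and merging the halves in swapped order.

from typing import List


def reverse_short(items: List[str]) -> List[str]:
    # Hypothetical stand-in for the LLM handler that is reliable
    # only on short inputs.
    return items[::-1]


def reverse(items: List[str], base: int = 4) -> List[str]:
    """Recursively decompose until the input is short enough to delegate."""
    if len(items) <= base:
        return reverse_short(items)
    mid = len(items) // 2
    # Reverse each half via the same task, then swap their order.
    return reverse(items[mid:], base) + reverse(items[:mid], base)


assert reverse(list("abcdefgh")) == list("hgfedcba")
```

Because the recursion depth grows with input length while each delegated call stays small, accuracy need not degrade on longer sequences the way a single flat prompt's does.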

Implications and Future Work

The modularity introduced by DecomP not only enables task-specific optimization but also aids error diagnosis and correction and supports scaling to diverse and complex applications. Practically, this signifies a step towards more dynamic and adaptable AI systems capable of executing higher-order reasoning with precision.

Theoretically, DecomP sets a precedent for integrating symbolic methodologies within neural frameworks, highlighting the potential for hybrid models that bridge logical comprehensibility with the predictive prowess of modern LLMs.

Future trajectories could explore the application of DecomP in more abstract tasks, the potential for autonomous decomposition learning, and integration with more diverse forms of external modules. Understanding the limitations, such as computational costs or the impact of model updates on pre-established decompositions, could further solidify its place within computational linguistics and artificial intelligence research. DecomP embodies a necessary evolution in prompting strategies, offering a blend of flexibility and depth that aligns with the increasing demands for AI-driven problem-solving in complex domains.

Authors (7)
  1. Tushar Khot (53 papers)
  2. Harsh Trivedi (29 papers)
  3. Matthew Finlayson (11 papers)
  4. Yao Fu (83 papers)
  5. Kyle Richardson (44 papers)
  6. Peter Clark (108 papers)
  7. Ashish Sabharwal (84 papers)
Citations (324)