Prompt Pattern Catalog
- Prompt Pattern Catalogs are formal repositories that document reusable prompt engineering strategies, enabling systematic reuse and empirical evaluation across diverse LLM tasks.
- They use structured documentation—covering intent, context, examples, parameters, and trade-offs—to standardize practices and facilitate adaptation across domains.
- These catalogs support the composition of complex prompting pipelines and enhance security, consistency, and task success through empirical optimization and formal guarantees.
A prompt pattern catalog is a systematically documented collection of reusable strategies, referred to as “patterns,” for structuring, sequencing, and adapting prompts to LLMs to solve recurring challenges in output control, interaction design, safety, utility, and robustness. Drawing on principles akin to software design patterns, prompt pattern catalogs codify best practices, enable domain-specific knowledge transfer, and facilitate both adaptation and formal reasoning about prompt engineering across tasks, domains, and agentic LLM architectures.
1. Definition and Purpose
Prompt pattern catalogs are formal registries of prompt engineering techniques, with each pattern representing a reusable solution to a common problem encountered when interacting with LLMs. These catalogs standardize how patterns are documented, using elements such as intent, context, problem statement, structural template, example instantiations, tunable parameters, and consequences or trade-offs. The patterns abstract away model-specific idiosyncrasies, enabling transfer and combination across different tasks, agent frameworks, and domains. Beyond usability, prompt pattern catalogs serve as a foundation for systematic adaptation, empirical evaluation, and, in recent research, formal guarantees for security or correctness (White et al., 2023; Beurer-Kellner et al., 10 Jun 2025).
2. Documentation Format and Formal Structure
The documentation of prompt patterns relies on a highly structured scheme. A typical entry includes the following fields (a programmatic sketch follows the list):
- Pattern Name
- Classification (e.g., Input Semantics, Output Customization, Error Identification, Prompt Improvement, Interaction, Context Control)
- Intent
- Context
- Problem Statement
- Motivation
- Fundamental Structure/Key Ideas (often formalized in LaTeX templates)
- Solution Template (fill-in-the-blank or boilerplate prompt)
- Concrete Example
- Tunable Parameters
- Consequences/Trade-offs
- Related Patterns
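As an illustration, an entry following this scheme can be captured as a structured record. The following is a minimal Python sketch; the `PromptPattern` class and its field names are assumptions for illustration, not a published catalog format:

```python
from dataclasses import dataclass, field

@dataclass
class PromptPattern:
    """Illustrative record mirroring the documentation scheme above."""
    name: str
    classification: str      # e.g. "Output Customization"
    intent: str
    context: str
    problem: str
    solution_template: str   # fill-in-the-blank boilerplate prompt
    example: str
    parameters: dict = field(default_factory=dict)    # tunable parameters
    consequences: list = field(default_factory=list)  # trade-offs
    related: list = field(default_factory=list)       # related patterns

persona = PromptPattern(
    name="Persona",
    classification="Output Customization",
    intent="Have the LLM answer from a stated expert perspective.",
    context="Any task that benefits from role-conditioned output.",
    problem="Generic answers lack domain framing.",
    solution_template="From now on, act as {persona}. {task}",
    example="From now on, act as a security reviewer. Audit this code.",
    parameters={"persona": "security reviewer", "task": "..."},
    consequences=["May project expertise beyond the model's actual ability"],
)
```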
Mathematical formalization is used for composite prompts, e.g. $P = p_1 \oplus p_2 \oplus \cdots \oplus p_n$, where $\oplus$ denotes textual sequencing of basic patterns. Catalog entries may also incorporate invariants, schemas, or control-flow properties to articulate security or correctness guarantees (White et al., 2023; Beurer-Kellner et al., 10 Jun 2025).
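In code, the sequencing operator reduces to ordered concatenation of instantiated templates. A minimal sketch, with illustrative `instantiate` and `compose` helpers (the ordering follows the composition guidelines in Section 4):

```python
def instantiate(template: str, **params: str) -> str:
    """Fill a pattern's solution template with concrete parameter values."""
    return template.format(**params)

def compose(*patterns: str) -> str:
    """Textual sequencing of basic patterns into one composite prompt."""
    return "\n\n".join(patterns)

# Context control -> persona -> output customization -> verification.
prompt = compose(
    instantiate("Within this scope, consider only the attached codebase."),
    instantiate("From now on, act as {persona}.", persona="a security reviewer"),
    instantiate("Report each finding as: {schema}",
                schema="FINDING / SEVERITY / FIX"),
    instantiate("Finally, list the facts your answer depends on "
                "so they can be fact-checked."),
)
```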
3. Taxonomy and Key Classes of Patterns
Prompt pattern catalogs often organize patterns into taxonomies reflecting the functional locus within LLM-based workflows. For instance, the 16-pattern catalog in (White et al., 2023) uses six principal categories:
| Category | Example Patterns | Main Focus |
|---|---|---|
| Input Semantics | Meta Language Creation | Custom syntax, semantic scoping |
| Output Customization | Template, Persona, Visualization Generator | Formatting, persona, data export |
| Error Identification | Fact Check List, Reflection | Assumption surfacing, logic audit |
| Prompt Improvement | Question Refinement, Alternative Approaches | Iterative enhancement |
| Interaction | Flipped Interaction, Infinite Generation | Dialogue control, open-ended generation |
| Context Control | Context Manager | Reset, ignore elements |
In specialized domains such as LLM-agent security, the catalog may focus on patterns with formal guarantees, e.g., Action-Selector, Plan-Then-Execute, Dual LLM (Beurer-Kellner et al., 10 Jun 2025). In software engineering, patterns are categorized into Requirements Elicitation, System Design & Simulation, Code Quality, and Refactoring (White et al., 2023).
4. Methodologies for Composition and Adaptation
Patterns can be composed to synthesize more sophisticated prompting pipelines, enabling hierarchical or modular specification of complex LLM behaviors. Guidelines for pattern composition emphasize:
- Starting with context control or custom input semantics
- Layering role assignment/persona
- Selecting interaction and output styles
- Applying iterative improvement or verification patterns
Adapting patterns to new domains involves replacing generic markers (e.g., "security") with domain-specific phrases, tuning parameters for task complexity, and combining outputs with downstream toolchains. Formal notation and templates support such adaptation, enabling robust reuse (White et al., 2023).
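As a hedged illustration of such adaptation, the sketch below swaps a generic domain marker and tunes one parameter; the template is loosely modeled on a Fact Check List-style pattern and is not quoted from the catalog:

```python
# Generic pattern with replaceable domain markers.
FACT_CHECK_LIST = (
    "When you generate output about {domain}, append a list of the "
    "{n_facts} most important facts the output depends on, so they "
    "can be {verification}."
)

# Adapting the same pattern to two domains by swapping markers
# and tuning n_facts to the task's complexity.
security_prompt = FACT_CHECK_LIST.format(
    domain="security", n_facts=5,
    verification="checked against the project's threat model")
clinical_prompt = FACT_CHECK_LIST.format(
    domain="patient safety", n_facts=10,
    verification="verified against current clinical guidelines")
```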
Catalogs further support the automated search for optimal pattern combinations via empirical evaluation frameworks, as in the AutoPDL optimizer applied to Prompt Declaration Language (PDL) patterns (Vaziri et al., 8 Jul 2025). Patterns themselves, when captured as composable artifacts (e.g., YAML+JSON Schema in PDL), can be stored, versioned, and instantiated via higher-order macros, further facilitating reuse and optimization.
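PDL's concrete YAML syntax is not reproduced here; the Python sketch below only illustrates the general shape of a versioned, schema-validated pattern artifact, using the `jsonschema` package (all field names are assumptions):

```python
import jsonschema  # pip install jsonschema

# A catalog entry stored as a versioned artifact: template plus the
# JSON Schema that its output must satisfy (field names illustrative).
pattern_artifact = {
    "name": "json_locked_action",
    "version": "1.2.0",
    "template": "Choose the next action for: {task}. Reply as JSON only.",
    "output_schema": {
        "type": "object",
        "properties": {
            "action": {"type": "string"},
            "arguments": {"type": "object"},
        },
        "required": ["action"],
    },
}

def validate_output(artifact: dict, llm_output: dict) -> None:
    """Reject any model output that violates the pattern's schema."""
    jsonschema.validate(instance=llm_output, schema=artifact["output_schema"])

validate_output(pattern_artifact, {"action": "lookup", "arguments": {"id": 7}})
```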
5. Security- and Agent-Oriented Pattern Catalogs
A specialized class of pattern catalogs addresses LLM agent security, primarily mitigating prompt injection and related threats. The "Design Patterns for Securing LLM Agents against Prompt Injections" catalog (Beurer-Kellner et al., 10 Jun 2025) introduces:
- Action-Selector: Allows only predefined actions, establishing control-flow integrity such that every executed action belongs to a fixed allow-list.
- Plan-Then-Execute: Enforces that plans are fixed before exposure to untrusted data, preventing later hijacking.
- LLM Map-Reduce: Contains injections within data fragments by isolating and sanitizing each fragment before aggregation.
- Dual LLM: Enforces role separation, so that only a quarantined LLM processes untrusted text while the privileged, tool-using LLM never sees it.
- Code-Then-Execute: The LLM generates code that undergoes type/grammar validation prior to execution, confining untrusted data to safe arguments.
- Context-Minimization: Drops untrusted prior turns from the LLM context, blocking multi-turn prompt injection.
Each pattern articulates a formal invariant, e.g., non-interference or input confinement, and designates specific trade-offs regarding expressiveness, latency, and engineering complexity.
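As a concrete illustration of the Action-Selector invariant, the following minimal sketch gates execution on a fixed allow-list; the `llm` callable and action names are hypothetical:

```python
ALLOWED_ACTIONS = {"lookup_order", "send_status_email", "escalate_to_human"}

def run_action_selector(user_request: str, llm) -> str:
    """Action-Selector: the LLM may only pick from a fixed allow-list,
    so every executed action satisfies the control-flow invariant."""
    proposal = llm(
        f"Pick exactly one action from {sorted(ALLOWED_ACTIONS)} "
        f"for this request: {user_request}"
    ).strip()
    if proposal not in ALLOWED_ACTIONS:
        raise ValueError(f"Blocked action outside allow-list: {proposal!r}")
    return proposal  # safe to dispatch: injected text cannot mint new actions
```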
6. Empirical Results and Case Studies
Prompt pattern catalogs have been leveraged for diverse empirical studies and real-world agent deployments. The introduction of declarative pattern representations, as in Prompt Declaration Language (PDL), enables systematic parameter tuning and optimization. In a compliance-agent case study, switching from a “canned” agent to a hand-tuned PDL pattern with a two-stage “Think” step (natural-language reasoning, then JSON-locked action selection) yielded a 4× increase in task success on the granite3.2-8b-instruct model, and higher overall success rates on stronger models such as gpt4o-2024-11-20 (Vaziri et al., 8 Jul 2025). This improvement was attributed to a reduction in tool-call failures and improved schema conformance.
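A minimal sketch of this two-stage shape, assuming a generic `llm` callable and post-hoc schema validation in place of constrained decoding (none of these names come from PDL itself):

```python
import json
import jsonschema  # pip install jsonschema

ACTION_SCHEMA = {
    "type": "object",
    "properties": {"action": {"type": "string"},
                   "arguments": {"type": "object"}},
    "required": ["action"],
}

def think_then_act(task: str, llm) -> dict:
    """Stage 1: free-form reasoning; stage 2: JSON-locked action selection."""
    reasoning = llm(f"Think step by step about how to accomplish: {task}")
    raw = llm(
        f"Given this reasoning:\n{reasoning}\n"
        f"Reply ONLY with JSON matching this schema: {json.dumps(ACTION_SCHEMA)}"
    )
    action = json.loads(raw)
    jsonschema.validate(action, ACTION_SCHEMA)  # reject malformed tool calls
    return action
```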
A plausible implication is that such catalogs, when tied to constrained decoding and schema validation, enable robust scaling and cross-domain transfer of prompting workflows by reducing “prompt drift” and enhancing formal controllability.
7. Cataloging, Reuse, and Tooling Infrastructure
Modern prompt pattern catalogs, especially those based on explicit programmatic representations like PDL, support storage as versioned artifacts with metadata describing the pattern’s function, recommended models, and output schemas. Patterns may be parameterized with macros, imported, and composed as higher-order workflows. Catalogs can cover not only the fundamental pattern “core” but also contextual annotations, empirical results, and domain-specific guidelines.
This infrastructure enables systematic reuse across problem domains—e.g., the two-stage JSON-locked action pattern is reported as applicable across compliance, healthcare, legal QA, and customer support tools, with pattern selection and instantiation driven by catalog metadata (Vaziri et al., 8 Jul 2025). Plug-in optimizers automate the search and evaluation of catalog entries, closing the loop between design, empirical benchmarking, and practical LLM agent deployment.
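A minimal sketch of metadata-driven pattern selection over such a catalog; the metadata fields and matching rule are assumptions for illustration:

```python
CATALOG = [
    {"name": "json_locked_action", "domains": {"compliance", "legal_qa"},
     "recommended_models": {"granite3.2-8b-instruct", "gpt4o-2024-11-20"}},
    {"name": "map_reduce_summary", "domains": {"customer_support"},
     "recommended_models": {"gpt4o-2024-11-20"}},
]

def select_patterns(domain: str, model: str) -> list:
    """Pick catalog entries whose metadata matches the deployment."""
    return [p["name"] for p in CATALOG
            if domain in p["domains"] and model in p["recommended_models"]]

print(select_patterns("compliance", "granite3.2-8b-instruct"))
# -> ['json_locked_action']
```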