PDL: A Declarative Prompt Programming Language (2410.19135v1)

Published 24 Oct 2024 in cs.AI and cs.PL

Abstract: LLMs have taken the world by storm by making many previously difficult uses of AI feasible. LLMs are controlled via highly expressive textual prompts and return textual answers. Unfortunately, this reliance on unstructured text for both input and output makes LLM-based applications brittle. This motivates the rise of prompting frameworks, which mediate between LLMs and the external world. However, existing prompting frameworks either have a high learning curve or take away control over the exact prompts from the developer. To overcome this dilemma, this paper introduces the Prompt Declaration Language (PDL). PDL is a simple, declarative, data-oriented language based on YAML that puts prompts at the forefront. PDL works well with many LLM platforms and LLMs. It supports writing interactive applications that call LLMs and tools, and makes it easy to implement common use cases such as chatbots, RAG, or agents. We hope PDL will make prompt programming simpler, less brittle, and more enjoyable.
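To make the abstract's description concrete, the sketch below shows what a minimal PDL-style program could look like. This is an illustrative reconstruction based only on the abstract's characterization of PDL as a declarative, YAML-based language that interleaves literal prompt text with LLM calls; the specific block names (`description`, `text`, `model`, `input`) and the model identifier are assumptions, not verbatim syntax from the paper.

```yaml
# Hypothetical PDL-style program: the document is data, and each list
# item contributes to the prompt context in order.
description: Minimal prompt program sketch
text:
  # A literal string becomes part of the prompt sent to the model.
  - "Explain what a declarative prompt programming language is.\n"
  # A model block calls an LLM; its output is appended to the context.
  - model: example-provider/example-llm   # placeholder model id
    input: "Answer in one sentence."
```

Because the program is plain YAML, prompts stay visible and editable as data rather than being buried inside framework code, which is the control-versus-abstraction trade-off the abstract highlights.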
