
Prompting Frameworks for Large Language Models: A Survey (2311.12785v1)

Published 21 Nov 2023 in cs.SE
Abstract: Since the launch of ChatGPT, a powerful AI chatbot developed by OpenAI, LLMs have made significant advancements in both academia and industry, bringing about a fundamental engineering paradigm shift in many areas. While LLMs are powerful, it is also crucial to make the best use of their power, where the "prompt" plays a core role. However, the booming LLMs themselves, including excellent APIs like ChatGPT, have several inherent limitations: 1) the temporal lag of training data, and 2) the lack of physical capabilities to perform external actions. Recently, we have observed a trend of using prompt-based tools to better harness the power of LLMs for downstream tasks, but there is a lack of systematic literature and standardized terminology, partly due to the rapid evolution of this field. Therefore, in this work, we survey related prompting tools and promote the concept of the "Prompting Framework" (PF), i.e., the framework for managing, simplifying, and facilitating interaction with LLMs. We define the lifecycle of the PF as a hierarchical structure, from bottom to top: the Data Level, Base Level, Execute Level, and Service Level. We also systematically depict the overall landscape of the emerging PF field and discuss potential future research and challenges. To continuously track developments in this area, we maintain a repository at https://github.com/lxx0628/Prompting-Framework-Survey, which can serve as a useful resource-sharing platform for both academia and industry in this field.

Unveiling the Challenges and Future Directions of Prompting Frameworks in LLMs

In-Depth Analysis and Comparative Study

Recent advancements in LLMs like ChatGPT have presented a paradigm shift in the application of artificial intelligence across various domains. However, efficiently harnessing these models for specific tasks poses significant challenges due to their inherent limitations, including handling of unconventional inputs, invocation costs, and the interaction with external tools. Prompting Frameworks (PFs) have emerged as pivotal in bridging these gaps, enhancing LLMs' applicability in real-world scenarios. This paper provides a comprehensive survey and critical analysis of current PFs, underscoring the necessity for a systematic approach in understanding and evaluating these frameworks. Additionally, the paper delineates the challenges PFs face, offering insights into future developmental directions.

Understanding Prompting Frameworks

Prompting Frameworks are defined through a hierarchical lens, encompassing Data Level, Base Level, Execute Level, and Service Level, each playing a role in enhancing the interaction between LLMs and the external world. However, the surveyed PFs exhibit variations in their design and efficiency, prominently influenced by their compatibility with programming languages and LLMs, capacity in addressing LLMs' limitations, documentation quality, and community support.
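To make the four-level hierarchy concrete, here is a minimal, illustrative sketch of how such a framework might be layered. All class and function names below are hypothetical stand-ins, not APIs from any surveyed framework; the stub model simply echoes the prompt length in place of a real LLM call.

```python
from dataclasses import dataclass

# Data Level: turn raw external data into prompt-ready text.
def load_documents(raw: list[str]) -> str:
    return "\n".join(doc.strip() for doc in raw)

# Base Level: manage prompt templates and the connection to an LLM.
@dataclass
class PromptTemplate:
    template: str
    def render(self, **kwargs) -> str:
        return self.template.format(**kwargs)

def stub_llm(prompt: str) -> str:
    # Stand-in for a real model API call.
    return f"ANSWER({len(prompt)} chars)"

# Execute Level: orchestrate the prompt -> model -> output pipeline.
def run_chain(template: PromptTemplate, llm, **inputs) -> str:
    return llm(template.render(**inputs))

# Service Level: expose the pipeline to applications as a reusable service.
class QAService:
    def __init__(self, llm):
        self.template = PromptTemplate(
            "Context:\n{context}\n\nQuestion: {question}")
        self.llm = llm

    def ask(self, docs: list[str], question: str) -> str:
        return run_chain(self.template, self.llm,
                         context=load_documents(docs),
                         question=question)

service = QAService(stub_llm)
print(service.ask(["LLMs are large language models."], "What is an LLM?"))
```

The point of the layering is separation of concerns: data preparation, template and model management, execution, and application-facing services can each evolve (or be swapped out) independently.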

Compatibility: A Dual-faceted Evaluation

The paper emphasizes the compatibility of PFs with programming languages and LLMs as a vital consideration. Notably, LLM-SH exhibits the strongest compatibility, providing interfaces for multiple mainstream programming languages and integrating seamlessly with a variety of LLMs. By contrast, LLM-RSTR and LLM-LNG show limited compatibility, particularly in supporting a broader range of LLMs.

Capabilities and Features: Where Improvements are Needed

Despite the strides PFs have made in mitigating LLMs' inherent limitations, the paper identifies areas needing enhancement, highlighting the capacity for handling unconventional inputs, controlling output, reducing invocation costs, and utilizing external tools. LLM-SH excels in most of these capabilities, especially in handling unconventional content and utilizing external tools, while LLM-LNG and LLM-RSTR leave room for improvement.
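The external-tool capability mentioned above typically works through a feedback loop: the model emits a structured tool request, the framework executes it, and the observation is fed back into the prompt. The sketch below illustrates that pattern; the `TOOL:<name>:<arg>` convention and the hard-coded stub model are assumptions for demonstration, not a protocol from the surveyed frameworks.

```python
TOOLS = {
    # External tools the framework can execute on the model's behalf.
    "add": lambda arg: str(sum(int(x) for x in arg.split("+"))),
}

def stub_llm(prompt: str) -> str:
    # A real LLM would decide to emit the tool directive itself;
    # here we hard-code that behavior for a reproducible example.
    if "2+3" in prompt and "OBSERVATION" not in prompt:
        return "TOOL:add:2+3"
    return "Final answer: 5"

def run_with_tools(llm, user_prompt: str, max_steps: int = 3) -> str:
    prompt = user_prompt
    reply = ""
    for _ in range(max_steps):
        reply = llm(prompt)
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            result = TOOLS[name](arg)            # execute the tool
            prompt += f"\nOBSERVATION: {result}"  # feed the result back
        else:
            return reply                         # model produced an answer
    return reply

print(run_with_tools(stub_llm, "What is 2+3?"))  # -> Final answer: 5
```

The loop also suggests where invocation costs accrue: each tool round trip adds another model call, which is why cost reduction and tool use are evaluated together in the survey.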

Charting the Future: Towards More Streamlined and Secure Frameworks

The paper concludes by calling for a next generation of PFs that transcends current limitations: frameworks that are more streamlined, secure, versatile, and standardized. On security, it argues for robust mechanisms to defend against prompt-based attacks and to safeguard LLMs' behavior, ensuring the generation of secure and compliant content. On versatility, it suggests that future frameworks should integrate seamlessly with a wider array of external applications, operating within a more standardized and organic LLM ecosystem.

Conclusion

This comprehensive survey and analysis underscore the crucial role of Prompting Frameworks in maximizing the utility of LLMs across various domains. While existing frameworks lay a significant foundation, the paper elucidates existing challenges and limitations, paving the way for future innovations. As the landscape of artificial intelligence continues to evolve, the development of more sophisticated, secure, and user-friendly PFs will undoubtedly play a pivotal role in the broader adoption and application of LLMs in real-world scenarios.

Authors (8)
  1. Xiaoxia Liu (10 papers)
  2. Jingyi Wang (105 papers)
  3. Jun Sun (210 papers)
  4. Xiaohan Yuan (7 papers)
  5. Guoliang Dong (10 papers)
  6. Peng Di (16 papers)
  7. Wenhai Wang (123 papers)
  8. Dongxia Wang (18 papers)
Citations (18)