A Communication Theory Perspective on Prompting Engineering Methods for Large Language Models (2310.18358v1)

Published 24 Oct 2023 in cs.CL and cs.AI

Abstract: The rapid emergence of LLMs has shifted the community from single-task-oriented NLP research to a holistic end-to-end multi-task learning paradigm. Along this line of research, LLM-based prompting methods have attracted much attention, partly due to the practical advantages of prompt engineering (PE) and the underlying NLP principles revealed by various prompting methods. Traditional supervised learning usually requires training a model on labeled data and then making predictions. In contrast, PE methods directly leverage the powerful capabilities of existing LLMs (e.g., GPT-3 and GPT-4) by composing appropriate prompts, especially under few-shot or zero-shot scenarios. Given the abundance of studies on prompting and the ever-evolving nature of this field, this article aims to (i) introduce a novel perspective for reviewing existing PE methods within the well-established framework of communication theory; (ii) provide a deeper understanding of the development trends of existing PE methods across four typical tasks; (iii) shed light on promising research directions for future PE methods.
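The contrast the abstract draws between supervised training and prompt composition can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration of composing a few-shot prompt (the zero-shot case is the same template with no demonstrations); the task, template, and demonstration examples are assumptions for illustration and are not taken from the paper.

```python
# A minimal sketch of few-shot prompt composition, as contrasted with
# supervised training in the abstract. The sentiment task, the
# Input/Output template, and the demonstrations below are hypothetical.

def compose_few_shot_prompt(instruction: str,
                            examples: list[tuple[str, str]],
                            query: str) -> str:
    """Build a prompt pairing an instruction with labeled demonstrations.

    With an empty `examples` list this degenerates to the zero-shot case.
    """
    parts = [instruction]
    for text, label in examples:
        parts.append(f"Input: {text}\nOutput: {label}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    demos = [
        ("The movie was a delight.", "positive"),
        ("I want my money back.", "negative"),
    ]
    prompt = compose_few_shot_prompt(
        "Classify the sentiment of each input as positive or negative.",
        demos,
        "An unforgettable performance.",
    )
    # The resulting string would be sent to an LLM such as GPT-4;
    # no model is trained or fine-tuned at any point.
    print(prompt)
```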

Authors (8)
  1. Yuanfeng Song (27 papers)
  2. Yuanqin He (9 papers)
  3. Xuefang Zhao (4 papers)
  4. Hanlin Gu (33 papers)
  5. Di Jiang (42 papers)
  6. Haijun Yang (18 papers)
  7. Lixin Fan (77 papers)
  8. Qiang Yang (202 papers)
Citations (1)