
Planning with Large Language Models for Conversational Agents (2407.03884v1)

Published 4 Jul 2024 in cs.CL and cs.AI

Abstract: Controllability and proactivity are crucial properties of autonomous conversational agents (CAs). Controllability requires CAs to follow standard operating procedures (SOPs), such as verifying identity before activating a credit card. Proactivity requires CAs to steer the conversation toward its goal when the user is uncooperative, as in persuasive dialogue. Existing research does not unify controllability, proactivity, and low manual annotation cost. To bridge this gap, we propose a new framework for planning-based conversational agents (PCA) powered by LLMs, which only requires humans to define tasks and goals for the LLMs. Before the conversation, the LLM plans the core and necessary SOP for the dialogue offline. During the conversation, the LLM plans the best action path online with reference to the SOP and generates responses, achieving process controllability. We then propose a semi-automatic dialogue data creation framework and curate a high-quality dialogue dataset (PCA-D). We also develop multiple variants and evaluation metrics for PCA, e.g., planning with Monte Carlo Tree Search (PCA-M), which searches for the optimal dialogue action while satisfying SOP constraints and achieving dialogue proactivity. Experimental results show that LLMs fine-tuned on PCA-D significantly improve performance and generalize to unseen domains. PCA-M outperforms CoT and ToT baselines in conversation controllability, proactivity, task success rate, and overall logical coherence, and is applicable to industry dialogue scenarios. The dataset and codes are available at XXXX.
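The PCA-M variant described above selects the next dialogue action with Monte Carlo Tree Search constrained by the SOP. A minimal, generic sketch of that idea follows; the SOP graph, action names, and reward function here are hypothetical stand-ins for illustration, not the paper's actual PCA-M implementation.

```python
import math
import random

# Hypothetical SOP graph for a card-activation dialogue: each state lists
# the actions the SOP permits next. An empty list marks a terminal state.
SOP = {
    "start": ["verify_identity"],
    "verify_identity": ["activate_card", "ask_again"],
    "ask_again": ["verify_identity"],
    "activate_card": [],
}

def reward(path):
    # Toy reward: reaching the goal in fewer turns scores higher.
    return 1.0 / len(path) if path and path[-1] == "activate_card" else 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}           # action name -> child Node
        self.visits, self.value = 0, 0.0

    def uct(self, c=1.4):
        # Upper Confidence Bound for Trees: exploit mean value, explore rarely
        # visited children. Unvisited children are always tried first.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts(root_state, iters=200, seed=0):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iters):
        node, path = root, []
        # Selection / expansion: walk down by UCT, creating only SOP-legal
        # children, until a terminal or unvisited node is reached.
        while SOP[node.state]:
            for action in SOP[node.state]:
                node.children.setdefault(action, Node(action, parent=node))
            node = max(node.children.values(), key=Node.uct)
            path.append(node.state)
            if node.visits == 0:
                break
        # Rollout: random SOP-legal actions until a terminal state.
        state, rollout = node.state, list(path)
        while SOP[state]:
            state = rng.choice(SOP[state])
            rollout.append(state)
        r = reward(rollout)
        # Backpropagation: update statistics along the selected path.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Return the most-visited first action as the agent's next move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

Because every expansion draws candidate actions from the SOP graph, the search can only ever propose SOP-compliant action paths, while the reward steers it toward goal completion in few turns.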

Authors (15)
  1. Zhigen Li
  2. Jianxiang Peng
  3. Yanmeng Wang
  4. Tianhao Shen
  5. Minghui Zhang
  6. Linxi Su
  7. Shang Wu
  8. Yihang Wu
  9. Yuqian Wang
  10. Ye Wang
  11. Wei Hu
  12. Jianfeng Li
  13. Shaojun Wang
  14. Jing Xiao
  15. Deyi Xiong