Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs (2406.14282v3)
Abstract: Improving the performance of LLMs in complex question-answering (QA) scenarios has long been a research focus. Recent studies have attempted to enhance LLMs' performance by combining step-wise planning with external retrieval. While this approach is effective for advanced models like GPT-3.5, smaller LLMs struggle to decompose complex questions and therefore require supervised fine-tuning. Previous work has relied on manual annotation and knowledge distillation from teacher LLMs, which are time-consuming and insufficiently accurate. In this paper, we introduce a novel framework for enhancing LLMs' planning capabilities using planning data derived from knowledge graphs (KGs). LLMs fine-tuned with this data have improved planning capabilities, better equipping them to handle complex QA tasks that involve retrieval. Evaluations on multiple datasets, including our newly proposed benchmark, highlight the effectiveness of our framework and the benefits of KG-derived planning data.
- Junjie Wang
- Mingyang Chen
- Binbin Hu
- Dan Yang
- Ziqi Liu
- Yue Shen
- Peng Wei
- Zhiqiang Zhang
- Jinjie Gu
- Jun Zhou
- Jeff Z. Pan
- Wen Zhang
- Huajun Chen