Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning (2410.16843v1)
Abstract: Trustworthiness is an essential prerequisite for the real-world application of large language models (LLMs). In this paper, we focus on the trustworthiness of LLMs with respect to retrieval augmentation. Despite being supported by external evidence, retrieval-augmented generation still suffers from hallucinations, one primary cause of which is the conflict between contextual and parametric knowledge. We posit that retrieval-augmented LLMs have the inherent capability of generating responses according to both contextual and parametric knowledge. Inspired by aligning LLMs with human preference, we take the first step towards aligning retrieval-augmented LLMs to a state in which they respond relying solely on external evidence, disregarding interference from parametric knowledge. Specifically, we propose a reinforcement-learning-based algorithm, Trustworthy-Alignment, and demonstrate theoretically and experimentally that LLMs can reach a trustworthy state without explicit supervision on how to respond. Our work highlights the potential of LLMs to explore their intrinsic abilities on their own, and expands the application scenarios of alignment from satisfying human preference to creating trustworthy agents.
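To make the alignment target concrete, here is a minimal sketch (not the paper's released code) of the kind of reward signal an RL-based approach like Trustworthy-Alignment could optimize, typically alongside a KL penalty to a reference policy as in standard RLHF: the policy earns reward for answering from the retrieved evidence and is penalized for falling back on memorized (parametric) knowledge. The function name, the specific reward values, and the substring-matching check are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical reward for context-faithful generation under knowledge conflict.
# Assumes each training example pairs a retrieved context whose supported answer
# (`context_answer`) conflicts with the model's memorized one (`parametric_answer`).

def trustworthiness_reward(response: str,
                           context_answer: str,
                           parametric_answer: str) -> float:
    """Score a response against the answer supported by retrieved evidence.

    +1.0 if the response contains the contextual (evidence-backed) answer,
    -1.0 if it instead contains the parametric (memorized) answer,
     0.0 otherwise (e.g., abstention or an unrelated answer).
    """
    resp = response.lower()
    if context_answer.lower() in resp:
        return 1.0
    if parametric_answer.lower() in resp:
        return -1.0
    return 0.0


# Example: the retrieved passage contradicts what the model memorized.
print(trustworthiness_reward(
    response="According to the passage, the capital is Naypyidaw.",
    context_answer="Naypyidaw",
    parametric_answer="Yangon",
))  # -> 1.0
```

Because the reward only scores outcomes, it never prescribes the wording of a response, which matches the abstract's claim that the model reaches a trustworthy state without explicit supervision on how to respond.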