Parrot: Enhancing Multi-Turn Instruction Following for Large Language Models (2310.07301v2)

Published 11 Oct 2023 in cs.CL

Abstract: Humans often interact with LLMs in multi-turn interactions to obtain desired answers or more information. However, most existing studies overlook the multi-turn instruction-following ability of LLMs in terms of training dataset, training method, and evaluation benchmark. In this paper, we introduce Parrot, a solution aiming to enhance multi-turn instruction following for LLMs. First, we introduce an efficient yet effective method for collecting multi-turn instructions that feature human-like queries, such as anaphora and ellipsis. Second, we propose a context-aware preference optimization strategy to further enhance LLMs for complex queries in multi-turn interaction. Moreover, to quantitatively evaluate LLMs in multi-turn instruction following, we manually build a multi-turn benchmark derived from existing ones. Extensive experiments show that Parrot improves current LLMs by up to 7.2% in multi-turn instruction following. Our dataset and code will be open-sourced to facilitate future research.
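The "context-aware preference optimization" the abstract mentions can be read as a DPO-style preference loss in which both the chosen and rejected responses are scored conditioned on the full multi-turn conversation history. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual implementation; the function name and the simplification to scalar log-probabilities are assumptions.

```python
import math

def context_aware_pref_loss(logp_chosen, logp_rejected,
                            ref_logp_chosen, ref_logp_rejected,
                            beta=0.1):
    """DPO-style preference loss (hypothetical sketch of the paper's idea).

    Each log-probability is assumed to be the policy's (or reference
    model's) total log-prob of a response *conditioned on the entire
    multi-turn context*, so queries with anaphora or ellipsis are
    scored with their antecedent turns in view.
    """
    # Reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: small when the policy
    # already ranks the chosen response higher, large otherwise.
    return math.log1p(math.exp(-margin))
```

A policy that ranks the context-appropriate response above the rejected one (positive margin) incurs a smaller loss than one that is indifferent, which is the gradient signal that would push the model toward context-faithful answers.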

Authors (9)
  1. Yuchong Sun (10 papers)
  2. Che Liu (59 papers)
  3. Jinwen Huang (2 papers)
  4. Ruihua Song (48 papers)
  5. Fuzheng Zhang (60 papers)
  6. Di Zhang (230 papers)
  7. Kun Gai (125 papers)
  8. Kun Zhou (217 papers)
  9. Wayne Xin Zhao (196 papers)
Citations (5)