From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models (2404.15846v2)

Published 24 Apr 2024 in cs.CL

Abstract: It is imperative for LLMs to follow instructions with elaborate requirements (i.e., Complex Instruction Following). Yet, it remains under-explored how to enhance the ability of LLMs to follow complex instructions with multiple constraints. To bridge this gap, we first study what training data is effective in enhancing complex constraint-following ability. We find that training LLMs with instructions containing multiple constraints enhances their understanding of complex instructions, especially those with lower complexity levels, and that the improvement even generalizes to compositions of out-of-domain constraints. We further propose methods for obtaining and utilizing such effective training data. Finally, we conduct extensive experiments demonstrating the effectiveness of our methods in terms of both overall performance and training efficiency. We also show that our methods improve models' general instruction-following ability and generalize effectively across out-of-domain, in-domain, and adversarial settings, while maintaining general capabilities.
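The core idea in the abstract is to build training instructions that bundle several constraints at once. As an informal illustration only (the constraint pool, prompt wording, and helper names below are hypothetical and not taken from the paper's actual data-construction pipeline), a minimal sketch of composing atomic constraints into one multi-constraint instruction might look like this:

```python
# Illustrative sketch: compose several atomic constraints into a single
# complex instruction for fine-tuning data. All names and constraint
# wordings here are hypothetical, not the authors' pipeline.
import random

# Hypothetical pool of atomic constraints that can be attached to a base task.
CONSTRAINTS = [
    "Respond in exactly three sentences.",
    "Use a formal tone throughout.",
    "Include the keyword 'constraint' at least twice.",
    "End the response with a one-line summary.",
]

def compose_instruction(base_task: str, num_constraints: int = 3) -> str:
    """Attach several randomly chosen constraints to a base task."""
    chosen = random.sample(CONSTRAINTS, k=min(num_constraints, len(CONSTRAINTS)))
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(chosen))
    return (
        f"{base_task}\n\n"
        f"Please satisfy all of the following constraints:\n{numbered}"
    )

if __name__ == "__main__":
    print(compose_instruction("Explain why the sky is blue."))
```

In this sketch, varying `num_constraints` would control the complexity level of the generated instructions; the paper's finding is that training on such multi-constraint instructions also helps the model on simpler, lower-complexity instructions and on unseen constraint compositions.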

Authors (5)
  1. Qianyu He (26 papers)
  2. Jie Zeng (19 papers)
  3. Qianxi He (3 papers)
  4. Jiaqing Liang (62 papers)
  5. Yanghua Xiao (151 papers)
Citations (7)