Following Length Constraints in Instructions (2406.17744v1)

Published 25 Jun 2024 in cs.CL

Abstract: Aligned instruction following models can better fulfill user requests than their unaligned counterparts. However, it has been shown that there is a length bias in evaluation of such models, and that training algorithms tend to exploit this bias by learning longer responses. In this work we show how to train models that can be controlled at inference time with instructions containing desired length constraints. Such models are superior in length instructed evaluations, outperforming standard instruction following models such as GPT4, Llama 3 and Mixtral.

Citations (8)

Summary

  • The paper introduces Length-Instruction Fine-Tuning (LIFT) to embed explicit length constraints into model training, reducing inherent biases.
  • It proposes two new benchmarks, AlpacaEval-LI and MT-Bench-LI, to rigorously measure LLM adherence to specified length limits.
  • Experimental results demonstrate that LIFT-DPO models achieve low violation rates while maintaining strong performance on standard instruction-following tasks.

Following Length Constraints in Instructions

The paper addresses a significant challenge for instruction-following models: the length bias present in both their training and their evaluation. It shows that existing instruction-following LLMs such as GPT-4 and Llama 3 tend to produce longer outputs because current evaluation methodologies systematically prefer longer responses, and training algorithms learn to exploit that preference.

Key Contributions and Methodology

The primary contribution of this paper is the introduction of a model training framework that incorporates explicit length constraints within the instructions given to these models. The method proposed, named Length-Instruction Fine-Tuning (LIFT), presents a robust approach to curbing the tendency of LLMs to exploit length biases, ensuring models can adhere to specified length constraints effectively during inference.

Length-Instructed Benchmarks

Two new benchmarks, AlpacaEval-Length-Instructed (AlpacaEval-LI) and MT-Bench-Length-Instructed (MT-Bench-LI), are introduced. The benchmarks augment existing instruction-following datasets by inserting explicit length constraints into the prompts. The target length limits are derived from the minimum generation length of responses by three strong LLMs, ensuring a challenging yet relevant constraint.
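
As a concrete illustration, the sketch below shows how a prompt might be augmented with a length instruction and how the target limit could be taken from the shortest of several baseline responses. The instruction wording, helper names, and baseline texts are assumptions for illustration, not the paper's exact templates.

```python
# Illustrative sketch: building a length-instructed prompt.
# The instruction wording and helper names are assumptions, not the paper's templates.

def word_count(text: str) -> int:
    """Count words by whitespace splitting (a simple proxy for response length)."""
    return len(text.split())


def target_limit(baseline_responses: list[str]) -> int:
    """Set the limit to the length of the shortest baseline response."""
    return min(word_count(r) for r in baseline_responses)


def length_instructed_prompt(instruction: str, limit: int) -> str:
    """Prepend an explicit length constraint to the original instruction."""
    return f"Answer the following instruction using at most {limit} words.\n\n{instruction}"


# Example usage with hypothetical baseline responses from three strong models.
baselines = [
    "The sky appears blue because air molecules scatter short blue wavelengths more strongly.",
    "Sunlight is scattered by the atmosphere, and shorter blue wavelengths scatter the most, so the sky looks blue.",
    "Blue light is scattered in all directions by the gases in Earth's atmosphere more than other colors.",
]
prompt = length_instructed_prompt("Explain why the sky is blue.", target_limit(baselines))
print(prompt)
```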

Notably, the introduction of these benchmarks allows for a nuanced evaluation of models' ability to adhere to length constraints while maintaining response quality. This methodology addresses the limitations of existing benchmarks by incorporating length penalties directly into the evaluation process, ensuring models do not exploit length preferences to game the metrics.
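
One simple way to encode such a length-penalized pairwise comparison is sketched below. The quality comparison, which the benchmarks delegate to an LLM judge, is abstracted here as a placeholder callable, and the automatic-loss rule for violations is a simplification of the evaluation protocol rather than the paper's exact code.

```python
# Simplified sketch of length-penalized pairwise evaluation.
# `quality_judge` stands in for the LLM judge used on the benchmarks; it is an
# assumption here, and only the length-violation logic is illustrated.

from typing import Callable


def violates(response: str, limit: int) -> bool:
    """A response violates the constraint if it exceeds the word limit."""
    return len(response.split()) > limit


def judge_pair(
    response_a: str,
    response_b: str,
    limit: int,
    quality_judge: Callable[[str, str], str],
) -> str:
    """Return 'a' or 'b'. A response that breaks the limit loses automatically;
    if both respect (or both break) the limit, fall back to the quality judge."""
    a_bad, b_bad = violates(response_a, limit), violates(response_b, limit)
    if a_bad and not b_bad:
        return "b"
    if b_bad and not a_bad:
        return "a"
    return quality_judge(response_a, response_b)


# Example usage with a trivial stand-in judge that prefers the shorter response.
winner = judge_pair(
    "A concise answer.",
    "A much longer answer " + "with many extra words " * 10,
    limit=20,
    quality_judge=lambda a, b: "a" if len(a) <= len(b) else "b",
)
print(winner)  # -> "a", because the second response exceeds the limit
```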

Length-Instruction Fine-Tuning (LIFT)

The LIFT method involves the construction of augmented training data with length instructions embedded into the prompts. This process generates new preference pairs for training, accounting for both length constraints and response quality. The resulting datasets are used to fine-tune models via Direct Preference Optimization (DPO), ensuring models are capable of adhering to explicit length instructions during inference.
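
The sketch below illustrates one way such length-augmented preference pairs could be assembled from an existing pairwise preference set: a loose limit preserves the original quality preference, while a tight limit makes the constraint-following response the preferred one. The field names, instruction template, and limit choices are assumptions for illustration rather than the paper's exact recipe.

```python
# Illustrative sketch of LIFT-style data augmentation: given an existing
# preference pair (chosen, rejected) for a prompt, build new pairs whose
# prompts carry explicit length instructions. Details are illustrative only.

from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # preferred response
    rejected: str  # dispreferred response


def words(text: str) -> int:
    return len(text.split())


def with_length_instruction(prompt: str, limit: int) -> str:
    return f"Answer the following instruction using at most {limit} words.\n\n{prompt}"


def augment(pair: PreferencePair) -> list[PreferencePair]:
    """Create length-instructed pairs in which the preferred response always
    satisfies the stated limit, so preference reflects both quality and
    constraint-following (assumes the two responses differ in length)."""
    n_chosen, n_rejected = words(pair.chosen), words(pair.rejected)
    augmented = []

    # Limit that both responses satisfy: the original preference is kept, and
    # the model also sees an example of obeying the constraint.
    loose = max(n_chosen, n_rejected)
    augmented.append(PreferencePair(
        with_length_instruction(pair.prompt, loose), pair.chosen, pair.rejected))

    # Limit that only the shorter response satisfies: the constraint-following
    # response becomes the preferred one, even if it was originally rejected.
    tight = min(n_chosen, n_rejected)
    short, long = (
        (pair.chosen, pair.rejected) if n_chosen <= n_rejected
        else (pair.rejected, pair.chosen)
    )
    augmented.append(PreferencePair(
        with_length_instruction(pair.prompt, tight), short, long))

    return augmented


# Example usage; the resulting pairs would then be used for DPO training.
pairs = augment(PreferencePair(
    prompt="Summarize the plot of Hamlet.",
    chosen="A prince seeks revenge for his father's murder and tragedy follows.",
    rejected="Hamlet is a play by Shakespeare " + "with many characters " * 15,
))
for p in pairs:
    print(p.prompt.splitlines()[0], "| chosen words:", words(p.chosen))
```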

Experimental Evaluation

Extensive experiments demonstrate the efficacy of the LIFT method. Results show a significant reduction in violation rates of length constraints compared to existing models and training methods. For instance, models trained with LIFT-DPO exhibit violation rates as low as 2.7% on AlpacaEval-LI and 6.7% on MT-Bench-LI, compared to much higher violation rates observed in standard models.
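
For concreteness, a violation rate like those reported above could be computed as in the short sketch below, assuming each benchmark example carries a word limit; this is a minimal illustration, not the paper's evaluation code.

```python
# Minimal sketch: aggregate violation rate over a length-instructed benchmark.
# Assumes each example is a (response, word_limit) pair; illustrative only.

def violation_rate(examples: list[tuple[str, int]]) -> float:
    """Fraction of responses whose word count exceeds their prompt's limit."""
    violations = sum(1 for response, limit in examples if len(response.split()) > limit)
    return violations / len(examples)


# Example usage with toy (response, limit) pairs.
results = [
    ("A short, compliant answer.", 10),
    ("This answer " + "rambles on " * 30, 20),  # exceeds its limit
]
print(f"Violation rate: {violation_rate(results):.1%}")
```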

Additionally, models trained using LIFT-DPO maintain competitive performance on standard instruction-following benchmarks, showing no degradation when prompts carry no length constraint. The approach thus balances adherence to length instructions with response quality, making these models robust across different types of instructions.

Practical and Theoretical Implications

The practical implications of this research are substantial. By ensuring models can follow length constraints reliably, the developed approach enhances the usability of AI systems in real-world applications where response length can be critically important—for example, on mobile devices or where concise answers are preferred.

Theoretically, the paper broadens our understanding of the modeling and evaluation of instruction-following within LLMs. It proposes a framework to mitigate length bias directly rather than employing indirect penalty-based mechanisms. This encourages further exploration into the intrinsic properties of model training and evaluation, potentially leading to fairer and more reliable models.

Future Directions

Looking forward, future lines of research could involve exploring different metrics for length constraints beyond word count, such as character count, time-based constraints for voice responses, or domain-specific length limits tailored to particular applications. Another potential direction is the generalization of length constraints to more flexible formats, allowing models to parse and understand varied user-defined constraints naturally.

A broader investigation into human preferences for response lengths in diverse contexts could also inform more sophisticated alignment strategies, helping to ensure models meet human expectations more closely.

In conclusion, this paper provides a crucial advancement in developing instruction-following models by addressing and mitigating length bias. The proposed benchmarks and training methodology contribute significantly to enhancing the controllability and fairness of LLMs, fostering more practical and aligned AI systems.
