
The SIFo Benchmark: Investigating the Sequential Instruction Following Ability of Large Language Models (2406.19999v2)

Published 28 Jun 2024 in cs.CL

Abstract: Following multiple instructions is a crucial ability for LLMs. Evaluating this ability comes with significant challenges: (i) limited coherence between multiple instructions, (ii) positional bias where the order of instructions affects model performance, and (iii) a lack of objectively verifiable tasks. To address these issues, we introduce a benchmark designed to evaluate models' abilities to follow multiple instructions through sequential instruction following (SIFo) tasks. In SIFo, the successful completion of multiple instructions is verifiable by examining only the final instruction. Our benchmark evaluates instruction following using four tasks (text modification, question answering, mathematics, and security rules), each assessing different aspects of sequential instruction following. Our evaluation of popular LLMs, both closed-source and open-source, shows that more recent and larger models significantly outperform their older and smaller counterparts on the SIFo tasks, validating the benchmark's effectiveness. All models struggle with following sequences of instructions, hinting at an important lack of robustness of today's LLMs.
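The core design idea, that success on a whole chain of instructions can be verified by checking only the final output, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the instruction set, helper names, and scoring rule here are invented for clarity, mirroring only the abstract's text-modification task.

```python
# Hypothetical sketch of the SIFo verification idea (not the benchmark's code):
# each instruction operates on the output of the previous one, so a correct
# final result implies every intermediate step was followed.

instructions = [
    ("replace", "cat", "dog"),  # step 1: replace every "cat" with "dog"
    ("append", " indeed"),      # step 2: append " indeed" to the text
    ("uppercase",),             # step 3: uppercase the whole text
]

def apply_instructions(text, steps):
    """Apply each instruction in order; later steps see earlier outputs."""
    for step in steps:
        if step[0] == "replace":
            _, old, new = step
            text = text.replace(old, new)
        elif step[0] == "append":
            text = text + step[1]
        elif step[0] == "uppercase":
            text = text.upper()
    return text

# The reference answer encodes the full chain; skipping or reordering any
# step yields a different final string, so one comparison checks all steps.
reference = apply_instructions("the cat sat", instructions)

def score(model_output):
    return model_output == reference
```

Because step 3 uppercases whatever steps 1 and 2 produced, a model that skips the replacement or the append cannot recover the reference string, which is what makes the final-answer check sufficient.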

Authors (7)
  1. Xinyi Chen (78 papers)
  2. Baohao Liao (17 papers)
  3. Jirui Qi (7 papers)
  4. Panagiotis Eustratiadis (10 papers)
  5. Christof Monz (53 papers)
  6. Arianna Bisazza (43 papers)
  7. Maarten de Rijke (261 papers)
Citations (1)