Large Language Models Sensitivity to The Order of Options in Multiple-Choice Questions (2308.11483v1)

Published 22 Aug 2023 in cs.CL, cs.AI, and cs.LG

Abstract: LLMs have demonstrated remarkable capabilities in various NLP tasks. However, previous works have shown these models are sensitive towards prompt wording, and few-shot demonstrations and their order, posing challenges to fair assessment of these models. As these models become more powerful, it becomes imperative to understand and address these limitations. In this paper, we focus on LLMs robustness on the task of multiple-choice questions -- commonly adopted task to study reasoning and fact-retrieving capability of LLMs. Investigating the sensitivity of LLMs towards the order of options in multiple-choice questions, we demonstrate a considerable performance gap of approximately 13% to 75% in LLMs on different benchmarks, when answer options are reordered, even when using demonstrations in a few-shot setting. Through a detailed analysis, we conjecture that this sensitivity arises when LLMs are uncertain about the prediction between the top-2/3 choices, and specific options placements may favor certain prediction between those top choices depending on the question caused by positional bias. We also identify patterns in top-2 choices that amplify or mitigate the model's bias toward option placement. We found that for amplifying bias, the optimal strategy involves positioning the top two choices as the first and last options. Conversely, to mitigate bias, we recommend placing these choices among the adjacent options. To validate our conjecture, we conduct various experiments and adopt two approaches to calibrate LLMs' predictions, leading to up to 8 percentage points improvement across different models and benchmarks.

Sensitivity of LLMs to Option Order in Multiple-Choice Questions

This paper examines the robustness of LLMs on multiple-choice questions (MCQs), with a specific focus on how the order of answer options influences model performance. LLMs have gained remarkable traction for their capacity to handle natural language processing tasks, in some cases matching or exceeding human capabilities. Despite this, their sensitivity to prompt formulation and to the arrangement of elements in the input warrants closer examination.

Sensitivity Implications and Investigation

The authors investigate how the order of options in MCQs affects the ability of LLMs to make accurate predictions. Conducting experiments with several LLMs, including GPT-4 and InstructGPT, the paper demonstrates a substantial performance disparity contingent on the sequencing of options: accuracy varies by roughly 13% to 75% across different MCQ benchmarks when the order of options is altered. This sensitivity has significant implications for evaluating the reasoning and fact-retrieval abilities of LLMs in structured assessments such as MCQs.
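
A minimal sketch of such a reordering experiment is shown below. It assumes four-option benchmark items of the form {"question": str, "options": [str, ...], "answer": str} and a hypothetical `query_model(prompt) -> "A"/"B"/...` callable; none of these names come from the paper itself.

```python
# Sketch: compare accuracy on original vs. shuffled option orders.
import random

LABELS = "ABCD"  # assumes four-option MCQs

def format_prompt(question, options):
    lines = [question] + [f"{LABELS[i]}. {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines) + "\nAnswer:"

def accuracy(benchmark, query_model, reorder=False, seed=0):
    rng = random.Random(seed)
    correct = 0
    for item in benchmark:
        options = list(item["options"])
        if reorder:
            rng.shuffle(options)
        label = query_model(format_prompt(item["question"], options))
        predicted = options[LABELS.index(label.strip()[0])]
        correct += predicted == item["answer"]  # compare by content, not by label
    return correct / len(benchmark)

# Performance gap under reordering:
# gap = accuracy(data, llm) - accuracy(data, llm, reorder=True)
```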

Underpinning Factors for Sensitivity

Through detailed analysis, the authors posit that this sensitivity stems primarily from two factors:

  1. Uncertainty in Predictions: LLMs often exhibit indecisiveness between the top few choices, reflecting inherent uncertainty in selecting the most apt answer when options are semantically similar or close in relevance (a rough way to quantify this is sketched after the list).
  2. Positional Bias: The order of options introduces a significant bias, with LLMs favoring specific answer placements. This effect is magnified when the model's confidence in its prediction is low.
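
The sketch below illustrates one rough way to quantify this top-2 uncertainty, assuming access to per-label log-probabilities from the model; the `label_logprobs` interface and the `top2_margin` helper are assumptions for illustration, not the paper's method.

```python
# Illustrative: the smaller the gap between the two most probable option labels,
# the more likely the prediction is to flip when options are reordered.
def top2_margin(label_logprobs):
    ranked = sorted(label_logprobs.values(), reverse=True)
    return ranked[0] - ranked[1]  # small margin -> order-sensitive prediction

print(top2_margin({"A": -0.40, "B": -0.55, "C": -2.3, "D": -3.1}))  # ~0.15 -> uncertain
```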

The paper uncovers patterns that amplify or mitigate this bias, offering mitigations such as strategically placing the most probable answers in adjacent positions to improve robustness. These findings elucidate an intrinsic aspect of LLM behavior, revealing vulnerabilities when answer options are presented in a particular order.
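
A hedged sketch of the reported placement patterns follows: given the two options the model is torn between (the "top-2"), placing them first and last tends to amplify positional bias, while placing them next to each other tends to mitigate it. The helper names below are illustrative, not from the paper.

```python
def amplify_order(top2, rest):
    # top-2 candidates occupy the first and last positions (bias-amplifying)
    return [top2[0], *rest, top2[1]]

def mitigate_order(top2, rest):
    # top-2 candidates sit adjacent to each other (bias-mitigating)
    mid = len(rest) // 2
    return [*rest[:mid], top2[0], top2[1], *rest[mid:]]

print(amplify_order(["Paris", "Lyon"], ["Oslo", "Rome"]))   # ['Paris', 'Oslo', 'Rome', 'Lyon']
print(mitigate_order(["Paris", "Lyon"], ["Oslo", "Rome"]))  # ['Oslo', 'Paris', 'Lyon', 'Rome']
```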

Enhancing Model Robustness

To mitigate LLMs' susceptibility to option ordering, the authors propose calibration techniques, including a majority-voting scheme over predictions from multiple permutations of the option order. Notably, this approach improved performance by up to eight percentage points in some cases. By contrast, explanation-driven prediction calibration, such as the Multiple Evidence Calibration (MEC) method, yielded mixed results and proved particularly detrimental for models like InstructGPT.
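
Below is a minimal sketch of permutation-based majority voting as a calibration step: query the model on several shuffles of the same options and keep the answer (identified by content, not by label) chosen most often. The `query_model` callable, the four-option `LABELS` string, and the prompt formatter are assumptions, not the authors' exact implementation.

```python
import random
from collections import Counter

LABELS = "ABCD"  # assumes four-option MCQs

def format_prompt(question, options):
    lines = [question] + [f"{LABELS[i]}. {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines) + "\nAnswer:"

def majority_vote_answer(question, options, query_model, n_orders=5, seed=0):
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_orders):
        order = list(options)
        rng.shuffle(order)
        label = query_model(format_prompt(question, order))
        votes[order[LABELS.index(label.strip()[0])]] += 1  # vote by answer content
    return votes.most_common(1)[0][0]
```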

Future Perspectives and Implications

The insights derived from examining option order sensitivity have tangible implications for the development of LLMs and their application in real-world situations. In practical terms, this implies a need for more sophisticated methods to evaluate LLM capabilities, ensuring that their decisions reflect genuine comprehension rather than positional bias. Moreover, this could influence how datasets, especially those involving structured response formats like MCQs, are designed and interpreted within AI research.

Extending this understanding could lead to considerable advancements in LLM reliability and performance, not only for MCQs but also for other complex NLP tasks where multiple elements or orderings are involved. As AI models evolve, incorporating nuanced calibration strategies during both training and evaluation may yield models capable of consistently strong performance across varied scenarios.

In summary, this research sheds light on an aspect of LLM functioning that has been relatively unexplored and underscores the necessity for ongoing scrutiny and refinement of these models in handling ordered input data. The findings serve as a foundation for future studies aimed at minimizing biases inherent in AI models, propelling forward their application in more diverse and unstructured environments.

Authors (2)
  1. Pouya Pezeshkpour (25 papers)
  2. Estevam Hruschka (23 papers)
Citations (95)