Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors (2406.01026v2)

Published 3 Jun 2024 in cs.CL

Abstract: Multiple-Choice Questions (MCQs) constitute a critical area of research in the study of LLMs. Previous works have investigated the selection bias problem in MCQs within few-shot scenarios, in which the LLM's performance may be influenced by the presentation of answer choices, leaving the selection bias during Supervised Fine-Tuning (SFT) unexplored. In this paper, we reveal that selection bias persists in the SFT phase, primarily due to the LLM's inadequate Multiple Choice Symbol Binding (MCSB) ability. This limitation implies that the model struggles to associate the answer options with their corresponding symbols (e.g., A/B/C/D) effectively. To enhance the model's MCSB capability, we first incorporate option contents into the loss function and subsequently adjust the weights of the option symbols and contents, guiding the model to understand the option content of the current symbol. Based on this, we introduce an efficient SFT algorithm for MCQs, termed Point-wise Intelligent Feedback (PIF). PIF constructs negative instances by randomly combining the incorrect option contents with all candidate symbols, and proposes a point-wise loss to feed these negative samples back to the LLM. Our experimental results demonstrate that PIF significantly reduces the model's selection bias by improving its MCSB capability. Remarkably, PIF also delivers a substantial improvement in accuracy on MCQs.
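To make the PIF idea concrete, below is a minimal sketch of the two ingredients the abstract describes: sampling negative (symbol, content) pairs from the incorrect options, and scoring positives against negatives with a point-wise loss. The helper names (`build_pif_negatives`, `pointwise_loss`), the fixed A/B/C/D symbol set, the sampling scheme, and the use of binary cross-entropy as the point-wise loss are all illustrative assumptions; the paper's actual formulation may differ.

```python
import random
import torch
import torch.nn.functional as F

# Candidate answer symbols (assumed fixed A-D for illustration).
SYMBOLS = ["A", "B", "C", "D"]

def build_pif_negatives(options, correct_symbol, num_negatives=3):
    """Form negative instances by randomly pairing incorrect option contents
    with candidate symbols, per the abstract's description of PIF.
    The exact sampling scheme here is an assumption."""
    wrong_contents = [c for s, c in options.items() if s != correct_symbol]
    negatives = []
    for _ in range(num_negatives):
        symbol = random.choice(SYMBOLS)
        content = random.choice(wrong_contents)
        negatives.append((symbol, content))
    return negatives

def pointwise_loss(pos_scores, neg_scores):
    """Point-wise feedback: push scores of gold (symbol, content) pairs toward 1
    and negative pairs toward 0. Binary cross-entropy is used here as a stand-in
    for the paper's point-wise loss."""
    scores = torch.cat([pos_scores, neg_scores])
    targets = torch.cat([torch.ones_like(pos_scores), torch.zeros_like(neg_scores)])
    return F.binary_cross_entropy_with_logits(scores, targets)

if __name__ == "__main__":
    # Toy MCQ; the scores below stand in for the model's outputs.
    options = {"A": "Paris", "B": "London", "C": "Berlin", "D": "Madrid"}
    negatives = build_pif_negatives(options, correct_symbol="A")
    print("negative (symbol, content) pairs:", negatives)

    pos = torch.tensor([2.1])              # score for the gold pair
    neg = torch.tensor([0.3, -0.5, 1.2])   # scores for the sampled negatives
    print("point-wise loss:", pointwise_loss(pos, neg).item())
```

In this reading, the point-wise loss supplies explicit feedback on each mismatched symbol-content pairing, which is what strengthens the symbol binding that standard SFT on the gold answer alone does not enforce.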

Authors (8)
  1. Mengge Xue (6 papers)
  2. Zhenyu Hu (8 papers)
  3. Meng Zhao (48 papers)
  4. Liqun Liu (8 papers)
  5. Kuo Liao (5 papers)
  6. Shuang Li (203 papers)
  7. Honglin Han (2 papers)
  8. Chengguo Yin (3 papers)
Citations (4)