Single-Sentence Reader: A Novel Approach for Addressing Answer Position Bias (2308.04566v4)

Published 8 Aug 2023 in cs.CL

Abstract: Machine Reading Comprehension (MRC) models tend to exploit spurious correlations (also known as dataset bias or annotation artifacts in the research community). Consequently, these models may perform the MRC task without fully comprehending the given context and question, which is undesirable since it may result in low robustness against distribution shift. The main focus of this paper is answer-position bias, where a significant percentage of training questions have answers located solely in the first sentence of the context. We propose a Single-Sentence Reader as a new approach for addressing answer-position bias in MRC. Remarkably, in our experiments with six different models, our proposed Single-Sentence Readers trained on a biased dataset achieve results that nearly match those of models trained on the normal dataset, demonstrating their effectiveness in addressing answer-position bias. Our study also discusses several challenges our Single-Sentence Readers encounter and proposes a potential solution.
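To make the notion of answer-position bias concrete, the sketch below estimates how often an answer span begins inside the first sentence of its context in a SQuAD-style dataset. This is an illustrative measurement, not code from the paper; the function names, the naive sentence split on ". ", and the toy examples are assumptions.

```python
# Illustrative sketch: measuring answer-position bias in a SQuAD-style dataset.
# Assumes each example has a "context" string and a character-level "answer_start".

def first_sentence_end(context: str) -> int:
    """Index just past the first sentence (naive split on '. ')."""
    idx = context.find(". ")
    return len(context) if idx == -1 else idx + 1

def answer_position_bias(examples) -> float:
    """Fraction of examples whose answer span starts in the first sentence."""
    in_first = sum(
        1 for ex in examples
        if ex["answer_start"] < first_sentence_end(ex["context"])
    )
    return in_first / len(examples)

examples = [
    {"context": "Paris is the capital of France. It lies on the Seine.",
     "answer_start": 0},   # answer in the first sentence
    {"context": "The Seine flows through Paris. Paris is in France.",
     "answer_start": 35},  # answer in the second sentence
]
print(answer_position_bias(examples))  # 0.5
```

A dataset where this fraction is far above what chance over sentence positions would predict exhibits the bias the paper targets, letting a model succeed by attending only to the opening sentence.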

Authors (2)
  1. Son Quoc Tran (7 papers)
  2. Matt Kretchmar (5 papers)
