Enhancing Pre-Trained Generative Language Models with Question Attended Span Extraction on Machine Reading Comprehension (2404.17991v3)

Published 27 Apr 2024 in cs.CL

Abstract: Machine Reading Comprehension (MRC) poses a significant challenge in the field of NLP. While mainstream MRC methods predominantly leverage extractive strategies using encoder-only models such as BERT, generative approaches face the issue of out-of-control generation -- a critical problem where generated answers are often incorrect, irrelevant, or unfaithful to the source text. To address these limitations in generative models for MRC, we introduce the Question-Attended Span Extraction (QASE) module. Integrated during the fine-tuning phase of pre-trained generative language models (PLMs), QASE significantly enhances their performance, allowing them to surpass the extractive capabilities of advanced LLMs such as GPT-4 in few-shot settings. Notably, these gains in performance do not come with an increase in computational demands. The efficacy of the QASE module has been rigorously tested across various datasets, consistently achieving or even surpassing state-of-the-art (SOTA) results, thereby bridging the gap between generative and extractive models in extractive MRC tasks.
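
The abstract describes the mechanism only at a high level: an auxiliary span-extraction head, attended to the question, is trained jointly with the generative objective during fine-tuning and adds no cost at inference. The sketch below illustrates that general idea in PyTorch; the T5 backbone, the specific attention layout, the input format, and the loss weight `alpha` are all assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch only -- not the authors' implementation. Assumptions:
# a T5 backbone, inputs formatted as "question: ... context: ...", a boolean
# question_mask marking question tokens, and a joint-loss weight alpha.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import T5ForConditionalGeneration  # backbone assumed for the sketch

class QuestionAttendedSpanHead(nn.Module):
    """Auxiliary head: each input token attends over the question tokens,
    and the fused representation is scored for answer-span start/end."""
    def __init__(self, hidden_size: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.span_scorer = nn.Linear(2 * hidden_size, 2)  # start/end logit per token

    def forward(self, hidden_states: torch.Tensor, question_mask: torch.Tensor):
        # hidden_states: (B, L, H) encoder outputs; question_mask: (B, L) bool
        attended, _ = self.cross_attn(
            query=hidden_states, key=hidden_states, value=hidden_states,
            key_padding_mask=~question_mask,  # restrict attention keys to question tokens
        )
        fused = torch.cat([hidden_states, attended], dim=-1)
        start_logits, end_logits = self.span_scorer(fused).unbind(dim=-1)
        return start_logits, end_logits  # each (B, L)

def joint_fine_tuning_loss(model, span_head, batch, alpha: float = 0.5):
    """Standard generation loss plus the auxiliary span-extraction loss.
    The span head is only used during fine-tuning; inference-time generation
    is unchanged, so no extra compute is required at test time."""
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        labels=batch["labels"],  # gold answer text as the generation target
    )
    start_logits, end_logits = span_head(
        outputs.encoder_last_hidden_state, batch["question_mask"]
    )
    span_loss = 0.5 * (
        F.cross_entropy(start_logits, batch["start_positions"])
        + F.cross_entropy(end_logits, batch["end_positions"])
    )
    return outputs.loss + alpha * span_loss

# Hypothetical wiring:
#   model = T5ForConditionalGeneration.from_pretrained("t5-base")
#   span_head = QuestionAttendedSpanHead(model.config.d_model)
```

In a layout like this, the span head and its loss can be discarded after fine-tuning, which is consistent with the abstract's claim that the gains come without added computational demands: the joint objective simply pushes the generative PLM toward answers grounded in an extractable span of the source text.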

Authors (4)
  1. Lin Ai (15 papers)
  2. Zheng Hui (27 papers)
  3. Zizhou Liu (5 papers)
  4. Julia Hirschberg (37 papers)
