Long Context Question Answering via Supervised Contrastive Learning (2112.08777v2)

Published 16 Dec 2021 in cs.CL and cs.AI

Abstract: Long-context question answering (QA) tasks require reasoning over a long document or multiple documents. Addressing these tasks often benefits from identifying a set of evidence spans (e.g., sentences), which provide supporting evidence for answering the question. In this work, we propose a novel method for equipping long-context QA models with an additional sequence-level objective for better identification of the supporting evidence. We achieve this via an additional contrastive supervision signal in finetuning, where the model is encouraged to explicitly discriminate supporting evidence sentences from negative ones by maximizing question-evidence similarity. The proposed additional loss exhibits consistent improvements on three different strong long-context transformer models, across two challenging question answering benchmarks -- HotpotQA and QAsper.
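To make the idea concrete, below is a minimal sketch of what the described sequence-level contrastive objective could look like during finetuning. This is not the paper's exact formulation; the function name, the temperature value, and the use of pooled question/sentence embeddings with in-document negatives are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_evidence_loss(question_emb, sentence_embs, evidence_mask, temperature=0.1):
    """Illustrative supervised contrastive loss over question-sentence similarity.

    question_emb:  (d,)   pooled representation of the question
    sentence_embs: (n, d) pooled representations of the n document sentences
    evidence_mask: (n,)   boolean tensor, True for supporting-evidence sentences
    """
    # Cosine similarity between the question and every sentence, scaled by temperature.
    q = F.normalize(question_emb, dim=-1)
    s = F.normalize(sentence_embs, dim=-1)
    sims = (s @ q) / temperature                      # shape (n,)

    # Softmax over all sentences: evidence sentences are the positives,
    # the remaining sentences act as in-document negatives.
    log_probs = F.log_softmax(sims, dim=-1)
    pos_log_probs = log_probs[evidence_mask]

    # Average negative log-likelihood of the positives, i.e. maximize
    # question-evidence similarity relative to the negatives.
    return -pos_log_probs.mean()

# Hypothetical usage: add to the standard QA loss with a weighting coefficient.
# total_loss = qa_loss + contrastive_weight * contrastive_evidence_loss(q, s, mask)
```

In this sketch the contrastive term is simply added to the usual QA finetuning loss, which matches the abstract's description of an additional supervision signal rather than a replacement objective.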

Authors (4)
  1. Avi Caciularu (46 papers)
  2. Ido Dagan (72 papers)
  3. Jacob Goldberger (41 papers)
  4. Arman Cohan (121 papers)
Citations (20)
