Adversarial Domain Adaptation for Machine Reading Comprehension (1908.09209v1)

Published 24 Aug 2019 in cs.CL

Abstract: In this paper, we focus on unsupervised domain adaptation for Machine Reading Comprehension (MRC), where the source domain has a large amount of labeled data, while only unlabeled passages are available in the target domain. To this end, we propose an Adversarial Domain Adaptation framework (AdaMRC), where (i) pseudo questions are first generated for unlabeled passages in the target domain, and then (ii) a domain classifier is incorporated into an MRC model to predict which domain a given passage-question pair comes from. The classifier and the passage-question encoder are jointly trained using adversarial learning to enforce domain-invariant representation learning. Comprehensive evaluations demonstrate that our approach (i) is generalizable to different MRC models and datasets, (ii) can be combined with pre-trained large-scale language models (such as ELMo and BERT), and (iii) can be extended to semi-supervised learning.
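The core adversarial mechanism in the abstract, a domain classifier trained jointly with the encoder so that the encoder's representations become domain-invariant, is commonly realized with a gradient-reversal trick. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the function name, the linear encoder, and the logistic domain classifier are all hypothetical simplifications chosen to show how the classifier's gradient is flipped (and scaled) before it reaches the encoder.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_adversarial_grads(W_enc, w_clf, x, d, lam=0.1):
    """Toy one-step gradients for adversarial domain adaptation
    (hypothetical simplification of the AdaMRC setup).

    h = W_enc @ x is a linear stand-in for the passage-question
    encoder; p = sigmoid(w_clf @ h) is a logistic domain classifier
    predicting the domain label d (0 = source, 1 = target).
    The classifier descends its binary cross-entropy gradient,
    while a gradient-reversal step flips (and scales by lam) the
    gradient flowing into the encoder, pushing the encoder toward
    representations the classifier cannot separate."""
    h = W_enc @ x
    p = sigmoid(w_clf @ h)
    dlogit = p - d                       # d(BCE)/d(logit)
    grad_clf = dlogit * h                # classifier minimizes domain loss
    grad_enc_fwd = dlogit * np.outer(w_clf, x)
    grad_enc = -lam * grad_enc_fwd       # reversed: encoder maximizes it
    return grad_clf, grad_enc
```

In a full training loop, `grad_enc` would be added to the gradient of the MRC answer-span loss, so the encoder is simultaneously optimized for reading comprehension and against domain discriminability; the scaling factor `lam` (an assumed hyperparameter here) trades off the two objectives.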

Authors (6)
  1. Huazheng Wang
  2. Zhe Gan
  3. Xiaodong Liu
  4. Jingjing Liu
  5. Jianfeng Gao
  6. Hongning Wang
Citations (63)