Attention-Guided Answer Distillation for Machine Reading Comprehension (1808.07644v4)

Published 23 Aug 2018 in cs.CL

Abstract: Although current reading comprehension systems have achieved significant advances, their strong performance is often obtained at the cost of ensembling numerous models. Existing approaches are also vulnerable to adversarial attacks. This paper tackles these problems by leveraging knowledge distillation, which aims to transfer knowledge from an ensemble model to a single model. We first demonstrate that vanilla knowledge distillation applied to answer span prediction is effective for reading comprehension systems. We then propose two novel approaches that not only penalize the prediction on confusing answers but also guide the training with alignment information distilled from the ensemble. Experiments show that our best student model has only a slight drop of 0.4% F1 on the SQuAD test set compared to the ensemble teacher, while running 12x faster during inference. It even outperforms the teacher on adversarial SQuAD datasets and the NarrativeQA benchmark.
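
The vanilla span-level distillation the abstract refers to can be pictured as blending the usual cross-entropy on the gold answer boundaries with a soft-target term that pulls the student's start/end distributions toward the ensemble teacher's. The sketch below is a minimal, hypothetical illustration only; the function name, temperature, and mixing weight `alpha` are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def span_distillation_loss(student_start_logits, student_end_logits,
                           teacher_start_logits, teacher_end_logits,
                           gold_start, gold_end,
                           temperature=2.0, alpha=0.5):
    """Vanilla KD loss for answer span prediction (illustrative sketch).

    Mixes hard-label cross-entropy on the annotated span boundaries with a
    KL term toward the teacher's temperature-softened start/end distributions.
    """
    # Hard-label cross-entropy on the gold start and end positions.
    ce = (F.cross_entropy(student_start_logits, gold_start) +
          F.cross_entropy(student_end_logits, gold_end))

    # Soft-label KL divergence against the teacher's scaled distributions.
    kl = (F.kl_div(F.log_softmax(student_start_logits / temperature, dim=-1),
                   F.softmax(teacher_start_logits / temperature, dim=-1),
                   reduction="batchmean") +
          F.kl_div(F.log_softmax(student_end_logits / temperature, dim=-1),
                   F.softmax(teacher_end_logits / temperature, dim=-1),
                   reduction="batchmean")) * (temperature ** 2)

    return alpha * ce + (1.0 - alpha) * kl
```

The paper's two proposed extensions go beyond this baseline by additionally penalizing confusing answer candidates and distilling attention-based alignment information from the ensemble, neither of which is captured in the sketch above.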

Authors (7)
  1. Minghao Hu (14 papers)
  2. Yuxing Peng (22 papers)
  3. Furu Wei (291 papers)
  4. Zhen Huang (114 papers)
  5. Dongsheng Li (240 papers)
  6. Nan Yang (182 papers)
  7. Ming Zhou (182 papers)
Citations (73)