
BERTVision -- A Parameter-Efficient Approach for Question Answering (2202.12210v1)

Published 24 Feb 2022 in cs.CL and cs.LG

Abstract: We present a highly parameter-efficient approach for Question Answering that significantly reduces the need for extended BERT fine-tuning. Our method uses information from the hidden state activations of each BERT transformer layer, which is discarded during typical BERT inference. Our best model achieves maximal BERT performance at a fraction of the training time and GPU or TPU expense. Performance is further improved by ensembling our model with BERT's predictions. Furthermore, we find that near-optimal performance can be achieved for QA span annotation using less training data. Our experiments show that this approach works well not only for span annotation, but also for classification, suggesting that it may be extensible to a wider range of tasks.
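The abstract's central idea — feeding the per-layer hidden states that standard BERT inference discards into a small task head — can be sketched in toy form. Everything below (shapes, the softmax layer-pooling, the linear span head) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not taken from the paper):
num_layers, seq_len, hidden = 12, 16, 64  # BERT-base layer count, toy dims

# Hidden states from every transformer layer -- the signal the paper
# notes is normally thrown away after typical BERT inference.
hidden_states = rng.standard_normal((num_layers, seq_len, hidden))

# A tiny hypothetical head: softmax-weight the layer axis down to one
# vector per token, then project to start/end span logits.
layer_scores = rng.standard_normal(num_layers)
layer_weights = np.exp(layer_scores) / np.exp(layer_scores).sum()
pooled = np.tensordot(layer_weights, hidden_states, axes=(0, 0))  # (seq_len, hidden)

W_span = rng.standard_normal((hidden, 2))   # columns: start / end logits
logits = pooled @ W_span                    # (seq_len, 2)
start_idx = int(logits[:, 0].argmax())
end_idx = int(logits[:, 1].argmax())
print(pooled.shape, logits.shape, start_idx, end_idx)
```

Only the small head (here, `layer_weights` and `W_span`) would be trained, which is what makes such an approach cheap relative to fine-tuning all of BERT's parameters.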

Authors (3)
  1. Siduo Jiang (2 papers)
  2. Cristopher Benge (1 paper)
  3. William Casey King (1 paper)
Citations (1)
