Transformer-based Language Models for Factoid Question Answering at BioASQ9b (2109.07185v1)
Published 15 Sep 2021 in cs.CL
Abstract: In this work, we describe our experiments and participating systems in the BioASQ Task 9b Phase B challenge of biomedical question answering. We focused on finding the ideal answers and investigated multi-task fine-tuning and gradual unfreezing techniques on transformer-based language models. For factoid questions, our ALBERT-based systems ranked first in test batch 1 and fourth in test batch 2. Our DistilBERT systems outperformed the ALBERT variants in test batches 4 and 5 despite having 81% fewer parameters than ALBERT. However, we observed that gradual unfreezing had no significant impact on the model's accuracy compared to standard fine-tuning.
- Urvashi Khanna
- Diego Mollá
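The gradual-unfreezing schedule mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual code: a stack of `nn.Linear` blocks stands in for transformer layers, and the same scheduling logic would apply to the encoder layers of a real ALBERT or DistilBERT model (e.g. via the Hugging Face `transformers` library).

```python
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a transformer encoder: a stack of layers plus a task head."""
    def __init__(self, n_layers=4, dim=8):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))
        self.head = nn.Linear(dim, 2)  # task-specific head, always trainable

def freeze_all_layers(model):
    # Start with the whole encoder frozen; only the head is trainable.
    for layer in model.layers:
        for p in layer.parameters():
            p.requires_grad = False

def unfreeze_top_k(model, k):
    # Unfreeze the top k encoder layers (those closest to the output head).
    for layer in list(model.layers)[-k:]:
        for p in layer.parameters():
            p.requires_grad = True

model = TinyEncoder()
freeze_all_layers(model)
for epoch in range(len(model.layers)):
    # One additional layer becomes trainable each epoch, top-down.
    unfreeze_top_k(model, epoch + 1)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"epoch {epoch}: trainable params = {trainable}")
```

In standard fine-tuning, by contrast, all parameters would be trainable from the first epoch; the abstract reports that this schedule gave no significant accuracy gain over that baseline.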