
Transferability of Natural Language Inference to Biomedical Question Answering (2007.00217v4)

Published 1 Jul 2020 in cs.CL

Abstract: Biomedical question answering (QA) is a challenging task due to the scarcity of data and the requirement of domain expertise. Pre-trained language models have been used to address these issues. Recently, learning relationships between sentence pairs has been shown to improve performance in general QA. In this paper, we focus on applying BioBERT to transfer the knowledge of natural language inference (NLI) to biomedical QA. We observe that BioBERT trained on the NLI dataset obtains better performance on Yes/No (+5.59%), Factoid (+0.53%), and List type (+13.58%) questions compared to the performance obtained in a previous challenge (BioASQ 7B Phase B). We present a sequential transfer learning method that performed strongly in the 8th BioASQ Challenge (Phase B). In sequential transfer learning, the order in which tasks are fine-tuned is important. We also measure the unanswerable rate of the extractive QA setting when factoid and list type questions are converted to the Stanford Question Answering Dataset (SQuAD) format.
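
The sequential transfer learning recipe described in the abstract (fine-tune BioBERT on NLI first, then on BioASQ questions converted to SQuAD format) can be sketched with Hugging Face Transformers. This is a minimal illustration, not the authors' exact training code; the checkpoint name, dataset handling, and training loops are assumptions for the example.

```python
# Minimal sketch of sequential transfer learning: BioBERT -> NLI -> extractive QA.
# Assumes the public BioBERT checkpoint and omits the actual training loops.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    AutoModelForQuestionAnswering,
)

BIOBERT = "dmis-lab/biobert-base-cased-v1.1"  # illustrative BioBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(BIOBERT)

# Step 1: fine-tune BioBERT on an NLI dataset (e.g., MNLI) so the encoder
# learns sentence-pair relationships (entailment / neutral / contradiction).
nli_model = AutoModelForSequenceClassification.from_pretrained(BIOBERT, num_labels=3)
# ... train nli_model on NLI premise/hypothesis pairs here ...
nli_model.save_pretrained("biobert-nli")
tokenizer.save_pretrained("biobert-nli")

# Step 2: initialize an extractive QA model from the NLI-tuned encoder and
# fine-tune it on BioASQ factoid/list questions converted to SQuAD format
# (question + snippet context, answer as a span; unanswerable if no span exists).
qa_model = AutoModelForQuestionAnswering.from_pretrained("biobert-nli")
# ... train qa_model on the SQuAD-format BioASQ examples here ...
```

The key point the paper makes is the ordering: fine-tuning on the intermediate NLI task before the target QA task is what yields the reported gains, so the encoder weights carried from step 1 into step 2 matter more than the discarded NLI classification head.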

Authors (7)
  1. Minbyul Jeong (18 papers)
  2. Mujeen Sung (20 papers)
  3. Gangwoo Kim (10 papers)
  4. Donghyeon Kim (26 papers)
  5. Wonjin Yoon (13 papers)
  6. Jaehyo Yoo (5 papers)
  7. Jaewoo Kang (83 papers)
Citations (37)