
PubMedQA: A Dataset for Biomedical Research Question Answering (1909.06146v1)

Published 13 Sep 2019 in cs.CL, cs.LG, and q-bio.QM

Abstract: We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at https://pubmedqa.github.io.

PubMedQA: A Dataset for Biomedical Research Question Answering

The paper "PubMedQA: A Dataset for Biomedical Research Question Answering" introduces a new dataset aimed at the complex task of question answering within the domain of biomedical research. The dataset, named PubMedQA, is specifically designed to address the challenges presented by biomedical texts, which often require nuanced reasoning over quantitative research content.

Dataset Composition and Features

PubMedQA comprises three subsets:

  • PQA-L (Labeled): 1,000 expert-annotated instances
  • PQA-U (Unlabeled): 61.2k unlabeled instances
  • PQA-A (Artificial): 211.3k artificially generated instances

Each instance includes a question that is either an existing research article title or derived from one, a context (the corresponding abstract without its conclusion), a long answer (the abstract's conclusion, which presumably answers the question), and a yes/no/maybe label summarizing that conclusion. The authors position PubMedQA as the first QA dataset in which answering requires reasoning over biomedical research texts, especially their quantitative content.
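
For concreteness, here is a minimal sketch of inspecting one PQA-L instance. The Hugging Face dataset id, config name, and field names below are assumptions based on a community mirror of the dataset; the official release lives at https://pubmedqa.github.io.

```python
# Hedged sketch: load the expert-labeled subset (PQA-L) with the
# Hugging Face `datasets` library. Dataset id, config, and field names
# are assumptions based on a community mirror, not the official release.
from datasets import load_dataset

pqa_l = load_dataset("pubmed_qa", "pqa_labeled", split="train")  # 1k instances

ex = pqa_l[0]
print(ex["question"])        # research question, typically the article title
print(ex["context"])         # abstract sections, with the conclusion withheld
print(ex["long_answer"])     # the abstract's conclusion
print(ex["final_decision"])  # "yes", "no", or "maybe"
```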

Methodology

The researchers employed a multi-phase fine-tuning approach using BioBERT, a domain-specific version of BERT pre-trained on biomedical texts. The paper underscores the utility of leveraging large, automatically constructed datasets (PQA-A) and bootstrapping techniques applied to unlabeled data (PQA-U) to enhance performance on the smaller expert-annotated dataset (PQA-L).
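
A minimal sketch of that phase sequencing, assuming PyTorch and the transformers library; the checkpoint id, hyperparameters, and loader names are illustrative placeholders rather than the paper's exact configuration.

```python
# Hedged sketch of multi-phase fine-tuning: each phase starts from the
# weights left by the previous one. Checkpoint id and hyperparameters
# are placeholders, not the paper's exact settings.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-v1.1",  # a public BioBERT checkpoint (assumed id)
    num_labels=3)             # yes / no / maybe

def multi_phase_fine_tune(model, phase_loaders, epochs_per_phase=1, lr=2e-5):
    """Fine-tune through each phase in order; every phase reuses the
    weights produced by the previous phase."""
    for loader in phase_loaders:  # e.g. hypothetical [pqa_a_loader, pqa_u_loader, pqa_l_loader]
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs_per_phase):
            for batch in loader:            # input_ids, attention_mask, labels
                loss = model(**batch).loss  # cross-entropy over the 3 classes
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()

# Phase I: artificial PQA-A; Phase II: PQA-U with bootstrapped
# pseudo-labels; Phase III: expert-labeled PQA-L.
```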

Key innovations include:

  • Long Answer Supervision: using the abstract's conclusion as an auxiliary training signal, specifically its bag-of-words statistics, which improves predictive accuracy by exploiting contextual cues (a sketch follows this list).
  • Multi-phase Fine-tuning: sequentially fine-tuning on the different subsets, as sketched above, to exploit all available data and adapt the model incrementally to the expert-labeled task.
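
A minimal sketch of how the long answer supervision might be wired up: a shared BioBERT encoder feeds both the yes/no/maybe classifier and an auxiliary head predicting which vocabulary terms appear in the long answer. The class name, equal loss weighting, and multi-label BCE formulation are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of long-answer bag-of-words supervision as an auxiliary
# loss alongside the main yes/no/maybe classification loss.
import torch.nn as nn
from transformers import AutoModel

class PubMedQAWithBowHead(nn.Module):
    """Hypothetical model: shared BioBERT encoder, a yes/no/maybe head,
    and an auxiliary head predicting long-answer bag-of-words terms."""
    def __init__(self, bow_vocab_size, hidden_size=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")
        self.cls_head = nn.Linear(hidden_size, 3)               # yes / no / maybe
        self.bow_head = nn.Linear(hidden_size, bow_vocab_size)  # long-answer BoW

    def forward(self, input_ids, attention_mask, labels, bow_targets):
        # [CLS] representation of the question + context.
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state[:, 0]
        logits = self.cls_head(h)
        cls_loss = nn.functional.cross_entropy(logits, labels)
        # Auxiliary multi-label loss: which vocabulary terms occur in the
        # abstract's conclusion (the long answer); equal weighting assumed.
        bow_loss = nn.functional.binary_cross_entropy_with_logits(
            self.bow_head(h), bow_targets)
        return logits, cls_loss + bow_loss
```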

Results

The best-performing model, a multi-phase fine-tuned BioBERT with long answer supervision, reached 68.1% accuracy, compared with single-human performance of 78.0% and a majority baseline of 55.2%. The model thus comfortably beats the baseline, but the ten-point gap to human performance leaves substantial room for improvement.

Implications and Future Directions

The development of PubMedQA provides an important benchmark for scientific reasoning in NLP systems. It challenges models to engage with content that demands both domain-specific knowledge and complex inferential capabilities.

Looking forward, nuanced handling of quantitative content remains a major challenge, since much of the numerical evidence in the contexts lacks an explicit textual interpretation. Incorporating techniques for numerical understanding could prove beneficial. The paper also hints at long answer generation as a richer auxiliary task that might offer further insights and improvements.

In sum, PubMedQA represents a valuable resource and significant step forward in biomedical NLP, outlining a clear trajectory for future advancements in AI's ability to process and reason over scientific literature.

Authors (5)
  1. Qiao Jin (74 papers)
  2. Bhuwan Dhingra (66 papers)
  3. Zhengping Liu (1 paper)
  4. William W. Cohen (79 papers)
  5. Xinghua Lu (16 papers)
Citations (644)