A Question-Entailment Approach to Question Answering (1901.08079v1)

Published 23 Jan 2019 in cs.CL, cs.AI, cs.IR, and cs.LG

Abstract: One of the challenges in large-scale information retrieval (IR) is to develop fine-grained and domain-specific methods to answer natural language questions. Despite the availability of numerous sources and datasets for answer retrieval, Question Answering (QA) remains a challenging problem due to the difficulty of the question understanding and answer extraction tasks. One of the promising tracks investigated in QA is to map new questions to formerly answered questions that are `similar'. In this paper, we propose a novel QA approach based on Recognizing Question Entailment (RQE) and we describe the QA system and resources that we built and evaluated on real medical questions. First, we compare machine learning and deep learning methods for RQE using different kinds of datasets, including textual inference, question similarity and entailment in both the open and clinical domains. Second, we combine IR models with the best RQE method to select entailed questions and rank the retrieved answers. To study the end-to-end QA approach, we built the MedQuAD collection of 47,457 question-answer pairs from trusted medical sources, that we introduce and share in the scope of this paper. Following the evaluation process used in TREC 2017 LiveQA, we find that our approach exceeds the best results of the medical task with a 29.8% increase over the best official score. The evaluation results also support the relevance of question entailment for QA and highlight the effectiveness of combining IR and RQE for future QA efforts. Our findings also show that relying on a restricted set of reliable answer sources can bring a substantial improvement in medical QA.

An Analysis of a Question-Entailment Approach for Medical Question Answering

The paper presents a novel approach to Question Answering (QA) in the medical domain, leveraging Recognizing Question Entailment (RQE) to enhance the retrieval and ranking of answers. Despite the increasing availability of datasets for answer retrieval, building QA systems that understand complex questions and extract accurate answers remains a formidable challenge, particularly in the domain-specific context of medical information retrieval.

Summary of the QA System

The proposed QA system combines Information Retrieval (IR) models with RQE: new consumer health questions are mapped, via question entailment, to previously answered questions. Essential to this approach is the MedQuAD dataset, a collection of 47,457 question-answer pairs harvested from trusted NIH medical resources, which serves as the answer source for the end-to-end system and its evaluation.
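The two-stage flow described above (IR retrieval followed by entailment-based re-ranking) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `answer_pipeline`, the linear score combination, and the weight `alpha` are all assumptions introduced here for clarity.

```python
def answer_pipeline(new_question, candidates, rqe_score, top_k=3, alpha=0.5):
    """Hypothetical sketch of an IR + RQE answer pipeline.

    candidates: list of (question, answer, ir_score) triples retrieved
        for new_question by the IR stage (ir_score assumed in [0, 1]).
    rqe_score: callable scoring how strongly new_question entails a
        candidate question, returning a value in [0, 1].
    Combines the two scores linearly and returns the top_k answers.
    """
    scored = []
    for cand_question, answer, ir_score in candidates:
        combined = alpha * ir_score + (1 - alpha) * rqe_score(new_question, cand_question)
        scored.append((combined, answer))
    # Rank answers by the combined IR + entailment score, best first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [answer for _, answer in scored[:top_k]]
```

With `alpha = 0.5` the IR and entailment scores contribute equally; the actual system's filtering and ranking criteria are described in the paper.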

The paper describes a rigorous comparison between machine learning (ML) models, specifically logistic regression, and deep learning (DL) approaches based on neural networks with GloVe embeddings for classifying question entailment. For this purpose, multiple training datasets spanning the open, clinical, and consumer health domains were employed. The RQE models are then integrated with IR techniques to filter and rank candidate questions by entailment score, thereby improving answer selection and re-ranking.
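To make the entailment-scoring step concrete, the toy sketch below uses plain lexical cosine similarity between bag-of-words vectors as a stand-in entailment score. This is an assumption for illustration only: the paper's actual classifiers (logistic regression over engineered features, and neural models over GloVe embeddings) are far richer, and the names `bow`, `cosine_similarity`, `entails`, and the `threshold` value are hypothetical.

```python
import math
from collections import Counter


def bow(text):
    """Lowercased bag-of-words counts for a question string."""
    return Counter(text.lower().split())


def cosine_similarity(q1, q2):
    """Cosine similarity between the bag-of-words vectors of two questions."""
    a, b = bow(q1), bow(q2)
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def entails(new_question, answered_question, threshold=0.5):
    """Toy entailment decision: treat high lexical overlap as entailment."""
    return cosine_similarity(new_question, answered_question) >= threshold
```

A real RQE classifier would replace this lexical heuristic with a learned decision over richer semantic features, but the interface (a score or binary decision per question pair) is the same.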

Key Numerical Findings

The system achieved a 29.8% improvement over the best official score on the medical task of TREC 2017 LiveQA, validating the effectiveness of RQE in the retrieval and ranking stages of QA. This improvement is particularly noteworthy given that the proposed system relies solely on a restricted yet reliable collection of answer sources, underscoring the practical benefit of question entailment in medical information retrieval.

Implications and Future Developments

Practically, this research underscores the potential for RQE to advance QA systems, advocating careful mapping of new questions onto pre-established knowledge repositories to optimize answer retrieval. Theoretically, it highlights the importance of entailment relations for understanding the semantics of medical inquiries, prompting future work on improved question type classification and focus recognition to further refine the entailment process.

Given the observed benefits of trusted sources and the promising results of the RQE-based approach, the paper suggests expanding the MedQuAD collection and investigating deeper architectures, including transfer learning, to further improve DL model performance. A noteworthy direction is integrating this methodology into open-domain QA systems, which could capitalize on the entailment-driven insights from the medical domain.

In conclusion, the paper contributes a robust framework for improving the accuracy and relevance of medical QA systems using question entailment, with significant implications for future advancements in AI-driven information retrieval—a critical consideration as domain-specific searches continue to demand precision and reliability.

Authors:
  1. Asma Ben Abacha
  2. Dina Demner-Fushman

Citations: 162