Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness (2405.08151v2)

Published 13 May 2024 in cs.CL

Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in various biomedical NLP tasks, leveraging demonstrations within the input context to adapt to new tasks. However, LLMs are sensitive to the selection of demonstrations. To address the hallucination issue inherent in LLMs, retrieval-augmented LLMs (RALs) offer a solution by retrieving pertinent information from an established database. Nonetheless, existing work lacks rigorous evaluation of the impact of retrieval-augmented LLMs on different biomedical NLP tasks, which makes it difficult to ascertain the capabilities of RALs within the biomedical domain. Moreover, the outputs of RALs are affected by retrieved knowledge that is unlabeled, counterfactual, or diverse, an issue that has not been well studied in the biomedical domain even though such knowledge is common in the real world. Finally, exploring self-awareness is also crucial for RAL systems. In this paper, we therefore systematically investigate the impact of RALs on five biomedical tasks (triple extraction, link prediction, classification, question answering, and natural language inference). We analyze the performance of RALs along four fundamental abilities: unlabeled robustness, counterfactual robustness, diverse robustness, and negative awareness. To this end, we propose an evaluation framework to assess RALs' performance on different biomedical NLP tasks and establish four testbeds based on these abilities. We then evaluate three representative LLMs with three different retrievers on five tasks over nine datasets.

Evaluating Biomedical Applications of Retrieval-Augmented LLMs

Introduction

In the world of biomedical NLP, LLMs like ChatGPT have demonstrated powerful capabilities. However, they are prone to issues like factual hallucinations—generating information that appears plausible but is actually incorrect. This paper explores a promising approach to mitigate this problem: Retrieval-Augmented LLMs (RALs), which enhance LLMs by fetching relevant information from external databases.

The Basics of RALs

Imagine you're using an LLM to extract biomedical information. Instead of relying solely on its pre-trained knowledge, a Retrieval-Augmented LLM can search an external source—like a specialized database or corpus—for pertinent information. This retrieved data, when combined with the model's input, helps generate more accurate outputs.
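
To make the idea concrete, here is a minimal sketch of the retrieve-then-generate loop in Python. This is an illustration, not the paper's implementation: the corpus is toy data, the `all-MiniLM-L6-v2` encoder is a stand-in (the paper evaluates retrievers such as Contriever), and `generate` is a placeholder for whatever LLM is plugged in.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy stand-in for a biomedical retrieval database.
corpus = [
    "Aspirin inhibits the enzyme cyclooxygenase (COX).",
    "Metformin is a first-line treatment for type 2 diabetes.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_emb @ q  # unit vectors, so dot product = cosine similarity
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def generate(prompt: str) -> str:
    """Placeholder for an actual LLM call; plug in a model of choice."""
    raise NotImplementedError

def rag_answer(question: str) -> str:
    """Prepend retrieved passages to the question before generation."""
    context = "\n".join(retrieve(question))
    return generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```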

Tasks Studied

The evaluation focuses on five key biomedical NLP tasks:

  • Triple Extraction (a toy instance follows this list)
  • Link Prediction
  • Text Classification
  • Question Answering (QA)
  • Natural Language Inference (NLI)
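
To make the first task concrete, triple extraction maps a sentence to (head entity, relation, tail entity) tuples. A hypothetical instance (sentence and labels invented for illustration, not taken from the paper's datasets):

```python
# Illustrative triple-extraction instance (invented example).
sentence = "Aspirin inhibits cyclooxygenase, reducing prostaglandin synthesis."
triples = [
    ("Aspirin", "inhibits", "cyclooxygenase"),  # (head, relation, tail)
]
```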

The paper also systematically explores several capabilities of RALs: robustness to unlabeled data, counterfactual data, and diverse datasets, as well as awareness of negative examples.

Key Findings

Robust Performance in Triple Extraction and Classification

The paper reports strong performance gains when using RALs for triple extraction and classification. On the ChemProt dataset, for instance, the RAL boosted the original LLM's F1 score by 49%, reaching 86.91% with the Contriever retriever.
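
For reference, F1 on a relation-classification dataset like ChemProt is typically computed as below. The labels here are invented and micro-averaging is an assumption; the paper's exact averaging scheme is not restated in this summary.

```python
from sklearn.metrics import f1_score

# Toy gold and predicted relation labels.
y_true = ["inhibitor", "agonist", "inhibitor", "substrate"]
y_pred = ["inhibitor", "agonist", "substrate", "substrate"]

print(f1_score(y_true, y_pred, average="micro"))  # 0.75
```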

The Curious Case of Question Answering

Interestingly, the paper finds that RALs perform worse on the question-answering task compared to traditional LLMs. This is attributed to the limited scope of the retriever corpus used in the paper, which did not access extensive biomedical databases like PubMed. The lesson here: the effectiveness of a retriever largely depends on the richness of the external data source.

Robustness Analysis

The paper introduces a comprehensive evaluation framework, BioRAB, to test the abilities of RALs. Here are the four testbeds they used:

  1. Unlabeled Robustness: Can RALs perform well with an unlabeled retrieval corpus?
  2. Counterfactual Robustness: How well do RALs handle mislabeled data? (A construction sketch follows this list.)
  3. Diverse Robustness: Can RALs benefit from diverse datasets across different tasks?
  4. Negative Awareness: Can RALs identify and handle harmful (negative) information?
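
As a rough sketch of how the counterfactual testbed might be constructed (an assumption on our part; the paper's exact procedure may differ), one can corrupt a fixed fraction of a labeled retrieval corpus before indexing it:

```python
import random

def make_counterfactual_corpus(passages, labels, rate=0.2, seed=0):
    """Mislabel a fixed fraction of a labeled retrieval corpus.

    Assumes at least two distinct labels; rate=0.2 mirrors the 20%
    mislabeling level described as manageable below.
    """
    rng = random.Random(seed)
    label_set = sorted(set(labels))
    corrupted = list(labels)
    for i in rng.sample(range(len(labels)), k=int(rate * len(labels))):
        corrupted[i] = rng.choice([l for l in label_set if l != labels[i]])
    return list(zip(passages, corrupted))
```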

Some Mixed Results

  • Unlabeled Robustness: RALs showed a clear dependency on labeled data, especially for label-intensive tasks. Even so, on datasets like ChemProt they still outperformed the original LLMs with an unlabeled corpus.
  • Counterfactual Robustness: Higher rates of mislabeled data noticeably hurt RAL performance, but lower levels (20%) seemed manageable.
  • Diverse Robustness: Using datasets from different tasks offered mixed results: sometimes beneficial, but the extra data was often treated as noise.
  • Negative Awareness: RALs struggled to identify harmful information in negative examples, a crucial area needing further research.

The Implications

Practical Applications

The insights from this paper have significant implications for clinical applications and biomedical research. RALs could improve tasks like patient record analysis, clinical decision support, and drug interaction studies by supplying more accurate information, provided the retrievers fetch relevant material and the corpora are rich and diverse.

Theoretical Implications

From a theoretical standpoint, this research highlights the challenges in making LLMs more robust and reliable. The struggles with counterfactual and diverse corpora suggest that retriever design and the quality of the input corpora are pivotal to RALs' effectiveness.

Future Directions

Improving the retrieval process—especially expanding corpora for question answering and other tasks—appears to be the next logical step. Moreover, enhancing the models' self-awareness abilities to discern between useful and misleading information will be crucial for the widespread application of RALs in sensitive domains like biomedical research.

Overall, this paper provides valuable insights into the capabilities and limitations of retrieval-augmented LLMs in the biomedical domain, illuminating paths for future advancements.

References (29)
  1. Bionli: Generating a biomedical nli dataset using lexico-semantic constraints for adversarial examples. arXiv preprint arXiv:2210.14814, 2022.
  2. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of Biomedical Informatics, 45(5):885 – 892, 2012a. ISSN 1532-0464. doi: https://doi.org/10.1016/j.jbi.2012.04.008. URL http://www.sciencedirect.com/science/article/pii/S1532046412000615. Text Mining and Natural Language Processing in Pharmacogenomics.
  3. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. Journal of biomedical informatics, 45(5):885–892, 2012b.
  4. Retrieval augmented language model pre-training. In International conference on machine learning, pp.  3929–3938. PMLR, 2020.
  5. Systematic integration of biomedical knowledge prioritizes drugs for repurposing. Elife, 6:e26726, 2017.
  6. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021.
  7. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38, 2023.
  8. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
  9. Understand the dynamic world: An end-to-end knowledge informed framework for open domain entity state tracking. arXiv preprint arXiv:2304.13854, 2023.
  10. How far is language model from 100% few-shot named entity recognition in medical domain. arXiv preprint arXiv:2307.00186, 2023.
  11. Petailor: Improving large language model by tailored chunk scorer in biomedical triple extraction. arXiv preprint arXiv:2310.18463, 2023.
  12. Biomedrag: A retrieval augmented large language model for biomedicine. arXiv preprint arXiv:2405.00465, 2024a.
  13. A condensed transition graph framework for zero-shot link prediction with large language models. arXiv preprint arXiv:2402.10779, 2024b.
  14. Dr. icl: Demonstration-retrieved in-context learning. arXiv preprint arXiv:2305.14128, 2023.
  15. Fine-tuning or retrieval? comparing knowledge injection in llms. arXiv preprint arXiv:2312.05934, 2023.
  16. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on health, inference, and learning, pp.  248–260. PMLR, 2022.
  17. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083, 2023.
  18. Towards expert-level medical question answering with large language models. arXiv preprint arXiv:2305.09617, 2023.
  19. One embedder, any task: Instruction-finetuned text embeddings. acl2023 findings, 2023.
  20. Mrc4bioer: joint extraction of biomedical entities and relations in the machine reading comprehension framework. Journal of Biomedical Informatics, 125:103956, 2022.
  21. Chemprot: a disease chemical biology database. Nucleic acids research, 39(suppl_1):D367–D372, 2010.
  22. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
  23. Rizvi Rubina, Zhang Rui, Vasilakes Jake A. Bionli: Generating a biomedical nli dataset using lexico-semantic constraints for adversarial examples. https://conservancy.umn.edu/handle/11299/194965, 2018.
  24. Jacob White. Pubmed 2.0. Medical reference services quarterly, 39(4):382–387, 2020.
  25. Pmc-llama: Further finetuning llama on medical papers. arXiv preprint arXiv:2304.14454, 2023.
  26. Benchmarking retrieval-augmented generation for medicine. arXiv preprint arXiv:2402.13178, 2024.
  27. Almanac—retrieval-augmented language models for clinical medicine. NEJM AI, 1(2):AIoa2300068, 2024.
  28. Siren’s song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219, 2023.
  29. Pharmkg: a dedicated knowledge graph benchmark for biomedical data mining. Briefings in bioinformatics, 22(4):bbaa344, 2021.
Authors (6)
  1. Mingchen Li (50 papers)
  2. Zaifu Zhan (10 papers)
  3. Han Yang (61 papers)
  4. Yongkang Xiao (7 papers)
  5. Jiatan Huang (3 papers)
  6. Rui Zhang (1138 papers)