
OpenMedLM: Prompt engineering can out-perform fine-tuning in medical question-answering with open-source large language models (2402.19371v1)

Published 29 Feb 2024 in cs.CL, cs.AI, and cs.IR

Abstract: LLMs have become increasingly capable at accomplishing a range of specialized-tasks and can be utilized to expand equitable access to medical knowledge. Most medical LLMs have involved extensive fine-tuning, leveraging specialized medical data and significant, thus costly, amounts of computational power. Many of the top performing LLMs are proprietary and their access is limited to very few research groups. However, open-source (OS) models represent a key area of growth for medical LLMs due to significant improvements in performance and an inherent ability to provide the transparency and compliance required in healthcare. We present OpenMedLM, a prompting platform which delivers state-of-the-art (SOTA) performance for OS LLMs on medical benchmarks. We evaluated a range of OS foundation LLMs (7B-70B) on four medical benchmarks (MedQA, MedMCQA, PubMedQA, MMLU medical-subset). We employed a series of prompting strategies, including zero-shot, few-shot, chain-of-thought (random selection and kNN selection), and ensemble/self-consistency voting. We found that OpenMedLM delivers OS SOTA results on three common medical LLM benchmarks, surpassing the previous best performing OS models that leveraged computationally costly extensive fine-tuning. The model delivers a 72.6% accuracy on the MedQA benchmark, outperforming the previous SOTA by 2.4%, and achieves 81.7% accuracy on the MMLU medical-subset, establishing itself as the first OS LLM to surpass 80% accuracy on this benchmark. Our results highlight medical-specific emergent properties in OS LLMs which have not yet been documented to date elsewhere, and showcase the benefits of further leveraging prompt engineering to improve the performance of accessible LLMs for medical applications.

OpenMedLM: Advancing Medical Question-Answering with Open-Source LLMs through Prompt Engineering

Introduction

The development and application of LLMs have shown remarkable progress across a range of specialized tasks, including those in the medical field. Despite these advances, medical LLMs typically require extensive fine-tuning and considerable computational resources, which can be a barrier to widespread use, especially in a field as critical as healthcare. OpenMedLM introduces an approach that leverages open-source LLMs and prompt engineering to deliver state-of-the-art performance on medical question-answering tasks without extensive model fine-tuning. This result not only demonstrates the potential of open-source models in specialized domains but also underscores the importance of prompt engineering in optimizing LLM performance.

Methodology

OpenMedLM's methodology centers on evaluating a range of open-source LLMs across several medical benchmarks to identify the most effective base model, which was found to be Yi 34B. OpenMedLM then applies a multifaceted prompt engineering strategy, combining zero-shot, few-shot, chain-of-thought prompting (with both random and kNN exemplar selection), and ensemble/self-consistency voting to optimize the model's question-answering performance, as sketched below. The paper details the selection of the LLMs, the design and implementation of each prompting strategy, and the evaluation across four major medical benchmarks: MedQA, MedMCQA, PubMedQA, and the medical subset of MMLU. This systematic approach to prompt engineering highlights its potential to significantly enhance the performance of open-source models in medical applications.
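
As a concrete illustration of the kNN chain-of-thought strategy described above, the following sketch selects the exemplars most similar to a test question and assembles them into a few-shot prompt. The embedding model, exemplar fields, and prompt format here are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of kNN few-shot exemplar selection for chain-of-thought prompting.
# Assumes an exemplar pool of dicts with "question", "options", "rationale",
# and "answer" fields; the encoder and prompt layout are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def select_knn_exemplars(question: str, exemplar_pool: list[dict], k: int = 5) -> list[dict]:
    """Return the k exemplars whose questions are most similar to `question`."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    pool_vecs = embedder.encode([e["question"] for e in exemplar_pool],
                                normalize_embeddings=True)
    sims = pool_vecs @ q_vec                      # cosine similarity (unit vectors)
    top_idx = np.argsort(-sims)[:k]
    return [exemplar_pool[i] for i in top_idx]

def build_cot_prompt(question: str, options: str, exemplars: list[dict]) -> str:
    """Assemble a few-shot chain-of-thought prompt from the selected exemplars."""
    parts = []
    for ex in exemplars:
        parts.append(f"Question: {ex['question']}\n"
                     f"Options: {ex['options']}\n"
                     f"Reasoning: {ex['rationale']}\n"
                     f"Answer: {ex['answer']}\n")
    parts.append(f"Question: {question}\nOptions: {options}\nReasoning:")
    return "\n".join(parts)
```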

Results

OpenMedLM’s implementation of advanced prompt engineering resulted in remarkable success across multiple medical benchmarks. Specifically, the model achieved a 72.6% accuracy on the MedQA benchmark and an 81.7% accuracy on the MMLU medical-subset, surpassing the previous state-of-the-art performances for open-source models in these contexts. These findings underscore the effectiveness of the prompt engineering techniques employed and represent a significant step forward in the use of open-source LLMs for medical question-answering tasks.
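
To make the ensemble/self-consistency step and the reported accuracy metric concrete, here is a minimal sketch of majority voting over sampled chain-of-thought completions and of multiple-choice accuracy scoring. The `generate_answer` callable and the answer-letter parsing are hypothetical placeholders, not the paper's code.

```python
# Minimal sketch of self-consistency voting and benchmark accuracy scoring.
import re
from collections import Counter

def self_consistency_answer(generate_answer, prompt: str, n_samples: int = 5) -> str:
    """Sample several completions and return the majority answer letter."""
    votes = []
    for _ in range(n_samples):
        completion = generate_answer(prompt, temperature=0.7)  # stochastic sampling
        match = re.search(r"Answer:\s*([A-E])", completion)
        if match:
            votes.append(match.group(1))
    # Fall back to "A" only if no completion produced a parseable answer.
    return Counter(votes).most_common(1)[0][0] if votes else "A"

def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of multiple-choice questions answered correctly."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)
```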

Implications

The outcomes of this research have significant implications for both the theoretical understanding and practical application of LLMs in healthcare. Theoretically, OpenMedLM's state-of-the-art performance without extensive fine-tuning challenges the prevailing paradigm for developing specialized LLMs. Practically, open-source models meet the transparency and compliance requirements of healthcare applications, offering a viable path toward the democratization of advanced AI tools in medical settings. Moreover, the promising results invite further exploration of potential synergies between fine-tuning and prompt engineering, which may uncover new optimization strategies for LLMs.

Future Directions

The success of OpenMedLM suggests several avenues for future research, including the exploration of other domain-specific tasks where prompt engineering could similarly optimize the performance of open-source LLMs. Additionally, further investigation into the emergent properties of open-source LLMs could provide insights into the underlying capabilities of these models and how they can be leveraged for complex problem-solving tasks beyond the medical domain. Lastly, integrating LLM capabilities with other AI algorithms in healthcare could pave the way for more comprehensive and powerful tools that support clinical decision-making and patient care.

Conclusion

OpenMedLM's approach to leveraging prompt engineering for optimizing open-source LLMs in medical question-answering tasks not only sets new benchmarks for performance but also highlights the transformative potential of accessible AI tools in healthcare. This research underscores the importance of innovative methodologies in unlocking the capabilities of LLMs and broadens the prospects for their application in specialized tasks, contributing to the advancement of equitable access to medical knowledge through AI.

Authors (10)
  1. Jenish Maharjan (1 paper)
  2. Anurag Garikipati (1 paper)
  3. Navan Preet Singh (1 paper)
  4. Leo Cyrus (1 paper)
  5. Mayank Sharma (27 papers)
  6. Madalina Ciobanu (1 paper)
  7. Gina Barnes (1 paper)
  8. Rahul Thapa (16 papers)
  9. Qingqing Mao (13 papers)
  10. Ritankar Das (2 papers)