
PRISM: Patient Records Interpretation for Semantic Clinical Trial Matching using Large Language Models (2404.15549v2)

Published 23 Apr 2024 in cs.CL and cs.AI

Abstract: Clinical trial matching is the task of identifying trials for which patients may be potentially eligible. Typically, this task is labor-intensive and requires detailed verification of patient electronic health records (EHRs) against the stringent inclusion and exclusion criteria of clinical trials. This process is manual, time-intensive, and challenging to scale up, resulting in many patients missing out on potential therapeutic options. Recent advancements in LLMs have made automating patient-trial matching possible, as shown in multiple concurrent research studies. However, the current approaches are confined to constrained, often synthetic datasets that do not adequately mirror the complexities encountered in real-world medical data. In this study, we present the first, end-to-end large-scale empirical evaluation of clinical trial matching using real-world EHRs. Our study showcases the capability of LLMs to accurately match patients with appropriate clinical trials. We perform experiments with proprietary LLMs, including GPT-4 and GPT-3.5, as well as our custom fine-tuned model called OncoLLM and show that OncoLLM, despite its significantly smaller size, not only outperforms GPT-3.5 but also matches the performance of qualified medical doctors. All experiments were carried out on real-world EHRs that include clinical notes and available clinical trials from a single cancer center in the United States.

Evaluation and Application of LLMs in Clinical Trial Matching: Insights from the PRISM Study

The paper presents a comprehensive study of the application of LLMs such as GPT-4 and GPT-3.5 to clinical trial matching, with a specific focus on oncology. It addresses the labor-intensive, time-consuming nature of patient-trial matching with an end-to-end pipeline, PRISM, which leverages LLMs to automate the process using real-world Electronic Health Records (EHRs). The study demonstrates the potential of LLMs to identify eligible trials for cancer patients by evaluating their ability to interpret and process unstructured EHR data.

Contributions and Approach

The paper introduces the PRISM pipeline, which encompasses the integration of patient records interpretation and semantic clinical trial matching. The model performance was benchmarked against qualified medical professionals, highlighting that the developed OncoLLM model, despite its smaller size, achieved a performance level comparable to GPT-4. This model was fine-tuned specifically for oncology-related tasks and demonstrates significant efficiency gains.

The pipeline utilizes a multi-modular approach:

  • Trial Composition Module: Converts trial inclusion and exclusion criteria into a structured question format, facilitating downstream processing.
  • Chunking and Retrieval: Processes large volumes of unstructured data, extracting relevant information aspects using advanced semantic retrieval techniques.
  • Question-Answering Module: Engages in zero-shot prompting, providing confidence-scaled answers with detailed explanations and evidence references.

Experimental Results

The PRISM pipeline was evaluated on a dataset of real-world EHRs comprising more than 200 trials and over 10,000 clinical trial criteria. The paper showed that OncoLLM outperformed larger proprietary models in criterion-level accuracy and reached near parity with experienced clinicians: approximately 63% accuracy on inclusion-criteria questions, rising to 66% when ambiguous ('N/A') answers were excluded.

Furthermore, OncoLLM also excelled at ranking trials for patients, placing the correct clinical trial among the top three ranked candidates in over 65% of cases. This suggests significant potential for reducing the manual workload of medical professionals and improving trial enrollment efficiency.
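The top-three ranking result corresponds to a standard top-k accuracy metric: the fraction of patients whose ground-truth trial appears among the model's k highest-ranked suggestions. A minimal sketch, with made-up patient/trial identifiers purely for illustration:

```python
def top_k_accuracy(ranked_trials, correct_trial, k=3):
    """Fraction of patients whose correct trial appears in the model's
    top-k ranked suggestions.

    ranked_trials:  dict mapping patient id -> list of trial ids, best first
    correct_trial:  dict mapping patient id -> ground-truth trial id
    """
    hits = sum(1 for patient, ranked in ranked_trials.items()
               if correct_trial[patient] in ranked[:k])
    return hits / len(ranked_trials)

# Toy example: 2 of 3 patients have their correct trial in the top 3.
ranked = {"p1": ["t3", "t1", "t7"],
          "p2": ["t2", "t9", "t4"],
          "p3": ["t5", "t6", "t8"]}
correct = {"p1": "t1", "p2": "t4", "p3": "t9"}
```

Here `top_k_accuracy(ranked, correct, k=3)` evaluates to 2/3, matching how a ">65% top-three" figure would be computed over a patient cohort.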

Implications and Future Directions

The potential implications of this research are substantial, suggesting a paradigm shift in how clinical trials are conducted, particularly in oncology, where patient eligibility is often nuanced and variable. The adoption of LLMs in this context could lead to more accurate and timely patient-trial matches, ultimately improving patient outcomes and accelerating data-driven medical research.

However, several considerations bear on real-world implementation. The current reliance on unstructured data presents challenges, notably where records are missing or incomplete, which may necessitate integrating structured data sources. Additionally, improvements to the retrieval mechanisms and reductions in model inference time would further enhance the pipeline's applicability in clinical environments.

As the model is deployed in privacy-sensitive environments, the ability to host OncoLLM on private infrastructure addresses several concerns regarding data security and compliance with regulatory standards. Its cost-efficiency also positions it as a viable alternative to more expensive, cloud-based proprietary models.

In conclusion, the paper highlights how innovations in machine learning, particularly with LLMs, can revolutionize clinical trial methodologies by optimizing the matching process and potentially improving patient responses. However, continued research focusing on broader datasets, advanced model tuning, and real-world trial deployments will be crucial in realizing these models' full potential in clinical settings.

Authors (13)
  1. Shashi Kant Gupta
  2. Aditya Basu
  3. Mauro Nievas
  4. Jerrin Thomas
  5. Nathan Wolfrath
  6. Adhitya Ramamurthi
  7. Bradley Taylor
  8. Anai N. Kothari
  9. Therica M. Miller
  10. Sorena Nadaf-Rahrov
  11. Yanshan Wang
  12. Hrituraj Singh
  13. Regina Schwind