Scaling Clinical Trial Matching Using Large Language Models: A Case Study in Oncology (2308.02180v3)
Abstract: Clinical trial matching is a key process in health delivery and discovery. In practice, it is plagued by overwhelming unstructured data and unscalable manual processing. In this paper, we conduct a systematic study on scaling clinical trial matching using LLMs, with oncology as the focus area. Our study is grounded in a clinical trial matching system currently in test deployment at a large U.S. health network. Initial findings are promising: out of the box, cutting-edge LLMs such as GPT-4 can already structure elaborate eligibility criteria of clinical trials and extract complex matching logic (e.g., nested AND/OR/NOT). While still far from perfect, LLMs substantially outperform prior strong baselines and may serve as a preliminary solution to help triage patient-trial candidates with humans in the loop. Our study also reveals several significant growth areas for applying LLMs to end-to-end clinical trial matching, such as context limitations and accuracy, especially in structuring patient information from longitudinal medical records.
- Cliff Wong
- Sheng Zhang
- Yu Gu
- Christine Moung
- Jacob Abel
- Naoto Usuyama
- Roshanthi Weerasinghe
- Brian Piening
- Tristan Naumann
- Carlo Bifulco
- Hoifung Poon
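The paper does not publish its criteria schema; as a purely illustrative sketch, the nested AND/OR/NOT matching logic the abstract describes could be represented as a boolean expression tree and evaluated against structured patient attributes (all field names and values below are hypothetical):

```python
# Hypothetical sketch, NOT the paper's actual schema: eligibility criteria
# as a nested AND/OR/NOT tree, evaluated against a structured patient record.

def evaluate(criterion, patient):
    """Recursively evaluate a nested logic tree against patient attributes."""
    op = criterion["op"]
    if op == "AND":
        return all(evaluate(c, patient) for c in criterion["args"])
    if op == "OR":
        return any(evaluate(c, patient) for c in criterion["args"])
    if op == "NOT":
        return not evaluate(criterion["args"][0], patient)
    if op == "ATOM":  # leaf condition: a single structured attribute check
        return patient.get(criterion["field"]) == criterion["value"]
    raise ValueError(f"unknown operator: {op}")

# Example criterion: metastatic breast cancer AND (ER+ OR PR+) AND NOT prior chemo
criteria = {
    "op": "AND",
    "args": [
        {"op": "ATOM", "field": "diagnosis", "value": "metastatic breast cancer"},
        {"op": "OR", "args": [
            {"op": "ATOM", "field": "ER", "value": "positive"},
            {"op": "ATOM", "field": "PR", "value": "positive"},
        ]},
        {"op": "NOT", "args": [
            {"op": "ATOM", "field": "prior_chemo", "value": True},
        ]},
    ],
}

patient = {"diagnosis": "metastatic breast cancer", "ER": "positive", "prior_chemo": False}
print(evaluate(criteria, patient))  # → True
```

In such a pipeline the LLM's job would be to produce the structured tree from free-text eligibility criteria; the deterministic evaluation step then stays auditable for the human in the loop.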