Iterative Prompt Refinement for Radiation Oncology Symptom Extraction Using Teacher-Student Large Language Models (2402.04075v1)

Published 6 Feb 2024 in cs.CL

Abstract: This study introduces a novel teacher-student architecture utilizing LLMs to improve prostate cancer radiotherapy symptom extraction from clinical notes. Mixtral, the student model, initially extracts symptoms, followed by GPT-4, the teacher model, which refines prompts based on Mixtral's performance. This iterative process involved 294 single-symptom clinical notes across 12 symptoms, with up to 16 rounds of refinement per epoch. Results showed significant improvements in extracting symptoms from both single- and multi-symptom notes. For 59 single-symptom notes, accuracy increased from 0.51 to 0.71, precision from 0.52 to 0.82, recall from 0.52 to 0.72, and F1 score from 0.49 to 0.73. In 375 multi-symptom notes, accuracy rose from 0.24 to 0.43, precision from 0.60 to 0.76, recall from 0.24 to 0.43, and F1 score from 0.20 to 0.44. These results demonstrate the effectiveness of advanced prompt engineering in LLMs for radiation oncology use.
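
The abstract describes an iterative loop in which a student model extracts symptoms and a teacher model rewrites the prompt based on the student's errors. The sketch below illustrates that loop under stated assumptions: the model-call functions, the error-summary format, and the stopping criteria are placeholders, not the authors' released implementation.

```python
# Minimal sketch of a teacher-student prompt-refinement loop, assuming
# placeholder model calls. call_student / call_teacher stand in for the
# Mixtral and GPT-4 calls described in the paper and are hypothetical.

from dataclasses import dataclass

@dataclass
class Note:
    text: str                # clinical note text
    gold_symptoms: set[str]  # annotated ground-truth symptoms

def call_student(prompt: str, note: str) -> set[str]:
    """Placeholder for the student model (Mixtral in the paper).
    Replace with a real inference call that returns extracted symptoms."""
    raise NotImplementedError

def call_teacher(prompt: str, errors: list[str]) -> str:
    """Placeholder for the teacher model (GPT-4 in the paper).
    Given the current prompt and a summary of the student's mistakes,
    it returns a revised prompt."""
    raise NotImplementedError

def f1(preds: set[str], gold: set[str]) -> float:
    """Per-note F1 between predicted and gold symptom sets."""
    if not preds and not gold:
        return 1.0
    tp = len(preds & gold)
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def refine_prompt(notes: list[Note], prompt: str, max_rounds: int = 16) -> str:
    """Iteratively refine the extraction prompt, up to 16 rounds per epoch
    as stated in the abstract. The error-summary format is an assumption."""
    best_prompt, best_score = prompt, -1.0
    for _ in range(max_rounds):
        preds = [call_student(prompt, n.text) for n in notes]
        score = sum(f1(p, n.gold_symptoms) for p, n in zip(preds, notes)) / len(notes)
        if score > best_score:
            best_prompt, best_score = prompt, score
        # Describe each misclassified note so the teacher can revise the prompt.
        errors = [
            f"note: {n.text[:80]} | predicted {sorted(p)}, expected {sorted(n.gold_symptoms)}"
            for p, n in zip(preds, notes) if p != n.gold_symptoms
        ]
        if not errors:  # perfect extraction on this epoch's notes; stop early
            break
        prompt = call_teacher(prompt, errors)
    return best_prompt
```

Keeping the best-scoring prompt rather than the last one guards against a teacher revision that degrades performance; whether the paper does this is not stated in the abstract, so it is a design assumption here.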

Authors (10)
  1. Reza Khanmohammadi (11 papers)
  2. Ahmed I Ghanem (21 papers)
  3. Kyle Verdecchia (3 papers)
  4. Ryan Hall (4 papers)
  5. Mohamed Elshaikh (6 papers)
  6. Benjamin Movsas (7 papers)
  7. Hassan Bagher-Ebadian (6 papers)
  8. Indrin Chetty (4 papers)
  9. Mohammad M. Ghassemi (15 papers)
  10. Kundan Thind (7 papers)
Citations (1)