Large Language Model-Augmented Auto-Delineation of Treatment Target Volume in Radiation Therapy (2407.07296v1)

Published 10 Jul 2024 in physics.med-ph, cs.AI, and cs.CV

Abstract: Radiation therapy (RT) is one of the most effective treatments for cancer, and its success relies on the accurate delineation of targets. However, target delineation is a comprehensive medical decision that currently relies purely on manual processes by human experts. Manual delineation is time-consuming, laborious, and subject to interobserver variations. Although the advancements in AI techniques have significantly enhanced the auto-contouring of normal tissues, accurate delineation of RT target volumes remains a challenge. In this study, we propose a visual LLM-based RT target volume auto-delineation network termed Radformer. The Radformer utilizes a hierarchical vision transformer as the backbone and incorporates LLMs to extract text-rich features from clinical data. We introduce a visual language attention module (VLAM) for integrating visual and linguistic features for language-aware visual encoding (LAVE). The Radformer has been evaluated on a dataset comprising 2985 patients with head-and-neck cancer who underwent RT. Metrics, including the Dice similarity coefficient (DSC), intersection over union (IOU), and 95th percentile Hausdorff distance (HD95), were used to evaluate the performance of the model quantitatively. Our results demonstrate that the Radformer has superior segmentation performance compared to other state-of-the-art models, validating its potential for adoption in RT practice.
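
The evaluation metrics named in the abstract (DSC, IoU, HD95) are standard segmentation measures; the sketch below is not taken from the paper's code. It is a minimal NumPy/SciPy implementation for 3D binary masks, assuming a surface-based HD95 convention and explicit voxel spacing; the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Dice similarity coefficient (DSC) and intersection-over-union (IoU)
    for two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return float(dsc), float(iou)

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: foreground voxels with at least one background neighbour."""
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between the surfaces of
    two binary masks, in the physical units implied by `spacing` (e.g. mm)."""
    pred_s, gt_s = _surface(pred.astype(bool)), _surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask
    d_to_gt = distance_transform_edt(~gt_s, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred_s, sampling=spacing)
    surface_dists = np.concatenate([d_to_gt[pred_s], d_to_pred[gt_s]])
    return float(np.percentile(surface_dists, 95))
```

A higher DSC/IoU and a lower HD95 indicate closer agreement between the auto-delineated target volume and the expert contour; per-patient values are typically averaged over the test cohort.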

Authors (8)
  1. Praveenbalaji Rajendran (3 papers)
  2. Yong Yang (237 papers)
  3. Thomas R. Niedermayr (1 paper)
  4. Michael Gensheimer (3 papers)
  5. Beth Beadle (1 paper)
  6. Quynh-Thu Le (2 papers)
  7. Lei Xing (83 papers)
  8. Xianjin Dai (5 papers)
Citations (1)