Harnessing the Power of BERT in the Turkish Clinical Domain: Pretraining Approaches for Limited Data Scenarios (2305.03788v1)

Published 5 May 2023 in cs.CL

Abstract: In recent years, major advances in NLP have been driven by the emergence of LLMs, which have revolutionized research and development in the field. Building on this progress, our study examines how different pretraining methodologies affect the performance of Turkish clinical LLMs on a multi-label classification task over radiology reports, with a focus on the challenges posed by limited language resources. We also evaluate, for the first time, a simultaneous pretraining approach that uses limited clinical task data. We developed four models: TurkRadBERT-task v1, TurkRadBERT-task v2, TurkRadBERT-sim v1, and TurkRadBERT-sim v2. Our findings indicate that the general Turkish BERT model (BERTurk) and TurkRadBERT-task v1, both of which draw on knowledge from a substantial general-domain corpus, achieve the best overall performance. Although task-adaptive pretraining can capture domain-specific patterns, it is constrained by the small task-specific corpus and is susceptible to overfitting. Furthermore, our results underscore the importance of domain-specific vocabulary during pretraining for improving model performance. Ultimately, we observe that combining general-domain knowledge with task-specific fine-tuning is essential for achieving optimal performance across categories. This study offers valuable insights for developing effective Turkish clinical LLMs and can guide future research on pretraining techniques for other low-resource languages in the clinical domain.
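
To make the evaluation task concrete, below is a minimal sketch (not the authors' code) of fine-tuning a Turkish BERT checkpoint for multi-label classification of radiology reports, in the spirit the abstract describes. The model ID is the public BERTurk checkpoint on Hugging Face; the label set and example report are hypothetical placeholders.

```python
# Hedged sketch: multi-label fine-tuning of a Turkish BERT model.
# MODEL_ID is the public BERTurk checkpoint; LABELS and the sample
# report are illustrative assumptions, not the paper's actual data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "dbmdz/bert-base-turkish-cased"  # public BERTurk checkpoint
LABELS = ["pneumonia", "effusion", "cardiomegaly"]  # hypothetical findings

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # per-label BCE-with-logits loss
)

report = "Her iki akciğerde plevral efüzyon izlenmektedir."  # toy report text
inputs = tokenizer(report, truncation=True, return_tensors="pt")

# Multi-hot target: a single report may carry several findings at once.
target = torch.tensor([[0.0, 1.0, 0.0]])

model.train()
out = model(**inputs, labels=target)
out.loss.backward()  # an optimizer step would follow in a real training loop

# At inference, sigmoid each logit and threshold every label independently.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
predicted = [label for label, p in zip(LABELS, probs[0]) if p > 0.5]
print(predicted)
```

The task-adaptive variants the paper compares would additionally continue masked-language-model pretraining on clinical text before this fine-tuning step; the sketch above covers only the shared downstream classification setup.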

Authors (5)
  1. Hazal Türkmen (1 paper)
  2. Oğuz Dikenelli (3 papers)
  3. Cenk Eraslan (1 paper)
  4. Mehmet Cem Çallı (1 paper)
  5. Süha Süreyya Özbek (1 paper)
Citations (2)