Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding (2404.05694v2)

Published 8 Apr 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Recent advances in NLP can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where unique domain-specific terminologies, domain-specific abbreviations, and varying document structures are common. This paper explores strategies for adapting these models to domain-specific requirements, primarily through continuous pre-training on domain-specific data. We pre-trained several German medical language models on 2.4B tokens derived from translated public English medical data and 3B tokens of German clinical data. The resulting models were evaluated on various German downstream tasks, including named entity recognition (NER), multi-label classification, and extractive question answering. Our results suggest that models augmented by clinical and translation-based pre-training typically outperform general-domain models in medical contexts. We conclude that continuous pre-training can match or even exceed the performance of clinical models trained from scratch. Furthermore, pre-training on clinical data or leveraging translated texts has proven to be a reliable method for domain adaptation in medical NLP tasks.
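The continuous pre-training described above typically reuses the masked language modeling (MLM) objective of BERT/RoBERTa on the new domain corpus. As a minimal illustration of the data preparation step, the sketch below implements BERT-style token masking in plain Python; the token strings, mask symbol, and ignore index are illustrative assumptions, not details taken from the paper.

```python
import random

MASK = "[MASK]"   # hypothetical mask token (BERT's is "[MASK]")
IGNORE = -100     # conventional "ignore this position" label value

def mask_tokens(tokens, mlm_prob=0.15, seed=0):
    """BERT-style MLM masking: select ~15% of positions as prediction
    targets; of those, 80% become [MASK], 10% a random token from the
    sequence, and 10% stay unchanged."""
    rng = random.Random(seed)
    inputs, labels = list(tokens), [IGNORE] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mlm_prob:
            labels[i] = tok  # model must recover the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK            # 80%: replace with mask token
            elif r < 0.9:
                inputs[i] = rng.choice(tokens)  # 10%: random replacement
            # remaining 10%: leave the input token unchanged
    return inputs, labels

# Example on a toy German clinical-style sequence (illustrative tokens):
tokens = ["der", "patient", "zeigt", "fieber", "und", "husten"] * 6
inputs, labels = mask_tokens(tokens)
```

In practice one would use a library data collator (e.g. Hugging Face's `DataCollatorForLanguageModeling`) rather than hand-rolled masking, but the selection logic is the same.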

Authors (20)
  1. Ahmad Idrissi-Yaghir
  2. Amin Dada
  3. Henning Schäfer
  4. Kamyar Arzideh
  5. Giulia Baldini
  6. Jan Trienes
  7. Max Hasin
  8. Jeanette Bewersdorff
  9. Cynthia S. Schmidt
  10. Marie Bauer
  11. Jiang Bian
  12. Yonghui Wu
  13. Jörg Schlötterer
  14. Torsten Zesch
  15. Peter A. Horn
  16. Christin Seifert
  17. Felix Nensa
  18. Jens Kleesiek
  19. Christoph M. Friedrich
  20. Kaleb E. Smith
Citations (2)