Multilingual Clinical NER: Translation or Cross-lingual Transfer? (2306.04384v1)

Published 7 Jun 2023 in cs.CL, cs.AI, and cs.LG

Abstract: Natural language processing tasks like Named Entity Recognition (NER) on non-English clinical texts can be very time-consuming and expensive due to the lack of annotated data. Cross-lingual transfer (CLT) is a way to circumvent this issue thanks to the ability of multilingual LLMs to be fine-tuned on a specific task in one language and to provide high accuracy for the same task in another language. However, other methods leveraging translation models can be used to perform NER without annotated data in the target language, by translating either the training set or the test set. This paper compares cross-lingual transfer with these two alternative methods to perform clinical NER in French and German without any training data in those languages. To this end, we release MedNERF, a medical NER test set extracted from French drug prescriptions and annotated with the same guidelines as an English dataset. Through extensive experiments on this dataset and on a German medical dataset (Frei and Kramer, 2021), we show that translation-based methods can achieve performance similar to CLT but require more care in their design. While they can take advantage of monolingual clinical LLMs, those models do not guarantee better results than large general-purpose multilingual models, whether with cross-lingual transfer or translation.
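
In cross-lingual transfer, a multilingual encoder is fine-tuned on English NER annotations and then applied unchanged to French or German text. A minimal sketch of that zero-shot setup, assuming a Hugging Face token-classification workflow; the label set and the choice of xlm-roberta-base are illustrative placeholders, not the paper's exact resources:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Hypothetical clinical label set (MedNERF follows the same guidelines as the
# English training corpus; the exact tags here are only for illustration).
labels = ["O", "B-DRUG", "I-DRUG", "B-DOSAGE", "I-DOSAGE", "B-FREQUENCY", "I-FREQUENCY"]

model_name = "xlm-roberta-base"  # a large general-purpose multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)

# Step 1 (omitted for brevity): fine-tune `model` on an annotated *English*
# clinical NER corpus with transformers.Trainer. No French or German
# annotations are used at any point.

# Step 2: apply the fine-tuned model directly to target-language text.
# The multilingual pre-training of the encoder is what enables the transfer.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)
print(ner("Prendre 500 mg de paracétamol deux fois par jour."))
```

The translation-based alternatives studied in the paper keep the same fine-tuning recipe but translate either the English training set into the target language before training, or the target-language test set into English before inference.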

Authors (4)
  1. Xavier Fontaine (6 papers)
  2. Félix Gaschi (5 papers)
  3. Parisa Rastin (4 papers)
  4. Yannick Toussaint (5 papers)
Citations (6)