
EriBERTa: A Bilingual Pre-Trained Language Model for Clinical Natural Language Processing (2306.07373v1)

Published 12 Jun 2023 in cs.CL

Abstract: The utilization of clinical reports for various secondary purposes, including health research and treatment monitoring, is crucial for enhancing patient care. NLP tools have emerged as valuable assets for extracting and processing relevant information from these reports. However, the availability of specialized LLMs for the clinical domain in Spanish has been limited. In this paper, we introduce EriBERTa, a bilingual domain-specific LLM pre-trained on extensive medical and clinical corpora. We demonstrate that EriBERTa outperforms previous Spanish LLMs in the clinical domain, showcasing its superior capabilities in understanding medical texts and extracting meaningful information. Moreover, EriBERTa exhibits promising transfer learning abilities, allowing for knowledge transfer from one language to another. This aspect is particularly beneficial given the scarcity of Spanish clinical data.

Citations (1)
