SINA-BERT: A pre-trained Language Model for Analysis of Medical Texts in Persian (2104.07613v1)

Published 15 Apr 2021 in cs.CL

Abstract: We have released SINA-BERT, a language model pre-trained on BERT (Devlin et al., 2018), to address the lack of a high-quality Persian language model in the medical domain. SINA-BERT is pre-trained on a large-scale corpus of medical content, including formal and informal texts collected from a variety of online resources, in order to improve performance on health-care related tasks. We employ SINA-BERT to complete the following representative tasks: categorization of medical questions, medical sentiment analysis, and medical question retrieval. For each task, we have developed Persian annotated data sets for training and evaluation and learned a representation for the data of each task, especially for complex and long medical questions. With the same architecture used across tasks, SINA-BERT outperforms BERT-based models previously made available for the Persian language.
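
As a minimal sketch of how one of the downstream tasks described above (medical question categorization) could be set up on top of a BERT-style checkpoint, the snippet below uses the Hugging Face Transformers API. The model identifier, the number of categories, and the example input are placeholders; the abstract does not specify a published checkpoint name or label set.

```python
# Hedged sketch: fine-tuned sequence classification with a BERT-style
# checkpoint, in the spirit of SINA-BERT's medical question
# categorization task. "path/to/sina-bert" is a placeholder path,
# not a real model identifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "path/to/sina-bert"  # placeholder; actual weights not named here
NUM_CATEGORIES = 5                # assumed number of question categories

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_CATEGORIES
)

# A (hypothetical) long, informal Persian medical question.
question = "..."

# Long questions must be truncated to BERT's 512-token input limit.
inputs = tokenizer(question, truncation=True, max_length=512,
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_category = logits.argmax(dim=-1).item()
```

Sentiment analysis fits the same pattern with a different label set, while question retrieval would instead compare encoder representations of a query against those of candidate questions.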

Authors (5)
  1. Nasrin Taghizadeh (3 papers)
  2. Ehsan Doostmohammadi (11 papers)
  3. Elham Seifossadat (2 papers)
  4. Hamid R. Rabiee (85 papers)
  5. Maedeh S. Tahaei (2 papers)
Citations (7)