
UBERT: A Novel Language Model for Synonymy Prediction at Scale in the UMLS Metathesaurus (2204.12716v1)

Published 27 Apr 2022 in cs.CL and cs.AI

Abstract: The UMLS Metathesaurus integrates more than 200 biomedical source vocabularies. During the Metathesaurus construction process, synonymous terms are clustered into concepts by human editors, assisted by lexical similarity algorithms. This process is error-prone and time-consuming. Recently, a deep learning model (LexLM) has been developed for the UMLS Vocabulary Alignment (UVA) task. This work introduces UBERT, a BERT-based language model, pretrained on UMLS terms via a supervised Synonymy Prediction (SP) task that replaces the original Next Sentence Prediction (NSP) task. The effectiveness of UBERT for the UMLS Metathesaurus construction process is evaluated using the UMLS Vocabulary Alignment (UVA) task. We show that UBERT outperforms LexLM, as well as biomedical BERT-based models. Key to the performance of UBERT are the synonymy prediction task specifically developed for UBERT, the tight alignment of training data to the UVA task, and the similarity of the models used for pretraining UBERT.
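The supervised Synonymy Prediction task described above amounts to binary classification over term pairs: terms clustered under the same Metathesaurus concept form positive pairs, and terms drawn from different concepts form negatives. The following is a minimal sketch of how such labeled training pairs could be constructed; the concept identifiers and terms are toy examples, not actual Metathesaurus data, and the sampling scheme is an illustrative assumption rather than the authors' exact procedure.

```python
from itertools import combinations
import random

def make_sp_pairs(concepts, neg_ratio=1, seed=0):
    """Build labeled (term_a, term_b, label) pairs for a Synonymy
    Prediction task.

    `concepts` maps a concept ID to its list of synonymous terms.
    Terms within one concept yield positive pairs (label 1); terms
    sampled from two different concepts yield negatives (label 0).
    """
    rng = random.Random(seed)
    positives = [
        (a, b, 1)
        for terms in concepts.values()
        for a, b in combinations(terms, 2)
    ]
    cids = list(concepts)
    negatives = []
    while len(negatives) < neg_ratio * len(positives):
        c1, c2 = rng.sample(cids, 2)
        negatives.append(
            (rng.choice(concepts[c1]), rng.choice(concepts[c2]), 0)
        )
    return positives + negatives

# Hypothetical toy clusters standing in for Metathesaurus concepts
toy = {
    "C0000001": ["myocardial infarction", "heart attack", "MI"],
    "C0000002": ["hypertension", "high blood pressure"],
}
pairs = make_sp_pairs(toy)
```

Each resulting pair can then be fed to a BERT-style encoder as a two-segment input, with the SP head trained to predict the synonymy label in place of NSP.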

Authors (10)
  1. Thilini Wijesiriwardene (7 papers)
  2. Vinh Nguyen (25 papers)
  3. Goonmeet Bajaj (8 papers)
  4. Hong Yung Yip (5 papers)
  5. Vishesh Javangula (2 papers)
  6. Yuqing Mao (2 papers)
  7. Kin Wah Fung (2 papers)
  8. Srinivasan Parthasarathy (76 papers)
  9. Amit P. Sheth (14 papers)
  10. Olivier Bodenreider (7 papers)
Citations (2)