Extracting UMLS Concepts from Medical Text Using General and Domain-Specific Deep Learning Models (1910.01274v1)

Published 3 Oct 2019 in cs.CL and cs.NE

Abstract: Entity recognition is a critical first step to a number of clinical NLP applications, such as entity linking and relation extraction. We present the first attempt to apply state-of-the-art entity recognition approaches on a newly released dataset, MedMentions. This dataset contains over 4000 biomedical abstracts, annotated for UMLS semantic types. In comparison to existing datasets, MedMentions contains a far greater number of entity types, and thus represents a more challenging but realistic scenario in a real-world setting. We explore a number of relevant dimensions, including the use of contextual versus non-contextual word embeddings, general versus domain-specific unsupervised pre-training, and different deep learning architectures. We contrast our results against the well-known i2b2 2010 entity recognition dataset, and propose a new method to combine general and domain-specific information. While producing a state-of-the-art result for the i2b2 2010 task (F1 = 0.90), our results on MedMentions are significantly lower (F1 = 0.63), suggesting there is still plenty of opportunity for improvement on this new data.
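The abstract mentions combining general and domain-specific information for entity recognition but does not spell out the architecture. Below is a minimal illustrative sketch, not the authors' actual model: it concatenates two pre-trained embedding views (a stand-in for general vs. biomedical pre-training) and feeds them to a BiLSTM token classifier over UMLS-style tags. All dimensions, vocabulary sizes, and the tag set are placeholder assumptions.

```python
# Illustrative sketch only (not the paper's exact method): combine a general
# and a domain-specific embedding of each token by concatenation, then tag
# tokens with a BiLSTM + linear classifier. Sizes and tag counts are placeholders.
import torch
import torch.nn as nn

class DualEmbeddingTagger(nn.Module):
    def __init__(self, vocab_size, num_tags,
                 general_dim=300, domain_dim=200, hidden_dim=256):
        super().__init__()
        # Two embedding tables stand in for general (e.g., news-trained) and
        # domain-specific (e.g., biomedical-trained) pre-trained vectors.
        self.general_emb = nn.Embedding(vocab_size, general_dim)
        self.domain_emb = nn.Embedding(vocab_size, domain_dim)
        self.encoder = nn.LSTM(general_dim + domain_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        # Concatenate the two embedding views for each token.
        x = torch.cat([self.general_emb(token_ids),
                       self.domain_emb(token_ids)], dim=-1)
        h, _ = self.encoder(x)
        return self.classifier(h)  # per-token logits over BIO-style tags

# Toy usage: a batch of 2 sentences, 10 tokens each, 8 hypothetical tags.
model = DualEmbeddingTagger(vocab_size=5000, num_tags=8)
logits = model(torch.randint(0, 5000, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 8])
```

In practice the embedding tables would be initialized from pre-trained vectors (or replaced by contextual encoders), and a CRF layer is a common alternative to the plain linear classifier for span-level tagging.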

Authors (6)
  1. Kathleen C. Fraser (22 papers)
  2. Isar Nejadgholi (27 papers)
  3. Berry De Bruijn (4 papers)
  4. Muqun Li (1 paper)
  5. Astha LaPlante (1 paper)
  6. Khaldoun Zine El Abidine (2 papers)
Citations (13)