KG-MTT-BERT: Knowledge Graph Enhanced BERT for Multi-Type Medical Text Classification (2210.03970v1)

Published 8 Oct 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Medical text learning has recently emerged as a promising area for improving healthcare due to the wide adoption of electronic health record (EHR) systems. The complexity of medical text, such as its varying length, mixed text types, and abundant medical jargon, poses a great challenge for developing effective deep learning models. BERT has achieved state-of-the-art results in many NLP tasks, such as text classification and question answering. However, the standalone BERT model cannot handle the complexity of medical text, especially lengthy clinical notes. Herein, we develop a new model called KG-MTT-BERT (Knowledge Graph Enhanced Multi-Type Text BERT) by extending the BERT model to long and multi-type text and integrating a medical knowledge graph. Our model outperforms all baselines and other state-of-the-art models in diagnosis-related group (DRG) classification, which requires comprehensive medical text for accurate classification. We also demonstrate that our model can effectively handle multi-type text and that integrating a medical knowledge graph significantly improves performance.
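
The abstract describes the core architectural idea: encode long, multi-type clinical text with BERT and fuse the result with medical knowledge-graph information before classification. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: it assumes each text type (or each chunk of a long note) is encoded separately with BERT to sidestep the 512-token limit, pooled, and concatenated with embeddings of pre-linked KG entity IDs. The class name, dimensions, pooling, and fusion scheme are all illustrative assumptions.

```python
# Hypothetical sketch of the KG-MTT-BERT idea from the abstract.
# Assumptions (not from the paper): per-chunk BERT encoding with [CLS]
# pooling, mean pooling over chunks and over linked KG entities, and a
# simple concatenation + linear classifier as the fusion step.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class MultiTypeTextKGClassifier(nn.Module):
    def __init__(self, num_classes, num_kg_entities, kg_dim=128,
                 bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Embedding table for KG entities; entity linking of the note
        # to the medical KG is assumed to happen upstream.
        self.kg_embed = nn.Embedding(num_kg_entities, kg_dim)
        self.classifier = nn.Linear(hidden + kg_dim, num_classes)

    def forward(self, chunks, kg_entity_ids):
        # chunks: list of tokenized segments, one per text type or
        # per long-text chunk (each under BERT's 512-token limit).
        pooled = []
        for enc in chunks:
            out = self.bert(input_ids=enc["input_ids"],
                            attention_mask=enc["attention_mask"])
            pooled.append(out.last_hidden_state[:, 0])  # [CLS] vector
        text_vec = torch.stack(pooled, dim=0).mean(dim=0)   # average chunks
        kg_vec = self.kg_embed(kg_entity_ids).mean(dim=1)   # average entities
        return self.classifier(torch.cat([text_vec, kg_vec], dim=-1))


# Usage with two hypothetical text types and three linked KG entities:
tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = MultiTypeTextKGClassifier(num_classes=10, num_kg_entities=5000)
chunks = [tok(t, return_tensors="pt", truncation=True, max_length=512)
          for t in ["chief complaint: chest pain ...",
                    "discharge summary: patient admitted with ..."]]
logits = model(chunks, torch.tensor([[12, 304, 777]]))  # shape: [1, 10]
```

Encoding each text type separately and pooling is one common way to push BERT past its input-length limit; the paper's actual fusion of text and KG signals may differ substantially from this concatenation scheme.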

Authors (6)
  1. Yong He (77 papers)
  2. Cheng Wang (386 papers)
  3. Shun Zhang (105 papers)
  4. Nan Li (318 papers)
  5. Zhaorong Li (2 papers)
  6. Zhenyu Zeng (5 papers)
Citations (7)