Fine-Tuning Pretrained Language Models With Label Attention for Biomedical Text Classification (2108.11809v3)

Published 26 Aug 2021 in cs.CL and cs.LG

Abstract: The massive scale and growth of textual biomedical data have made its indexing and classification increasingly important. However, existing research on this topic has mainly used convolutional and recurrent neural networks, which generally achieve inferior performance compared to transformers. On the other hand, systems that apply transformers focus only on the target documents, overlooking the rich semantic information that label descriptions contain. To address this gap, we develop a transformer-based biomedical text classifier that considers label information. The system achieves this with a label attention module incorporated into the fine-tuning process of pretrained language models (PTMs). Our results on two public medical datasets show that the proposed fine-tuning scheme outperforms the vanilla PTMs and state-of-the-art models.
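To make the described architecture concrete, below is a minimal sketch of how a label attention module could sit on top of a pretrained transformer encoder during fine-tuning. It is not the authors' implementation: the class name, the use of learnable label embeddings (rather than embeddings produced from label descriptions), and the multi-label output head are illustrative assumptions; only the general idea of attending from labels to document tokens follows the abstract.

```python
# Hypothetical sketch of label attention over a pretrained encoder (PyTorch +
# Hugging Face Transformers). Names and design details are assumptions, not
# the paper's actual code.
import torch
import torch.nn as nn
from transformers import AutoModel


class LabelAttentionClassifier(nn.Module):
    def __init__(self, model_name: str, num_labels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Learnable label vectors; in the paper's setting these could instead
        # be initialized from encoded label descriptions.
        self.label_embeddings = nn.Parameter(torch.randn(num_labels, hidden))
        self.output = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Token-level document representations: (batch, seq_len, hidden)
        tokens = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Attention scores between each label and each token:
        # (batch, num_labels, seq_len)
        scores = torch.einsum("lh,bsh->bls", self.label_embeddings, tokens)
        scores = scores.masked_fill(attention_mask.unsqueeze(1) == 0, -1e9)
        weights = torch.softmax(scores, dim=-1)
        # Label-specific document representations: (batch, num_labels, hidden)
        label_docs = torch.bmm(weights, tokens)
        # One logit per label (multi-label classification).
        return self.output(label_docs).squeeze(-1)
```

In this sketch the label embeddings act as attention queries over the document tokens, so each label gets its own pooled document representation before classification; the encoder and the label attention parameters are then trained jointly during fine-tuning.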
