An Empirical Study of Multi-Task Learning on BERT for Biomedical Text Mining (2005.02799v1)

Published 6 May 2020 in cs.CL

Abstract: Multi-task learning (MTL) has achieved remarkable success in natural language processing applications. In this work, we study a multi-task learning model with multiple decoders on varieties of biomedical and clinical natural language processing tasks such as text similarity, relation extraction, named entity recognition, and text inference. Our empirical results demonstrate that the MTL fine-tuned models outperform state-of-the-art transformer models (e.g., BERT and its variants) by 2.0% and 1.3% in biomedical and clinical domains, respectively. Pairwise MTL further demonstrates more details about which tasks can improve or decrease others. This is particularly helpful in the context that researchers are in the hassle of choosing a suitable model for new problems. The code and models are publicly available at https://github.com/ncbi-nlp/bluebert

Multi-Task Learning with BERT for Biomedical Text Mining

This empirical study investigates the application of Multi-Task Learning (MTL) with BERT-based models across a range of biomedical and clinical NLP tasks, focusing on improving performance in data-scarce domains through shared learning. The tasks examined include text similarity, relation extraction, named entity recognition, and text inference, with a particular focus on the Biomedical Language Understanding Evaluation (BLUE) benchmark.

Methodology

The paper outlines a model architecture in which shared BERT layers are paired with task-specific output layers, supporting the joint learning of eight distinct tasks across biomedical and clinical datasets. Three models are compared: a baseline single-task fine-tuned BERT, a multi-task refinement model (MT-BERT-Refinement), and the multi-task model further fine-tuned on each individual task (MT-BERT-Fine-Tune).
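To make the shared-encoder, task-specific-decoder layout concrete, here is a minimal sketch assuming PyTorch and the Hugging Face transformers library. The class name, task names, and head sizes are illustrative rather than the paper's exact configuration, and this is not the released bluebert implementation.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBERT(nn.Module):
    """Sketch of a shared BERT encoder with one lightweight head per task."""

    def __init__(self, encoder_name="bert-base-uncased", task_heads=None):
        super().__init__()
        # Shared BERT layers, updated by gradients from every task.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One task-specific output layer ("decoder") per task.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, n_out)
            for name, n_out in (task_heads or {}).items()
        })

    def forward(self, task_name, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Sentence-level tasks (similarity, inference, relation extraction)
        # typically use the [CLS] vector; token-level tasks such as NER would
        # instead use the full sequence output.
        cls_vec = out.last_hidden_state[:, 0]
        return self.heads[task_name](cls_vec)

# Hypothetical task set mirroring the paper's mix of task types:
# regression (1 output), relation classification (6), and inference (3).
model = MultiTaskBERT(task_heads={"medsts": 1, "chemprot": 6, "mednli": 3})
```

During joint training, batches from the different tasks are interleaved so that every task's loss updates the shared encoder while only its own head receives task-specific gradients.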

Results

The paper reports a performance improvement of 2.0% in the biomedical domain and 1.3% in the clinical domain when using MTL compared to conventionally fine-tuned BERT models. Notably, MT-BERT-Fine-Tune achieved new state-of-the-art results on four BLUE benchmark tasks. These gains are particularly relevant for researchers choosing a model for a new problem when training data is limited.

The pairwise MTL analysis offers further insight into task interactions: some tasks gain substantially when learned jointly with others (ShARe/CLEFE benefited most from the MTL approach), while other combinations yield little or no improvement. This underlines task compatibility as a crucial factor in the gains an MTL model can deliver.
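The pairwise protocol itself is simple to express. The sketch below is a hypothetical outline, not code from the released repository: `fine_tune` and `score_on` are placeholder stand-ins for the training and evaluation routines, and the task list is an illustrative subset of the BLUE datasets.

```python
# Hypothetical outline of the pairwise MTL comparison: for each target task,
# measure how joint training with a single partner task changes its score
# relative to the single-task baseline.
from itertools import permutations

def fine_tune(task_names):
    """Stand-in for jointly fine-tuning the shared BERT encoder on these tasks."""
    return {"tasks": list(task_names)}  # placeholder "model"

def score_on(model, task):
    """Stand-in for evaluating a trained model on one task's test set."""
    return 0.0  # replace with the task's real metric (Pearson r, F1, accuracy, ...)

tasks = ["clinicalsts", "chemprot", "share_clefe", "mednli"]  # illustrative subset

baseline = {t: score_on(fine_tune([t]), t) for t in tasks}
for target, partner in permutations(tasks, 2):
    delta = score_on(fine_tune([target, partner]), target) - baseline[target]
    print(f"adding {partner} to {target}: {delta:+.3f}")
```

A positive delta indicates a partner task that helps the target task; a negative delta flags a combination where joint training hurts, which is the kind of interaction the paper's pairwise analysis surfaces.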

Implications and Future Directions

The findings underscore the effectiveness of MTL in resource-limited biomedical and clinical contexts, suggesting that domain-specific pretraining, especially utilizing datasets like PubMed and MIMIC-III, enhances performance. However, the paper also acknowledges that MTL doesn't universally lead to performance gains across all task combinations within different domains.

Future work could explore deeper analyses of task relationships to understand under which conditions MTL most effectively yields performance improvements. Moreover, the research suggests potential in exploring alternative MTL strategies, such as soft parameter sharing or knowledge distillation, to optimize task interactions and enhance model generalization.

The code and pre-trained models are openly available, offering a resource for further exploration and validation within the research community, potentially fostering advancements in biomedical NLP applications. This work provides a foundation for developing robust, generalizable NLP models that leverage MTL's capabilities to handle multi-faceted biomedical text mining tasks.

Authors (3)
  1. Yifan Peng (147 papers)
  2. Qingyu Chen (57 papers)
  3. Zhiyong Lu (113 papers)
Citations (102)