
Explainable Automated Coding of Clinical Notes using Hierarchical Label-wise Attention Networks and Label Embedding Initialisation (2010.15728v4)

Published 29 Oct 2020 in cs.CL and cs.LG

Abstract: Diagnostic or procedural coding of clinical notes aims to derive a coded summary of disease-related information about patients. Such coding is usually done manually in hospitals but could potentially be automated to improve the efficiency and accuracy of medical coding. Recent studies on deep learning for automated medical coding have achieved promising performance. However, the explainability of these models is usually poor, preventing them from being used confidently to support clinical practice. Another limitation is that these models mostly assume independence among labels, ignoring the complex correlations among medical codes that could potentially be exploited to improve performance. We propose a Hierarchical Label-wise Attention Network (HLAN), which aims to interpret the model by quantifying the importance (as attention weights) of words and sentences related to each of the labels. Secondly, we propose to enhance the major deep learning models with a label embedding (LE) initialisation approach, which learns a dense, continuous vector representation of the labels and then injects the representation into the final layers and the label-wise attention layers of the models. We evaluated the methods using three settings on the MIMIC-III discharge summaries: full codes, top-50 codes, and the UK NHS COVID-19 shielding codes. Experiments were conducted to compare HLAN and LE initialisation to state-of-the-art neural network based methods. HLAN achieved the best micro-level AUC and $F_1$ on the top-50 code prediction and comparable results on the NHS COVID-19 shielding code prediction to other models. By highlighting the most salient words and sentences for each label, HLAN showed more meaningful and comprehensive model interpretation compared to its downgraded baselines and the CNN-based models. LE initialisation consistently boosted most deep learning models for automated medical coding.

Overview of Explainable Automated Coding of Clinical Notes

The paper proposes a novel approach to explainable automated coding of clinical notes, leveraging Hierarchical Label-wise Attention Networks (HLAN) and label embedding initialisation. The focus is on improving the efficiency and accuracy of medical coding through automation while addressing two critical challenges: model explainability and label correlations. Automated coding promises to reduce the substantial manual workload of traditional diagnostic and procedural coding, but its adoption is impeded by poor model interpretability, and most existing models ignore the correlations among medical codes that could be exploited to improve performance.

Methodology

The authors introduce the Hierarchical Label-wise Attention Network (HLAN) to tackle the issue of poor model interpretability. Unlike previous approaches in which attention mechanisms are shared across labels, HLAN employs a separate attention mechanism per label at both the word and sentence levels. This design enables the model to quantify the importance of specific words and sentences with respect to each label, yielding more granular and comprehensible interpretations. Additionally, the authors propose a label embedding (LE) initialisation technique: dense, continuous label vectors are learned from label co-occurrences and injected into the final layers and the label-wise attention layers, so that the models better capture inter-label relationships encoded within clinical narratives.
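To make the label-wise attention concrete, below is a minimal sketch of a per-label attention layer in PyTorch. It is not the authors' released implementation (the paper uses BiGRU encoders; names such as `LabelWiseAttention` and `label_context` are illustrative): each label gets its own learned context vector, so the softmax weights, and hence the resulting explanations, differ per label.

```python
import torch
import torch.nn as nn

class LabelWiseAttention(nn.Module):
    """Illustrative per-label attention over token representations."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # One learned attention context vector per label.
        self.label_context = nn.Parameter(torch.randn(num_labels, hidden_dim))

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden_dim), e.g. BiGRU outputs.
        scores = torch.einsum("lh,bsh->bls", self.label_context, token_states)
        weights = torch.softmax(scores, dim=-1)  # (batch, num_labels, seq_len)
        # One attended document vector per label.
        context = torch.einsum("bls,bsh->blh", weights, token_states)
        return context, weights  # weights double as per-label explanations
```

In HLAN this idea is applied twice, once over the words within each sentence and once over the sentence vectors, which is what allows the model to highlight both salient words and salient sentences for each code.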

Three evaluation settings are employed on the MIMIC-III discharge summaries: full codes, top-50 codes, and the UK NHS COVID-19 shielding codes. These setups allow a comprehensive assessment of the methods across label-set sizes and clinical focus areas. Model performance is compared against a range of state-of-the-art techniques, including Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based methods.

Results

Experimental results show that HLAN achieves the best micro-level AUC and F1 scores in the top-50 code setting, at 91.9% and 64.1% respectively. For the NHS COVID-19 shielding codes, it reports a micro-level AUC of around 97%, comparable to the other models. Notably, HLAN provides a more detailed and meaningful interpretation by highlighting the words and sentences specific to each label, improving explainability relative to the CNN-based counterparts.
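For readers unfamiliar with the micro-level convention, the sketch below (with made-up toy data) shows how such scores are typically computed for multi-label code prediction: micro-averaging pools every (note, code) decision before scoring, so frequent codes dominate the figure.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

# Toy multi-label predictions: shape (num_notes, num_codes).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_prob = np.array([[0.9, 0.2, 0.7],
                   [0.1, 0.8, 0.3],
                   [0.6, 0.7, 0.2]])

# Micro-averaging flattens all (note, code) pairs into one pool.
micro_auc = roc_auc_score(y_true, y_prob, average="micro")
micro_f1 = f1_score(y_true, (y_prob >= 0.5).astype(int), average="micro")
print(f"micro AUC: {micro_auc:.3f}, micro F1: {micro_f1:.3f}")
```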

The label embedding initialisation provides a notable boost, improving the previous state-of-the-art CNN with attention on full-code prediction to a micro-level F1 of 52.5%. These results underscore the value of encoding label correlations for improving model performance.
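As a rough illustration of the LE initialisation described in the Methodology, one common way to learn such label vectors is to treat each note's set of assigned codes as a "sentence" and train a skip-gram model over the code co-occurrences; the resulting vectors then replace the random initialisation of the final projection and label-wise attention weights. The snippet below is a sketch along these lines; the corpus, codes, and dimensions are invented, and gensim's `Word2Vec` stands in for whichever embedding method is used.

```python
import numpy as np
from gensim.models import Word2Vec

# Invented toy corpus: each inner list is the set of ICD-9 codes
# assigned to one discharge summary, so co-occurring codes end up
# with similar vectors.
code_sets = [
    ["401.9", "428.0", "427.31"],
    ["401.9", "250.00"],
    ["428.0", "427.31", "584.9"],
]
w2v = Word2Vec(sentences=code_sets, vector_size=100, window=50,
               min_count=1, sg=1, epochs=30)

# Stack the vectors in label-index order; these rows would then
# initialise the final-layer weights and the per-label attention
# context vectors instead of random values.
labels = ["401.9", "428.0", "427.31", "250.00", "584.9"]
label_init = np.stack([w2v.wv[code] for code in labels])  # (5, 100)
```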

Implications and Future Directions

The paper's findings have significant implications for the practical deployment and utilization of AI-driven coding systems in healthcare settings. The enhanced interpretability provided by HLAN could bolster clinician trust and facilitate more informed decision-making processes, reducing the likelihood of erroneous code assignments. Moreover, the incorporation of label correlations through LE initialisation could lead to more accurate predictive performance, particularly in complex multi-label environments.

Future work may explore scaling HLAN to broader label sets, integrating external clinical ontologies, and refining training to better capture rare or emerging labels, strategies essential for the robustness and adaptability of automated coding systems in dynamic clinical contexts. Such developments would further AI's role in healthcare and expand the capacity to manage large-scale biomedical information efficiently.

Authors (4)
  1. Hang Dong
  2. Víctor Suárez-Paniagua
  3. William Whiteley
  4. Honghan Wu

Citations (68)