Towards Trust of Explainable AI in Thyroid Nodule Diagnosis (2303.04731v1)

Published 8 Mar 2023 in cs.CV and cs.AI

Abstract: The ability to explain the predictions of deep learning models to end-users is an important feature for leveraging the power of AI in the medical decision-making process, which is usually considered non-transparent and challenging to comprehend. In this paper, we apply state-of-the-art eXplainable Artificial Intelligence (XAI) methods to explain the predictions of black-box AI models in a thyroid nodule diagnosis application. We propose new statistics-based XAI methods, namely Kernel Density Estimation and Density map, to explain the case where no nodule is detected. The XAI methods' performance is compared qualitatively and quantitatively, and the results are used as feedback to improve data quality and model performance. Finally, we conduct a survey to assess doctors' and patients' trust in the XAI explanations of the model's decisions on thyroid nodule images.
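The abstract names Kernel Density Estimation and a density map as the proposed statistics-based explanations for the no-nodule case. The paper does not give implementation details here, but a minimal sketch of the general idea (fitting a 2D KDE over the centers of low-confidence detection proposals and rendering it as a heat map over the image) might look like the following; all names, shapes, and parameters are hypothetical illustrations, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a 2D Gaussian KDE over
# candidate nodule-box centers, evaluated on an image grid as a density map.
# `candidate_boxes`, `image_shape`, and `grid_step` are assumed/hypothetical.
import numpy as np
from scipy.stats import gaussian_kde

def density_map(candidate_boxes, image_shape, grid_step=8):
    """candidate_boxes: (N, 4) array of [x1, y1, x2, y2] low-confidence proposals."""
    centers = np.stack([
        (candidate_boxes[:, 0] + candidate_boxes[:, 2]) / 2.0,  # x centers
        (candidate_boxes[:, 1] + candidate_boxes[:, 3]) / 2.0,  # y centers
    ])
    kde = gaussian_kde(centers)  # fit a 2D Gaussian KDE on the proposal centers

    h, w = image_shape
    xs = np.arange(0, w, grid_step)
    ys = np.arange(0, h, grid_step)
    xx, yy = np.meshgrid(xs, ys)
    grid = np.vstack([xx.ravel(), yy.ravel()])
    density = kde(grid).reshape(yy.shape)  # evaluate the density on the grid
    return density / density.max()         # normalize for heat-map display

# Usage example with a few hypothetical low-confidence proposals on a 256x256 image.
boxes = np.array([[40, 50, 80, 90], [45, 55, 85, 95], [160, 120, 200, 170]], float)
heat = density_map(boxes, (256, 256))
print(heat.shape)  # (32, 32) map showing where weak nodule evidence clusters
```

Such a map can indicate where the model found only weak evidence, which is one plausible way to communicate a "no nodule detected" decision to clinicians.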

Authors (5)
  1. Truong Thanh Hung Nguyen (11 papers)
  2. Van Binh Truong (6 papers)
  3. Vo Thanh Khang Nguyen (8 papers)
  4. Quoc Hung Cao (5 papers)
  5. Quoc Khanh Nguyen (5 papers)
Citations (10)
