
Towards Fully Interpretable Deep Neural Networks: Are We There Yet? (2106.13164v1)

Published 24 Jun 2021 in cs.LG and cs.CV

Abstract: Despite their remarkable performance, Deep Neural Networks (DNNs) behave as black boxes, hindering user trust in AI systems. Research on opening black-box DNNs can be broadly categorized into post-hoc methods and inherently interpretable DNNs. While many surveys have been conducted on post-hoc interpretation methods, little effort has been devoted to inherently interpretable DNNs. This paper provides a review of existing methods to develop DNNs with intrinsic interpretability, with a focus on Convolutional Neural Networks (CNNs). The aim is to understand the current progress towards fully interpretable DNNs that can cater to different interpretation requirements. Finally, we identify gaps in current work and suggest potential research directions.

Authors (3)
  1. Sandareka Wickramanayake (8 papers)
  2. Wynne Hsu (32 papers)
  3. Mong Li Lee (15 papers)
Citations (3)
