
Deep Transparent Prediction through Latent Representation Analysis (2009.07044v2)

Published 13 Sep 2020 in cs.LG, cs.CV, eess.IV, and stat.ML

Abstract: The paper presents a novel deep learning approach, which extracts latent information from trained Deep Neural Networks (DNNs) and derives concise representations that are analyzed in an effective, unified way for prediction purposes. It is well known that DNNs are capable of analyzing complex data; however, they lack transparency in their decision making, in the sense that it is not straightforward to justify their prediction, or to visualize the features on which the decision was based. Moreover, they generally require large amounts of data in order to learn and become able to adapt to different environments. This makes their use difficult in healthcare, where trust and personalization are key issues. Transparency combined with high prediction accuracy are the targeted goals of the proposed approach. It includes both supervised DNN training and unsupervised learning of latent variables extracted from the trained DNNs. Domain Adaptation from multiple sources is also presented as an extension, where the extracted latent variable representations are used to generate predictions in other, non-annotated, environments. Successful application is illustrated through a large experimental study in various fields: prediction of Parkinson's disease from MRI and DaTScans; prediction of COVID-19 and pneumonia from CT scans and X-rays; optical character verification in retail food packaging.
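The abstract only outlines the pipeline, but the pattern it describes — supervised DNN training, extraction of latent representations from the trained network, unsupervised derivation of concise prototypes, and prediction in new (possibly non-annotated) domains from those prototypes — can be sketched as below. This is a minimal illustration, not the authors' implementation: the ResNet-18 backbone, the choice of the penultimate layer as the latent space, and k-means as the unsupervised step are all assumptions made for the example.

```python
# Hedged sketch: latent-representation extraction + prototype-based prediction.
# Assumptions (not from the paper): a torchvision ResNet-18 backbone, the
# penultimate layer as the latent space, and k-means for the unsupervised step.
import torch
import torch.nn as nn
import numpy as np
from torchvision import models
from sklearn.cluster import KMeans

# 1. Supervised stage: assume `backbone` has already been trained on the source task.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # e.g. patient vs. control
# ... training loop omitted; only the trained weights are needed below ...

# 2. Latent extraction: drop the classifier head, keep the pooled features.
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

@torch.no_grad()
def latent(x: torch.Tensor) -> np.ndarray:
    """Return flattened penultimate-layer activations for a batch of images."""
    return feature_extractor(x).flatten(1).cpu().numpy()

# 3. Unsupervised stage: cluster source-domain latents into a small set of prototypes.
source_images = torch.randn(64, 3, 224, 224)          # placeholder source batch
source_labels = np.random.randint(0, 2, size=64)      # placeholder annotations
z_source = latent(source_images)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(z_source)

# Each centroid inherits the majority label of the source samples assigned to it,
# giving a compact, inspectable set of prototypes rather than an opaque classifier.
centroid_labels = np.array([
    np.bincount(source_labels[kmeans.labels_ == c]).argmax()
    for c in range(kmeans.n_clusters)
])

# 4. Prediction (and the domain-adaptation extension): label new, possibly
# non-annotated target-domain samples by their nearest latent prototype.
target_images = torch.randn(8, 3, 224, 224)           # placeholder target batch
z_target = latent(target_images)
predictions = centroid_labels[kmeans.predict(z_target)]
print(predictions)
```

In this reading, the cluster centroids play the role of the "concise representations": a prediction can be justified by pointing to the nearest prototype and the source cases behind it, which is what gives the approach its claimed transparency.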

Citations (86)
