
Interpretability of Machine Learning Methods Applied to Neuroimaging (2204.07005v1)

Published 14 Apr 2022 in cs.CV, cs.AI, cs.LG, q-bio.NC, and q-bio.QM

Abstract: Deep learning methods have become very popular for processing natural images and have since been successfully adapted to the neuroimaging field. Because these methods are non-transparent, interpretability methods are needed to validate them and ensure their reliability. Indeed, it has been shown that deep learning models may achieve high performance even when relying on irrelevant features, by exploiting biases in the training set. Such undesirable situations can potentially be detected with interpretability methods. Many methods have recently been proposed to interpret neural networks, but the domain is not yet mature. Machine learning users face two major issues when aiming to interpret their models: which method to choose, and how to assess its reliability? Here, we aim to answer these questions by presenting the most common interpretability methods and the metrics developed to assess their reliability, as well as their applications and benchmarks in the neuroimaging context. This is not an exhaustive survey: we focus on the studies we found to be the most representative and relevant.
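
Among the interpretability methods such a survey typically covers, gradient-based saliency maps are one of the most common. The sketch below is purely illustrative and not taken from the paper: the 3D CNN architecture, input shape, and variable names are assumptions standing in for a neuroimaging classifier. It computes the absolute gradient of the predicted class score with respect to the input volume, which highlights the voxels whose perturbation most changes the prediction.

```python
# Minimal sketch of a gradient-based saliency map (assumed setup, not the
# paper's code). A hypothetical 3D CNN plays the role of a neuroimaging model.
import torch
import torch.nn as nn

# Hypothetical classifier: 3D convolution -> global pooling -> linear head.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

# Synthetic input volume (batch, channel, depth, height, width).
volume = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

# Forward pass, then backpropagate the predicted class score to the input.
scores = model(volume)
target = scores.argmax(dim=1).item()
scores[0, target].backward()

# The saliency map is the absolute input gradient: voxels whose change
# most affects the class score receive the largest values.
saliency = volume.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([32, 32, 32])
```

In practice the resulting map is overlaid on the input scan to check whether the model relies on anatomically plausible regions rather than dataset biases.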

Authors (4)
  1. Elina Thibeau-Sutre (6 papers)
  2. Sasha Collin (1 paper)
  3. Ninon Burgos (18 papers)
  4. Olivier Colliot (36 papers)
Citations (4)
