
Incorporating Biological Knowledge with Factor Graph Neural Network for Interpretable Deep Learning (1906.00537v1)

Published 3 Jun 2019 in q-bio.GN and cs.LG

Abstract: While deep learning has achieved great success in many fields, a common criticism of deep learning is its lack of interpretability. In most cases, the hidden units in a deep neural network do not have a clear semantic meaning or correspond to any physical entities. However, model interpretability and explainability are crucial in many biomedical applications. To address this challenge, we developed the Factor Graph Neural Network model, which is both interpretable and predictive, by combining probabilistic graphical models with deep learning. We directly encode biological knowledge such as Gene Ontology as a factor graph into the model architecture, making the model transparent and interpretable. Furthermore, we devised an attention mechanism that can capture multi-scale hierarchical interactions among biological entities such as genes and Gene Ontology terms. With a parameter-sharing mechanism, the unrolled Factor Graph Neural Network model can be trained with stochastic depth and generalizes well. We applied our model to two cancer genomic datasets to predict target clinical variables and achieved better results than traditional machine learning and deep learning models. Our model can also be used for gene set enrichment analysis and for selecting Gene Ontology terms that are important to the target clinical variables.
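To make the core idea concrete, the sketch below shows one way the abstract's "encode Gene Ontology as a factor graph into the model architecture" step could look in code: a gene-to-GO-term membership matrix is used as a fixed sparsity mask on a layer's weights, so every hidden unit corresponds to a specific GO term. This is a minimal illustration, not the authors' implementation; the layer structure, sizes, and names (e.g. `FactorGraphLayer`, the toy `membership` matrix) are assumptions for demonstration only.

```python
# Minimal sketch (assumed structure, not the paper's code): a linear layer whose
# connectivity is restricted by a Gene Ontology membership mask, so each output
# unit maps to one GO term and therefore has a clear biological meaning.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorGraphLayer(nn.Module):
    def __init__(self, membership: torch.Tensor):
        super().__init__()
        # membership[i, j] = 1 if gene j is annotated with GO term i, else 0.
        self.register_buffer("mask", membership.float())
        n_terms, n_genes = membership.shape
        self.weight = nn.Parameter(torch.randn(n_terms, n_genes) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_terms))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only gene-to-term connections allowed by the GO annotation are used.
        return torch.relu(F.linear(x, self.weight * self.mask, self.bias))


# Toy usage: 5 genes, 3 GO terms, a batch of 2 expression profiles.
membership = torch.tensor([[1, 1, 0, 0, 0],
                           [0, 1, 1, 1, 0],
                           [0, 0, 0, 1, 1]])
layer = FactorGraphLayer(membership)
out = layer(torch.randn(2, 5))  # shape (2, 3): one activation per GO term
print(out.shape)
```

Stacking such masked layers (with shared parameters across the unrolled factor-graph iterations, as the abstract describes) and adding attention over the gene/GO-term interactions would give the hierarchical, interpretable architecture the paper proposes.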

Citations (14)
