Mixed Pooling Multi-View Attention Autoencoder for Representation Learning in Healthcare (1910.06456v1)

Published 14 Oct 2019 in cs.LG, cs.AI, and stat.ML

Abstract: Distributed representations have recently been used to support downstream tasks in healthcare. Healthcare data (e.g., electronic health records) contain multiple modalities from heterogeneous sources that provide complementary information, along with an added dimension for learning personalized patient representations. To this end, we propose a novel unsupervised encoder-decoder model, the Mixed Pooling Multi-View Attention Autoencoder (MPVAA), that generates patient representations encapsulating a holistic view of each patient's medical profile. Specifically, it first learns personalized graph embeddings for each patient's heterogeneous healthcare data, then integrates the non-linear relationships among them into a unified representation through a multi-view attention mechanism. Additionally, a mixed pooling strategy is incorporated in the encoding step to learn diverse information specific to each data modality. Experiments on multiple tasks demonstrate the effectiveness of the proposed model over state-of-the-art representation learning methods in healthcare.
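
The abstract describes the architecture only at a high level. The sketch below is a hedged PyTorch illustration of the two named components, not the authors' implementation: per-view linear encoders stand in for the paper's personalized graph embeddings, a learned scalar mixes max- and mean-pooling (one common reading of "mixed pooling"), and a softmax attention over the per-view embeddings fuses them into a single patient representation ("multi-view attention"). The class name MPVAASketch, the reconstruction target, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of a mixed-pooling multi-view attention autoencoder.
# Assumptions not stated in the abstract: a learned mixing weight between
# max- and mean-pooling per view, softmax attention over view embeddings,
# and per-view linear decoders that reconstruct a pooled input summary.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MPVAASketch(nn.Module):
    def __init__(self, n_views: int, in_dim: int, hid_dim: int):
        super().__init__()
        # One encoder per data modality (view); the paper's personalized
        # graph-embedding step is replaced here by a plain linear encoder.
        self.encoders = nn.ModuleList(
            nn.Linear(in_dim, hid_dim) for _ in range(n_views)
        )
        # Learned scalar per view mixing max- and mean-pooling.
        self.mix = nn.Parameter(torch.full((n_views,), 0.5))
        # Attention scorer over the per-view embeddings.
        self.attn = nn.Linear(hid_dim, 1)
        self.decoders = nn.ModuleList(
            nn.Linear(hid_dim, in_dim) for _ in range(n_views)
        )

    def forward(self, views):
        # views: list of n_views tensors, each (batch, seq_len, in_dim),
        # e.g. sequences of medical events per modality.
        pooled = []
        for i, (enc, x) in enumerate(zip(self.encoders, views)):
            h = torch.relu(enc(x))  # (B, T, H)
            lam = torch.sigmoid(self.mix[i])
            # Mixed pooling: convex combination of max- and mean-pooling.
            mixed = lam * h.max(dim=1).values + (1 - lam) * h.mean(dim=1)
            pooled.append(mixed)  # (B, H)
        H = torch.stack(pooled, dim=1)  # (B, V, H)
        # Multi-view attention: softmax-normalized weights fuse the
        # per-view embeddings into one patient representation.
        alpha = F.softmax(self.attn(H), dim=1)  # (B, V, 1)
        z = (alpha * H).sum(dim=1)  # (B, H)
        recons = [dec(z) for dec in self.decoders]
        return z, recons


# Usage: three views with random data standing in for EHR modalities;
# each view is reconstructed against its mean-pooled input (a stand-in
# target, since the abstract does not specify the reconstruction loss).
model = MPVAASketch(n_views=3, in_dim=16, hid_dim=32)
views = [torch.randn(4, 10, 16) for _ in range(3)]
z, recons = model(views)
loss = sum(F.mse_loss(r, v.mean(dim=1)) for r, v in zip(recons, views))
```

The patient representation z would then feed the downstream tasks the paper evaluates, with the unsupervised reconstruction loss driving training.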

Authors (4)
  1. Shaika Chowdhury (8 papers)
  2. Chenwei Zhang (60 papers)
  3. Philip S. Yu (592 papers)
  4. Yuan Luo (127 papers)
Citations (12)
