
Predicting Mood Disorder Symptoms with Remotely Collected Videos Using an Interpretable Multimodal Dynamic Attention Fusion Network (2109.03029v1)

Published 7 Sep 2021 in cs.LG

Abstract: We developed a novel, interpretable multimodal classification method to identify symptoms of mood disorders, namely depression, anxiety, and anhedonia, using audio, video, and text collected from a smartphone application. We used CNN-based unimodal encoders to learn dynamic embeddings for each modality and then combined these through a transformer encoder. We applied these methods to a novel dataset, collected by a smartphone application, of 3,002 participants across up to three recording sessions. Our method demonstrated better multimodal classification performance compared to existing methods that employed static embeddings. Lastly, we used SHapley Additive exPlanations (SHAP) to prioritize important features in our model that could serve as potential digital markers.
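The architecture described in the abstract (per-modality CNN encoders producing dynamic embeddings, fused by a transformer encoder and pooled for classification) can be sketched roughly as follows. This is an illustrative PyTorch sketch under assumed dimensions, layer counts, and naming; it is not the authors' implementation, and all class names and hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn

class UnimodalCNNEncoder(nn.Module):
    """1-D CNN that maps a raw feature sequence to dynamic embeddings."""
    def __init__(self, in_dim, embed_dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(embed_dim, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):                    # x: (batch, time, in_dim)
        h = self.conv(x.transpose(1, 2))     # convolve over the time axis
        return h.transpose(1, 2)             # (batch, time, embed_dim)

class MultimodalFusionClassifier(nn.Module):
    """One CNN encoder per modality; embeddings fused by a transformer encoder."""
    def __init__(self, modality_dims, embed_dim=64, n_heads=4,
                 n_layers=2, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            [UnimodalCNNEncoder(d, embed_dim) for d in modality_dims]
        )
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, inputs):               # list of (batch, time_m, dim_m)
        embeddings = [enc(x) for enc, x in zip(self.encoders, inputs)]
        fused = self.fusion(torch.cat(embeddings, dim=1))  # concat over time
        return self.head(fused.mean(dim=1))  # mean-pool, then classify

# Toy example: audio (40-d), video (128-d), text (300-d) feature sequences
model = MultimodalFusionClassifier(modality_dims=[40, 128, 300])
audio = torch.randn(2, 50, 40)
video = torch.randn(2, 30, 128)
text = torch.randn(2, 20, 300)
logits = model([audio, video, text])         # shape: (2, n_classes)
```

Because the transformer attends jointly across the concatenated per-modality sequences, its attention weights are one natural place to look for the interpretability the paper pursues via SHAP.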

Authors (7)
  1. Tathagata Banerjee (19 papers)
  2. Matthew Kollada (2 papers)
  3. Pablo Gersberg (1 paper)
  4. Oscar Rodriguez (7 papers)
  5. Jane Tiller (1 paper)
  6. Andrew E Jaffe (1 paper)
  7. John Reynders (1 paper)
Citations (5)
