
MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences (2010.11985v2)

Published 22 Oct 2020 in cs.CL, cs.CV, cs.LG, and cs.MM

Abstract: Human communication is multimodal in nature; it is through multiple modalities, such as language, voice, and facial expressions, that opinions and emotions are expressed. Data in this domain exhibits complex multi-relational and temporal interactions, and learning from it is a fundamentally challenging research problem. In this paper, we propose the Modal-Temporal Attention Graph (MTAG), an interpretable graph-based neural model that provides a suitable framework for analyzing multimodal sequential data. We first introduce a procedure to convert unaligned multimodal sequence data into a graph with heterogeneous nodes and edges that captures the rich interactions across modalities and through time. Then, a novel graph fusion operation, called MTAG fusion, along with a dynamic pruning and read-out technique, is designed to efficiently process this modal-temporal graph and capture various interactions. By learning to focus only on the important interactions within the graph, MTAG achieves state-of-the-art performance on multimodal sentiment analysis and emotion recognition benchmarks while using significantly fewer model parameters.
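
Below is a minimal PyTorch sketch of the two stages the abstract describes: converting unaligned modality sequences into a heterogeneous graph, then fusing it with edge-type-aware attention plus dynamic edge pruning and a mean read-out. This is an illustration under simplifying assumptions, not the authors' released code: the names (`build_graph`, `MTAGFusion`), the fully connected edge construction, the feature dimension, and the top-k `keep_ratio` pruning rule are all hypothetical, and the paper's distinction between past/present/future temporal edges is omitted.

```python
# Illustrative MTAG-style modal-temporal graph fusion (hypothetical sketch;
# names, dimensions, and the top-k pruning rule are assumptions, not the
# authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_graph(seqs):
    """Stack unaligned modality sequences into one node set and connect
    every pair of nodes, labeling each edge by its (modality, modality)
    type. seqs: dict modality -> (T_m, d) tensor; T_m may differ, so no
    alignment between modalities is required."""
    feats, mods = [], []
    for m, x in seqs.items():
        feats.append(x)
        mods += [m] * x.size(0)
    nodes = torch.cat(feats, dim=0)  # (N, d) heterogeneous node features
    types = sorted(seqs.keys())
    src, dst, etype = [], [], []
    for i, mi in enumerate(mods):
        for j, mj in enumerate(mods):
            if i != j:
                src.append(i)
                dst.append(j)
                etype.append(types.index(mi) * len(types) + types.index(mj))
    return nodes, torch.tensor(src), torch.tensor(dst), torch.tensor(etype)

class MTAGFusion(nn.Module):
    """One graph-attention layer with edge-type-specific scoring and
    dynamic pruning: only the top-k highest-scoring edges survive,
    mimicking 'learning to focus only on the important interactions'."""
    def __init__(self, d, n_edge_types, keep_ratio=0.5):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)
        self.att = nn.Parameter(torch.randn(n_edge_types, 2 * d) * 0.01)
        self.keep_ratio = keep_ratio

    def forward(self, x, src, dst, etype):
        h = self.W(x)
        pair = torch.cat([h[src], h[dst]], dim=-1)               # (E, 2d)
        score = F.leaky_relu((pair * self.att[etype]).sum(-1))   # (E,)
        k = max(1, int(self.keep_ratio * score.numel()))
        keep = score.topk(k).indices                             # prune edges
        src, dst, score = src[keep], dst[keep], score[keep]
        # Softmax over each destination node's surviving incoming edges.
        alpha = torch.exp(score - score.max())
        denom = torch.zeros(x.size(0)).index_add_(0, dst, alpha) + 1e-9
        msg = h[src] * (alpha / denom[dst]).unsqueeze(-1)
        out = torch.zeros_like(h).index_add_(0, dst, msg)
        return F.relu(out)

# Usage: three unaligned modality streams, then a mean read-out over nodes.
seqs = {"text": torch.randn(12, 32),
        "audio": torch.randn(50, 32),
        "video": torch.randn(30, 32)}
nodes, src, dst, etype = build_graph(seqs)
layer = MTAGFusion(d=32, n_edge_types=9)  # 3 modalities -> 9 edge types
fused = layer(nodes, src, dst, etype).mean(dim=0)  # graph read-out
print(fused.shape)  # torch.Size([32])
```

In this reading, the pruning step is what makes the fusion efficient and interpretable: edges whose attention scores fall below the top-k cutoff are dropped before message passing, so only the retained cross-modal and temporal interactions contribute to the fused representation.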

Authors (8)
  1. Jianing Yang (21 papers)
  2. Yongxin Wang (21 papers)
  3. Ruitao Yi (2 papers)
  4. Yuying Zhu (18 papers)
  5. Azaan Rehman (5 papers)
  6. Amir Zadeh (36 papers)
  7. Soujanya Poria (138 papers)
  8. Louis-Philippe Morency (123 papers)
Citations (11)