Hybrid CNN-Mamba Enhancement Network for Robust Multimodal Sentiment Analysis (2507.23444v1)

Published 31 Jul 2025 in cs.MM

Abstract: Multimodal Sentiment Analysis (MSA) with missing modalities has recently attracted increasing attention. Although existing research mainly focuses on designing complex model architectures to handle incomplete data, it still faces significant challenges in effectively aligning and fusing multimodal information. In this paper, we propose a novel framework called the Hybrid CNN-Mamba Enhancement Network (HCMEN) for robust multimodal sentiment analysis under missing modality conditions. HCMEN is designed around three key components: (1) hierarchical unimodal modeling, (2) cross-modal enhancement and alignment, and (3) multimodal mix-up fusion. First, HCMEN integrates the strengths of Convolutional Neural Network (CNN) for capturing local details and the Mamba architecture for modeling global contextual dependencies across different modalities. Furthermore, grounded in the principle of Mutual Information Maximization, we introduce a cross-modal enhancement mechanism that generates proxy modalities from mixed token-level representations and learns fine-grained token-level correspondences between modalities. The enhanced unimodal features are then fused and passed through the CNN-Mamba backbone, enabling local-to-global cross-modal interaction and comprehensive multimodal integration. Extensive experiments on two benchmark MSA datasets demonstrate that HCMEN consistently outperforms existing state-of-the-art methods, achieving superior performance across various missing modality scenarios. The code will be released publicly in the near future.
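The abstract outlines three ingredients: a hybrid local-global backbone (CNN plus Mamba), token-level mix-up that produces proxy modalities, and a cross-modal enhancement objective grounded in mutual information maximization. The sketch below is only a minimal illustration of those ideas, not the paper's implementation: all names are hypothetical, the Mamba branch is stood in for by a simple gated global-context module (the paper's exact block design is not given in the abstract), and InfoNCE is used as one common way to instantiate mutual information maximization.

```python
# Illustrative sketch only; class/function names are hypothetical and the
# Mamba branch is replaced by a placeholder global-context gate.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalGlobalBlock(nn.Module):
    """Hybrid block: Conv1d captures local detail; a gated global-average
    branch stands in for Mamba-style global context modeling."""

    def __init__(self, dim: int, kernel_size: int = 3):
        super().__init__()
        self.local = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.global_gate = nn.Linear(dim, dim)  # placeholder for a Mamba/SSM layer
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        local = self.local(x.transpose(1, 2)).transpose(1, 2)
        ctx = torch.sigmoid(self.global_gate(x.mean(dim=1, keepdim=True)))
        return self.norm(x + local * ctx)


def mixup_proxy(x_a: torch.Tensor, x_b: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Token-level mix-up: blend two modalities' token sequences into a proxy
    modality (assumes both are already projected to a shared length and dim)."""
    return lam * x_a + (1.0 - lam) * x_b


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: a standard lower bound used for mutual information
    maximization, shown here as one plausible form of the cross-modal
    enhancement objective described in the abstract."""
    a = F.normalize(anchor.mean(dim=1), dim=-1)   # pool tokens -> (batch, dim)
    p = F.normalize(positive.mean(dim=1), dim=-1)
    logits = a @ p.t() / temperature              # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    text = torch.randn(4, 20, 64)    # (batch, tokens, dim) per modality
    audio = torch.randn(4, 20, 64)
    block = LocalGlobalBlock(dim=64)
    proxy = mixup_proxy(text, audio)          # proxy modality from mixed tokens
    enhanced = block(proxy)                   # local-to-global interaction
    loss = info_nce(enhanced, block(text))    # align proxy with a real modality
    print(enhanced.shape, loss.item())
```

In the actual framework, the enhanced unimodal features are then fused and passed through the CNN-Mamba backbone; the snippet above only illustrates the shape of that pipeline under the stated assumptions.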
