Multimodal Guidance Network for Missing-Modality Inference in Content Moderation (2309.03452v2)
Abstract: Multimodal deep learning, especially vision-language models, has gained significant traction in recent years, greatly improving performance on many downstream tasks, including content moderation and violence detection. However, standard multimodal approaches often assume that the same modalities are available at training and inference, which limits their use in many real-world settings where some modalities are missing at inference time. Existing research mitigates this problem by reconstructing the missing modalities, but doing so adds computational cost, which can be just as critical a constraint, especially for large deployed infrastructures in industry. To this end, we propose a novel guidance network that promotes knowledge sharing during training, taking advantage of multimodal representations to train better single-modality models for inference. Real-world experiments on violence detection show that our proposed framework trains single-modality models that significantly outperform traditionally trained counterparts, while avoiding any increase in computational cost at inference.
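As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below pairs a vision-only student with a training-only multimodal guidance branch whose fused representation serves as an alignment target; the model dimensions, the MSE alignment term, and the weighting factor `alpha` are all illustrative assumptions.

```python
# Hypothetical sketch: a multimodal guidance branch supervises a vision-only
# student during training; only the student runs at inference, so the missing
# text modality adds no extra compute at deployment time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisionOnlyModel(nn.Module):
    """Single-modality model deployed at inference time."""
    def __init__(self, vision_dim=512, hidden_dim=256, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vision_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, vision_feats):
        z = self.encoder(vision_feats)  # single-modality representation
        return self.classifier(z), z

class MultimodalGuidance(nn.Module):
    """Training-only branch that fuses vision and text features."""
    def __init__(self, vision_dim=512, text_dim=300, hidden_dim=256, num_classes=2):
        super().__init__()
        self.fusion = nn.Sequential(nn.Linear(vision_dim + text_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, vision_feats, text_feats):
        z = self.fusion(torch.cat([vision_feats, text_feats], dim=-1))
        return self.classifier(z), z

def training_step(student, guide, vision, text, labels, alpha=0.5):
    """Supervised loss on both branches plus a representation-alignment term."""
    s_logits, s_repr = student(vision)
    g_logits, g_repr = guide(vision, text)
    loss = F.cross_entropy(s_logits, labels) + F.cross_entropy(g_logits, labels)
    # Pull the single-modality representation toward the multimodal one.
    loss = loss + alpha * F.mse_loss(s_repr, g_repr.detach())
    return loss

# Toy usage with random features; at inference only `student(vision)` is called.
student, guide = VisionOnlyModel(), MultimodalGuidance()
vision, text = torch.randn(8, 512), torch.randn(8, 300)
labels = torch.randint(0, 2, (8,))
loss = training_step(student, guide, vision, text, labels)
loss.backward()
```

The key design choice this sketch is meant to convey is that the multimodal branch exists only to shape the student's representation during training and is discarded afterward, which is how inference-time cost stays identical to a traditionally trained single-modality model.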
- Zhuokai Zhao (21 papers)
- Harish Palani (1 paper)
- Tianyi Liu (58 papers)
- Lena Evans (1 paper)
- Ruth Toner (1 paper)