MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System (2307.07135v1)
Abstract: Multi-modal sarcasm detection has attracted much recent attention. Nevertheless, the existing benchmark (MMSD) has some shortcomings that hinder the development of reliable multi-modal sarcasm detection systems: (1) MMSD contains spurious cues, which lead models to learn biased features; (2) the negative samples in MMSD are not always reasonable. To solve these issues, we introduce MMSD2.0, a corrected dataset that fixes the shortcomings of MMSD by removing the spurious cues and re-annotating the unreasonable samples. Meanwhile, we present a novel framework called multi-view CLIP, which leverages multi-grained cues from multiple perspectives (i.e., the text, image, and text-image interaction views) for multi-modal sarcasm detection. Extensive experiments show that MMSD2.0 is a valuable benchmark for building reliable multi-modal sarcasm detection systems and that multi-view CLIP significantly outperforms the previous best baselines.
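To make the three-view idea concrete, here is a minimal sketch of what a classifier over CLIP's text, image, and text-image interaction views might look like. This is not the authors' implementation: the class name, the per-view linear heads, the concatenation-based interaction, and the logit averaging are all illustrative assumptions; the paper's actual interaction module and fusion are richer.

```python
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

class MultiViewCLIPSketch(nn.Module):
    """Hypothetical sketch of a three-view sarcasm classifier on CLIP.

    Views: text-only, image-only, and a text-image interaction view.
    The interaction here is plain concatenation (an assumption), not
    the cross-modal module described in the paper.
    """

    def __init__(self, clip_name="openai/clip-vit-base-patch32", num_labels=2):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        d = self.clip.config.projection_dim  # shared embedding size (512)
        # One lightweight classification head per view.
        self.text_head = nn.Linear(d, num_labels)
        self.image_head = nn.Linear(d, num_labels)
        self.interaction_head = nn.Linear(2 * d, num_labels)

    def forward(self, input_ids, attention_mask, pixel_values):
        # Text and image views from CLIP's projected embeddings.
        t = self.clip.get_text_features(
            input_ids=input_ids, attention_mask=attention_mask)
        v = self.clip.get_image_features(pixel_values=pixel_values)
        # Interaction view: a stand-in fusion by concatenation.
        inter = torch.cat([t, v], dim=-1)
        # Average the per-view logits as the final prediction.
        return (self.text_head(t) + self.image_head(v)
                + self.interaction_head(inter)) / 3.0

# Usage with a dummy image (assumed preprocessing via CLIPProcessor):
from PIL import Image

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model = MultiViewCLIPSketch()
inputs = processor(text=["what a lovely rainy day"],
                   images=Image.new("RGB", (224, 224)),
                   return_tensors="pt", padding=True)
logits = model(**inputs)  # shape [1, 2]: sarcastic vs. non-sarcastic
```

The design point the sketch illustrates is that each view gets its own prediction head, so text-only, image-only, and cross-modal evidence each contribute to the final decision rather than relying on a single fused representation.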
- Libo Qin
- Shijue Huang
- Qiguang Chen
- Chenran Cai
- Yudi Zhang
- Bin Liang
- Wanxiang Che
- Ruifeng Xu