Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset (2107.04313v1)
Abstract: Hateful memes pose a unique challenge for current machine learning systems because their message derives from both text and visual modalities. To this end, Facebook released the Hateful Memes Challenge, a dataset of memes with pre-extracted text captions, but it is unclear whether these synthetic examples generalize to 'memes in the wild'. In this paper, we collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance of models pre-trained on the Facebook dataset. We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, injecting noise and diminishing the performance of multimodal models, and 2) memes are more diverse than 'traditional memes', including screenshots of conversations or text on a plain background. This paper thus serves as a reality check for the current benchmark of hateful meme detection and its applicability to detecting real-world hate.
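The abstract notes that OCR-extracted captions inject noise that hurts multimodal models. As a minimal, hypothetical illustration (not from the paper), one can quantify such noise with the character error rate (CER) between a clean ground-truth caption and a noisy OCR transcription; the example strings below are invented for demonstration only:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def char_error_rate(reference: str, hypothesis: str) -> float:
    """CER = edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# Hypothetical example: a clean caption vs. a noisy OCR transcription
# with typical OCR confusions (n/h, dropped letters, m read as rn).
clean = "when you finally understand the assignment"
noisy = "wheh you finaly understand the assignrnent"
print(f"CER: {char_error_rate(clean, noisy):.3f}")
```

A nonzero CER on otherwise identical captions is exactly the kind of input perturbation that text encoders in multimodal classifiers were not trained on, which is one way the reported performance drop can arise.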
- Hannah Rose Kirk (33 papers)
- Yennie Jun (4 papers)
- Paulius Rauba (6 papers)
- Gal Wachtel (1 paper)
- Ruining Li (10 papers)
- Xingjian Bai (8 papers)
- Noah Broestl (2 papers)
- Martin Doff-Sotta (8 papers)
- Aleksandar Shtedritski (13 papers)
- Yuki M. Asano (63 papers)