Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest (2209.06293v2)
Abstract: Large neural networks can now generate jokes, but do they really "understand" humor? We challenge AI models with three tasks derived from the New Yorker Cartoon Caption Contest: matching a joke to a cartoon, identifying a winning caption, and explaining why a winning caption is funny. These tasks encapsulate progressively more sophisticated aspects of "understanding" a cartoon; key elements are the complex, often surprising relationships between images and captions and the frequent inclusion of indirect and playful allusions to human experience and culture. We investigate both multimodal and language-only models: the former are challenged with the cartoon images directly, while the latter are given multifaceted descriptions of the visual scene to simulate human-level visual understanding. We find that both types of models struggle at all three tasks. For example, our best multimodal models fall 30 accuracy points behind human performance on the matching task, and, even when provided ground-truth visual scene descriptors, human-authored explanations are preferred head-to-head over the best machine-authored ones (few-shot GPT-4) in more than 2/3 of cases. We release models, code, leaderboard, and corpus, which includes newly-gathered annotations describing the image's locations/entities, what's unusual in the scene, and an explanation of the joke.