FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture (2406.11030v2)

Published 16 Jun 2024 in cs.CL

Abstract: Food is a rich and varied dimension of cultural heritage, crucial to both individuals and social groups. To bridge the gap in the literature on the often-overlooked regional diversity in this domain, we introduce FoodieQA, a manually curated, fine-grained image-text dataset capturing the intricate features of food cultures across various regions in China. We evaluate vision-language models (VLMs) and large language models (LLMs) on newly collected, unseen food images and corresponding questions. FoodieQA comprises three multiple-choice question-answering tasks where models need to answer questions based on multiple images, a single image, and text-only descriptions, respectively. While LLMs excel at text-based question answering, surpassing human accuracy, the open-sourced VLMs still fall short by 41% on multi-image and 21% on single-image VQA tasks, although closed-weights models perform closer to human levels (within 10%). Our findings highlight that understanding food and its cultural implications remains a challenging and under-explored direction.

An Analysis of "FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture"

The paper "FoodieQA: A Multimodal Dataset for Fine-Grained Understanding of Chinese Food Culture" by Wenyan Li et al. introduces FoodieQA, a novel dataset aimed at advancing the understanding of Chinese food culture through multimodal question-answering tasks. This dataset fills a crucial gap in current literature, as it emphasizes the intricacies of regional food culture in China, often overlooked in generalized studies. Specifically, the dataset focuses on multiple-choice question-answering tasks across multi-image, single-image, and text-only formats, addressing a breadth of attributes including visual presentation, ingredients, culinary techniques, and regional associations.

Key Contributions and Findings

  1. Dataset Structure and Diversity: FoodieQA is composed of manually curated data sourced from native Chinese speakers, ensuring authenticity and regional relevance. The dataset encompasses 14 distinct Chinese cuisine types, each rich in regional differences, reflecting the nuanced diversity within Chinese culinary traditions.
  2. Evaluation of Vision-Language Models (VLMs) and LLMs: The dataset was tested on a selection of state-of-the-art VLMs and LLMs. A notable finding is the substantial gap between model performance and human-level accuracy, particularly in tasks requiring visual input. For instance, open-weights VLMs lagged significantly, trailing human accuracy by 41% on multi-image and 21% on single-image VQA tasks (a minimal scoring sketch follows this list). This highlights current models' limitations in integrating visual cultural context and in fine-grained reasoning.
  3. Text-Based Question Answering: Interestingly, LLMs demonstrated superior abilities in text-only tasks, even surpassing human performance by leveraging extensive text-based knowledge. This suggests that while models can encapsulate and process vast text data efficiently, the integration of visual cultural cues remains a significant hurdle.
  4. Analysis by Question Type: Performance analyses reveal that models handle questions about cooking techniques and ingredient identification relatively well. However, they struggle severely with regional and taste-related questions, evidencing limited cultural adaptability in these domains.
  5. Challenges in Visual Understanding and Cultural Context: The multi-image VQA posed the greatest challenge to models, particularly in scenarios that resemble real-world complexities such as browsing menus. This underscores the need to enhance current models' capacities in discerning and utilizing visual contexts in culturally nuanced settings.
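
As referenced in item 2 above, the headline numbers are multiple-choice accuracy gaps between models and human annotators. The sketch below shows the scoring logic under stated assumptions: `predict` is a placeholder for any VLM/LLM inference call, and the human accuracy value is illustrative rather than the paper's reported figure.

```python
import random
from typing import Callable, Dict, List

def predict(item: Dict) -> int:
    """Placeholder model: picks a choice at random.
    A real evaluation would query a VLM/LLM here."""
    return random.randrange(len(item["choices"]))

def accuracy(items: List[Dict], predict_fn: Callable[[Dict], int]) -> float:
    """Fraction of items where the predicted choice index is correct."""
    correct = sum(predict_fn(it) == it["answer_index"] for it in items)
    return correct / len(items)

# One toy item in the assumed format (see the schema sketch above).
items = [
    {"question": "Which region is this dish associated with?",
     "choices": ["Sichuan", "Guangdong", "Shandong", "Hunan"],
     "answer_index": 0},
]

model_acc = accuracy(items, predict)
human_acc = 0.90  # illustrative placeholder, not the paper's number
print(f"model: {model_acc:.0%}  human: {human_acc:.0%}  "
      f"gap: {human_acc - model_acc:+.0%}")
```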

Implications and Future Directions

The introduction of FoodieQA underscores the necessity for datasets that capture cultural specificity, beyond the monolithic representations often seen in general datasets. The significant disparity between model performance and human-level understanding, especially in visual tasks, indicates an urgent need for advancements in models' multimodal comprehension capabilities. Enhanced model architectures that better integrate visual inputs with contextual, culturally grounded information could bridge this gap.

Moreover, the paper suggests potential expansions of the dataset to include dishes from other countries or regions, broadening the study of cultural food understanding across global contexts. Such expansions could not only enhance model robustness but also contribute to a richer understanding of cultural dynamics in AI interpretations.

In conclusion, "FoodieQA" offers a pivotal step toward addressing the complex challenge of integrating cultural nuances into AI systems. As the field progresses, research inspired by this work will likely catalyze more culture-specific datasets, improving models' applicability in diverse cultural landscapes and moving closer to comprehensive AI-based cultural understanding in multimodal frameworks.

Authors (12)
  1. Wenyan Li
  2. Xinyu Zhang
  3. Jiaang Li
  4. Qiwei Peng
  5. Raphael Tang
  6. Li Zhou
  7. Weijia Zhang
  8. Guimin Hu
  9. Yifei Yuan
  10. Anders Søgaard
  11. Daniel Hershcovich
  12. Desmond Elliott