
StackOverflowVQA: Stack Overflow Visual Question Answering Dataset (2405.10736v1)

Published 17 May 2024 in cs.CV

Abstract: In recent years, people have increasingly turned to AI systems for help by asking questions on a wide range of topics, including software and programming. In this work, we focus on questions that require understanding an image in addition to the question text. We introduce StackOverflowVQA, a dataset of Stack Overflow questions that have one or more accompanying images. It is the first VQA dataset focused on software-related questions, and it contains multiple human-generated full-sentence answers per question. We also provide a baseline for answering the questions with respect to their images using the GIT model. All versions of the dataset are available at https://huggingface.co/mirzaei2114.
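As the abstract describes, each entry pairs a question with one or more images and multiple human-generated full-sentence answers. A minimal sketch of what such a record might look like (the field names and the example values are illustrative assumptions, not the dataset's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class VQARecord:
    """One software-related visual question with its answers.

    Field names are illustrative assumptions, not the dataset's
    actual schema.
    """
    question_id: int
    title: str                                        # Stack Overflow question title
    body: str                                         # full question text
    image_urls: list = field(default_factory=list)    # one or more attached images
    answers: list = field(default_factory=list)       # human-written full-sentence answers

# Hypothetical example record
record = VQARecord(
    question_id=1,
    title="Why does my matplotlib legend overlap the plot?",
    body="See the attached screenshot of the rendered figure.",
    image_urls=["https://example.com/screenshot.png"],
    answers=["Place the legend outside the axes with bbox_to_anchor."],
)

assert len(record.image_urls) >= 1   # every record has at least one image
```

The key property distinguishing this from classification-style VQA datasets is the `answers` field: it holds free-form sentences rather than single-word labels, so a generative model such as GIT can be trained to produce the answer text directly.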

References (12)
  1. VQA: Visual Question Answering. CoRR, abs/1505.00468.
  2. VQA Therapy: Exploring answer differences by visually grounding answers.
  3. Nat Friedman. 2021. Introducing GitHub Copilot: Your AI pair programmer. URL https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer.
  4. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR).
  5. mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections.
  6. Mike. 2023. mikex86/stackoverflow-posts · Datasets at Hugging Face.
  7. Fawaz Sammani and Nikos Deligiannis. 2023. Uni-NLX: Unifying textual explanations for vision and vision-language tasks.
  8. Generate answer to visual questions with pre-trained vision-and-language embeddings. WiNLP Workshop at EMNLP.
  9. The color of the cat is gray: 1 million full-sentences visual question answering (FSVQA). arXiv preprint arXiv:1609.06657.
  10. Stack Exchange Community. 2023. Stack Exchange data dump.
  11. GIT: A generative image-to-text transformer for vision and language. Transactions on Machine Learning Research.
  12. VLMo: Unified vision-language pre-training with mixture-of-modality-experts. CoRR, abs/2111.02358.

