StackOverflowVQA: Stack Overflow Visual Question Answering Dataset (2405.10736v1)
Abstract: In recent years, people have increasingly turned to AI assistants for help with problems across many topics, including software development and programming. In this work, we focus on questions that require understanding an accompanying image in addition to the question text. We introduce the StackOverflowVQA dataset, which consists of questions from Stack Overflow that include one or more images. This is the first VQA dataset that focuses on software-related questions and contains multiple human-generated, full-sentence answers. We also provide a baseline for answering the questions with respect to the images in the introduced dataset using the GIT model. All versions of the dataset are available at https://huggingface.co/mirzaei2114.
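The abstract describes collecting Stack Overflow questions that have one or more accompanying images. As a minimal sketch of that selection step, the snippet below filters posts by the presence of `<img>` tags in the HTML body; this heuristic, along with the helper names and sample posts, is an assumption for illustration, not the paper's actual pipeline.

```python
import re

# Hypothetical sketch: identify StackOverflow posts that embed at least one
# image. Assumes image-bearing questions can be detected via <img> tags with
# a src attribute in the post's HTML body (not the paper's documented method).
IMG_TAG = re.compile(r'<img\b[^>]*\bsrc="([^"]+)"', re.IGNORECASE)

def extract_image_urls(post_body_html: str) -> list[str]:
    """Return all image URLs embedded in a post's HTML body."""
    return IMG_TAG.findall(post_body_html)

def has_accompanying_image(post_body_html: str) -> bool:
    """True if the question includes one or more images."""
    return bool(IMG_TAG.search(post_body_html))

# Toy posts standing in for a Stack Overflow dump.
posts = [
    {"id": 1, "body": '<p>Why does this layout break?</p><img src="https://i.sstatic.net/x.png">'},
    {"id": 2, "body": "<p>How do I reverse a list in Python?</p>"},
]
vqa_candidates = [p for p in posts if has_accompanying_image(p["body"])]
```

Only posts whose bodies embed an image survive the filter, which matches the dataset's stated inclusion criterion of "one or more accompanying images".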
- VQA: Visual Question Answering. CoRR, abs/1505.00468.
- VQA Therapy: Exploring answer differences by visually grounding answers.
- Nat Friedman. 2021. Introducing GitHub Copilot: your AI pair programmer. URL https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer.
- Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR).
- mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections.
- Mike. 2023. mikex86/stackoverflow-posts · Datasets at Hugging Face.
- Fawaz Sammani and Nikos Deligiannis. 2023. Uni-NLX: Unifying textual explanations for vision and vision-language tasks.
- Generate answer to visual questions with pre-trained vision-and-language embeddings. WiNLP Workshop at EMNLP.
- The color of the cat is gray: 1 million full-sentences visual question answering (FSVQA). arXiv preprint arXiv:1609.06657.
- Stack Exchange Community. 2023. Stack Exchange data dump.
- GIT: A generative image-to-text transformer for vision and language. Transactions on Machine Learning Research.
- VLMo: Unified vision-language pre-training with mixture-of-modality-experts. CoRR, abs/2111.02358.