PlotQA: Reasoning over Scientific Plots (1909.00997v3)

Published 3 Sep 2019 in cs.CV, cs.AI, and cs.CL

Abstract: Existing synthetic datasets (FigureQA, DVQA) for reasoning over plots do not contain variability in data labels, real-valued data, or complex reasoning questions. Consequently, proposed models for these datasets do not fully address the challenge of reasoning over plots. In particular, they assume that the answer comes either from a small fixed size vocabulary or from a bounding box within the image. However, in practice, this is an unrealistic assumption because many questions require reasoning and thus have real-valued answers which appear neither in a small fixed size vocabulary nor in the image. In this work, we aim to bridge this gap between existing datasets and real-world plots. Specifically, we propose PlotQA with 28.9 million question-answer pairs over 224,377 plots on data from real-world sources and questions based on crowd-sourced question templates. Further, 80.76% of the out-of-vocabulary (OOV) questions in PlotQA have answers that are not in a fixed vocabulary. Analysis of existing models on PlotQA reveals that they cannot deal with OOV questions: their overall accuracy on our dataset is in single digits. This is not surprising given that these models were not designed for such questions. As a step towards a more holistic model which can address fixed vocabulary as well as OOV questions, we propose a hybrid approach: Specific questions are answered by choosing the answer from a fixed vocabulary or by extracting it from a predicted bounding box in the plot, while other questions are answered with a table question-answering engine which is fed with a structured table generated by detecting visual elements from the image. On the existing DVQA dataset, our model has an accuracy of 58%, significantly improving on the highest reported accuracy of 46%. On PlotQA, our model has an accuracy of 22.52%, which is significantly better than state of the art models.

Analyzing PlotQA: A Dataset for Reasoning over Scientific Plots

The paper "PlotQA: Reasoning over Scientific Plots" offers a significant contribution to the field of visual question answering (VQA) with the introduction of the PlotQA dataset. PlotQA addresses the limitations of existing datasets, such as FigureQA and DVQA, by providing a more realistic and challenging environment for reasoning over scientific plots. This paper proposes a dataset containing 28.9 million question-answer pairs over 224,377 plots sourced from real-world data and structured questions based on crowd-sourced templates.

Existing VQA datasets for plots simplify the problem by assuming that answers come either from a fixed vocabulary or from within the image itself. In real-world applications this assumption fails: many questions require reasoning and yield real-valued answers present in neither a fixed vocabulary nor the image. For example, a question such as "What is the difference in exports between 2010 and 2012?" demands an arithmetic result that appears nowhere in the plot. PlotQA bridges this gap by incorporating a large fraction (80.76%) of answers that are out-of-vocabulary (OOV).

The authors highlight the inadequacy of current models such as SAN-VQA, BAN, and LoRRA when applied to PlotQA, revealing their limitations in handling OOV questions; this is evidenced by their low overall accuracy on the dataset. To address this, the paper proposes a hybrid approach that combines elements of traditional VQA models with a table-based QA engine. Questions whose answers come from a fixed vocabulary are handled with conventional image classification strategies (or by reading a predicted bounding box). For complex reasoning questions that require OOV answers, the model first detects the plot's visual elements, converts them into a structured table, and then answers the question with a table QA engine.
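
The routing idea can be made concrete with a short sketch. Everything below is illustrative: the function names (`is_fixed_vocab_question`, `detect_visual_elements`, `table_qa_answer`) and the toy data are assumptions standing in for the paper's binary question classifier, detection/OCR stage, and table QA engine, not the authors' implementation.

```python
"""Illustrative sketch of the hybrid PlotQA pipeline (not the authors' code).

The detection/OCR stages are stubbed with toy data; only the routing logic
and a tiny table QA step are fleshed out enough to run.
"""

def is_fixed_vocab_question(question: str) -> bool:
    """Stand-in for the paper's binary question classifier."""
    q = question.lower()
    return q.startswith(("is ", "does ", "are ")) or "type of plot" in q

def detect_visual_elements(image) -> list[dict]:
    """Stub for the visual-element detector + OCR; returns toy bar readings."""
    return [
        {"series": "exports", "x": "2010", "y": 41.2},
        {"series": "exports", "x": "2012", "y": 58.9},
    ]

def table_qa_answer(table: list[dict], question: str) -> float:
    """Toy table QA engine: handles only 'difference between <x1> and <x2>'."""
    values = {row["x"]: row["y"] for row in table}
    mentioned = [x for x in values if x in question]  # x-labels named in question
    return abs(values[mentioned[0]] - values[mentioned[1]])

def answer_plot_question(image, question: str):
    """Route to the fixed-vocabulary path or the table-reasoning path."""
    if is_fixed_vocab_question(question):
        # Structural / yes-no questions: classify over a small answer set
        # (or OCR a predicted bounding box; omitted in this stub).
        return "yes"  # placeholder for the classification head
    table = detect_visual_elements(image)   # rebuild the underlying data table
    return table_qa_answer(table, question) # OOV, often real-valued answer

print(answer_plot_question(None, "What is the difference in exports between 2010 and 2012?"))
# -> ~17.7: a real-valued answer that appears nowhere in the image
```

The key design point is that the second branch never selects from a fixed answer set: the answer is computed from the reconstructed table, so real-valued OOV answers fall out naturally.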

The paper reports that this hybrid model significantly outperforms existing models, achieving an accuracy of 22.52% on PlotQA where other models remain in the single digits. The model also delivers a substantial improvement on the DVQA dataset, reaching 58% accuracy and surpassing the best previously reported accuracy of 46%.

Strong Numerical Results:

  • Performance Improvement on DVQA: The proposed hybrid model achieves 58% accuracy, improving significantly upon existing best-reported results of 46%.
  • Accuracy on PlotQA: The model achieves 22.52% accuracy on PlotQA, a substantial improvement over existing models.

Implications and Future Developments:

The introduction of PlotQA has several implications for both the theoretical understanding and the practical application of VQA technologies:

  • Enhanced AI Training: With PlotQA, AI systems can be trained to better handle complex reasoning questions, especially those requiring real-world context and OOV answers. This increases their applicability in fields needing advanced data interpretation, such as scientific research, data journalism, and business analytics.
  • Need for Improved Models: The low performance of existing models emphasizes the need for developing architectures capable of deeper semantic understanding and reasoning. This includes better object detection, OCR capabilities, and reasoning over semi-structured data.
  • Table-Based Reasoning Models: The success of the table-based QA component suggests that integrating structured data representations could enhance future models. At the same time, improving the accuracy of visual element detection remains a critical task, since the extracted table is only as good as the detections it is built from (a minimal sketch of this table-extraction step follows the list).
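
To make the table-extraction idea concrete, here is a minimal sketch. The detector output format, the nearest-tick association rule, and the linear y-axis calibration are all assumptions chosen for illustration, not necessarily the paper's exact procedure.

```python
"""Minimal sketch: turn detected plot elements into a data table.

Assumes a detector + OCR stage has already produced labeled boxes
(pixel coordinates, origin at top-left). The association rules below
(nearest x-tick, two-anchor linear y-axis calibration) are illustrative.
"""

# Toy detector output for a single-series bar chart.
bars = [  # (x_center_px, top_px, bottom_px)
    (100, 300, 500),
    (200, 180, 500),
]
x_ticks = [(100, "2010"), (200, "2012")]  # (x_center_px, OCR'd label)
y_ticks = [(500, 0.0), (100, 80.0)]       # (y_px, OCR'd value): calibration anchors

def y_px_to_value(y_px: float) -> float:
    """Linear calibration from two y-axis tick anchors."""
    (p0, v0), (p1, v1) = y_ticks
    return v0 + (y_px - p0) * (v1 - v0) / (p1 - p0)

def extract_table() -> dict[str, float]:
    """Associate each bar with its nearest x-tick and convert height to value."""
    table = {}
    for x_center, top, _bottom in bars:
        label = min(x_ticks, key=lambda t: abs(t[0] - x_center))[1]
        table[label] = round(y_px_to_value(top), 1)
    return table

print(extract_table())  # -> {'2010': 40.0, '2012': 64.0}
```

Once such a table exists, any table QA engine can operate on it, which is why the summary above flags visual element detection accuracy as the critical bottleneck: errors in detection or OCR propagate directly into the table and hence into every downstream answer.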

Future research could focus on refining these hybrid approaches, enhancing the accuracy of visual element detection in structured images, and developing models that can better utilize the data extracted from these processes. The evaluation of the proposed model against human performance benchmarks further underscores the complexity and challenges inherent in scientific plot reasoning tasks.

Authors (4)
  1. Nitesh Methani (2 papers)
  2. Pritha Ganguly (2 papers)
  3. Mitesh M. Khapra (79 papers)
  4. Pratyush Kumar (44 papers)
Citations (169)