Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering (1809.02789v1)

Published 8 Sep 2018 in cs.CL

Abstract: We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1329 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic---in the context of common knowledge---and the language it is expressed in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. Our oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts. We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance.

Authors (4)
  1. Todor Mihaylov (23 papers)
  2. Peter Clark (108 papers)
  3. Tushar Khot (53 papers)
  4. Ashish Sabharwal (84 papers)
Citations (1,160)

Summary

  • The paper introduces the OpenBookQA dataset, which challenges models to combine explicit science facts with broader common knowledge for advanced reasoning.
  • It comprises roughly 6,000 multiple-choice questions, generated through a rigorous multi-stage crowdsourcing process and grounded in a curated "book" of 1,326 science facts.
  • Evaluations reveal that current QA models lag significantly behind human performance, emphasizing the need for improved knowledge integration and multi-hop reasoning strategies.

A New Dataset for Open Book Question Answering

This paper introduces OpenBookQA, a dataset modeled after open book exams for assessing human understanding of a subject. Unlike conventional QA datasets, which are generally self-contained queries over documents or knowledge bases, OpenBookQA requires integrating explicitly provided science facts with broad common knowledge. This requirement positions OpenBookQA as a valuable benchmark for evaluating multi-hop reasoning and knowledge integration in QA systems.

Dataset Composition and Characteristics

OpenBookQA consists of approximately 6,000 multiple-choice questions, all tied to a shared "book" of 1,326 science facts. The questions probe understanding of these core facts and their application in novel scenarios. Answering correctly requires combining a core fact with supplementary common knowledge not provided in the dataset (e.g., combining the book fact that metals conduct electricity with the common knowledge that a suit of armor is made of metal).
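
For readers who want to look at the data directly, the following is a minimal sketch using the Hugging Face `datasets` library, assuming the dataset is mirrored on the Hub as `openbookqa`; the configuration and field names describe that mirror, not identifiers from the paper, and may differ.

```python
# Minimal sketch of inspecting OpenBookQA through a Hugging Face Hub mirror.
# The dataset name, the "additional" config, and the field names are assumptions
# about that mirror, not identifiers defined in the paper.
from datasets import load_dataset

ds = load_dataset("openbookqa", "additional")  # "additional" is assumed to carry the core fact
example = ds["train"][0]

print(example["question_stem"])      # question text
print(example["choices"]["text"])    # the four answer options
print(example["answerKey"])          # gold label, e.g. "A"
print(example.get("fact1"))          # associated core science fact, if present
```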

The dataset is generated through a rigorous multi-stage crowdsourcing process, ensuring both the complexity and clarity of the questions. This process involves creating questions that existing retrieval-based models cannot trivially solve, confirming the answerability through crowd validation, and shuffling answers to avoid positional biases.
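
To make the last step concrete, here is a minimal sketch of answer shuffling; it illustrates the idea and is not the authors' pipeline code.

```python
# Illustrative sketch of the answer-shuffling step (not the authors' pipeline):
# permute the option texts and re-derive the gold label so that no answer
# position is systematically correct across the dataset.
import random

LABELS = ("A", "B", "C", "D")

def shuffle_choices(options, answer_key):
    """options: four answer strings; answer_key: label of the correct one."""
    correct_text = options[LABELS.index(answer_key)]
    shuffled = list(options)
    random.shuffle(shuffled)
    return shuffled, LABELS[shuffled.index(correct_text)]

options, key = shuffle_choices(
    ["copper wire", "wooden spoon", "rubber band", "glass rod"], "A"
)
print(options, key)
```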

Evaluation of QA Systems

The paper provides an extensive evaluation of state-of-the-art QA systems on OpenBookQA. Key findings include:

  • Pre-trained Models: Models such as PMI, TableILP, TupleInference, and DGEM, which are effective on other science QA datasets, perform poorly on OpenBookQA, barely surpassing the 25% random baseline (for contrast, a toy lexical-overlap solver is sketched just after this list).
  • Neural Baselines: Simple neural models trained on OpenBookQA achieve significantly higher scores. These include models like Question Match and Odd-One-Out, highlighting that neural architectures, even without external knowledge, can approximate the required reasoning to some extent.
  • Incorporation of Core Facts: When provided with the core science fact (labeled f in the paper), models show marginal improvement, indicating that while beneficial, core facts alone are insufficient.
  • Enhanced Model with External Knowledge: A knowledge-enhanced reader incorporating additional facts from ConceptNet and WordNet shows further improvement, albeit not enough to bridge the notable gap with human performance.
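
For intuition about why shallow solvers fail here, the sketch below implements a toy lexical-overlap solver; it is not one of the systems evaluated in the paper, and the question and options are hypothetical examples in the spirit of the dataset. By construction, OpenBookQA questions give such surface cues little signal.

```python
# Toy lexical-overlap solver (for intuition only; not one of the systems
# evaluated in the paper): pick the option whose words overlap most with
# the question. The question and options below are hypothetical.
def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def solve(question, options):
    return max(range(len(options)), key=lambda i: jaccard(question, options[i]))

question = "Which item would best complete an electrical circuit?"
options = ["a suit of armor", "a wooden shield", "a leather boot", "a straw hat"]
print("predicted:", options[solve(question, options)])
```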

Implications and Future Research

The massive gap between the human baseline (approximately 92%) and the best-performing models (around 50%) underscores the complexity of OpenBookQA. The results suggest that the main challenges are not just simple retrieval or common sense reasoning but effective multi-hop reasoning with partial context. Practical implications include developing more sophisticated models capable of better knowledge retrieval and more nuanced reasoning strategies.
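
To picture the retrieval bottleneck, the sketch below ranks book facts by TF-IDF similarity to a question; this is an illustrative stand-in, not the authors' system, and a downstream reader would still have to combine the retrieved fact with common knowledge from outside the book.

```python
# Illustrative retrieval step (not the authors' system): rank book facts by
# TF-IDF cosine similarity to the question. A downstream reader would still
# need outside common knowledge (e.g., that a suit of armor is made of metal).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

book_facts = [
    "metals conduct electricity",
    "a magnet attracts some metals",
    "plants require sunlight to grow",
]  # stand-ins; the actual book contains 1,326 such facts

def retrieve(question, facts, k=2):
    vec = TfidfVectorizer().fit(facts + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(facts))[0]
    top = sorted(range(len(facts)), key=lambda i: sims[i], reverse=True)[:k]
    return [facts[i] for i in top]

print(retrieve("Can a suit of armor conduct electricity?", book_facts))
```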

While current advancements hint at the potential of integrating multiple knowledge sources, significant work remains to develop models that can seamlessly combine disparate pieces of information into coherent and correct answers.

Conclusion

OpenBookQA presents a substantial challenge and a valuable resource for the NLP and AI research community, advocating for advancements in reasoning and knowledge integration. It highlights both the capabilities and limitations of existing models and paves the way for future research aimed at developing more holistic and reasoning-centric QA systems. The large performance gap serves as a call to keep exploring new approaches for closing it, possibly by leveraging advances in multi-source knowledge integration and deeper contextual understanding.