Simple Entity-Centric Questions Challenge Dense Retrievers (2109.08535v3)

Published 17 Sep 2021 in cs.CL and cs.IR

Abstract: Open-domain question answering has exploded in popularity recently due to the success of dense retrieval models, which have surpassed sparse models using only a few supervised training examples. However, in this paper, we demonstrate current dense models are not yet the holy grail of retrieval. We first construct EntityQuestions, a set of simple, entity-rich questions based on facts from Wikidata (e.g., "Where was Arve Furset born?"), and observe that dense retrievers drastically underperform sparse methods. We investigate this issue and uncover that dense retrievers can only generalize to common entities unless the question pattern is explicitly observed during training. We discuss two simple solutions towards addressing this critical problem. First, we demonstrate that data augmentation is unable to fix the generalization problem. Second, we argue a more robust passage encoder helps facilitate better question adaptation using specialized question encoders. We hope our work can shed light on the challenges in creating a robust, universal dense retriever that works well across different input distributions.

Analysis of "Simple Entity-Centric Questions Challenge Dense Retrievers"

The paper "Simple Entity-Centric Questions Challenge Dense Retrievers" presents a methodological critique of current dense retrieval models, demonstrating crucial limitations when dealing with simple, entity-rich questions. It constructs an evaluation set named EntityQuestions, derived from simple queries related to entities extracted from Wikidata, and uncovers that dense retrievers underperform compared to sparse retrieval models like BM25 on these types of questions.

The authors conduct a series of targeted investigations to identify the roots of this discrepancy. They show that dense retrieval models such as the Dense Passage Retriever (DPR) generalize poorly when retrieving passages for questions about uncommon entities, handling such questions well only when the question pattern is explicitly observed during training. The performance gap is particularly stark for questions involving person entities.

To address these deficiencies, the paper evaluates two potential remedies: data augmentation and specialized question encoders. Although data augmentation can close part of the gap within a single domain, it generally fails to transfer the improvement to new, unseen domains. The authors therefore turn to building a more robust passage encoder: with the passage index fixed, domain-specific question encoders can be fine-tuned against it, which is more memory-efficient than maintaining separate indexes and enables better question adaptation.
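
A minimal sketch of this kind of question-encoder adaptation against a frozen passage index, assuming PyTorch; the encoder, data, and training loop below are stand-ins for illustration, not the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, num_examples, batch = 768, 1024, 32

# Passage embeddings are computed once with the fixed passage encoder and
# never updated; only the question encoder is trained. Random tensors stand
# in for real passage embeddings and question features here.
passage_index = torch.randn(num_examples, dim)   # frozen index
question_feats = torch.randn(num_examples, dim)  # stand-in question representations
gold = torch.arange(num_examples)                # gold passage id per question

# Hypothetical small question encoder / adapter trained on the new domain.
question_encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
optimizer = torch.optim.Adam(question_encoder.parameters(), lr=1e-4)

for step in range(100):
    idx = torch.randint(0, num_examples, (batch,))
    q = question_encoder(question_feats[idx])    # trainable side
    p = passage_index[gold[idx]]                 # frozen side (no gradients flow here)
    # In-batch negatives: each question's gold passage is its positive and
    # the other passages in the batch serve as negatives.
    scores = q @ p.T
    loss = F.cross_entropy(scores, torch.arange(batch))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```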

The empirical results make the challenge concrete when dense models are compared against conventional sparse retrieval baselines. On EntityQuestions, the sparse retriever consistently surpasses the dense retriever by a substantial margin: averaged over relations, the dense model reaches 49.7% top-20 retrieval accuracy versus 72.0% for BM25. This disparity suggests that dense retrievers are biased toward entities and patterns frequently encountered during training, which poses significant hurdles for questions about rare entities.
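
For reference, a sketch of the standard top-k retrieval accuracy metric used in this comparison: a question counts as answered if any of its top-k retrieved passages contains a gold answer string. The exact answer-matching and normalization details in the paper may differ.

```python
def top_k_accuracy(retrieved, answers, k=20):
    """retrieved: list of ranked passage texts per question;
    answers: list of gold answer strings per question."""
    hits = 0
    for passages, golds in zip(retrieved, answers):
        # A hit if any gold answer string appears in any of the top-k passages.
        if any(ans.lower() in passage.lower()
               for passage in passages[:k] for ans in golds):
            hits += 1
    return hits / len(retrieved)
```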

The paper further scrutinizes this generalization problem and identifies a clear relationship between an entity's frequency and retrieval accuracy: dense retrievers perform well on common entities but decline markedly on less frequent ones, indicating a popularity bias. Conversely, models generalize better to new, unobserved entities when the corresponding question pattern is seen during training.
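
A small sketch of how such a popularity analysis can be run, assuming per-question retrieval hits and an entity-frequency lookup (for example, Wikipedia hyperlink counts); the bucketing scheme and field names here are illustrative.

```python
import math
from collections import defaultdict

def accuracy_by_entity_frequency(examples, entity_freq, is_hit, num_buckets=5):
    """examples: iterable of dicts with an 'entity' field;
    entity_freq(entity) -> occurrence count (e.g., hyperlink count);
    is_hit(example) -> True if the retriever found the answer in its top-k."""
    buckets = defaultdict(lambda: [0, 0])
    for ex in examples:
        # Log-scale buckets: 0 for counts below 10, 1 for below 100, and so on.
        b = min(num_buckets - 1, int(math.log10(entity_freq(ex["entity"]) + 1)))
        buckets[b][0] += int(is_hit(ex))
        buckets[b][1] += 1
    return {b: correct / total for b, (correct, total) in sorted(buckets.items())}
```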

These retrieval issues have broader implications for developing and deploying robust, universal dense retrievers that can cope with diverse input distributions. The results underscore how much dense retrieval performance depends on entity coverage and on question patterns memorized during training.

Looking forward, the research points to possible ways of improving dense retrieval models, such as incorporating entity memory into the networks or leveraging entity-aware embedding models. Refinements of this kind, especially for rare and unseen entities, are pivotal to closing the performance gap with sparse counterparts. The paper thus makes a significant contribution to understanding and addressing the limitations of current dense passage retrieval systems and provides a foundation for future work.

Authors (4)
  1. Christopher Sciavolino (2 papers)
  2. Zexuan Zhong (17 papers)
  3. Jinhyuk Lee (27 papers)
  4. Danqi Chen (84 papers)
Citations (137)