
Defending Against Disinformation Attacks in Open-Domain Question Answering (2212.10002v3)

Published 20 Dec 2022 in cs.CL and cs.IR

Abstract: Recent work in open-domain question answering (ODQA) has shown that adversarial poisoning of the search collection can cause large drops in accuracy for production systems. However, little to no work has proposed methods to defend against these attacks. To do so, we rely on the intuition that redundant information often exists in large corpora. To find it, we introduce a method that uses query augmentation to search for a diverse set of passages that could answer the original question but are less likely to have been poisoned. We integrate these new passages into the model through the design of a novel confidence method, comparing the predicted answer to its appearance in the retrieved contexts (what we call Confidence from Answer Redundancy, i.e. CAR). Together these methods allow for a simple but effective way to defend against poisoning attacks that provides gains of nearly 20% exact match across varying levels of data poisoning/knowledge conflicts.

Citations (3)

Summary

  • The paper presents novel defense strategies leveraging data redundancy to mitigate disinformation attacks in ODQA systems.
  • It employs query augmentation to diversify information retrieval and achieves a nearly 20% boost in exact match scores under adversarial conditions.
  • The study highlights the effectiveness of combining query diversification with redundant answer validation to enhance system reliability.

Introduction

Open-Domain Question Answering (ODQA) systems, designed to retrieve information from extensive corpora, face significant challenges from adversarial attacks, especially disinformation. These attacks compromise the integrity of responses by poisoning the data sources the systems rely on. As ODQA systems are increasingly deployed in real-world scenarios, securing them against such vulnerabilities has become paramount.

Challenge of Data Poisoning in ODQA

Recent findings have demonstrated the susceptibility of ODQA systems to adversarial poisoning, causing notable accuracy declines in production environments. These attacks typically manipulate source documents or inject fabricated information, misleading the systems into generating incorrect answers. Despite the gravity of this issue, defenses against such manipulation had received little attention prior to this work.

Novel Defense Mechanisms

An approach introduced by Johns Hopkins University researchers leverages the redundancy inherent in large corpora to counteract disinformation. The defense comprises two methods:

  1. Query Augmentation: This technique diversifies retrieval by generating alternative phrasings of the question that seek the same answer. Because an attacker typically poisons passages matching the original query, these augmented queries are less likely to retrieve tainted passages, increasing the chance of surfacing accurate information.
  2. Confidence from Answer Redundancy (CAR): A novel confidence assessment method that evaluates the reliability of an answer based on its recurrence in the retrieved documents. This method assumes that a correct answer is likely to appear across multiple sources, adding an extra layer of validation.
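The interaction of the two methods can be illustrated with a short sketch. This is not the authors' implementation: the function names, the pooling strategy, and the acceptance threshold are all hypothetical, and `retrieve`, `augment_queries`, and `reader` stand in for whatever retriever, query-rewriting model, and reader a real system would use.

```python
def car_confidence(candidate_answer, passages):
    """CAR-style score: fraction of retrieved passages that contain
    the candidate answer string (case-insensitive substring match)."""
    if not passages:
        return 0.0
    answer = candidate_answer.lower()
    hits = sum(1 for p in passages if answer in p.lower())
    return hits / len(passages)

def answer_with_defense(question, retrieve, augment_queries, reader,
                        threshold=0.3):
    """Sketch of the combined defense: retrieve for the original question
    plus augmented variants, pool the passages so poisoned documents are
    diluted by redundant evidence, then gate the answer on CAR."""
    queries = [question] + augment_queries(question)
    passages = [p for q in queries for p in retrieve(q)]
    candidate = reader(question, passages)
    confidence = car_confidence(candidate, passages)
    # Low redundancy suggests the answer may stem from a poisoned passage.
    return candidate, confidence, confidence >= threshold
```

The key design choice mirrored here is that confidence is derived from agreement across independently retrieved contexts rather than from the reader's own logits, which an attacker can sway with a single convincing poisoned passage.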

Performance and Evaluation

The proposed methods exhibited remarkable performance improvements across various levels of data poisoning. Through extensive experiments involving query augmentation and the CAR strategy, the researchers reported nearly a 20% increase in exact match scores, even in heavily poisoned environments. This advancement not only showcases the potential of leveraging data redundancy and query diversification but also marks a significant step forward in defending ODQA systems against misinformation attacks.

Conclusion

The findings from Johns Hopkins University provide a promising avenue for enhancing the robustness of ODQA systems against data poisoning. The introduction of query augmentation and the CAR method offers a simple yet effective framework for safeguarding information integrity, underscoring the critical role of innovative defense strategies in the era of advanced AI and machine learning technologies. As these systems continue to evolve, developing robust defenses against adversarial attacks will be crucial in ensuring their reliability and trustworthiness in real-world applications.

Future Directions

While this research marks a significant stride in defending ODQA systems against misinformation, it primarily focuses on entities and information that are widely represented in data sources. Future investigations could extend these defense mechanisms to less popular entities, further enhancing the resilience of ODQA systems. As adversarial tactics continue to advance, continuous efforts in fortifying these systems against emerging threats will be essential in maintaining their efficacy and reliability in providing accurate information.

