
GASLITEing the Retrieval: Exploring Vulnerabilities in Dense Embedding-based Search (2412.20953v1)

Published 30 Dec 2024 in cs.CR and cs.CL

Abstract: Dense embedding-based text retrieval – retrieval of relevant passages from corpora via deep learning encodings – has emerged as a powerful method attaining state-of-the-art search results and popularizing the use of Retrieval Augmented Generation (RAG). Still, like other search methods, embedding-based retrieval may be susceptible to search-engine optimization (SEO) attacks, where adversaries promote malicious content by introducing adversarial passages to corpora. To faithfully assess and gain insights into the susceptibility of such systems to SEO, this work proposes the GASLITE attack, a mathematically principled gradient-based search method for generating adversarial passages without relying on the corpus content or modifying the model. Notably, GASLITE's passages (1) carry adversary-chosen information while (2) achieving high retrieval ranking for a selected query distribution when inserted to corpora. We use GASLITE to extensively evaluate retrievers' robustness, testing nine advanced models under varied threat models, while focusing on realistic adversaries targeting queries on a specific concept (e.g., a public figure). We found GASLITE consistently outperformed baselines by ≥140% success rate, in all settings. Particularly, adversaries using GASLITE require minimal effort to manipulate search results – by injecting a negligible amount of adversarial passages (≤0.0001% of the corpus), they could make them visible in the top-10 results for 61-100% of unseen concept-specific queries against most evaluated models. Inspecting variance in retrievers' robustness, we identify key factors that may contribute to models' susceptibility to SEO, including specific properties in the embedding space's geometry.

Summary

  • The paper introduces GASLITE, a new attack demonstrating how minimal adversarial content can effectively manipulate search rankings in dense embedding retrieval systems.
  • GASLITE outperformed baseline attacks by at least 140% in success rate across all settings; injecting ≤0.0001% of the corpus sufficed to place adversarial passages in the top-10 results for 61-100% of unseen concept-specific queries against most evaluated models.
  • The findings highlight the urgent need for enhanced security in dense retrieval systems, particularly those using public data, suggesting future work on defense mechanisms, model robustness, and hybrid retrieval techniques.

Analyzing Vulnerabilities in Dense Embedding-based Retrieval Systems through the Lens of Adversarial SEO Attacks

The paper examines significant vulnerabilities in dense embedding-based retrieval systems, with a specific focus on adversarial SEO attacks. The research provides a systematic assessment of dense retrieval's susceptibility to malicious content promotion, using the proposed gradient-based GASLITE attack.

Dense embedding-based retrieval, a powerful approach harnessing deep learning representations to rank text passages, underlies many state-of-the-art systems such as Retrieval Augmented Generation (RAG). Despite their efficacy, these systems are open to manipulation by adversaries. This research highlights how malicious actors can promote specific content by injecting adversarial passages into the corpus without altering the model or accessing the corpus content.
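At its core, such retrieval reduces to nearest-neighbor search in embedding space. The following minimal sketch (with toy vectors standing in for real encoder outputs) shows why a single well-placed adversarial passage can dominate a ranking:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_passages(query_vec, corpus):
    """Rank (passage_id, embedding) pairs by similarity to the query."""
    scored = [(pid, cosine(query_vec, vec)) for pid, vec in corpus]
    return sorted(scored, key=lambda t: t[1], reverse=True)

corpus = [
    ("p1", [0.9, 0.1, 0.0]),
    ("p2", [0.1, 0.9, 0.1]),
    ("adversarial", [0.95, 0.05, 0.0]),  # crafted to sit near the query
]
query = [1.0, 0.0, 0.0]
ranking = rank_passages(query, corpus)
print(ranking[0][0])  # the crafted passage ranks first
```

In a real system the vectors would come from a trained encoder and the search would use an approximate nearest-neighbor index, but the ranking principle, and hence the attack surface, is the same.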

Methodology

The researchers introduce an attack named GASLITE—Gradient-based Approximated Search for maLIcious Text Embeddings. Unlike prior attempts, the attack generates adversarial passages through mathematically principled optimization in the embedding space: it crafts a passage whose embedding lands close to the embeddings of a targeted query distribution. Leveraging gradient-based discrete optimization in the style of HotFlip, GASLITE ensures the adversarial content attains a high retrieval ranking for the targeted queries. The attack was extensively evaluated against nine advanced retrieval models under different threat models, focusing on scenarios involving queries related to specific concepts, such as public figures.
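As a rough illustration of the HotFlip-style step (using toy embeddings and a hand-picked gradient, not the paper's actual implementation), a candidate token substitution can be scored by a first-order estimate of its effect on the attack objective:

```python
def best_substitution(cur_token, grad, vocab_embeddings):
    """Score each candidate token v by the first-order estimate
    (e_v - e_cur) . grad of the change in the attack objective,
    and return the candidate with the largest estimated gain."""
    e_cur = vocab_embeddings[cur_token]
    best_tok, best_gain = cur_token, 0.0
    for tok, e_v in vocab_embeddings.items():
        gain = sum((ev - ec) * g for ev, ec, g in zip(e_v, e_cur, grad))
        if gain > best_gain:
            best_tok, best_gain = tok, gain
    return best_tok, best_gain

# Toy 2-d token embeddings; in practice these come from the retriever.
vocab_embeddings = {
    "the": [0.1, 0.0],
    "celebrity": [0.8, 0.3],
    "random": [-0.2, 0.5],
}
# Hand-picked gradient of the similarity objective w.r.t. the token embedding.
grad = [1.0, 0.0]
tok, gain = best_substitution("the", grad, vocab_embeddings)
print(tok)  # the substitution with the largest estimated objective gain
```

A full attack would compute `grad` by backpropagation through the encoder, iterate this substitution step over all positions of the adversarial suffix, and verify candidates with exact forward passes; this sketch only shows the scoring idea.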

Key Findings

The GASLITE attack demonstrates significant potency, outperforming existing baselines' success rates by at least 140% in every evaluated setting. Notably, adversaries needed to inject only a minuscule fraction of the corpus (less than 0.0001%) to surface in the top-10 search results for 61-100% of concept-specific queries across the models tested.

Further scrutiny of the models revealed insights into their varying resilience against SEO attacks. This variability is attributed to factors such as embedding space geometry and chosen similarity metrics. For instance, the efficacy of the attack suggests that embedding models with denser vector distributions are more prone to manipulation. These insights provide a foundation for future enhancements in model robustness against adversarial attacks.
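One rough diagnostic for this geometric factor (an illustration, not the paper's analysis) is the mean pairwise cosine similarity of a model's embeddings; higher values indicate a denser, more anisotropic distribution that a single adversarial vector can cover more easily:

```python
import math
from itertools import combinations

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mean_pairwise_cosine(vectors):
    """Average cosine similarity over all pairs of embeddings."""
    pairs = list(combinations(vectors, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# A tight cluster of directions (denser) vs. a spread-out set.
dense = [[1.0, 0.1], [1.0, 0.0], [0.9, 0.2]]
spread = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.1]]
print(mean_pairwise_cosine(dense) > mean_pairwise_cosine(spread))
```

Applied to real query embeddings for a concept, such a statistic gives a quick sense of how concentrated the target region of the embedding space is.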

Practical and Theoretical Implications

From a practical standpoint, this paper raises alarms about the security of systems relying on public corpora such as Wikipedia. The attack's success in altering retrieval results with minimal effort points to the urgent need for more robust detection and defense mechanisms. Defense strategies such as filtering passages by language-model perplexity or by inter-passage similarity partially mitigate the threat, but often at the cost of retrieval effectiveness.
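A perplexity-based filter of the kind mentioned above could be sketched as follows, where `perplexity_fn` is a stand-in for a real language-model scorer and the threshold value is purely illustrative:

```python
def filter_corpus(passages, perplexity_fn, threshold):
    """Drop passages whose perplexity exceeds the threshold.
    perplexity_fn is a stand-in for a real language-model scorer."""
    return [p for p in passages if perplexity_fn(p) <= threshold]

# Toy scores: adversarial token-soup tends to have unusually high perplexity.
toy_scores = {"natural passage": 35.0, "zx!! qqv token-soup attack": 900.0}
kept = filter_corpus(list(toy_scores), toy_scores.get, threshold=100.0)
print(kept)
```

The trade-off noted above shows up directly in the threshold choice: set it too low and legitimate but unusual passages are discarded alongside the attacks.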

Theoretically, the contribution sheds light on the embedding models' susceptibility to adversarial optimization, paving the path for further research into defense mechanisms against poisoning attacks. This insight is crucial as the field progresses toward deploying large-scale, realistic retrieval applications where the consequences of adversarial attacks could be dire.

Future Directions

Future explorations may focus on enhancing the mathematical frameworks of retrieval embedding models to mitigate vulnerabilities exposed by attacks like GASLITE. Additionally, investigating the interplay between embedding anisotropy and adversarial attack robustness could provide significant breakthroughs in architecting secure retrieval systems. A promising direction includes developing hybrid retrieval techniques that balance dense and sparse retrieval characteristics, making systems less susceptible to targeted attacks.
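Such a hybrid scorer could, for instance, linearly interpolate dense and lexical relevance scores; the weighting below is illustrative, not taken from the paper:

```python
def hybrid_score(dense_score, sparse_score, alpha=0.5):
    """Convex combination of normalized dense and sparse relevance scores."""
    return alpha * dense_score + (1 - alpha) * sparse_score

# An adversarial passage may match the query embedding well (high dense
# score) while sharing few actual query terms (low sparse/lexical score).
adversarial = hybrid_score(dense_score=0.99, sparse_score=0.05)
benign = hybrid_score(dense_score=0.90, sparse_score=0.80)
print(benign > adversarial)  # lexical grounding demotes the crafted passage
```

The intuition is that an attacker now has to win on both signals at once: fooling the dense embedding alone is no longer sufficient when a term-matching component such as BM25 anchors the ranking.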

In conclusion, this paper lays out a clear method for exposing and exploiting vulnerabilities in dense retrieval systems via adversarial SEO attacks, offering both an intriguing research challenge and a call to action for improving AI security practices.
