I Know What You Said: Unveiling Hardware Cache Side-Channels in Local Large Language Model Inference (2505.06738v3)

Published 10 May 2025 in cs.CR

Abstract: LLMs that can be deployed locally have recently gained popularity for privacy-sensitive tasks, with companies such as Meta, Google, and Intel playing significant roles in their development. However, the security of local LLMs through the lens of hardware cache side-channels remains unexplored. In this paper, we unveil novel side-channel vulnerabilities in local LLM inference: token value and token position leakage, which can expose both the victim's input and output text, thereby compromising user privacy. Specifically, we found that adversaries can infer the token values from the cache access patterns of the token embedding operation, and deduce the token positions from the timing of autoregressive decoding phases. To demonstrate the potential of these leaks, we design a novel eavesdropping attack framework targeting both open-source and proprietary LLM inference systems. The attack framework does not directly interact with the victim's LLM and can be executed without privilege. We evaluate the attack on a range of practical local LLM deployments (e.g., Llama, Falcon, and Gemma), and the results show that our attack achieves promising accuracy. The restored output and input text have an average edit distance of 5.2% and 17.3% to the ground truth, respectively. Furthermore, the reconstructed texts achieve average cosine similarity scores of 98.7% (input) and 98.0% (output).

Summary

Analyzing Hardware Cache Side-Channel Vulnerabilities in Local LLM Inference

This paper addresses a previously unexplored class of vulnerabilities in local LLM inference: hardware cache side-channel attacks. The research focuses on the risk that these channels leak sensitive input and output text during inference performed locally on the victim's machine. The analysis identifies two key leakage mechanisms: token value leakage derived from cache access patterns, and token position leakage derived from the timing of the autoregressive decoding phases.
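
To make the second mechanism concrete, the sketch below (not the paper's actual framework) illustrates how an eavesdropper who can timestamp bursts of cache activity tied to each decoding iteration recovers token positions simply from the ordering of those bursts; the gap threshold and the `segment_decode_steps` helper are hypothetical.

```python
# Minimal illustrative sketch (not the paper's framework): if an eavesdropper
# can timestamp bursts of cache activity associated with each autoregressive
# decoding step, the ordinal index of each burst gives the position of the
# token generated in that step.

def segment_decode_steps(event_times_ms, gap_threshold_ms=5.0):
    """Group raw probe timestamps into per-step bursts; the gap threshold
    (hypothetical value) separates consecutive decoding iterations."""
    steps, current = [], [event_times_ms[0]]
    for t in event_times_ms[1:]:
        if t - current[-1] > gap_threshold_ms:
            steps.append(current)
            current = [t]
        else:
            current.append(t)
    steps.append(current)
    return steps

# Synthetic probe timestamps: three bursts -> three decoded token positions.
timestamps = [0.1, 0.3, 0.5, 12.0, 12.2, 24.1, 24.4, 24.6]
for position, burst in enumerate(segment_decode_steps(timestamps)):
    print(f"token position {position}: burst spans {burst[0]:.1f}-{burst[-1]:.1f} ms")
```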

Key Findings and Methodology

The paper introduces a novel framework to exploit these vulnerabilities, demonstrating an eavesdropping attack capable of reconstructing both input and output text without direct interaction with the victim's LLM. Notably, the paper finds that local LLMs like Llama, Falcon, and Gemma, popular in privacy-sensitive applications, are not immune to these threats. The reconstructed texts achieve average cosine similarity scores of 98.7% (input) and 98.0% (output) with the ground truth, and average edit distances of 17.3% (input) and 5.2% (output), indicating substantial leakage potential.
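
For readers reproducing these metrics, the sketch below shows one standard way to compute them; the paper's exact definitions (character- vs. token-level edit distance, and which text embedding underlies the cosine similarity) are not spelled out in this summary, so character-level Levenshtein distance and term-frequency cosine similarity are used as stand-ins.

```python
# Hedged sketch of the two reported metrics: character-level Levenshtein
# distance normalized by the longer string, and cosine similarity over
# simple term-frequency vectors. These are common choices, not necessarily
# the exact definitions used in the paper.
from collections import Counter
import math

def normalized_edit_distance(a: str, b: str) -> float:
    """Levenshtein distance divided by the length of the longer string."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1] / max(len(a), len(b), 1)

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between term-frequency vectors of the two texts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

ground_truth = "please summarize my medical report"
reconstructed = "please summarize my medical report s"
print(normalized_edit_distance(ground_truth, reconstructed))
print(cosine_similarity(ground_truth, reconstructed))
```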

The research identifies two distinct features of LLM inference that give rise to these side channels. First, the token embedding operation, which maps each text token to a model-compatible vector by indexing into a large embedding table, inadvertently reveals token values through its data access pattern. Second, the autoregressive nature of the generation stage introduces a temporal dimension that can be exploited to discern token positions in the sequence.
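
The first mechanism can be illustrated with a minimal sketch: because the embedding table is indexed by token ID, reading a token's row touches a specific, predictable set of cache lines, so an observer who learns which lines were accessed can narrow down, or fully recover, the token ID. All concrete numbers below (embedding width, weight precision, vocabulary size, 64-byte cache lines) are hypothetical placeholders rather than values from the paper.

```python
# Illustrative sketch of why the embedding lookup leaks token values: the
# embedding table is indexed by token ID, so reading a token's row touches a
# set of cache lines whose offsets are a deterministic function of the ID.
# All constants below are hypothetical placeholders, not from the paper.

CACHE_LINE_BYTES = 64
EMBED_DIM = 4096          # hypothetical embedding width
BYTES_PER_WEIGHT = 2      # hypothetical fp16 weights
ROW_BYTES = EMBED_DIM * BYTES_PER_WEIGHT

def cache_lines_for_token(token_id: int) -> range:
    """Cache-line indices (within the embedding table) touched when the
    row for `token_id` is read during the embedding lookup."""
    start = token_id * ROW_BYTES
    end = start + ROW_BYTES
    return range(start // CACHE_LINE_BYTES,
                 (end + CACHE_LINE_BYTES - 1) // CACHE_LINE_BYTES)

def candidate_tokens(observed_line: int) -> list[int]:
    """Invert the mapping: which token IDs could explain a hit on a given
    cache line of the embedding table? With rows spanning many lines and
    no overlap, a single observed line pins down one token ID here."""
    return [tid for tid in range(32000)           # hypothetical vocab size
            if observed_line in cache_lines_for_token(tid)]

lines = cache_lines_for_token(42)
print(lines)                           # range(5376, 5504): 128 lines per row
print(candidate_tokens(lines.start))   # -> [42]
```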

Challenges and Solutions

The authors articulate two primary challenges in implementing the attack. First, noise in cache measurements produces false positives and false negatives, complicating the accurate retrieval of token values. Second, because LLMs process input tokens in parallel rather than one at a time, the ordering of input tokens is obscured, making it difficult to reconstruct the original text order.

To address these issues, the paper combines signal processing with deep learning. A novel text reconstruction algorithm incorporating Power Spectral Density (PSD) analysis mitigates the impact of measurement noise, and LLMs fine-tuned on synthesized datasets that emulate the expected cache access and timing patterns further improve the precision of text reconstruction.
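
As an illustration of how PSD analysis can help, the hedged sketch below uses Welch's method (`scipy.signal.welch`) to estimate the dominant periodicity of decode-step activity in a noisy probe trace, which could then be used to discard spurious cache hits. The sampling rate, decoding rate, and trace construction are invented for the example and do not reflect the paper's actual measurements or pipeline.

```python
# Hedged sketch: one plausible use of Power Spectral Density analysis,
# estimating the dominant period of decode-step activity in a noisy probe
# trace so that spurious cache hits (false positives) can be filtered out.
# Sampling rate, decode rate, and trace shape are made-up example values.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                # hypothetical probe sampling rate (Hz)
t = np.arange(0, 5.0, 1.0 / fs)
decode_rate_hz = 25.0                      # hypothetical ~25 tokens/s decoding
trace = (np.sin(2 * np.pi * decode_rate_hz * t) > 0.95).astype(float)  # periodic hits
trace += np.random.default_rng(0).random(t.size) < 0.02                # random noise hits

freqs, psd = welch(trace, fs=fs, nperseg=1024)
dominant = freqs[np.argmax(psd[1:]) + 1]   # skip the DC bin
print(f"estimated decode-step rate: {dominant:.1f} Hz")
```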

Implications and Future Prospects

The implications of this research are multifaceted. Practically, it underscores the need to re-evaluate local LLM deployments, especially where security and privacy are paramount. Theoretically, it expands the discourse on how side-channel vulnerabilities affect machine learning models, particularly those deployed in edge scenarios.

The paper also lays the groundwork for future research aimed at improving model resilience against such vulnerabilities. Directions include integrating more robust mitigations against cache-based side channels, exploring alternative architectures or processing strategies that obscure potential leakage paths, and examining whether other machine learning paradigms are susceptible to similar attacks, which could yield broader insights into secure model deployment practices.

In conclusion, this research highlights a critical area of concern for LLM security, demonstrating the potential for significant privacy breaches if proper safeguards are not implemented. This paper offers a thorough investigation into the risks and provides a robust framework that can serve as a basis for both remediation and further inquiry.
