
Cause of gpt-4o-mini’s limited benefit from token batching mitigation

Determine why the OpenAI gpt-4o-mini deployment exhibits substantially smaller reductions in Whisper Leak attack effectiveness under token batching (e.g., batch sizes of five or more tokens) than most other provider models, for which token batching substantially mitigates the attack.


Background

Token batching aggregates multiple tokens per network event to obscure fine-grained size and timing signals. Across many models, batching notably reduces attack effectiveness, but gpt-4o-mini shows a markedly smaller improvement.
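To make the mitigation concrete, here is a minimal Python sketch of the batching idea. The function name `stream_with_batching`, the `send` callback, and the `batch_size` parameter are illustrative assumptions for this sketch, not the paper's implementation.

```python
def stream_with_batching(token_iter, send, batch_size=5):
    """Illustrative sketch (not the paper's code): group streamed tokens
    into fixed-size batches so each network event carries batch_size
    tokens, coarsening the per-token packet-size and timing signal a
    Whisper Leak-style observer could otherwise exploit."""
    buffer = []
    for token in token_iter:
        buffer.append(token)
        if len(buffer) >= batch_size:
            send("".join(buffer))  # one network write per batch of tokens
            buffer.clear()
    if buffer:
        send("".join(buffer))  # flush any remaining tokens at end of stream
```

With `batch_size=5`, ten streamed tokens produce two network events instead of ten, which is why larger batches generally blunt the attack; the open question is why this coarsening yields so little benefit for gpt-4o-mini.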

The authors explicitly state that the reason for gpt-4o-mini’s relative resistance to batching-based mitigation is unknown, identifying a concrete unresolved question relevant to understanding and improving defenses.

References

"Notably openai-gpt-4o-mini did not observe nearly as large of a benefit for an unknown reason."

Whisper Leak: a side-channel attack on Large Language Models (arXiv:2511.03675, McDonald et al., 5 Nov 2025), Section 6.2 (Token batching mitigation)