Cause of gpt-4o-mini’s limited benefit from token batching mitigation
Determine why the OpenAI gpt-4o-mini deployment exhibits a substantially smaller reduction in Whisper Leak attack effectiveness under token batching (e.g., batch sizes of five or more tokens) than most other tested provider models, for which batching substantially mitigates the attack.
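Token batching, as evaluated in the paper, groups several generated tokens into each streamed response chunk so that a network observer sees fewer, coarser size-and-timing events per response. The sketch below is a minimal illustration of that idea under my own assumptions, not the paper's or any provider's implementation; the `batch_tokens` helper, its `batch_size` parameter, and the `fake_stream` demo are hypothetical names introduced here.

```python
import asyncio
from typing import AsyncIterator


async def batch_tokens(
    token_stream: AsyncIterator[str], batch_size: int = 5
) -> AsyncIterator[str]:
    """Emit tokens in groups of `batch_size` instead of one at a time.

    Coarsening the stream this way reduces the per-token packet-size and
    inter-arrival-time signal that the Whisper Leak classifier exploits.
    (Hypothetical sketch; not the implementation from the paper.)
    """
    buffer: list[str] = []
    async for token in token_stream:
        buffer.append(token)
        if len(buffer) >= batch_size:
            yield "".join(buffer)  # one network write per batch
            buffer.clear()
    if buffer:
        yield "".join(buffer)  # flush the trailing partial batch


async def _demo() -> None:
    # Hypothetical stand-in for a provider's per-token stream.
    async def fake_stream() -> AsyncIterator[str]:
        for tok in ["The", " quick", " brown", " fox", " jumps", " over", " it"]:
            yield tok

    async for chunk in batch_tokens(fake_stream(), batch_size=5):
        print(repr(chunk))  # two chunks: 5 tokens, then the 2 leftover


if __name__ == "__main__":
    asyncio.run(_demo())
```

Note that even with batching, chunk payload sizes still vary with the tokens they contain, so some exploitable signal can survive; whether residual structure of this kind accounts for the gpt-4o-mini anomaly is precisely what this question asks.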
References
Notably, openai-gpt-4o-mini did not observe nearly as large of a benefit for an unknown reason.
— Whisper Leak: a side-channel attack on Large Language Models
(McDonald et al., arXiv:2511.03675, 5 Nov 2025), Section 6.2 (Token batching mitigation)