New Whisper Leak side-channel attack lets attackers eavesdrop on sensitive LLM conversations



A new type of side-channel attack can allow adversaries to infer the topics of conversations with large language models (LLMs), even when the traffic is protected by strong encryption such as Transport Layer Security (TLS). The attack, dubbed “Whisper Leak,” exploits subtle metadata patterns in network traffic to deduce a user’s discussion topics.

According to Microsoft, the issue poses significant real-world risks, particularly in environments under heavy surveillance by oppressive regimes. Attackers could use Whisper Leak to monitor conversations on politically sensitive topics such as protests, elections, banned materials, or journalistic reports.

The attack does not decrypt the content of encrypted messages; instead, it leverages packet size and timing information to infer what an LLM is discussing with the user. Since large language models generate responses token by token, the structure of the outgoing traffic can inadvertently reveal patterns about the underlying text.
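The leak described above can be illustrated with a minimal sketch (the function name and the fixed-overhead value are assumptions for illustration, not the researchers' code): because TLS adds only a roughly constant per-record overhead, the size of each encrypted chunk in a token-by-token stream tracks the length of the token inside it.

```python
# Hypothetical illustration: per-chunk ciphertext sizes mirror
# plaintext token lengths, since encryption adds a fixed overhead
# per record rather than hiding the payload length.

TLS_RECORD_OVERHEAD = 22  # assumed fixed header + auth-tag bytes

def observed_sizes(tokens, overhead=TLS_RECORD_OVERHEAD):
    """Ciphertext sizes an on-path eavesdropper would observe
    for a response streamed one token per record."""
    return [len(t.encode()) + overhead for t in tokens]

reply = ["Money", " laundering", " is", " illegal"]
print(observed_sizes(reply))
# The resulting size sequence leaks the token-length pattern of
# the response, even though the eavesdropper never decrypts it.
```

Real deployments batch tokens and compress payloads, so the signal is noisier than this sketch, but the correlation between chunk size and token length is the core of what Whisper Leak exploits.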

In their tests, Microsoft researchers simulated an adversary who could only observe encrypted network traffic. By training a binary classifier to detect when a user discussed the “legality of money laundering,” the attack achieved over 98% accuracy across 17 of 28 tested models, with some exceeding 99.9%. According to the researchers, this means an adversary monitoring 10,000 conversations, only one of which touches the targeted topic, could reliably flag that single conversation with virtually no false positives.
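The classifier setup can be sketched as follows. This is a toy stand-in, not the researchers' actual models (they used more capable learners trained on real traffic captures): it reduces each traffic trace to simple packet-size and timing statistics and labels new traces by nearest centroid.

```python
# Toy sketch of the attack's classification step (assumption:
# synthetic traces and a nearest-centroid rule stand in for the
# real models). A trace is a list of (size_bytes, gap_seconds)
# pairs observable without any decryption.

def features(trace):
    """Summarise a trace as (mean packet size, mean inter-arrival gap)."""
    sizes = [s for s, _ in trace]
    gaps = [g for _, g in trace]
    return (sum(sizes) / len(sizes), sum(gaps) / len(gaps))

def centroid(traces):
    """Average feature vector over a set of training traces."""
    feats = [features(t) for t in traces]
    return tuple(sum(col) / len(col) for col in zip(*feats))

def classify(trace, pos_centroid, neg_centroid):
    """Label a trace by whichever centroid its features are closer to."""
    f = features(trace)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "sensitive" if dist(pos_centroid) < dist(neg_centroid) else "other"

# Synthetic training data: "sensitive topic" traces vs. background noise.
pos = [[(120, 0.05), (130, 0.06)], [(125, 0.05), (128, 0.07)]]
neg = [[(60, 0.02), (65, 0.03)], [(58, 0.02), (62, 0.025)]]
pc, nc = centroid(pos), centroid(neg)
print(classify([(122, 0.055), (127, 0.06)], pc, nc))  # → sensitive
```

The point of the sketch is only that size-and-timing metadata alone carries enough signal to separate topics; the published results came from far stronger classifiers.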

The findings suggest that all LLMs are potentially vulnerable, affecting services used for legal, medical, or personal advice.

Microsoft has proposed several mitigations, including random padding, token batching, and packet injection to obscure traffic patterns. Until such mitigations are widely adopted, the researchers urge users to avoid discussing sensitive topics over untrusted networks, use VPNs, and prefer non-streaming modes of LLM services when possible.
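The random-padding mitigation can be sketched like this (the function name and the maximum pad length are illustrative assumptions; exact schemes vary by provider): appending a random-length dummy field to each streamed chunk before serialization decouples the ciphertext size from the token length.

```python
# Sketch of the random-padding mitigation (hypothetical helper, not
# any provider's actual implementation): each streamed chunk gains a
# dummy field of random length, so its serialized -- and therefore
# encrypted -- size no longer tracks the token inside it.
import json
import secrets
import string

def obfuscate(chunk: dict, max_pad: int = 64) -> str:
    """Serialize a response chunk with random-length padding attached."""
    pad = "".join(secrets.choice(string.ascii_letters)
                  for _ in range(secrets.randbelow(max_pad + 1)))
    return json.dumps(dict(chunk, obfuscation=pad))

wire = obfuscate({"token": " laundering"})
# The receiver parses the JSON and simply ignores the padding field;
# an eavesdropper now sees sizes smeared across a random range.
```

Token batching and packet injection attack the same correlation from other directions: batching hides per-token boundaries, while injected dummy packets add noise to the timing channel.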
