Whisper it - Microsoft uncovers sneaky new attack targeting top LLMs to gain access to encrypted traffic
Date:
Tue, 11 Nov 2025 20:04:00 +0000
Description:
Microsoft warns AI chat traffic can leak conversation topics through data patterns, exposing privacy flaws even under full encryption.
FULL STORY ======================================================================
Microsoft finds Whisper Leak shows privacy flaws inside encrypted AI systems
Encrypted AI chats may still leak clues about what users discuss
Attackers can track conversation topics using packet size and timing
Microsoft has revealed a new type of cyberattack it has called "Whisper
Leak", which is able to expose the topics users discuss with AI chatbots,
even when conversations are fully encrypted.
The company's research suggests attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to
infer what is being discussed.
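To make the idea concrete, here is a minimal, purely illustrative sketch of how such an observer might operate: it never touches plaintext, only the size and timing of encrypted packets, and compares each trace against topics it has profiled in advance. The toy data, feature choices, and nearest-centroid classifier below are our own assumptions, not Microsoft's actual method.

from statistics import mean

def featurize(trace):
    # Turn a list of (packet_size_bytes, inter_arrival_seconds) pairs into a
    # small feature vector: packet count, mean size, max size, total gap time.
    sizes = [s for s, _ in trace]
    gaps = [t for _, t in trace]
    return (len(trace), mean(sizes), max(sizes), sum(gaps))

# Labelled traces an attacker could collect by querying the chatbot themselves
# about known topics (values are invented for this example).
training = {
    "medical": [[(212, 0.04), (180, 0.05), (240, 0.06), (198, 0.05)]],
    "weather": [[(95, 0.02), (88, 0.02), (90, 0.03)]],
}

centroids = {}
for topic, traces in training.items():
    feats = [featurize(t) for t in traces]
    centroids[topic] = tuple(mean(f[i] for f in feats) for i in range(4))

def guess_topic(trace):
    # Nearest-centroid guess based on metadata alone; nothing is decrypted.
    f = featurize(trace)
    return min(centroids, key=lambda topic: sum((a - b) ** 2
                                                for a, b in zip(f, centroids[topic])))

print(guess_topic([(205, 0.05), (190, 0.04), (230, 0.06)]))  # prints "medical"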
"If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics," Microsoft said. Whisper Leak attacks
This means "encrypted" doesn't necessarily mean invisible - the vulnerability lies in how LLMs send responses.
These models do not wait for a complete reply, but transmit data incrementally, creating small patterns that attackers can analyze.
Over time, as they collect more samples, these patterns become clearer, allowing more accurate guesses about the nature of conversations.
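A rough way to picture why streaming leaks anything at all: transport encryption such as TLS broadly preserves payload length, so a toy model of token-by-token delivery already produces distinctive size sequences. The fixed overhead value and sample replies below are assumptions for illustration only.

TLS_RECORD_OVERHEAD = 29  # assumed fixed per-record overhead, illustration only

def observed_record_sizes(streamed_tokens):
    # The ciphertext sizes an on-path observer would log for a streamed reply,
    # assuming each token chunk becomes one record and encryption adds a
    # constant overhead without hiding payload length.
    return [len(token.encode("utf-8")) + TLS_RECORD_OVERHEAD
            for token in streamed_tokens]

reply_about_measles = ["The", " symptoms", " of", " measles", " include", " fever"]
reply_about_weather = ["Sunny", ",", " 21", " degrees", "."]

print(observed_record_sizes(reply_about_measles))
print(observed_record_sizes(reply_about_weather))
# Two different answers produce two visibly different size sequences,
# which is the raw material for the inference described above.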
This technique doesn't decrypt messages directly but exposes enough metadata
to make educated inferences, which is arguably just as concerning.
Following Microsoft's disclosure, OpenAI, Mistral, and xAI all said they moved quickly to deploy mitigations.
One solution adds a random sequence of text of variable length to each response, disrupting the consistency of token sizes that attackers rely on.
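A minimal sketch of that kind of padding, assuming the provider wraps each streamed chunk in a random-length filler field before it is encrypted (the field names here are invented, not any vendor's actual format):

import json
import secrets
import string

def pad_chunk(token, max_padding=64):
    # Attach a random-length filler string to the chunk before it is
    # serialized and encrypted, so ciphertext size no longer tracks token size.
    pad_len = secrets.randbelow(max_padding + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"token": token, "padding": filler})

for token in ["The", " symptoms", " of", " measles"]:
    print(len(pad_chunk(token)))  # sizes now vary independently of the tokens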
However, Microsoft still advises users to avoid sensitive discussions over public Wi-Fi, use a VPN, or stick with non-streaming models of LLMs.
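On the user side, turning off streaming is often just a request flag. As one example, the OpenAI Python SDK accepts stream=False so the whole reply arrives as a single payload rather than many small chunks; other providers expose similar options.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise today's headlines."}],
    stream=False,  # the full answer arrives as one payload, not a token stream
)
print(response.choices[0].message.content)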
The findings come alongside new tests showing that several open-weight LLMs remain vulnerable to manipulation, especially during multi-turn
conversations.
Researchers from Cisco AI Defense found even models built by major companies struggle to maintain safety controls once the dialogue becomes complex.
Some models, they said, displayed a systemic inability to maintain safety guardrails across extended interactions.
In 2024, reports surfaced that an AI chatbot leaked over 300,000 files containing personally identifiable information, and hundreds of LLM servers were left exposed, raising questions about how secure AI chat platforms
truly are.
Traditional defenses, such as antivirus software or firewall protection, cannot detect or block side-channel leaks like Whisper Leak, and these discoveries show AI tools can unintentionally widen exposure to surveillance and data inference.
======================================================================
Link to news story:
https://www.techradar.com/pro/whisper-it-microsoft-uncovers-sneaky-new-attack-targeting-top-llms-to-gain-access-to-encrypted-traffic
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)