Whisper Leak: Side-Channel Attack Exposes AI Chat Topics From Encrypted Traffic

November 11th, 2025

Severity: High

Our Cyber Threat Intelligence Unit is monitoring “Whisper Leak,” a recently disclosed privacy threat identified by Microsoft researchers. This side-channel attack allows an on-path adversary to infer the topic of an AI conversation even when the traffic is fully protected by TLS encryption. The attack exploits the streaming behavior of large language model (LLM) APIs, where token-by-token output produces distinct packet sizes and timing intervals. By analyzing these encrypted traffic patterns, an observer such as an ISP, a compromised router, or a malicious Wi-Fi access point can accurately determine whether a user is discussing sensitive topics (e.g., legal, political, financial, or medical matters). Microsoft’s coordinated disclosure (November 2025) confirms collaboration with vendors including OpenAI, Mistral, Microsoft, and xAI, which have implemented mitigations for affected endpoints. No active exploitation has been reported to date, though the underlying risk remains for unmitigated or future LLM deployments.

Technical Details

  • Attack Type: Passive side-channel attack on encrypted streaming LLM traffic using packet-size and timing analysis.

  • Severity: High.

  • Delivery Method: Requires only passive network visibility into encrypted packets (e.g., ISP-level, compromised router, corporate proxy, malicious Wi-Fi AP); attackers do not need to intercept, modify, or decrypt data.

  • Attack Chain:

    • A user interacts with an AI model (e.g., ChatGPT, Claude, Gemini) over standard HTTPS/TLS.

    • As the model streams responses, encrypted packets reveal distinctive size and timing patterns.

    • An on-path observer records only metadata (no plaintext content).

    • A trained machine-learning classifier (LightGBM, Bi-LSTM, or BERT-based) maps those metadata features to specific topic categories (e.g., medical, legal, financial, cybersecurity, political).

    • The result allows real-time or batch inference of conversation themes despite full encryption.

  • Techniques Observed:

    • Packet-size fingerprinting: Exploiting characteristic payload lengths during streaming output.

    • Timing analysis: Measuring inter-packet intervals that mirror model response cadence.

    • Metadata classification: Using machine learning models to distinguish sensitive vs. benign topics with > 98% accuracy in PoC tests.

    • Multi-provider exposure: Proven effective against multiple LLM vendors’ APIs with similar streaming architectures; resistance varies based on batching and obfuscation controls.
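
The chain above can be sketched end to end. The following is a minimal, hedged illustration on synthetic data: TLS record sizes are modeled as token byte length plus a fixed overhead (an assumption, not real TLS framing), features are simple size/timing statistics, and a toy nearest-centroid rule stands in for the LightGBM, Bi-LSTM, or BERT-based classifiers used in the actual research. All token sequences, timings, and constants are invented for illustration.

```python
# Hypothetical sketch of Whisper Leak-style topic inference from
# encrypted-stream metadata only. Synthetic data; nearest-centroid
# classifier stands in for the research's ML models.
import statistics

RECORD_OVERHEAD = 22  # assumed per-record TLS framing/tag bytes

def record_sizes(tokens):
    """Ciphertext lengths an on-path observer sees: absent padding,
    each streamed token's record size tracks its plaintext length."""
    return [len(t.encode()) + RECORD_OVERHEAD for t in tokens]

def features(sizes, gaps_ms):
    """Collapse one stream's metadata into mean/stdev of record sizes
    and inter-packet gaps -- the only inputs the attacker needs."""
    return (statistics.mean(sizes), statistics.pstdev(sizes),
            statistics.mean(gaps_ms), statistics.pstdev(gaps_ms))

def train_centroids(labelled):
    """Average the feature vectors per topic label (toy classifier)."""
    return {label: tuple(statistics.mean(col) for col in
                         zip(*[features(s, g) for s, g in traces]))
            for label, traces in labelled.items()}

def classify(sizes, gaps_ms, centroids):
    """Assign the topic whose centroid is nearest in feature space."""
    vec = features(sizes, gaps_ms)
    return min(centroids,
               key=lambda l: sum((a - b) ** 2
                                 for a, b in zip(vec, centroids[l])))

# Synthetic training traces: (record sizes, inter-packet gaps in ms).
medical_tokens = ["The", " diagnosis", " of", " hypertension", " requires"]
casual_tokens = ["Hi", "!", " How", " are", " you"]
labelled = {
    "medical": [(record_sizes(medical_tokens), [45, 50, 48, 52, 47])] * 3,
    "casual": [(record_sizes(casual_tokens), [15, 12, 18, 14, 16])] * 3,
}
centroids = train_centroids(labelled)

# An unseen encrypted stream: the observer never sees plaintext, only
# sizes and timings, yet recovers the likely topic.
probe_sizes = record_sizes(["A", " treatment", " plan", " includes"])
print(classify(probe_sizes, [46, 49, 51, 48], centroids))  # → medical
```

The point of the sketch is that nothing in it touches plaintext or keys: size and timing metadata alone separate the two topic classes, which is exactly the leakage the mitigations below target.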

Impact

  • Data Privacy & Security: Adversaries can infer whether encrypted AI chats concern specific sensitive or regulated topics. This undermines user and enterprise confidentiality even when TLS is intact.

  • Operational Risk: Organizations relying on AI for confidential workflows may face reputational and operational impacts if end users lose trust in privacy protections.

  • Regulatory Exposure: Topic inference may constitute a data leak under frameworks such as GDPR, HIPAA, or financial privacy regulations, since metadata itself can reveal protected information.

Detection Method

  • Network Controls:

    • Detect and investigate unauthorized packet capture, TLS interception, or sniffing tools on internal networks.

    • Monitor for unusual traffic patterns that may indicate data collection or eavesdropping activities.

  • Policy & Configuration:

    • Maintain an inventory of applications using streaming LLM endpoints; restrict such behavior in sensitive workflows.

    • Verify that Whisper Leak mitigations (e.g., randomized padding or token-output obfuscation) are active on all enterprise AI integrations.
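
The randomized-padding mitigation referenced above can be illustrated with a minimal sketch. All names and sizes here are invented (MAX_PAD and the framing simplification are assumptions, not any vendor's actual parameters): each streamed chunk receives a random amount of filler before encryption, so observed record sizes no longer track token lengths.

```python
# Hypothetical sketch of the randomized-padding mitigation: random
# filler is appended to each streamed chunk before encryption, jittering
# the size profile an on-path observer can collect.
import secrets

MAX_PAD = 64  # assumed upper bound on per-record padding bytes

def padded_size(token, max_pad=MAX_PAD):
    """Observable record size after adding 0..max_pad random filler
    bytes (TLS framing overhead omitted for brevity)."""
    return len(token.encode()) + secrets.randbelow(max_pad + 1)

tokens = ["The", " diagnosis", " of", " hypertension"]
print([padded_size(t) for t in tokens])  # sizes jittered per record
```

With per-record jitter comparable to the spread between topic classes, the size features a classifier relies on become far noisier; batching several tokens per record is a complementary control that also blunts the timing channel.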

Indicators of Compromise

No Indicators of Compromise (IOCs) are associated with this advisory.

Recommendations

For Organizations:

  • Confirm with AI providers that mitigations for Whisper Leak have been implemented across all production APIs.

  • Enforce controls to limit streaming output in high-sensitivity contexts (legal, healthcare, finance).

  • Audit and restrict administrative access to network devices capable of capturing encrypted traffic.

  • Integrate LLM traffic privacy reviews into vendor risk assessments and compliance checks.

For Users:

  • Avoid discussing sensitive or regulated information over AI services on untrusted or public networks.

  • Use a reputable VPN to reduce exposure to on-path observation.

  • Prefer non-streaming or mitigated inference modes when confidentiality is paramount.

Conclusion

The “Whisper Leak” disclosure highlights how encrypted AI communications can still expose valuable metadata through side-channel analysis. Although major providers have implemented mitigations and no active exploitation is known, this research shows that AI traffic must be protected as rigorously as any other sensitive workload. We urge organizations to confirm vendor mitigations, harden network monitoring policies, and treat LLM streaming channels as potential vectors for privacy leakage.
