Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction (CVE-2025-32711)
June 17th, 2025
Severity Level: Medium

Technical Details
Component Affected: Microsoft 365 Copilot AI integration (Outlook, Word, Teams, etc.).
Vulnerability ID: CVE-2025-32711.
Severity: Medium.
Vulnerability Type: Zero-click data exposure via background AI query processing.
Exploitability: Remote (no user interaction required).
Vector: Malicious Microsoft 365 content (email, document, or Teams message).
The attack scenario involves an adversary embedding malicious payloads within email bodies, shared Word or Excel files, or Teams messages. When these items are received, Copilot automatically reads and analyzes their content to generate previews or suggestions, which may include data leakage from prior sessions or cached context.
Researchers have demonstrated that external entities can exploit Copilot’s summarization features to extract sensitive or extraneous information, even when the target user does not open the email or file. This risk is amplified in enterprise environments, where Copilot may retain user interaction history, context windows, or sensitive cached information.
Our Cyber Threat Intelligence Unit has identified a newly discovered zero-click vulnerability in Microsoft 365's AI-powered Copilot feature that exposes sensitive user data without requiring any user interaction. Identified by cybersecurity researchers at WithSecure, this vulnerability raises significant concerns regarding the privacy and security of how AI assistants manage cached and preloaded content in enterprise environments. The vulnerability allows attackers to silently extract private responses generated by Microsoft 365 Copilot through malicious email previews or collaborative document sharing, leveraging AI processing mechanisms that operate in the background.

Impact
Successful exploitation of this vulnerability can result in:
Unauthorized access to Copilot-generated responses, including insights derived from earlier context.
Inadvertent exposure of sensitive internal data through automated summaries.
Exfiltration of enterprise metadata or confidential discussions from AI previews.
Bypassing user awareness or consent mechanisms typically required for data access.
Risk of compliance violations (e.g., GDPR, HIPAA) due to unapproved data processing.
Expansion of traditional attack vectors to include invisible AI-triggered content management.
The zero-click nature of this vulnerability makes it especially dangerous, as it requires no action from the user, unlike phishing or social engineering attacks.
Detection Method
To determine exposure or signs of exploitation:
Audit Microsoft 365 Copilot logs: Look for AI-generated summaries tied to emails or files that the recipient never opened.
Review audit trails in Outlook, Teams, and SharePoint: Cross-reference timestamps for Copilot access events with user interaction history.
Analyze external message triggers: Identify summarization actions initiated by external users or suspicious collaboration invites.
Security teams should also monitor:
Automated summarization requests from non-standard or anonymous users.
Background on Copilot operations processing content from new senders or first-time collaborators.
Unusual AI API activity patterns associated with inactive sessions or inactive user accounts.
Indicators of Compromise
There are no Indicators of Compromise (IOCs) for this advisory.

Recommendations
Restrict AI Preprocessing Scope: Limit Copilot’s ability to process emails, documents, or messages from unknown or external sources.
Implement Conditional Access Rules: Use Microsoft Defender for Cloud Apps and Microsoft Purview to restrict Copilot activity to trusted zones and verified users.
Disable Preview Features for High-Risk Users: In regulated environments, turn off Copilot’s automatic summarization for specific user groups.
Review and Clean Context Histories: Purge cached memory or session histories from Copilot, where feasible, to prevent leakage of contextual data.
Enhance User Awareness: Educate users and administrators about potential vulnerabilities and promote cautious collaboration practices.
Engage Microsoft Support: Collaborate with Microsoft to implement mitigation steps, configuration adjustments, and patch advisories.
Conclusion
The emergence of a zero-click vulnerability in Microsoft 365 Copilot highlights the growing complexity and potential risks associated with integrating generative AI into everyday enterprise workflows. As these systems develop a deeper understanding of context and memory, they also create new opportunities for silent and indirect data exposure. We urge organizations to reassess how they deploy AI assistants, particularly in environments where privacy, compliance, and information integrity are of paramount importance. Until formal fixes or architectural changes are implemented, the most effective defenses are strict access controls, behavior auditing, and careful configuration of Copilot.