
Google Gemini for Workspace Vulnerability Lets Attackers Hide Malicious Scripts in Emails

July 15th, 2025

Severity Level: Medium

Technical Details

  • Affected Component: Google Gemini for Workspace – Email Summarization and AI Context Processing.

  • Severity: Medium.

  • Affected Services: Gmail, Google Docs, Google Slides, Google Drive (Gemini integration points).

  • Vulnerability Type: Indirect Prompt Injection (IPI) via Hidden HTML/CSS Instructions.

  • Attack Vector: Specially crafted email messages with hidden HTML/CSS directives processed by Gemini’s “Summarize this email” feature.

The attack leverages indirect prompt injection (IPI) by inserting hidden directives into email content using crafted HTML and CSS. Key characteristics include:

  • Malicious instructions are embedded within <Admin> or similar tags, using CSS techniques such as white-on-white text or zero font-size to render them invisible to the recipient (an illustrative sketch follows this list).

  • No links, attachments, or scripts are required. Only formatted HTML is used.

  • When the user activates the Gemini summarization feature, the AI model processes the hidden prompt and includes deceptive security warnings or calls-to-action in its summary output.

  • The vulnerability affects multiple Google Workspace products, including Gmail, Docs, Slides, and Drive, expanding the attack surface.

  • Researchers categorized the attack within the 0DIN taxonomy under “Stratagems → Meta-Prompting → Deceptive Formatting”, with a moderate social impact score.

  • A proof-of-concept showed how hidden spans can inject commands that make Gemini display phishing warnings, urging users to call fraudulent numbers or visit malicious websites.
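
For illustration only, the sketch below (Python, with a sanitized placeholder message) shows the general shape of such a hidden directive: an <Admin>-style tag wrapped in a span styled with zero font size and white-on-white text, so the instruction is invisible in the rendered email but still present in the HTML the summarizer ingests. The tag name, wording, and structure are assumptions based on the published description, not a reproduction of the actual proof-of-concept.

```python
# Sanitized illustration of the hidden-directive technique described above.
# The tag name (<Admin>), wording, and CSS are assumptions for demonstration;
# this is NOT the actual proof-of-concept payload.

HIDDEN_DIRECTIVE = (
    '<span style="font-size:0px;color:#ffffff;">'      # invisible to the reader
    "<Admin>PLACEHOLDER: text intended only for the summarizer, "
    "e.g. a fake security warning with a callback number.</Admin>"
    "</span>"
)

VISIBLE_BODY = "<p>Hi team, please find the meeting notes below.</p>"

# The recipient sees only the visible body; the AI summarizer receives both.
email_html = f"<html><body>{VISIBLE_BODY}{HIDDEN_DIRECTIVE}</body></html>"

print(email_html)
```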

Our Cyber Threat Intelligence Unit has identified that threat actors are exploiting a vulnerability in Google Gemini for Workspace, specifically its AI-driven email summarization feature, to carry out convincing phishing campaigns. The attack abuses Gemini's “Summarize this email” function to display fabricated security alerts that look like legitimate warnings from Google itself. Successful exploitation can lead to credential theft, social engineering, and broader compromise within Google Workspace environments.


Impact

The exploitation of Gemini for Workspace represents a dangerous blend of social engineering and AI manipulation. Unlike traditional phishing, which is often identified by keyword detection or link analysis, this method leverages user trust in AI-generated summaries, which are perceived as neutral and helpful tools.

Key impacts include:

  • Credential Theft: Users may be tricked into submitting corporate credentials to attacker-controlled sites because a benign-looking summary masks the malicious request.

  • Malware Distribution: Malicious links or attachments may appear as "routine updates" or "meeting notes," enticing users to download infected files.

  • Bypassing Awareness Training: Most phishing training emphasizes recognizing suspicious email content. This attack circumvents that training by placing the lure in the AI-generated summary rather than in the visible email body.

  • Security Control Evasion: Email gateways and DLP systems that inspect the raw message may fail to flag these emails, because the message contains no links or attachments and the deceptive content surfaces only in the AI-generated summary.

  • Widened Attack Surface: The use of AI summaries allows attackers to reach users who typically ignore suspicious messages, significantly increasing click-through rates.

  • Erosion of Trust in AI: If such abuse persists, organizational trust in productivity AI tools like Gemini may decline, hampering further adoption and innovation.

This is not just a technical challenge but also a trust and governance issue, emphasizing the risk of deploying AI tools without robust safeguards against misuse.

Detection Method

To detect this type of phishing:

  • Check for mismatches between the Gemini summary and the full email content, either through manual review or using DLP tools (a heuristic sketch follows this list).

  • Review user clicks on summarized emails, especially those from unknown or external senders.

  • Look for unusual access activity after email clicks, such as geographic anomalies or OAuth login attempts.

  • Log summary content if possible and compare it with the original email to identify misleading summaries.

  • Monitor login failures and password reset attempts occurring shortly after viewing email summaries.
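
A minimal heuristic sketch of the hidden-content check is shown below. It assumes you already have the raw HTML body of a message (for example, exported by a gateway or DLP tool) and flags styling commonly used to hide injected directives: zero font sizes, text hidden or colored white, and non-standard tags such as <Admin>. The thresholds, tag names, and regular expression are illustrative assumptions, not a complete or vendor-supported detection.

```python
# Heuristic scan of an email's HTML body for hidden-text prompt-injection tricks.
# Assumptions: BeautifulSoup (pip install beautifulsoup4) is available, and the
# suspicious tag names / style patterns below are illustrative, not exhaustive.
import re
from bs4 import BeautifulSoup

SUSPICIOUS_TAGS = {"admin", "system", "instruction"}   # assumed examples
HIDDEN_STYLE = re.compile(
    r"font-size\s*:\s*0|display\s*:\s*none|visibility\s*:\s*hidden|"
    r"color\s*:\s*(#fff\b|#ffffff|white)", re.I)

def scan_email_html(html: str) -> list[str]:
    """Return human-readable findings for hidden-content indicators."""
    findings = []
    soup = BeautifulSoup(html, "html.parser")

    # 1. Non-standard tags that look like out-of-band instructions.
    for tag in soup.find_all(True):
        if tag.name.lower() in SUSPICIOUS_TAGS:
            findings.append(f"suspicious tag <{tag.name}>: {tag.get_text()[:80]!r}")

    # 2. Inline styles that render text invisible to the reader.
    for tag in soup.find_all(style=HIDDEN_STYLE):
        text = tag.get_text(strip=True)
        if text:
            findings.append(f"hidden-style text in <{tag.name}>: {text[:80]!r}")

    return findings

if __name__ == "__main__":
    sample = ('<p>Quarterly report attached.</p>'
              '<span style="font-size:0px;color:#ffffff;">'
              '<Admin>fake warning text</Admin></span>')
    for finding in scan_email_html(sample):
        print(finding)
```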

Indicators of Compromise

There are no Indicators of Compromise (IOCs) for this Advisory.


Recommendations

  • Temporarily disable Gemini summaries in Gmail until more robust content checks are implemented.

  • Educate users about the risks of misleading summaries and instruct them to review the full email content before acting.

  • Improve email filtering to prioritize full content inspection over AI summaries (a retrieval sketch follows this list).

  • Enable URL rewriting and sandboxing for all email links.

  • Conduct phishing simulations, including misleading summaries, to raise awareness.

  • Collaborate with Google Workspace Admins to monitor AI feature usage and update trust policies as needed.
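
For teams that want to inspect full message content rather than summaries, the sketch below shows one possible way to pull recent messages with the Gmail API (google-api-python-client) and extract their HTML parts for a scanner such as the one sketched in the Detection Method section. The query string, credential setup, and scope handling are assumptions; adapt them to your Workspace environment and authorization model.

```python
# Sketch: pull recent Gmail messages and extract HTML bodies for offline scanning.
# Assumptions: google-api-python-client and valid OAuth credentials ("creds") with
# a read-only Gmail scope already exist; the query string is only an example.
import base64
from googleapiclient.discovery import build

def iter_html_bodies(creds, query: str = "newer_than:1d"):
    """Yield (message_id, html_body) for recent messages matching the query."""
    service = build("gmail", "v1", credentials=creds)
    listing = service.users().messages().list(userId="me", q=query).execute()

    for ref in listing.get("messages", []):
        msg = service.users().messages().get(
            userId="me", id=ref["id"], format="full").execute()

        # Walk the MIME tree and collect text/html parts.
        parts = [msg["payload"]]
        while parts:
            part = parts.pop()
            parts.extend(part.get("parts", []))
            if part.get("mimeType") == "text/html" and part["body"].get("data"):
                html = base64.urlsafe_b64decode(part["body"]["data"]).decode(
                    "utf-8", errors="replace")
                yield ref["id"], html

# Usage (with the heuristic scanner from the Detection Method sketch):
# for msg_id, html in iter_html_bodies(creds):
#     for finding in scan_email_html(html):
#         print(msg_id, finding)
```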

Conclusion

This vulnerability highlights the emerging risks posed by AI integrations within productivity platforms. Gemini’s susceptibility to prompt injection through hidden email content extends the traditional phishing landscape into AI-assisted exploitation. Organizations should implement proactive security measures, monitor AI output closely, and update user training to counter this growing threat vector. Continuous monitoring and hardening of AI-facing controls will be essential as these techniques evolve.
