Critical Vulnerability in OpenAI ChatGPT Atlas Allows Persistent Memory Injection and Code Execution
October 31st, 2025
Critical

Our Cyber Threat Intelligence Unit has identified a critical vulnerability in OpenAI’s ChatGPT Atlas browser that allows attackers to inject malicious instructions into ChatGPT’s Memory and execute remote code under the user’s privileges. The vulnerability was first publicly reported shortly after the browser’s release in late October 2025. The vulnerability stems from improper input handling in the omnibox and authenticated session workflows, where disguised URLs (prompt injection) and Cross-Site Request Forgery (CSRF) requests can be used to embed hidden prompts that execute automatically during subsequent ChatGPT interactions, potentially leading to unauthorized access, data exfiltration, or malware deployment. This exposure highlights the growing security risks in agentic AI browsers, where integrated large language models (LLMs) manage trusted contexts, authentication tokens, and automated workflows, broadening traditional attack surfaces. Given Atlas’s role in managing automated tasks and connected applications, organizations face an elevated risk of data exposure, credential theft, and downstream compromise when successfully exploited.
Technical Details
Attack Type: CSRF-style or crafted omnibox prompt injection delivered through malicious webpages or phishing links that target Atlas’s omnibox and persistent Memory while the victim is authenticated.
Severity: Critical
Delivery Method: Attackers host malicious pages or links that Atlas interprets as legitimate navigation or prompt input. When a logged-in user visits such content, the attacker's payload is silently written into ChatGPT’s Memory using the victim's valid authentication tokens.
Attack Chain:
Initial Vector: User interacts with a phishing link or malicious webpage while logged into ChatGPT Atlas, triggering a crafted omnibox or CSRF-style request.
Execution: Atlas misinterprets the hidden prompt as trusted input and writes it into persistent Memory.
Payload Delivery: During subsequent sessions, Atlas executes those stored instructions, which perform attacker-defined actions such as data collection, external fetches, or code execution within the user’s context.
Persistence: Injected instructions remain in Memory and sync across all devices tied to the same account, ensuring they are repeatedly activated during future sessions.
Post-Exploitation: Attackers can use the stored logic or stolen tokens for data exfiltration, account takeover, or malware deployment.

Impact
Browser Session Control: Attackers can assume control of the browser session and perform actions as a legitimate user, facilitating unauthorized access to business applications and exposure of sensitive data.
Persistent Undetected Access: Malicious prompts stored in Memory may persist across browsing sessions and synchronized devices, allowing long-term monitoring and manipulation of organizational activity without detection.
Data Exfiltration and Compromise: Compromised sessions can expose confidential business information and connected accounts, creating regulatory, contractual, and reputational risks for organizations.
Malware Delivery and Phishing Exposure: Injected instructions can silently redirect users to attacker-controlled websites, leading to device compromise and secondary phishing campaigns.
Operational and Reputation Damage: Disruption of browser-based automation or AI-driven workflows can result in financial loss, regulatory exposure, and reduced trust in AI-powered business tools.
Detection Method
Monitor Memory Modifications: Track Atlas/ChatGPT Memory for new or altered entries that appear inconsistent with normal user behaviour, indicating possible CSRF-driven or cross-site prompt injection.
Detect Cross-Origin Requests: Analyze browser, proxy, and WAF logs for forged POST requests from untrusted domains targeting ChatGPT or Atlas endpoints.
Validate Omnibox Inputs: Identify malformed or instruction-laden URL strings that Atlas may misinterpret as trusted prompts in the omnibox.
Review Outbound Connections: Correlate agent-initiated or script-driven network activity with sudden outbound fetches to unknown external infrastructure.
Inspect Hidden Prompts: Detect concealed instructions within HTML comments, white-on-white text, or OCR-derived page content.
Correlate Phishing Events: Atlas has been reported to block only ~5.8 % of phishing attempts, making it 90% more vulnerable to phishing than mainstream browsers; correlate AI session data with recent user visits to suspicious or credential harvesting sites.
Monitor Session Tokens: Watch for Memory changes or authenticated actions performed during user-idle periods, which may indicate CSRF activity exploiting authenticated sessions.
Indicators of Compromise
There are no Indicators of Compromise (IOCs) for this Advisory.

Recommendations
To mitigate the risks of malicious Memory injection and session abuse in ChatGPT Atlas and OpenAI environments:
Apply OpenAI’s latest security patches and verify successful deployment.
Enable MFA and conditional access controls for all ChatGPT and OpenAI accounts.
Regularly clear Memory, cookies, and session tokens to reduce potential persistence.
Restrict access to untrusted sites and monitor logs for anomalous authenticated requests to ChatGPT endpoints originating from untrusted sources.
Deploy external anti-phishing and content filtering, since Atlas lacks mainstream phishing protections that other browsers deploy.
Educate users on avoiding unverified Atlas links and suspicious AI-generated content.
Conclusion
This vulnerability in ChatGPT Atlas reveals a dangerous overlap between web navigation and prompt execution in AI-integrated browsers. By crafting malicious input that Atlas mishandles as trusted Memory, attackers can achieve persistent compromise, data theft, and arbitrary code execution under legitimate user privileges. Since Atlas functions as an always-authenticated, AI-powered browser, we urge organizations to patch immediately, audit Memory for anomalies, harden phishing defenses, and enforce strict access controls to contain this emerging agentic-browser threat.